1. Preliminaries
1.1. Benchmark challenges
When evaluating the performance of a conventional MIP solver, benchmark sets such as MIPLIB and TSPLIB are typically used. The performance of newly proposed solvers or solution techniques is typically measured as the average (or total) running time the solver takes to solve the entire benchmark set. For Learning-Enhanced MIP solvers, it is also necessary to specify which instances the solver should be trained on (the training instances) before solving the actual set of instances we are interested in (the test instances). If the training instances are very similar to the test instances, we would expect a Learning-Enhanced Solver to show stronger performance benefits.
In MIPLearn, each optimization problem comes with a set of benchmark challenges, which specify how the training and test instances should be generated. The first challenges are typically easier, in the sense that training and test instances are very similar. Later challenges gradually make the two sets more distinct, and therefore harder to learn from.
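For concreteness, the snippet below is a minimal sketch of the resulting train-then-evaluate workflow with LearningSolver. The helper generate_challenge_instances is hypothetical and stands in for whatever instance generator a particular challenge prescribes; the LearningSolver calls follow the usage pattern described in the MIPLearn documentation, but the exact API may differ between versions.

```python
from miplearn import LearningSolver

# Hypothetical helper standing in for the instance generator that a
# particular benchmark challenge prescribes (not part of MIPLearn).
training_instances = generate_challenge_instances(split="train")
test_instances = generate_challenge_instances(split="test")

solver = LearningSolver()

# Solve the training instances, then let the solver learn from them.
for instance in training_instances:
    solver.solve(instance)
solver.fit(training_instances)

# Solve the (unseen) test instances with the trained solver.
for instance in test_instances:
    solver.solve(instance)
```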
1.2. Baseline results
To illustrate the performance of LearningSolver, and to set a baseline for newly proposed techniques, we present on this page, for each benchmark challenge, a small set of computational results measuring the solution speed of the solver and the solution quality with default parameters. For more detailed computational studies, see the references. We compare three solvers:
- baseline: Gurobi 9.1 with default settings (a conventional state-of-the-art MIP solver)
- ml-exact: LearningSolver with default settings, using Gurobi 9.0 as internal MIP solver
- ml-heuristic: Same as above, but with mode="heuristic"
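A rough sketch of how these three configurations might be constructed is given below. The gurobipy calls for the baseline and the LearningSolver constructor arguments are assumptions based on the description above, not a prescribed setup, and the instance file name is illustrative only.

```python
import gurobipy as gp
from miplearn import LearningSolver

# baseline: plain Gurobi with default settings, reading an
# illustrative instance file and solving it directly.
baseline_model = gp.read("instance.mps")
baseline_model.optimize()

# ml-exact: LearningSolver with default settings (exact mode),
# which uses Gurobi as the internal MIP solver.
ml_exact = LearningSolver()

# ml-heuristic: same as above, but ML predictions are applied
# heuristically, trading optimality guarantees for speed.
ml_heuristic = LearningSolver(mode="heuristic")
```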
All experiments presented here were performed on a Linux server (Ubuntu Linux 18.04 LTS) with Intel Xeon Gold 6230 processors (2 processors, 40 cores, 80 threads) and 256 GB RAM (DDR4, 2933 MHz). All solvers were restricted to 4 threads, with no time limits, and 10 instances were solved simultaneously.
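As a rough sketch of how a similar setup could be reproduced for the baseline solver, the snippet below restricts each Gurobi run to 4 threads and solves a batch of instances concurrently with a process pool. The file paths and instance count are illustrative only, and MIPLearn itself may provide its own parallel-solve utilities.

```python
from multiprocessing import Pool

import gurobipy as gp


def solve_one(path):
    """Solve a single instance with Gurobi restricted to 4 threads."""
    model = gp.read(path)
    model.Params.Threads = 4   # match the 4-thread limit described above
    model.optimize()
    return model.Runtime       # wall-clock solve time, in seconds


if __name__ == "__main__":
    # Illustrative file names; 10 worker processes mirror the
    # "10 instances solved simultaneously" setting described above.
    paths = [f"instances/instance_{i:03d}.mps" for i in range(100)]
    with Pool(processes=10) as pool:
        runtimes = pool.map(solve_one, paths)
```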