14 Commits
v0.4 ... v0.4.3

21 changed files with 223 additions and 160 deletions

View File

@@ -3,7 +3,22 @@
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.4.3] - 2025-05-10

### Changed

- Update dependency: Gurobi 12

## [0.4.2] - 2024-12-10

### Changed

- H5File: Use float64 precision instead of float32
- LearningSolver: optimize now returns (model, stats) instead of just stats
- Update dependency: Gurobi 11

## [0.4.0] - 2024-02-06
@@ -15,31 +30,41 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Changed

- LearningSolver.solve no longer generates HDF5 files; use a collector instead.
- Add `_gurobipy` suffix to all `build_model` functions; implement some `_pyomo` and `_jump` functions.

## [0.3.0] - 2023-06-08

This is a complete rewrite of the original prototype package, with an entirely new API, focused on performance, scalability and flexibility.

### Added

- Add support for Python/Gurobipy and Julia/JuMP, in addition to the existing Python/Pyomo interface.
- Add six new random instance generators (bin packing, capacitated p-median, set cover, set packing, unit commitment, vertex cover), in addition to the three existing generators (multiknapsack, stable set, tsp).
- Collect some additional raw training data (e.g. basis status, reduced costs)
- Add new primal solution ML strategies (memorizing, independent vars and joint vars)
- Add new primal solution actions (set warm start, fix variables, enforce proximity)
- Add runnable tutorials and user guides to the documentation.

### Changed

- To support large-scale problems and datasets, switch from an in-memory architecture to a file-based architecture, using HDF5 files.
- To accelerate the development cycle, split training data collection from feature extraction.

### Removed

- Temporarily remove ML strategies for lazy constraints
- Remove benchmarks from documentation. These will be published in a separate paper.

## [0.1.0] - 2020-11-23
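The 0.4.2 entry changes the return value of `LearningSolver.optimize`. A minimal sketch of the new calling convention, mirroring the tutorial further below; `train_data`, `test_data` and `build_uc_model` are assumed to be defined as in that tutorial, and the import path is the one used there:

```python
from miplearn.solvers.learning import LearningSolver  # import path as in the tutorials

solver = LearningSolver(components=[])
solver.fit(train_data)

# Before 0.4.2:  stats = solver.optimize(test_data[0], build_uc_model)
# Since 0.4.2:   the solved model is returned together with the stats dictionary.
model, stats = solver.optimize(test_data[0], build_uc_model)
print(stats)
```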

View File

@@ -11,7 +11,7 @@
"\n", "\n",
"Benchmark sets such as [MIPLIB](https://miplib.zib.de/) or [TSPLIB](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/) are usually employed to evaluate the performance of conventional MIP solvers. Two shortcomings, however, make existing benchmark sets less suitable for evaluating the performance of learning-enhanced MIP solvers: (i) while existing benchmark sets typically contain hundreds or thousands of instances, machine learning (ML) methods typically benefit from having orders of magnitude more instances available for training; (ii) current machine learning methods typically provide best performance on sets of homogeneous instances, buch general-purpose benchmark sets contain relatively few examples of each problem type.\n", "Benchmark sets such as [MIPLIB](https://miplib.zib.de/) or [TSPLIB](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/) are usually employed to evaluate the performance of conventional MIP solvers. Two shortcomings, however, make existing benchmark sets less suitable for evaluating the performance of learning-enhanced MIP solvers: (i) while existing benchmark sets typically contain hundreds or thousands of instances, machine learning (ML) methods typically benefit from having orders of magnitude more instances available for training; (ii) current machine learning methods typically provide best performance on sets of homogeneous instances, buch general-purpose benchmark sets contain relatively few examples of each problem type.\n",
"\n", "\n",
"To tackle this challenge, MIPLearn provides random instance generators for a wide variety of classical optimization problems, covering applications from different fields, that can be used to evaluate new learning-enhanced MIP techniques in a measurable and reproducible way. As of MIPLearn 0.3, nine problem generators are available, each customizable with user-provided probability distribution and flexible parameters. The generators can be configured, for example, to produce large sets of very similar instances of same size, where only the objective function changes, or more diverse sets of instances, with various sizes and characteristics, belonging to a particular problem class.\n", "To tackle this challenge, MIPLearn provides random instance generators for a wide variety of classical optimization problems, covering applications from different fields, that can be used to evaluate new learning-enhanced MIP techniques in a measurable and reproducible way. Nine problem generators are available, each customizable with user-provided probability distribution and flexible parameters. The generators can be configured, for example, to produce large sets of very similar instances of same size, where only the objective function changes, or more diverse sets of instances, with various sizes and characteristics, belonging to a particular problem class.\n",
"\n", "\n",
"In the following, we describe the problems included in the library, their MIP formulation and the generation algorithm." "In the following, we describe the problems included in the library, their MIP formulation and the generation algorithm."
] ]
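The cell above describes the generator design in general terms. A concrete sketch of the intended workflow follows; the `TravelingSalesmanGenerator` class and its parameter names are taken from the MIPLearn benchmark-problems documentation and should be treated as assumptions here, not as an API guaranteed by this diff:

```python
from scipy.stats import randint, uniform

# Assumed class/parameter names -- consult the benchmark problems guide for
# the exact signature of each of the nine generators.
from miplearn.problems.tsp import TravelingSalesmanGenerator

gen = TravelingSalesmanGenerator(
    n=randint(low=100, high=101),        # every instance has exactly 100 cities
    x=uniform(loc=0.0, scale=1000.0),    # city coordinates drawn uniformly
    y=uniform(loc=0.0, scale=1000.0),
    gamma=uniform(loc=1.0, scale=0.25),  # random perturbation of the distances
    fix_cities=True,                     # reuse the same cities in every instance
)
data = gen.generate(500)                 # 500 similar random instances
```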

View File

@@ -61,7 +61,7 @@ Citing MIPLearn
If you use MIPLearn in your research (either the solver or the included problem generators), we kindly request that you cite the package as follows:
* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.4)*. Zenodo (2024). DOI: https://doi.org/10.5281/zenodo.4287567
If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed:

View File

@@ -45,16 +45,10 @@
"- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n", "- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n",
"- Julia version, compatible with the JuMP modeling language.\n", "- Julia version, compatible with the JuMP modeling language.\n",
"\n", "\n",
"In this tutorial, we will demonstrate how to use and install the Python/Gurobipy version of the package. The first step is to install Python 3.8+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n", "In this tutorial, we will demonstrate how to use and install the Python/Gurobipy version of the package. The first step is to install Python 3.9+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n",
"\n", "\n",
"```\n", "```\n",
"$ pip install MIPLearn==0.3\n", "$ pip install MIPLearn~=0.4\n",
"```\n",
"\n",
"In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also install a demo license for Gurobi, which should able to solve the small optimization problems in this tutorial. A license is required for solving larger-scale problems.\n",
"\n",
"```\n",
"$ pip install 'gurobipy>=10,<10.1'\n",
"```" "```"
] ]
}, },
@@ -220,11 +214,12 @@
"name": "stdout", "name": "stdout",
"output_type": "stream", "output_type": "stream",
"text": [ "text": [
"Set parameter Threads to value 1\n",
"Restricted license - for non-production use only - expires 2024-10-28\n", "Restricted license - for non-production use only - expires 2024-10-28\n",
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n", "\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n", "\n",
"Optimize a model with 7 rows, 6 columns and 15 nonzeros\n", "Optimize a model with 7 rows, 6 columns and 15 nonzeros\n",
"Model fingerprint: 0x58dfdd53\n", "Model fingerprint: 0x58dfdd53\n",
@@ -250,12 +245,14 @@
"* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n", "* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n",
"\n", "\n",
"Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n", "Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 20 (of 20 available processors)\n", "Thread count was 1 (of 20 available processors)\n",
"\n", "\n",
"Solution count 2: 1320 1400 \n", "Solution count 2: 1320 1400 \n",
"\n", "\n",
"Optimal solution found (tolerance 1.00e-04)\n", "Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n", "Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 371, time in user-callback 0.00 sec\n",
"obj = 1320.0\n", "obj = 1320.0\n",
"x = [-0.0, 1.0, 1.0]\n", "x = [-0.0, 1.0, 1.0]\n",
"y = [0.0, 60.0, 40.0]\n" "y = [0.0, 60.0, 40.0]\n"
@@ -401,7 +398,7 @@
"from miplearn.collectors.basic import BasicCollector\n", "from miplearn.collectors.basic import BasicCollector\n",
"\n", "\n",
"bc = BasicCollector()\n", "bc = BasicCollector()\n",
"bc.collect(train_data, build_uc_model, n_jobs=4)" "bc.collect(train_data, build_uc_model)"
] ]
}, },
{ {
@@ -483,7 +480,7 @@
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n", "\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xa8b70287\n", "Model fingerprint: 0xa8b70287\n",
@@ -493,22 +490,24 @@
" Bounds range [1e+00, 1e+00]\n", " Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n", " RHS range [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n", "Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.01s\n", "Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n", "Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n", "\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n", "Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n", " 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n",
" 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n", " 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n",
"\n", "\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", "Solved in 1 iterations and 0.00 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n", "Optimal objective 8.290621916e+09\n",
"\n",
"User-callback calls 56, time in user-callback 0.00 sec\n",
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n", "\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xcf27855a\n", "Model fingerprint: 0x892e56b2\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n", "Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n", "Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n", " Matrix range [1e+00, 2e+06]\n",
@@ -538,19 +537,21 @@
" Gomory: 1\n", " Gomory: 1\n",
" Flow cover: 2\n", " Flow cover: 2\n",
"\n", "\n",
"Explored 1 nodes (565 simplex iterations) in 0.03 seconds (0.01 work units)\n", "Explored 1 nodes (565 simplex iterations) in 0.02 seconds (0.01 work units)\n",
"Thread count was 20 (of 20 available processors)\n", "Thread count was 1 (of 20 available processors)\n",
"\n", "\n",
"Solution count 1: 8.29153e+09 \n", "Solution count 1: 8.29153e+09 \n",
"\n", "\n",
"Optimal solution found (tolerance 1.00e-04)\n", "Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291528276179e+09, best bound 8.290733258025e+09, gap 0.0096%\n" "Best objective 8.291528276179e+09, best bound 8.290733258025e+09, gap 0.0096%\n",
"\n",
"User-callback calls 193, time in user-callback 0.00 sec\n"
] ]
}, },
{ {
"data": { "data": {
"text/plain": [ "text/plain": [
"{'WS: Count': 1, 'WS: Number of variables set': 482.0}" "{'WS: Count': 1, 'WS: Number of variables set': 477.0}"
] ]
}, },
"execution_count": 8, "execution_count": 8,
@@ -592,7 +593,7 @@
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n", "\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xa8b70287\n", "Model fingerprint: 0xa8b70287\n",
@@ -611,10 +612,12 @@
"\n", "\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n", "Optimal objective 8.290621916e+09\n",
"\n",
"User-callback calls 56, time in user-callback 0.00 sec\n",
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n", "\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x4cbbf7c7\n", "Model fingerprint: 0x4cbbf7c7\n",
@@ -643,39 +646,36 @@
" 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n", " 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n", " 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n", " 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 1 8.2940e+09 8.2908e+09 0.04% - 0s\n", " 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", " 0 2 8.2908e+09 0 2 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", "H 9 9 8.292131e+09 8.2908e+09 0.02% 1.0 0s\n",
"H 0 0 8.291465e+09 8.2908e+09 0.01% - 0s\n", "H 132 88 8.292121e+09 8.2908e+09 0.02% 2.0 0s\n",
"* 133 88 28 8.292121e+09 8.2908e+09 0.02% 2.2 0s\n",
"H 216 136 8.291918e+09 8.2909e+09 0.01% 2.4 0s\n",
"* 232 136 28 8.291664e+09 8.2909e+09 0.01% 2.4 0s\n",
"\n", "\n",
"Cutting planes:\n", "Cutting planes:\n",
" Gomory: 2\n", " Gomory: 2\n",
" Cover: 1\n",
" MIR: 1\n", " MIR: 1\n",
" Inf proof: 3\n",
"\n", "\n",
"Explored 1 nodes (1031 simplex iterations) in 0.15 seconds (0.03 work units)\n", "Explored 233 nodes (1577 simplex iterations) in 0.09 seconds (0.06 work units)\n",
"Thread count was 20 (of 20 available processors)\n", "Thread count was 1 (of 20 available processors)\n",
"\n", "\n",
"Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09 \n", "Solution count 7: 8.29166e+09 8.29192e+09 8.29212e+09 ... 9.75713e+09\n",
"\n", "\n",
"Optimal solution found (tolerance 1.00e-04)\n", "Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291465302389e+09, best bound 8.290781665333e+09, gap 0.0082%\n" "Best objective 8.291663722826e+09, best bound 8.290885027548e+09, gap 0.0094%\n",
"\n",
"User-callback calls 708, time in user-callback 0.00 sec\n"
] ]
},
{
"data": {
"text/plain": [
"{}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
} }
], ],
"source": [ "source": [
"solver_baseline = LearningSolver(components=[])\n", "solver_baseline = LearningSolver(components=[])\n",
"solver_baseline.fit(train_data)\n", "solver_baseline.fit(train_data)\n",
"solver_baseline.optimize(test_data[0], build_uc_model)" "solver_baseline.optimize(test_data[0], build_uc_model);"
] ]
}, },
{ {
@@ -716,7 +716,7 @@
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n", "\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x19042f12\n", "Model fingerprint: 0x19042f12\n",
@@ -735,13 +735,15 @@
"\n", "\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.253596777e+09\n", "Optimal objective 8.253596777e+09\n",
"\n",
"User-callback calls 56, time in user-callback 0.00 sec\n",
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n", "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n", "\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xf97cde91\n", "Model fingerprint: 0x6926c32f\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n", "Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n", "Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n", " Matrix range [1e+00, 2e+06]\n",
@@ -749,14 +751,15 @@
" Bounds range [1e+00, 1e+00]\n", " Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n", " RHS range [3e+08, 3e+08]\n",
"\n", "\n",
"User MIP start produced solution with objective 8.25814e+09 (0.00s)\n", "User MIP start produced solution with objective 8.25814e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25512e+09 (0.01s)\n", "User MIP start produced solution with objective 8.25512e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", "User MIP start produced solution with objective 8.2551e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", "User MIP start produced solution with objective 8.25508e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25483e+09 (0.01s)\n", "User MIP start produced solution with objective 8.25508e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.01s)\n", "User MIP start produced solution with objective 8.25499e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.01s)\n", "User MIP start produced solution with objective 8.25448e+09 (0.02s)\n",
"Loaded user MIP start with objective 8.25459e+09\n", "User MIP start produced solution with objective 8.25448e+09 (0.02s)\n",
"Loaded user MIP start with objective 8.25448e+09\n",
"\n", "\n",
"Presolve time: 0.00s\n", "Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
@@ -767,29 +770,23 @@
" Nodes | Current Node | Objective Bounds | Work\n", " Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n", "\n",
" 0 0 8.2536e+09 0 1 8.2546e+09 8.2536e+09 0.01% - 0s\n", " 0 0 8.2536e+09 0 1 8.2545e+09 8.2536e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 3 8.2546e+09 8.2537e+09 0.01% - 0s\n", " 0 0 8.2537e+09 0 3 8.2545e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 1 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 4 8.2546e+09 8.2538e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 5 8.2546e+09 8.2538e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 6 8.2546e+09 8.2538e+09 0.01% - 0s\n",
"\n", "\n",
"Cutting planes:\n", "Cutting planes:\n",
" Cover: 1\n", " Cover: 1\n",
" MIR: 2\n", " Flow cover: 2\n",
" StrongCG: 1\n",
" Flow cover: 1\n",
"\n", "\n",
"Explored 1 nodes (575 simplex iterations) in 0.05 seconds (0.01 work units)\n", "Explored 1 nodes (515 simplex iterations) in 0.03 seconds (0.02 work units)\n",
"Thread count was 20 (of 20 available processors)\n", "Thread count was 1 (of 20 available processors)\n",
"\n", "\n",
"Solution count 4: 8.25459e+09 8.25483e+09 8.25512e+09 8.25814e+09 \n", "Solution count 6: 8.25448e+09 8.25499e+09 8.25508e+09 ... 8.25814e+09\n",
"\n", "\n",
"Optimal solution found (tolerance 1.00e-04)\n", "Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%\n", "Best objective 8.254479145594e+09, best bound 8.253689731796e+09, gap 0.0096%\n",
"obj = 8254590409.969726\n", "\n",
"User-callback calls 203, time in user-callback 0.00 sec\n",
"obj = 8254479145.594168\n",
"x = [1.0, 1.0, 0.0]\n", "x = [1.0, 1.0, 0.0]\n",
"y = [935662.0949262811, 1604270.0218116897, 0.0]\n" "y = [935662.0949262811, 1604270.0218116897, 0.0]\n"
] ]

View File

@@ -41,7 +41,7 @@
"In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Julia in your machine. See the [official Julia website for more instructions](https://julialang.org/downloads/). After Julia is installed, launch the Julia REPL, type `]` to enter package mode, then install MIPLearn:\n", "In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Julia in your machine. See the [official Julia website for more instructions](https://julialang.org/downloads/). After Julia is installed, launch the Julia REPL, type `]` to enter package mode, then install MIPLearn:\n",
"\n", "\n",
"```\n", "```\n",
"pkg> add MIPLearn@0.3\n", "pkg> add MIPLearn@0.4\n",
"```" "```"
] ]
}, },

View File

@@ -45,16 +45,10 @@
"- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n", "- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n",
"- Julia version, compatible with the JuMP modeling language.\n", "- Julia version, compatible with the JuMP modeling language.\n",
"\n", "\n",
"In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Python 3.8+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n", "In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Python 3.9+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n",
"\n", "\n",
"```\n", "```\n",
"$ pip install MIPLearn==0.3\n", "$ pip install MIPLearn~=0.4\n",
"```\n",
"\n",
"In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also install a demo license for Gurobi, which should able to solve the small optimization problems in this tutorial. A license is required for solving larger-scale problems.\n",
"\n",
"```\n",
"$ pip install 'gurobipy>=10,<10.1'\n",
"```" "```"
] ]
}, },

BIN
miplearn/.io.py.swp Normal file

Binary file not shown.

View File

@@ -9,6 +9,7 @@ import sys
from io import StringIO from io import StringIO
from os.path import exists from os.path import exists
from typing import Callable, List, Any from typing import Callable, List, Any
import traceback
from ..h5 import H5File from ..h5 import H5File
from ..io import _RedirectOutput, gzip, _to_h5_filename from ..io import _RedirectOutput, gzip, _to_h5_filename
@@ -29,52 +30,56 @@ class BasicCollector:
        verbose: bool = False,
    ) -> None:
        def _collect(data_filename: str) -> None:
            try:
                h5_filename = _to_h5_filename(data_filename)
                mps_filename = h5_filename.replace(".h5", ".mps")

                if exists(h5_filename):
                    # Try to read optimal solution
                    mip_var_values = None
                    try:
                        with H5File(h5_filename, "r") as h5:
                            mip_var_values = h5.get_array("mip_var_values")
                    except:
                        pass

                    if mip_var_values is None:
                        print(f"Removing empty/corrupted h5 file: {h5_filename}")
                        os.remove(h5_filename)
                    else:
                        return

                with H5File(h5_filename, "w") as h5:
                    streams: List[Any] = [StringIO()]
                    if verbose:
                        streams += [sys.stdout]
                    with _RedirectOutput(streams):
                        # Load and extract static features
                        model = build_model(data_filename)
                        model.extract_after_load(h5)

                        if not self.skip_lp:
                            # Solve LP relaxation
                            relaxed = model.relax()
                            relaxed.optimize()
                            relaxed.extract_after_lp(h5)

                        # Solve MIP
                        model.optimize()
                        model.extract_after_mip(h5)

                        if self.write_mps:
                            # Add lazy constraints to model
                            model._lazy_enforce_collected()

                            # Save MPS file
                            model.write(mps_filename)
                            gzip(mps_filename)

                        h5.put_scalar("mip_log", streams[0].getvalue())
            except:
                print(f"Error processing: {data_filename}")
                traceback.print_exc()

        if n_jobs > 1:
            p_umap(
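With the try/except added above, a corrupted or otherwise failing instance is reported with a traceback and skipped, instead of aborting the whole collection run. A hedged usage sketch, with placeholder file names and the tutorial's `build_uc_model` callback assumed to be in scope:

```python
from miplearn.collectors.basic import BasicCollector

# Placeholder file names; each data file gets a matching .h5 file next to it.
train_files = [f"uc/train/{i:05d}.pkl.gz" for i in range(100)]

bc = BasicCollector()
# If one instance raises, the error is printed and the remaining
# instances are still collected (in parallel when n_jobs > 1).
bc.collect(train_files, build_uc_model, n_jobs=4)
```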

View File

@@ -1,29 +1,53 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from typing import Tuple, List

import numpy as np

from miplearn.h5 import H5File


def _extract_var_names_values(
    h5: H5File,
    selected_var_types: List[bytes],
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
    bin_var_names, bin_var_indices = _extract_var_names(h5, selected_var_types)
    var_values = h5.get_array("mip_var_values")
    assert var_values is not None
    bin_var_values = var_values[bin_var_indices].astype(int)
    return bin_var_names, bin_var_values, bin_var_indices


def _extract_var_names(
    h5: H5File,
    selected_var_types: List[bytes],
) -> Tuple[np.ndarray, np.ndarray]:
    var_types = h5.get_array("static_var_types")
    var_names = h5.get_array("static_var_names")
    assert var_types is not None
    assert var_names is not None
    bin_var_indices = np.where(np.isin(var_types, selected_var_types))[0]
    bin_var_names = var_names[bin_var_indices]
    assert len(bin_var_names.shape) == 1
    return bin_var_names, bin_var_indices


def _extract_bin_var_names_values(
    h5: H5File,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
    return _extract_var_names_values(h5, [b"B"])


def _extract_bin_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]:
    return _extract_var_names(h5, [b"B"])


def _extract_int_var_names_values(
    h5: H5File,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
    return _extract_var_names_values(h5, [b"B", b"I"])


def _extract_int_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]:
    return _extract_var_names(h5, [b"B", b"I"])

View File

@@ -5,7 +5,7 @@
import logging
from typing import Any, Dict, List

from . import _extract_int_var_names_values
from .actions import PrimalComponentAction
from ...solvers.abstract import AbstractModel
from ...h5 import H5File
@@ -28,5 +28,5 @@ class ExpertPrimalComponent:
        self, test_h5: str, model: AbstractModel, stats: Dict[str, Any]
    ) -> None:
        with H5File(test_h5, "r") as h5:
            names, values, _ = _extract_int_var_names_values(h5)
            self.action.perform(model, names, values.reshape(1, -1), stats)

View File

@@ -28,4 +28,5 @@ class ExpertBranchPriorityComponent:
        for var_idx, var_name in enumerate(var_names):
            if np.isfinite(var_priority[var_idx]):
                var = model.getVarByName(var_name.decode())
                assert var is not None, f"unknown var: {var_name}"
                var.BranchPriority = int(log(1 + var_priority[var_idx]))

View File

@@ -68,7 +68,7 @@ class H5File:
            return
        self._assert_is_array(value)
        if value.dtype.kind == "f":
            value = value.astype("float64")
        if key in self.file:
            del self.file[key]
        return self.file.create_dataset(key, data=value, compression="gzip")

View File

@@ -87,7 +87,10 @@ def read_pkl_gz(filename: str) -> Any:
def _to_h5_filename(data_filename: str) -> str:
    output = f"{data_filename}.h5"
    output = output.replace(".gz.h5", ".h5")
    output = output.replace(".csv.h5", ".h5")
    output = output.replace(".jld2.h5", ".h5")
    output = output.replace(".json.h5", ".h5")
    output = output.replace(".lp.h5", ".h5")
    output = output.replace(".mps.h5", ".h5")
    output = output.replace(".pkl.h5", ".h5")
    return output
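Tracing the function above, the extra `replace` calls simply extend the existing convention to more input formats: the HDF5 file always sits next to the data file, with the format suffix stripped. For example (file names are placeholders):

```python
from miplearn.io import _to_h5_filename

_to_h5_filename("uc/train/00000.pkl.gz")    # -> "uc/train/00000.h5"
_to_h5_filename("instances/model.mps.gz")   # -> "instances/model.h5"   (new)
_to_h5_filename("instances/model.lp")       # -> "instances/model.h5"   (new)
_to_h5_filename("data/demand.csv")          # -> "data/demand.h5"       (new)
```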

View File

@@ -8,7 +8,7 @@ from typing import List, Union
import gurobipy as gp
import numpy as np
import pyomo.environ as pe
from gurobipy import GRB
from scipy.stats import uniform, randint
from scipy.stats.distributions import rv_frozen

View File

@@ -7,7 +7,7 @@ from typing import List, Union
import gurobipy as gp
import numpy as np
from gurobipy import GRB
from scipy.stats import uniform, randint
from scipy.stats.distributions import rv_frozen

View File

@@ -105,7 +105,8 @@ def build_stab_model_gurobipy(
            model.addConstr(x[i1] + x[i2] <= 1)

    def cuts_separate(m: GurobiModel) -> List[Hashable]:
        x_val_dict = m.inner.cbGetNodeRel(x)
        x_val = [x_val_dict[i] for i in nodes]
        return _stab_separate(data, x_val)

    def cuts_enforce(m: GurobiModel, violations: List[Any]) -> None:

View File

@@ -4,10 +4,10 @@
import logging
import json
from typing import Dict, Optional, Callable, Any, List, Sequence

import gurobipy as gp
from gurobipy import GRB, GurobiError, Var
import numpy as np

from scipy.sparse import lil_matrix
@@ -109,7 +109,11 @@ class GurobiModel(AbstractModel):
        assert constrs_sense.shape == (nconstrs,)
        assert constrs_rhs.shape == (nconstrs,)
        gp_vars: list[Var] = []
        for var_name in var_names:
            v = self.inner.getVarByName(var_name.decode())
            assert v is not None, f"unknown var: {var_name}"
            gp_vars.append(v)
        self.inner.addMConstr(constrs_lhs, gp_vars, constrs_sense, constrs_rhs)
        if stats is not None:
@@ -188,9 +192,10 @@ class GurobiModel(AbstractModel):
            var_val = var_values[var_idx]
            if np.isfinite(var_val):
                var = self.inner.getVarByName(var_name.decode())
                assert var is not None, f"unknown var: {var_name}"
                var.VType = "c"
                var.LB = var_val
                var.UB = var_val
                n_fixed += 1
        if stats is not None:
            stats["Fixed variables"] = n_fixed
@@ -213,7 +218,7 @@ class GurobiModel(AbstractModel):
        return GurobiModel(self.inner.relax())

    def set_time_limit(self, time_limit_sec: float) -> None:
        self.inner.params.TimeLimit = time_limit_sec

    def set_warm_starts(
        self,
@@ -228,12 +233,13 @@ class GurobiModel(AbstractModel):
        self.inner.numStart = n_starts
        for start_idx in range(n_starts):
            self.inner.params.StartNumber = start_idx
            for var_idx, var_name in enumerate(var_names):
                var_val = var_values[start_idx, var_idx]
                if np.isfinite(var_val):
                    var = self.inner.getVarByName(var_name.decode())
                    assert var is not None, f"unknown var: {var_name}"
                    var.Start = var_val
        if stats is not None:
            stats["WS: Count"] = n_starts

View File

@@ -3,7 +3,7 @@
# Released under the modified BSD license. See COPYING.md for more details.
from os.path import exists
from tempfile import NamedTemporaryFile
from typing import List, Any, Union, Dict, Callable, Optional, Tuple

from miplearn.h5 import H5File
from miplearn.io import _to_h5_filename
@@ -25,7 +25,7 @@ class LearningSolver:
        self,
        model: Union[str, AbstractModel],
        build_model: Optional[Callable] = None,
    ) -> Tuple[AbstractModel, Dict[str, Any]]:
        h5_filename, mode = NamedTemporaryFile().name, "w"
        if isinstance(model, str):
            assert build_model is not None
@@ -47,8 +47,10 @@ class LearningSolver:
                relaxed.optimize()
                relaxed.extract_after_lp(h5)
            for comp in self.components:
                comp_stats = comp.before_mip(h5_filename, model, stats)
                if comp_stats is not None:
                    stats.update(comp_stats)
            model.optimize()
            model.extract_after_mip(h5)
            return model, stats

View File

@@ -6,7 +6,7 @@ from setuptools import setup, find_namespace_packages
setup(
    name="miplearn",
    version="0.4.3",
    author="Alinson S. Xavier",
    author_email="axavier@anl.gov",
    description="Extensible Framework for Learning-Enhanced Mixed-Integer Optimization",
@@ -15,7 +15,7 @@ setup(
    python_requires=">=3.9",
    install_requires=[
        "Jinja2<3.1",
        "gurobipy>=12,<13",
        "h5py>=3,<4",
        "networkx>=2,<3",
        "numpy>=1,<2",
@@ -36,8 +36,13 @@ setup(
"pyflakes==2.5.0", "pyflakes==2.5.0",
"pytest>=7,<8", "pytest>=7,<8",
"sphinx-book-theme==0.1.0", "sphinx-book-theme==0.1.0",
"sphinxcontrib-applehelp==1.0.4",
"sphinxcontrib-devhelp==1.0.2",
"sphinxcontrib-htmlhelp==2.0.1",
"sphinxcontrib-serializinghtml==1.1.5",
"sphinxcontrib-qthelp==1.0.3",
"sphinx-multitoc-numbering>=0.1,<0.2", "sphinx-multitoc-numbering>=0.1,<0.2",
"twine>=4,<5", "twine>=6,<7",
] ]
}, },
) )

View File

@@ -71,5 +71,5 @@ def test_usage_stab(
    comp = MemorizingCutsComponent(clf=clf, extractor=default_extractor)
    solver = LearningSolver(components=[comp])
    solver.fit(data_filenames)
    model, stats = solver.optimize(data_filenames[0], build_model)  # type: ignore
    assert stats["Cuts: AOT"] > 0

View File

@@ -65,5 +65,5 @@ def test_usage_tsp(
    comp = MemorizingLazyComponent(clf=clf, extractor=default_extractor)
    solver = LearningSolver(components=[comp])
    solver.fit(data_filenames)
    model, stats = solver.optimize(data_filenames[0], build_model)  # type: ignore
    assert stats["Lazy Constraints: AOT"] > 0