56 Commits
v0.3.0 ... dev

Author SHA1 Message Date
aa291410d8 docs: Minor updates 2025-09-24 10:31:08 -05:00
ca05429203 uc: Add quadratic terms 2025-09-23 11:39:39 -05:00
4eeb1c1ab3 Add maxcut to problems.ipynb 2025-09-23 11:27:03 -05:00
bfaae7c005 BasicCollector: Make log file optional 2025-07-22 12:25:47 -05:00
596f41c477 BasicCollector: save solver log to file 2025-06-12 11:16:16 -05:00
19e1f52b4f BasicCollector: store data_filename in HDF5 file 2025-06-12 11:15:09 -05:00
7ed213d4ce MaxCut: add w_jitter parameter to control edge weight randomization 2025-06-12 10:55:40 -05:00
daa801b5e9 Pyomo: implement build_maxcut_model; add support for quadratic objectives 2025-06-11 14:23:10 -05:00
2ca2794457 GurobiModel: Capture static_var_obj_coeffs_quad 2025-06-11 13:19:36 -05:00
1c6912cc51 Add MaxCut problem 2025-06-11 11:58:57 -05:00
eb914a4bdd Replace NamedTemporaryFile with TemporaryDirectory in tests for better compatibility 2025-06-11 11:14:34 -05:00
a306f0df26 Update docs dependencies; re-run notebooks 2025-06-10 12:28:39 -05:00
e0b4181579 Fix pyomo warning 2025-06-10 11:48:37 -05:00
332b2b9fca Update CHANGELOG 2025-06-10 11:31:32 -05:00
af65069202 Bump version to 0.4.3 2025-06-10 11:29:03 -05:00
dadd2216f1 Make compatible with Gurobi 12 2025-06-10 11:27:02 -05:00
5fefb49566 Update to Gurobi 11 2025-06-10 11:27:02 -05:00
3775c3f780 Update docs; fix Sphinx deps; bump to 0.4.2 2024-12-10 12:15:24 -06:00
e66e6d7660 Update CHANGELOG 2024-12-10 11:04:40 -06:00
8e05a69351 Update dependency: Gurobi 11 2024-12-10 10:58:15 -06:00
7ccb7875b9 Allow components to return stats, instead of modifying in-place
Added for compatibility with Julia.
2024-08-20 16:46:20 -05:00
f085ab538b LearningSolver: return model 2024-05-31 11:53:56 -05:00
7f273ebb70 expert primal: Set value for int variables 2024-05-31 11:48:41 -05:00
26cfab0ebd h5: Store values using float64 2024-05-31 11:16:47 -05:00
52ed34784d Docs: Use single-thread example 2024-05-08 09:19:52 -05:00
0534d50af3 BasicCollector: Do not crash on exception 2024-02-26 16:41:50 -06:00
8a02e22a35 Update docs 2024-02-07 09:17:09 -06:00
702824a3b5 Bump version to 0.4 2024-02-06 16:17:27 -06:00
752885660d Update CHANGELOG 2024-02-06 16:10:22 -06:00
b55554d410 Add _gurobipy suffix to all build_model functions 2024-02-06 16:08:24 -06:00
fb3f219ea8 Add tutorial: Cuts and lazy constraints 2024-02-06 15:59:11 -06:00
714904ea35 Implement ExpertCutsComponent and ExpertLazyComponent 2024-02-06 11:57:11 -06:00
cec56cbd7b AbstractSolver: Fix field name 2024-02-06 11:56:54 -06:00
e75850fab8 LearningSolver: Keep original H5 file unmodified 2024-02-02 14:37:53 -06:00
687c271d4d Bump version to 0.4.0 2024-02-02 10:19:44 -06:00
60d9a68485 Solver: Make attributes private; ensure we're not calling them directly
Helps with Julia/JuMP integration.
2024-02-02 10:15:06 -06:00
33f2cb3d9e Cuts: Do not access attributes directly 2024-02-01 12:02:39 -06:00
5b28595b0b BasicCollector: Make LP and MPS optional 2024-02-01 12:02:23 -06:00
60c7222fbe Cuts: Call set_cuts instead of setting cuts_aot_ directly 2024-02-01 10:18:24 -06:00
281508f44c Store cuts and lazy constraints as JSON in H5 2024-02-01 10:06:21 -06:00
2774edae8c tsp: Remove some code duplication 2024-01-30 16:32:39 -06:00
25bbe20748 Make lazy constr component compatible with Pyomo+Gurobi 2024-01-30 16:25:46 -06:00
c9eef36c4e Make cuts component compatible with Pyomo+Gurobi 2024-01-29 00:41:29 -06:00
d2faa15079 Reformat; remove unused imports 2024-01-28 20:47:16 -06:00
8c2c45417b Update mypy 2024-01-28 20:30:18 -06:00
8805a83c1c Implement MemorizingCutsComponent; STAB: switch to edge formulation 2023-11-07 15:36:31 -06:00
b81815d35b Lazy: Minor fixes; make it compatible with Pyomo 2023-10-27 10:44:21 -05:00
a42cd5ae35 Lazy: Simplify method signature; switch to AbstractModel 2023-10-27 09:14:51 -05:00
7079a36203 Lazy: Rename fields 2023-10-27 08:53:38 -05:00
c1adc0b79e Implement MemorizingLazyConstrComponent 2023-10-26 15:37:05 -05:00
2d07a44f7d Fix mypy errors 2023-10-26 13:41:50 -05:00
e555dffc0c Reformat source code 2023-10-26 13:40:09 -05:00
cd32b0e70d Add test fixtures 2023-10-26 13:39:39 -05:00
40c7f2ffb5 io: Simplify more extensions 2023-06-09 10:57:54 -05:00
25728f5512 Small updates to Makefile 2023-06-09 10:57:41 -05:00
8dd5bb416b Minor fixes to docs and setup.py 2023-06-08 12:37:11 -05:00
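Several commits above (`uc: Add quadratic terms`, `Add MaxCut problem`, `add support for quadratic objectives`) revolve around quadratic objective terms. As a hedged illustration only — this is not MIPLearn's API, and the function name below is hypothetical — the MaxCut objective those commits formulate can be written with the quadratic identity `x_i + x_j - 2*x_i*x_j`, which equals 1 exactly when the two endpoints take different sides:

```python
# Hypothetical sketch of the MaxCut objective referenced in the commits above:
# given 0-1 side labels x, the cut value is the total weight of edges whose
# endpoints land on opposite sides. The term x_i + x_j - 2*x_i*x_j is the
# quadratic form a QP-capable solver (e.g. Gurobi) ingests directly.

def cut_value(edges, x):
    """edges: iterable of (i, j, weight); x: sequence of 0-1 side labels."""
    return sum(w * (x[i] + x[j] - 2 * x[i] * x[j]) for i, j, w in edges)

# Triangle with unit weights: the best cut separates one vertex, value 2.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
print(cut_value(edges, [0, 1, 1]))  # → 2.0
```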
119 changed files with 2997 additions and 1020 deletions

View File

@@ -3,32 +3,69 @@
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
and this project adheres to
[Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.3.0] - 2023-06-08
## [0.4.3] - 2025-05-10
This is a complete rewrite of the original prototype package, with an entirely new API, focused on performance, scalability and flexibility.
## Changed
- Update dependency: Gurobi 12
## [0.4.2] - 2024-12-10
## Changed
- H5File: Use float64 precision instead of float32
- LearningSolver: optimize now returns (model, stats) instead of just stats
- Update dependency: Gurobi 11
## [0.4.0] - 2024-02-06
### Added
- Add support for Python/Gurobipy and Julia/JuMP, in addition to the existing Python/Pyomo interface.
- Add six new random instance generators (bin packing, capacitated p-median, set cover, set packing, unit commitment, vertex cover), in addition to the three existing generators (multiknapsack, stable set, tsp).
- Collect some additional raw training data (e.g. basis status, reduced costs, etc)
- Add new primal solution ML strategies (memorizing, independent vars and joint vars)
- Add new primal solution actions (set warm start, fix variables, enforce proximity)
- Add ML strategies for user cuts
- Add ML strategies for lazy constraints
### Changed
- LearningSolver.solve no longer generates HDF5 files; use a collector instead.
- Add `_gurobipy` suffix to all `build_model` functions; implement some `_pyomo`
and `_jump` functions.
## [0.3.0] - 2023-06-08
This is a complete rewrite of the original prototype package, with an entirely
new API, focused on performance, scalability and flexibility.
### Added
- Add support for Python/Gurobipy and Julia/JuMP, in addition to the existing
Python/Pyomo interface.
- Add six new random instance generators (bin packing, capacitated p-median, set
cover, set packing, unit commitment, vertex cover), in addition to the three
existing generators (multiknapsack, stable set, tsp).
- Collect some additional raw training data (e.g. basis status, reduced costs,
etc)
- Add new primal solution ML strategies (memorizing, independent vars and joint
vars)
- Add new primal solution actions (set warm start, fix variables, enforce
proximity)
- Add runnable tutorials and user guides to the documentation.
### Changed
- To support large-scale problems and datasets, switch from an in-memory architecture to a file-based architecture, using HDF5 files.
- To accelerate development cycle, split training data collection from feature extraction.
- To support large-scale problems and datasets, switch from an in-memory
architecture to a file-based architecture, using HDF5 files.
- To accelerate development cycle, split training data collection from feature
extraction.
### Removed
- Temporarily remove ML strategies for lazy constraints
- Remove benchmarks from documentation. These will be published in a separate paper.
- Remove benchmarks from documentation. These will be published in a separate
paper.
## [0.1.0] - 2020-11-23
- Initial public release
- Initial public release
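The 0.4.2 entry above notes that `LearningSolver.optimize` now returns `(model, stats)` instead of just `stats`. A minimal stand-in sketch of that call-pattern change — `ToySolver` and its fields are invented for illustration, not the actual miplearn API:

```python
# Stand-in sketch (not the real miplearn API) of the changelog entry above:
# optimize() returns a (model, stats) pair, so call sites now unpack two
# values instead of receiving stats alone.

class ToySolver:
    def optimize(self, model):
        stats = {"nodes": 1, "objective": 42.0}
        return model, stats  # 0.4.2-style: pair instead of bare stats

model, stats = ToySolver().optimize({"name": "instance-0"})
print(stats["objective"])  # → 42.0
```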

View File

@@ -3,10 +3,14 @@ PYTEST := pytest
PIP := $(PYTHON) -m pip
MYPY := $(PYTHON) -m mypy
PYTEST_ARGS := -W ignore::DeprecationWarning -vv --log-level=DEBUG
VERSION := 0.3
VERSION := 0.4
all: docs test
conda-create:
conda env remove -n miplearn
conda create -n miplearn python=3.12
clean:
rm -rf build/* dist/*
@@ -21,8 +25,8 @@ dist-upload:
docs:
rm -rf ../docs/$(VERSION)
cd docs; make clean; make dirhtml
rsync -avP --delete-after docs/_build/dirhtml/ ../docs/$(VERSION)
cd docs; make dirhtml
rsync -avP --delete-after docs/_build/dirhtml/ ../docs/$(VERSION)/
install-deps:
$(PIP) install --upgrade pip

View File

@@ -14,7 +14,7 @@
</a>
</p>
**MIPLearn** is an extensible framework for solving discrete optimization problems using a combination of Mixed-Integer Linear Programming (MIP) and Machine Learning (ML). MIPLearn uses ML methods to automatically identify patterns in previously solved instances of the problem, then uses these patterns to accelerate the performance of conventional state-of-the-art MIP solvers such as CPLEX, Gurobi or XPRESS.
**MIPLearn** is an extensible framework for solving discrete optimization problems using a combination of Mixed-Integer Programming (MIP) and Machine Learning (ML). MIPLearn uses ML methods to automatically identify patterns in previously solved instances of the problem, then uses these patterns to accelerate the performance of conventional state-of-the-art MIP solvers such as CPLEX, Gurobi or XPRESS.
Unlike pure ML methods, MIPLearn is not only able to find high-quality solutions to discrete optimization problems, but it can also prove the optimality and feasibility of these solutions. Unlike conventional MIP solvers, MIPLearn can take full advantage of very specific observations that happen to be true in a particular family of instances (such as the observation that a particular constraint is typically redundant, or that a particular variable typically assumes a certain value). For certain classes of problems, this approach may provide significant performance benefits.
@@ -22,21 +22,22 @@ Documentation
-------------
- Tutorials:
1. [Getting started (Pyomo)](https://anl-ceeesa.github.io/MIPLearn/0.3/tutorials/getting-started-pyomo/)
2. [Getting started (Gurobipy)](https://anl-ceeesa.github.io/MIPLearn/0.3/tutorials/getting-started-gurobipy/)
3. [Getting started (JuMP)](https://anl-ceeesa.github.io/MIPLearn/0.3/tutorials/getting-started-jump/)
1. [Getting started (Pyomo)](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/getting-started-pyomo/)
2. [Getting started (Gurobipy)](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/getting-started-gurobipy/)
3. [Getting started (JuMP)](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/getting-started-jump/)
4. [User cuts and lazy constraints](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/cuts-gurobipy/)
- User Guide
1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/problems/)
2. [Training data collectors](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/collectors/)
3. [Feature extractors](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/features/)
4. [Primal components](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/primal/)
5. [Learning solver](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/solvers/)
1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/problems/)
2. [Training data collectors](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/collectors/)
3. [Feature extractors](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/features/)
4. [Primal components](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/primal/)
5. [Learning solver](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/solvers/)
- Python API Reference
1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.3/api/problems/)
2. [Collectors & extractors](https://anl-ceeesa.github.io/MIPLearn/0.3/api/collectors/)
3. [Components](https://anl-ceeesa.github.io/MIPLearn/0.3/api/components/)
4. [Solvers](https://anl-ceeesa.github.io/MIPLearn/0.3/api/solvers/)
5. [Helpers](https://anl-ceeesa.github.io/MIPLearn/0.3/api/helpers/)
1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.4/api/problems/)
2. [Collectors & extractors](https://anl-ceeesa.github.io/MIPLearn/0.4/api/collectors/)
3. [Components](https://anl-ceeesa.github.io/MIPLearn/0.4/api/components/)
4. [Solvers](https://anl-ceeesa.github.io/MIPLearn/0.4/api/solvers/)
5. [Helpers](https://anl-ceeesa.github.io/MIPLearn/0.4/api/helpers/)
Authors
-------
@@ -58,7 +59,7 @@ Citing MIPLearn
If you use MIPLearn in your research (either the solver or the included problem generators), we kindly request that you cite the package as follows:
* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.3)*. Zenodo (2023). DOI: [10.5281/zenodo.4287567](https://doi.org/10.5281/zenodo.4287567)
* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.4)*. Zenodo (2024). DOI: [10.5281/zenodo.4287567](https://doi.org/10.5281/zenodo.4287567)
If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed:

View File

@@ -118,3 +118,13 @@ table tr:last-child {
border-bottom: 0;
}
@media (min-width: 960px) {
.bd-page-width {
max-width: 100rem;
}
}
.bd-sidebar-primary .sidebar-primary-items__end {
margin-bottom: 0;
margin-top: 0;
}

View File

@@ -55,3 +55,9 @@ miplearn.problems.vertexcover
.. automodule:: miplearn.problems.vertexcover
:members:
miplearn.problems.maxcut
-----------------------------
.. automodule:: miplearn.problems.maxcut
:members:

View File

@@ -1,7 +1,7 @@
project = "MIPLearn"
copyright = "2020-2023, UChicago Argonne, LLC"
author = ""
release = "0.3"
release = "0.4"
extensions = [
"myst_parser",
"nbsphinx",

View File

@@ -14,7 +14,7 @@
"\n",
"## HDF5 Format\n",
"\n",
"MIPLearn stores all training data in [HDF5](HDF5) (Hierarchical Data Format, Version 5) files. The HDF format was originally developed by the [National Center for Supercomputing Applications][NCSA] (NCSA) for storing and organizing large amounts of data, and supports a variety of data types, including integers, floating-point numbers, strings, and arrays. Compared to other formats, such as CSV, JSON or SQLite, the HDF5 format provides several advantages for MIPLearn, including:\n",
"MIPLearn stores all training data in [HDF5][HDF5] (Hierarchical Data Format, Version 5) files. The HDF format was originally developed by the [National Center for Supercomputing Applications][NCSA] (NCSA) for storing and organizing large amounts of data, and supports a variety of data types, including integers, floating-point numbers, strings, and arrays. Compared to other formats, such as CSV, JSON or SQLite, the HDF5 format provides several advantages for MIPLearn, including:\n",
"\n",
"- *Storage of multiple scalars, vectors and matrices in a single file* --- This allows MIPLearn to store all training data related to a given problem instance in a single file, which makes training data easier to store, organize and transfer.\n",
"- *High-performance partial I/O* --- Partial I/O allows MIPLearn to read a single element from the training data (e.g. value of the optimal solution) without loading the entire file to memory or reading it from beginning to end, which dramatically improves performance and reduces memory requirements. This is especially important when processing a large number of training data files.\n",
@@ -38,9 +38,13 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"id": "f906fe9c",
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-30T22:19:30.826123021Z",
"start_time": "2024-01-30T22:19:30.766066926Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
@@ -54,21 +58,21 @@
"x1 = 1\n",
"x2 = hello world\n",
"x3 = [1 2 3]\n",
"x4 = [[0.37454012 0.9507143 0.7319939 ]\n",
" [0.5986585 0.15601864 0.15599452]\n",
" [0.05808361 0.8661761 0.601115 ]]\n",
"x5 = (2, 3)\t0.68030757\n",
" (3, 2)\t0.45049927\n",
" (4, 0)\t0.013264962\n",
" (0, 2)\t0.94220173\n",
" (4, 2)\t0.5632882\n",
" (2, 1)\t0.3854165\n",
" (1, 1)\t0.015966251\n",
" (3, 0)\t0.23089382\n",
" (4, 4)\t0.24102546\n",
" (1, 3)\t0.68326354\n",
" (3, 1)\t0.6099967\n",
" (0, 3)\t0.8331949\n"
"x4 = [[0.37454012 0.95071431 0.73199394]\n",
" [0.59865848 0.15601864 0.15599452]\n",
" [0.05808361 0.86617615 0.60111501]]\n",
"x5 = (3, 2)\t0.6803075385877797\n",
" (2, 3)\t0.450499251969543\n",
" (0, 4)\t0.013264961159866528\n",
" (2, 0)\t0.9422017556848528\n",
" (2, 4)\t0.5632882178455393\n",
" (1, 2)\t0.3854165025399161\n",
" (1, 1)\t0.015966252220214194\n",
" (0, 3)\t0.230893825622149\n",
" (4, 4)\t0.24102546602601171\n",
" (3, 1)\t0.6832635188254582\n",
" (1, 3)\t0.6099966577826209\n",
" (3, 0)\t0.8331949117361643\n"
]
}
],
@@ -104,12 +108,6 @@
" print(\"x5 =\", h5.get_sparse(\"x5\"))"
]
},
{
"cell_type": "markdown",
"id": "50441907",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"id": "d0000c8d",
@@ -179,9 +177,13 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "ac6f8c6f",
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-30T22:19:30.826707866Z",
"start_time": "2024-01-30T22:19:30.825940503Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
@@ -205,7 +207,7 @@
"\n",
"from miplearn.problems.tsp import (\n",
" TravelingSalesmanGenerator,\n",
" build_tsp_model,\n",
" build_tsp_model_gurobipy,\n",
")\n",
"from miplearn.io import write_pkl_gz\n",
"from miplearn.h5 import H5File\n",
@@ -231,7 +233,7 @@
"# Solve all instances and collect basic solution information.\n",
"# Process at most four instances in parallel.\n",
"bc = BasicCollector()\n",
"bc.collect(glob(\"data/tsp/*.pkl.gz\"), build_tsp_model, n_jobs=4)\n",
"bc.collect(glob(\"data/tsp/*.pkl.gz\"), build_tsp_model_gurobipy, n_jobs=4)\n",
"\n",
"# Read and print some training data for the first instance.\n",
"with H5File(\"data/tsp/00000.h5\", \"r\") as h5:\n",
@@ -244,6 +246,9 @@
"execution_count": null,
"id": "78f0b07a",
"metadata": {
"ExecuteTime": {
"start_time": "2024-01-30T22:19:30.826179789Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
@@ -269,7 +274,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.11.7"
}
},
"nbformat": 4,
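The notebook above argues that HDF5's partial I/O lets one element be read without loading the whole file. A small sketch of that point using plain h5py (MIPLearn's own `H5File` wrapper is not shown; the file name `demo.h5` is arbitrary):

```python
# h5py sketch of the partial-I/O advantage described in the notebook above:
# one file holds several datasets, and a single scalar or slice can be read
# back from disk without touching the rest of the file.
import numpy as np
import h5py

with h5py.File("demo.h5", "w") as h5:
    h5["x1"] = 1                          # scalar
    h5["x3"] = np.array([1, 2, 3])        # vector
    h5["x4"] = np.random.rand(100, 100)   # matrix

with h5py.File("demo.h5", "r") as h5:
    row = h5["x4"][10, :]  # reads one row, not the whole 100x100 matrix
    print(int(h5["x1"][()]), h5["x3"][:].tolist(), row.shape)
```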

View File

@@ -51,7 +51,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 1,
"id": "ed9a18c8",
"metadata": {
"collapsed": false,
@@ -69,22 +69,22 @@
" -709. -605. -543. -321.\n",
" -674. -571. -341. ]\n",
"variable features (10, 4) \n",
" [[-1.53124309e+03 -3.50000000e+02 0.00000000e+00 9.43468018e+01]\n",
" [-1.53124309e+03 -6.92000000e+02 2.51703322e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -4.54000000e+02 0.00000000e+00 8.25504150e+01]\n",
" [-1.53124309e+03 -7.09000000e+02 1.11373022e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -6.05000000e+02 1.00000000e+00 -1.26055283e+02]\n",
" [-1.53124309e+03 -5.43000000e+02 0.00000000e+00 1.68693771e+02]\n",
" [[-1.53124309e+03 -3.50000000e+02 0.00000000e+00 9.43467993e+01]\n",
" [-1.53124309e+03 -6.92000000e+02 2.51703329e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -4.54000000e+02 0.00000000e+00 8.25504181e+01]\n",
" [-1.53124309e+03 -7.09000000e+02 1.11373019e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -6.05000000e+02 1.00000000e+00 -1.26055279e+02]\n",
" [-1.53124309e+03 -5.43000000e+02 0.00000000e+00 1.68693775e+02]\n",
" [-1.53124309e+03 -3.21000000e+02 1.07488781e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -6.74000000e+02 8.82293701e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -6.74000000e+02 8.82293687e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -5.71000000e+02 0.00000000e+00 1.41129074e+02]\n",
" [-1.53124309e+03 -3.41000000e+02 1.28830120e-01 0.00000000e+00]]\n",
" [-1.53124309e+03 -3.41000000e+02 1.28830116e-01 0.00000000e+00]]\n",
"constraint features (5, 3) \n",
" [[ 1.3100000e+03 -1.5978307e-01 0.0000000e+00]\n",
" [ 9.8800000e+02 -3.2881632e-01 0.0000000e+00]\n",
" [ 1.0040000e+03 -4.0601316e-01 0.0000000e+00]\n",
" [ 1.2690000e+03 -1.3659772e-01 0.0000000e+00]\n",
" [ 1.0070000e+03 -2.8800571e-01 0.0000000e+00]]\n"
" [[ 1.31000000e+03 -1.59783068e-01 0.00000000e+00]\n",
" [ 9.88000000e+02 -3.28816327e-01 0.00000000e+00]\n",
" [ 1.00400000e+03 -4.06013164e-01 0.00000000e+00]\n",
" [ 1.26900000e+03 -1.36597720e-01 0.00000000e+00]\n",
" [ 1.00700000e+03 -2.88005696e-01 0.00000000e+00]]\n"
]
}
],
@@ -101,7 +101,7 @@
"from miplearn.io import write_pkl_gz\n",
"from miplearn.problems.multiknapsack import (\n",
" MultiKnapsackGenerator,\n",
" build_multiknapsack_model,\n",
" build_multiknapsack_model_gurobipy,\n",
")\n",
"\n",
"# Set random seed to make example reproducible\n",
@@ -127,7 +127,7 @@
"# Run the basic collector\n",
"BasicCollector().collect(\n",
" glob(\"data/multiknapsack/*\"),\n",
" build_multiknapsack_model,\n",
" build_multiknapsack_model_gurobipy,\n",
" n_jobs=4,\n",
")\n",
"\n",
@@ -166,7 +166,7 @@
"\n",
" # Extract and print constraint features\n",
" x3 = ext.get_constr_features(h5)\n",
" print(\"constraint features\", x3.shape, \"\\n\", x3)\n"
" print(\"constraint features\", x3.shape, \"\\n\", x3)"
]
},
{
@@ -204,7 +204,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 2,
"id": "a1bc38fe",
"metadata": {
"collapsed": false,
@@ -326,7 +326,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.11.7"
}
},
"nbformat": 4,

View File

@@ -15,7 +15,7 @@
"\n",
"Before presenting the primal components themselves, we briefly discuss the three ways a solution may be provided to the solver. Each approach has benefits and limitations, which we also discuss in this section. All primal components can be configured to use any of the following approaches.\n",
"\n",
"The first approach is to provide the solution to the solver as a **warm start**. This is implemented by the class [SetWarmStart](SetWarmStart). The main advantage is that this method maintains all optimality and feasibility guarantees of the MIP solver, while still providing significant performance benefits for various classes of problems. If the machine learning model is able to predict multiple solutions, it is also possible to set multiple warm starts. In this case, the solver evaluates each warm start, discards the infeasible ones, then proceeds with the one that has the best objective value. The main disadvantage of this approach, compared to the next two, is that it provides relatively modest speedups for most problem classes, and no speedup at all for many others, even when the machine learning predictions are 100% accurate.\n",
"The first approach is to provide the solution to the solver as a **warm start**. This is implemented by the class [SetWarmStart][SetWarmStart]. The main advantage is that this method maintains all optimality and feasibility guarantees of the MIP solver, while still providing significant performance benefits for various classes of problems. If the machine learning model is able to predict multiple solutions, it is also possible to set multiple warm starts. In this case, the solver evaluates each warm start, discards the infeasible ones, then proceeds with the one that has the best objective value. The main disadvantage of this approach, compared to the next two, is that it provides relatively modest speedups for most problem classes, and no speedup at all for many others, even when the machine learning predictions are 100% accurate.\n",
"\n",
"[SetWarmStart]: ../../api/components/#miplearn.components.primal.actions.SetWarmStart\n",
"\n",
@@ -120,7 +120,7 @@
" extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n",
" constructor=MergeTopSolutions(k=3, thresholds=[0.25, 0.75]),\n",
" action=EnforceProximity(3),\n",
")\n"
")"
]
},
{
@@ -175,7 +175,7 @@
" ),\n",
" extractor=AlvLouWeh2017Extractor(),\n",
" action=SetWarmStart(),\n",
")\n"
")"
]
},
{
@@ -230,7 +230,7 @@
" instance_fields=[\"static_var_obj_coeffs\"],\n",
" ),\n",
" action=SetWarmStart(),\n",
")\n"
")"
]
},
{
@@ -263,7 +263,7 @@
"# Configures an expert primal component, which reads a pre-computed\n",
"# optimal solution from the HDF5 file and provides it to the solver\n",
"# as warm start.\n",
"comp = ExpertPrimalComponent(action=SetWarmStart())\n"
"comp = ExpertPrimalComponent(action=SetWarmStart())"
]
}
],
@@ -283,7 +283,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.11.7"
}
},
"nbformat": 4,
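The notebook above configures `MergeTopSolutions(k=3, thresholds=[0.25, 0.75])` to combine multiple predicted solutions. The following is my reading of that idea, not MIPLearn's implementation: average k candidate 0-1 solutions per variable, fix a variable only when the candidates strongly agree, and leave the rest free for the MIP solver:

```python
# Hedged sketch (an interpretation, not MIPLearn's code) of merging k
# predicted 0-1 solutions with thresholds [0.25, 0.75]: variables with
# agreement fraction <= 0.25 are set to 0, >= 0.75 to 1, and anything in
# between is returned as None, i.e. left free for the solver.

def merge_solutions(solutions, lo=0.25, hi=0.75):
    """solutions: list of equal-length 0-1 lists.
    Returns per-variable 0, 1, or None (None = left free)."""
    n = len(solutions[0])
    merged = []
    for j in range(n):
        frac = sum(s[j] for s in solutions) / len(solutions)
        merged.append(0 if frac <= lo else 1 if frac >= hi else None)
    return merged

sols = [[1, 0, 1, 0],
        [1, 0, 0, 1],
        [1, 0, 1, 1]]
print(merge_solutions(sols))  # → [1, 0, None, None]
```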

View File

@@ -11,7 +11,7 @@
"\n",
"Benchmark sets such as [MIPLIB](https://miplib.zib.de/) or [TSPLIB](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/) are usually employed to evaluate the performance of conventional MIP solvers. Two shortcomings, however, make existing benchmark sets less suitable for evaluating the performance of learning-enhanced MIP solvers: (i) while existing benchmark sets typically contain hundreds or thousands of instances, machine learning (ML) methods typically benefit from having orders of magnitude more instances available for training; (ii) current machine learning methods typically provide best performance on sets of homogeneous instances, but general-purpose benchmark sets contain relatively few examples of each problem type.\n",
"\n",
"To tackle this challenge, MIPLearn provides random instance generators for a wide variety of classical optimization problems, covering applications from different fields, that can be used to evaluate new learning-enhanced MIP techniques in a measurable and reproducible way. As of MIPLearn 0.3, nine problem generators are available, each customizable with user-provided probability distributions and flexible parameters. The generators can be configured, for example, to produce large sets of very similar instances of the same size, where only the objective function changes, or more diverse sets of instances, with various sizes and characteristics, belonging to a particular problem class.\n",
"To tackle this challenge, MIPLearn provides random instance generators for a wide variety of classical optimization problems, covering applications from different fields, that can be used to evaluate new learning-enhanced MIP techniques in a measurable and reproducible way. Nine problem generators are available, each customizable with user-provided probability distributions and flexible parameters. The generators can be configured, for example, to produce large sets of very similar instances of the same size, where only the objective function changes, or more diverse sets of instances, with various sizes and characteristics, belonging to a particular problem class.\n",
"\n",
"In the following, we describe the problems included in the library, their MIP formulation and the generation algorithm."
]
@@ -39,7 +39,6 @@
"cell_type": "markdown",
"id": "830f3784-a3fc-4e2f-a484-e7808841ffe8",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
@@ -108,6 +107,10 @@
"execution_count": 1,
"id": "f14e560c-ef9f-4c48-8467-72d6acce5f9f",
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-07T16:29:48.409419720Z",
"start_time": "2023-11-07T16:29:47.824353556Z"
},
"tags": []
},
"outputs": [
@@ -126,9 +129,10 @@
"8 [ 8.47 21.9 16.58 15.37 3.76 3.91 1.57 20.57 14.76 18.61] 94.58\n",
"9 [ 8.57 22.77 17.06 16.25 4.14 4. 1.56 22.97 14.09 19.09] 100.79\n",
"\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Restricted license - for non-production use only - expires 2026-11-23\n",
"Gurobi Optimizer version 12.0.3 build v12.0.3rc0 (linux64 - \"Ubuntu 24.04.3 LTS\")\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"CPU model: AMD Ryzen 9 3950X 16-Core Processor, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 20 rows, 110 columns and 210 nonzeros\n",
@@ -151,31 +155,26 @@
"\n",
" 0 0 1.27484 0 4 5.00000 1.27484 74.5% - 0s\n",
"H 0 0 4.0000000 1.27484 68.1% - 0s\n",
"H 0 0 3.0000000 1.27484 57.5% - 0s\n",
"H 0 0 2.0000000 1.27484 36.3% - 0s\n",
" 0 0 1.27484 0 4 2.00000 1.27484 36.3% - 0s\n",
"\n",
"Explored 1 nodes (38 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 32 (of 32 available processors)\n",
"\n",
"Solution count 3: 2 4 5 \n",
"Solution count 4: 2 3 4 5 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 2.000000000000e+00, best bound 2.000000000000e+00, gap 0.0000%\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/axavier/.conda/envs/miplearn2/lib/python3.9/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
"Best objective 2.000000000000e+00, best bound 2.000000000000e+00, gap 0.0000%\n",
"\n",
"User-callback calls 148, time in user-callback 0.00 sec\n"
]
}
],
"source": [
"import numpy as np\n",
"from scipy.stats import uniform, randint\n",
"from miplearn.problems.binpack import BinPackGenerator, build_binpack_model\n",
"from miplearn.problems.binpack import BinPackGenerator, build_binpack_model_gurobipy\n",
"\n",
"# Set random seed, to make example reproducible\n",
"np.random.seed(42)\n",
@@ -196,8 +195,8 @@
"print()\n",
"\n",
"# Optimize first instance\n",
"model = build_binpack_model(data[0])\n",
"model.optimize()\n"
"model = build_binpack_model_gurobipy(data[0])\n",
"model.optimize()"
]
},
{
@@ -304,7 +303,12 @@
"cell_type": "code",
"execution_count": 2,
"id": "1ce5f8fb-2769-4fbd-a40c-fd62b897690a",
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-07T16:29:48.485068449Z",
"start_time": "2023-11-07T16:29:48.406139946Z"
}
},
"outputs": [
{
"name": "stdout",
@@ -321,9 +325,9 @@
"capacities\n",
" [1310. 988. 1004. 1269. 1007.]\n",
"\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.3 build v12.0.3rc0 (linux64 - \"Ubuntu 24.04.3 LTS\")\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"CPU model: AMD Ryzen 9 3950X 16-Core Processor, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 5 rows, 10 columns and 50 nonzeros\n",
@@ -346,19 +350,20 @@
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 -1428.7265 0 4 -804.00000 -1428.7265 77.7% - 0s\n",
"H 0 0 -995.0000000 -1428.7265 43.6% - 0s\n",
"H 0 0 -1279.000000 -1428.7265 11.7% - 0s\n",
"\n",
"Cutting planes:\n",
" Cover: 1\n",
" 0 0 -1428.7265 0 4 -1279.0000 -1428.7265 11.7% - 0s\n",
"\n",
"Explored 1 nodes (4 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 32 (of 32 available processors)\n",
"\n",
"Solution count 2: -1279 -804 \n",
"Solution count 3: -1279 -995 -804 \n",
"No other solutions better than -1279\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective -1.279000000000e+03, best bound -1.279000000000e+03, gap 0.0000%\n"
"Best objective -1.279000000000e+03, best bound -1.279000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 416, time in user-callback 0.00 sec\n"
]
}
],
@@ -367,7 +372,7 @@
"from scipy.stats import uniform, randint\n",
"from miplearn.problems.multiknapsack import (\n",
" MultiKnapsackGenerator,\n",
" build_multiknapsack_model,\n",
" build_multiknapsack_model_gurobipy,\n",
")\n",
"\n",
"# Set random seed, to make example reproducible\n",
@@ -394,8 +399,8 @@
"print()\n",
"\n",
"# Build model and optimize\n",
"model = build_multiknapsack_model(data[0])\n",
"model.optimize()\n"
"model = build_multiknapsack_model_gurobipy(data[0])\n",
"model.optimize()"
]
},
{
@@ -470,7 +475,12 @@
"cell_type": "code",
"execution_count": 3,
"id": "4e0e4223-b4e0-4962-a157-82a23a86e37d",
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-07T16:29:48.575025403Z",
"start_time": "2023-11-07T16:29:48.453962705Z"
}
},
"outputs": [
{
"name": "stdout",
@@ -491,9 +501,9 @@
"demands = [6.12 1.39 2.92 3.66 4.56 7.85 2. 5.14 5.92 0.46]\n",
"capacities = [151.89 42.63 16.26 237.22 241.41 202.1 76.15 24.42 171.06 110.04]\n",
"\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.3 build v12.0.3rc0 (linux64 - \"Ubuntu 24.04.3 LTS\")\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"CPU model: AMD Ryzen 9 3950X 16-Core Processor, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 21 rows, 110 columns and 220 nonzeros\n",
@@ -508,40 +518,45 @@
"Presolve time: 0.00s\n",
"Presolved: 21 rows, 110 columns, 220 nonzeros\n",
"Variable types: 0 continuous, 110 integer (110 binary)\n",
"Found heuristic solution: objective 245.6400000\n",
"\n",
"Root relaxation: objective 0.000000e+00, 18 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 0.00000 0 6 245.64000 0.00000 100% - 0s\n",
" 0 0 0.00000 0 6 368.79000 0.00000 100% - 0s\n",
"H 0 0 301.7200000 0.00000 100% - 0s\n",
"H 0 0 185.1900000 0.00000 100% - 0s\n",
"H 0 0 148.6300000 17.14595 88.5% - 0s\n",
"H 0 0 113.1800000 17.14595 84.9% - 0s\n",
" 0 0 17.14595 0 10 113.18000 17.14595 84.9% - 0s\n",
"H 0 0 99.5000000 17.14595 82.8% - 0s\n",
"H 0 0 98.3900000 17.14595 82.6% - 0s\n",
"H 0 0 93.9800000 64.28872 31.6% - 0s\n",
" 0 0 64.28872 0 15 93.98000 64.28872 31.6% - 0s\n",
"H 0 0 93.9200000 64.28872 31.5% - 0s\n",
" 0 0 86.06884 0 15 93.92000 86.06884 8.36% - 0s\n",
"* 0 0 0 91.2300000 91.23000 0.00% - 0s\n",
"H 0 0 153.5000000 0.00000 100% - 0s\n",
"H 0 0 131.7700000 0.00000 100% - 0s\n",
" 0 0 17.14595 0 10 131.77000 17.14595 87.0% - 0s\n",
"H 0 0 115.6500000 17.14595 85.2% - 0s\n",
"H 0 0 114.5300000 64.28872 43.9% - 0s\n",
"H 0 0 98.3900000 64.28872 34.7% - 0s\n",
" 0 0 74.01104 0 15 98.39000 74.01104 24.8% - 0s\n",
"H 0 0 91.2300000 74.01104 18.9% - 0s\n",
"\n",
"Explored 1 nodes (70 simplex iterations) in 0.02 seconds (0.00 work units)\n",
"Cutting planes:\n",
" Cover: 16\n",
" MIR: 1\n",
" StrongCG: 1\n",
"\n",
"Explored 1 nodes (42 simplex iterations) in 0.02 seconds (0.00 work units)\n",
"Thread count was 32 (of 32 available processors)\n",
"\n",
"Solution count 10: 91.23 93.92 93.98 ... 368.79\n",
"Solution count 9: 91.23 98.39 114.53 ... 368.79\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 9.123000000000e+01, best bound 9.123000000000e+01, gap 0.0000%\n"
"Best objective 9.123000000000e+01, best bound 9.123000000000e+01, gap 0.0000%\n",
"\n",
"User-callback calls 187, time in user-callback 0.00 sec\n"
]
}
],
"source": [
"import numpy as np\n",
"from scipy.stats import uniform, randint\n",
"from miplearn.problems.pmedian import PMedianGenerator, build_pmedian_model\n",
"from miplearn.problems.pmedian import PMedianGenerator, build_pmedian_model_gurobipy\n",
"\n",
"# Set random seed, to make example reproducible\n",
"np.random.seed(42)\n",
@@ -569,8 +584,8 @@
"print()\n",
"\n",
"# Build and optimize model\n",
"model = build_pmedian_model(data[0])\n",
"model.optimize()\n"
"model = build_pmedian_model_gurobipy(data[0])\n",
"model.optimize()"
]
},
{
@@ -643,7 +658,12 @@
"cell_type": "code",
"execution_count": 4,
"id": "3224845b-9afd-463e-abf4-e0e93d304859",
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-07T16:29:48.804292323Z",
"start_time": "2023-11-07T16:29:48.492933268Z"
}
},
"outputs": [
{
"name": "stdout",
@@ -658,9 +678,9 @@
"costs [1044.58 850.13 1014.5 944.83 697.9 971.87 213.49 220.98 70.23\n",
" 425.33]\n",
"\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.3 build v12.0.3rc0 (linux64 - \"Ubuntu 24.04.3 LTS\")\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"CPU model: AMD Ryzen 9 3950X 16-Core Processor, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 5 rows, 10 columns and 28 nonzeros\n",
@@ -676,13 +696,15 @@
"Presolve time: 0.00s\n",
"Presolve: All rows and columns removed\n",
"\n",
"Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)\n",
"Explored 0 nodes (0 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 1 (of 32 available processors)\n",
"\n",
"Solution count 1: 213.49 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 2.134900000000e+02, best bound 2.134900000000e+02, gap 0.0000%\n"
"Best objective 2.134900000000e+02, best bound 2.134900000000e+02, gap 0.0000%\n",
"\n",
"User-callback calls 183, time in user-callback 0.00 sec\n"
]
}
],
@@ -714,7 +736,7 @@
"\n",
"# Build and optimize model\n",
"model = build_setcover_model_gurobipy(data[0])\n",
"model.optimize()\n"
"model.optimize()"
]
},
{
@@ -774,6 +796,10 @@
"execution_count": 5,
"id": "cc797da7",
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-07T16:29:48.806917868Z",
"start_time": "2023-11-07T16:29:48.781619530Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
@@ -793,9 +819,9 @@
"costs [1044.58 850.13 1014.5 944.83 697.9 971.87 213.49 220.98 70.23\n",
" 425.33]\n",
"\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.3 build v12.0.3rc0 (linux64 - \"Ubuntu 24.04.3 LTS\")\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"CPU model: AMD Ryzen 9 3950X 16-Core Processor, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 5 rows, 10 columns and 28 nonzeros\n",
@@ -811,21 +837,23 @@
"Presolve time: 0.00s\n",
"Presolve: All rows and columns removed\n",
"\n",
"Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)\n",
"Explored 0 nodes (0 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 1 (of 32 available processors)\n",
"\n",
"Solution count 2: -1986.37 -1265.56 \n",
"No other solutions better than -1986.37\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective -1.986370000000e+03, best bound -1.986370000000e+03, gap 0.0000%\n"
"Best objective -1.986370000000e+03, best bound -1.986370000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 244, time in user-callback 0.00 sec\n"
]
}
],
"source": [
"import numpy as np\n",
"from scipy.stats import uniform, randint\n",
"from miplearn.problems.setpack import SetPackGenerator, build_setpack_model\n",
"from miplearn.problems.setpack import SetPackGenerator, build_setpack_model_gurobipy\n",
"\n",
"# Set random seed, to make example reproducible\n",
"np.random.seed(42)\n",
@@ -849,8 +877,8 @@
"print()\n",
"\n",
"# Build and optimize model\n",
"model = build_setpack_model(data[0])\n",
"model.optimize()\n"
"model = build_setpack_model_gurobipy(data[0])\n",
"model.optimize()"
]
},
{
@@ -875,11 +903,10 @@
"$$\n",
"\\begin{align*}\n",
"\\text{minimize} \\;\\;\\; & -\\sum_{v \\in V} w_v x_v \\\\\n",
"\\text{such that} \\;\\;\\; & \\sum_{v \\in C} x_v \\leq 1 & \\forall C \\in \\mathcal{C} \\\\\n",
"\\text{such that} \\;\\;\\; & x_v + x_u \\leq 1 & \\forall (v,u) \\in E \\\\\n",
"& x_v \\in \\{0, 1\\} & \\forall v \\in V\n",
"\\end{align*}\n",
"$$\n",
"where $\\mathcal{C}$ is the set of cliques in $G$. We recall that a clique is a subset of vertices in which every pair of vertices is adjacent."
"$$"
]
},
{
@@ -903,7 +930,12 @@
"cell_type": "code",
"execution_count": 6,
"id": "0f996e99-0ec9-472b-be8a-30c9b8556931",
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-07T16:29:48.954896857Z",
"start_time": "2023-11-07T16:29:48.825579097Z"
}
},
"outputs": [
{
"name": "stdout",
@@ -913,13 +945,17 @@
"weights[0] [37.45 95.07 73.2 59.87 15.6 15.6 5.81 86.62 60.11 70.81]\n",
"weights[1] [ 2.06 96.99 83.24 21.23 18.18 18.34 30.42 52.48 43.19 29.12]\n",
"\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Set parameter PreCrush to value 1\n",
"Gurobi Optimizer version 12.0.3 build v12.0.3rc0 (linux64 - \"Ubuntu 24.04.3 LTS\")\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"CPU model: AMD Ryzen 9 3950X 16-Core Processor, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 10 rows, 10 columns and 24 nonzeros\n",
"Model fingerprint: 0xf4c21689\n",
"Non-default parameters:\n",
"PreCrush 1\n",
"\n",
"Optimize a model with 15 rows, 10 columns and 30 nonzeros\n",
"Model fingerprint: 0x3240ea4a\n",
"Variable types: 0 continuous, 10 integer (10 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
@@ -927,26 +963,28 @@
" Bounds range [1e+00, 1e+00]\n",
" RHS range [1e+00, 1e+00]\n",
"Found heuristic solution: objective -219.1400000\n",
"Presolve removed 2 rows and 2 columns\n",
"Presolve removed 7 rows and 2 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 8 rows, 8 columns, 19 nonzeros\n",
"Variable types: 0 continuous, 8 integer (8 binary)\n",
"\n",
"Root relaxation: objective -2.205650e+02, 4 iterations, 0.00 seconds (0.00 work units)\n",
"Root relaxation: objective -2.205650e+02, 5 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 infeasible 0 -219.14000 -219.14000 0.00% - 0s\n",
"\n",
"Explored 1 nodes (4 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 32 (of 32 available processors)\n",
"\n",
"Solution count 1: -219.14 \n",
"No other solutions better than -219.14\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective -2.191400000000e+02, best bound -2.191400000000e+02, gap 0.0000%\n"
"Best objective -2.191400000000e+02, best bound -2.191400000000e+02, gap 0.0000%\n",
"\n",
"User-callback calls 303, time in user-callback 0.00 sec\n"
]
}
],
@@ -980,7 +1018,7 @@
"\n",
"# Load and optimize the first instance\n",
"model = build_stab_model_gurobipy(data[0])\n",
"model.optimize()\n"
"model.optimize()"
]
},
{
@@ -1052,6 +1090,10 @@
"execution_count": 7,
"id": "9d0c56c6",
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-07T16:29:48.958833448Z",
"start_time": "2023-11-07T16:29:48.898121017Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
@@ -1085,12 +1127,17 @@
" [ 444. 398. 371. 454. 356. 476. 565. 374. 0. 274.]\n",
" [ 668. 446. 317. 648. 469. 752. 394. 286. 274. 0.]]\n",
"\n",
"Set parameter PreCrush to value 1\n",
"Set parameter LazyConstraints to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.3 build v12.0.3rc0 (linux64 - \"Ubuntu 24.04.3 LTS\")\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"CPU model: AMD Ryzen 9 3950X 16-Core Processor, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Non-default parameters:\n",
"PreCrush 1\n",
"LazyConstraints 1\n",
"\n",
"Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
"Model fingerprint: 0x719675e5\n",
"Variable types: 0 continuous, 45 integer (45 binary)\n",
@@ -1121,7 +1168,7 @@
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 2.921000000000e+03, best bound 2.921000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 106, time in user-callback 0.00 sec\n"
"User-callback calls 111, time in user-callback 0.00 sec\n"
]
}
],
@@ -1129,7 +1176,10 @@
"import random\n",
"import numpy as np\n",
"from scipy.stats import uniform, randint\n",
"from miplearn.problems.tsp import TravelingSalesmanGenerator, build_tsp_model\n",
"from miplearn.problems.tsp import (\n",
" TravelingSalesmanGenerator,\n",
" build_tsp_model_gurobipy,\n",
")\n",
"\n",
"# Set random seed to make example reproducible\n",
"random.seed(42)\n",
@@ -1152,8 +1202,8 @@
"print()\n",
"\n",
"# Load and optimize the first instance\n",
"model = build_tsp_model(data[0])\n",
"model.optimize()\n"
"model = build_tsp_model_gurobipy(data[0])\n",
"model.optimize()"
]
},
{
@@ -1171,7 +1221,6 @@
"id": "7048d771",
"metadata": {},
"source": [
"\n",
"<div class=\"alert alert-info\">\n",
"Note\n",
"\n",
@@ -1180,7 +1229,7 @@
"\n",
"### Formulation\n",
"\n",
"Let $T$ be the number of time steps, $G$ be the number of generation units, and let $D_t$ be the power demand (in MW) at time $t$. For each generating unit $g$, let $P^\\max_g$ and $P^\\min_g$ be the maximum and minimum amount of power the unit is able to produce when switched on; let $L_g$ and $l_g$ be the minimum up- and down-time for unit $g$; let $C^\\text{fixed}$ be the cost to keep unit $g$ on for one time step, regardless of its power output level; let $C^\\text{start}$ be the cost to switch unit $g$ on; and let $C^\\text{var}$ be the cost for generator $g$ to produce 1 MW of power. In this formulation, we assume linear production costs. For each generator $g$ and time $t$, let $x_{gt}$ be a binary variable which equals one if unit $g$ is on at time $t$, let $w_{gt}$ be a binary variable which equals one if unit $g$ switches from being off at time $t-1$ to being on at time $t$, and let $p_{gt}$ be a continuous variable which indicates the amount of power generated. The formulation is given by:"
"Let $T$ be the number of time steps, $G$ be the number of generation units, and let $D_t$ be the power demand (in MW) at time $t$. For each generating unit $g$, let $P^\\max_g$ and $P^\\min_g$ be the maximum and minimum amount of power the unit is able to produce when switched on; let $L_g$ and $l_g$ be the minimum up- and down-time for unit $g$; let $C^\\text{fixed}$ be the cost to keep unit $g$ on for one time step, regardless of its power output level; let $C^\\text{start}$ be the cost to switch unit $g$ on; let $C^\\text{prod-lin}$ be the linear cost coefficient for generator $g$ to produce 1 MW of power; and let $C^\\text{prod-quad}$ be the quadratic cost coefficient for unit $g$. For each generator $g$ and time $t$, let $x_{gt}$ be a binary variable which equals one if unit $g$ is on at time $t$, let $w_{gt}$ be a binary variable which equals one if unit $g$ switches from being off at time $t-1$ to being on at time $t$, and let $p_{gt}$ be a continuous variable which indicates the amount of power generated. The formulation is given by:"
]
},
{
@@ -1188,14 +1237,14 @@
"id": "bec5ee1c",
"metadata": {},
"source": [
"\n",
"$$\n",
"\\begin{align*}\n",
"\\text{minimize} \\;\\;\\;\n",
" & \\sum_{t=1}^T \\sum_{g=1}^G \\left(\n",
" x_{gt} C^\\text{fixed}_g\n",
" + w_{gt} C^\\text{start}_g\n",
" + p_{gt} C^\\text{var}_g\n",
" + p_{gt} C^\\text{prod-lin}_g\n",
" + p_{gt}^2 C^\\text{prod-quad}_g\n",
" \\right)\n",
" \\\\\n",
"\\text{such that} \\;\\;\\;\n",
@@ -1237,12 +1286,11 @@
"id": "01bed9fc",
"metadata": {},
"source": [
"\n",
"### Random instance generator\n",
"\n",
"The class `UnitCommitmentGenerator` can be used to generate random instances of this problem.\n",
"\n",
"First, the user-provided probability distributions `n_units` and `n_periods` are sampled to determine the number of generating units and the number of time steps, respectively. Then, for each unit, the probabilities `max_power` and `min_power` are sampled to determine the unit's maximum and minimum power output. To make it easier to generate valid ranges, `min_power` is not specified as the absolute power level in MW, but rather as a multiplier of `max_power`; for example, if `max_power` samples to 100 and `min_power` samples to 0.5, then the unit's power range is set to `[50,100]`. Then, the distributions `cost_startup`, `cost_prod` and `cost_fixed` are sampled to determine the unit's startup, variable and fixed costs, while the distributions `min_uptime` and `min_downtime` are sampled to determine its minimum up/down-time.\n",
"First, the user-provided probability distributions `n_units` and `n_periods` are sampled to determine the number of generating units and the number of time steps, respectively. Then, for each unit, the distributions `max_power` and `min_power` are sampled to determine the unit's maximum and minimum power output. To make it easier to generate valid ranges, `min_power` is not specified as the absolute power level in MW, but rather as a multiplier of `max_power`; for example, if `max_power` samples to 100 and `min_power` samples to 0.5, then the unit's power range is set to `[50,100]`. Then, the distributions `cost_startup`, `cost_prod`, `cost_prod_quad`, and `cost_fixed` are sampled to determine the unit's startup, linear variable, quadratic variable, and fixed costs, while the distributions `min_uptime` and `min_downtime` are sampled to determine its minimum up/down-time.\n",
"\n",
"After parameters for the units have been generated, the class then generates a periodic demand curve, with a peak every 12 time steps, in the range $(0.4C, 0.8C)$, where $C$ is the sum of all units' maximum power output. Finally, all costs and demand values are perturbed by random scaling factors independently sampled from the distributions `cost_jitter` and `demand_jitter`, respectively.\n",
"\n",
@@ -1259,9 +1307,13 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 12,
"id": "6217da7c",
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-07T16:29:49.061613905Z",
"start_time": "2023-11-07T16:29:48.941857719Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
@@ -1279,73 +1331,94 @@
"min_power[0] [117.79 245.85 271.85 207.7 81.38]\n",
"cost_startup[0] [3042.42 5247.56 4319.45 2912.29 6118.53]\n",
"cost_prod[0] [ 6.97 14.61 18.32 22.8 39.26]\n",
"cost_fixed[0] [199.67 514.23 592.41 46.45 607.54]\n",
"cost_prod_quad[0] [0.02 0.0514 0.0592 0.0046 0.0608]\n",
"cost_fixed[0] [170.52 65.05 948.89 965.63 808.4 ]\n",
"demand[0]\n",
" [ 905.06 915.41 1166.52 1212.29 1127.81 953.52 905.06 796.21 783.78\n",
" 866.23 768.62 899.59 905.06 946.23 1087.61 1004.24 1048.36 992.03\n",
" 905.06 750.82 691.48 606.15 658.5 809.95]\n",
" [ 869.31 897.58 1212.29 1124.08 930.48 1012.62 869.31 606.15]\n",
"\n",
"min_power[1] [117.79 245.85 271.85 207.7 81.38]\n",
"max_power[1] [218.54 477.82 379.4 319.4 120.21]\n",
"min_uptime[1] [7 6 3 5 7]\n",
"min_downtime[1] [7 3 5 6 2]\n",
"min_power[1] [117.79 245.85 271.85 207.7 81.38]\n",
"cost_startup[1] [2458.08 6200.26 4585.74 2666.05 4783.34]\n",
"cost_prod[1] [ 6.31 13.33 20.42 24.37 46.86]\n",
"cost_fixed[1] [196.9 416.42 655.57 52.51 626.15]\n",
"cost_startup[1] [3710.99 6283.5 4530.89 3526.6 4859.62]\n",
"cost_prod[1] [ 5.91 11.29 16.72 21.53 34.77]\n",
"cost_prod_quad[1] [0.0233 0.0477 0.0527 0.0047 0.0499]\n",
"cost_fixed[1] [ 196.29 51.21 1179.89 1097.07 686.62]\n",
"demand[1]\n",
" [ 981.42 840.07 1095.59 1102.03 1088.41 932.29 863.67 848.56 761.33\n",
" 828.28 775.18 834.99 959.76 865.72 1193.52 1058.92 985.19 893.92\n",
" 962.16 781.88 723.15 639.04 602.4 787.02]\n",
" [ 827.37 926.76 1166.64 1128.59 939.17 948.8 950.95 639.5 ]\n",
"\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.3 build v12.0.3rc0 (linux64 - \"Ubuntu 24.04.3 LTS\")\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"CPU model: AMD Ryzen 9 3950X 16-Core Processor, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 578 rows, 360 columns and 2128 nonzeros\n",
"Model fingerprint: 0x4dc1c661\n",
"Variable types: 120 continuous, 240 integer (240 binary)\n",
"Optimize a model with 162 rows, 120 columns and 512 nonzeros\n",
"Model fingerprint: 0x1e3651da\n",
"Model has 40 quadratic objective terms\n",
"Variable types: 40 continuous, 80 integer (80 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 5e+02]\n",
" Objective range [7e+00, 6e+03]\n",
" QObjective range [9e-03, 1e-01]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [1e+00, 1e+03]\n",
"Presolve removed 244 rows and 131 columns\n",
"Presolve time: 0.02s\n",
"Presolved: 334 rows, 229 columns, 842 nonzeros\n",
"Variable types: 116 continuous, 113 integer (113 binary)\n",
"Found heuristic solution: objective 440662.46430\n",
"Found heuristic solution: objective 429461.97680\n",
"Found heuristic solution: objective 374043.64040\n",
"Found heuristic solution: objective 282371.35206\n",
"Presolve removed 61 rows and 40 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 137 rows, 98 columns, 362 nonzeros\n",
"Presolved model has 40 quadratic objective terms\n",
"Variable types: 58 continuous, 40 integer (40 binary)\n",
"\n",
"Root relaxation: objective 3.361348e+05, 142 iterations, 0.00 seconds (0.00 work units)\n",
"Root relaxation: objective 1.995341e+05, 126 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 336134.820 0 18 374043.640 336134.820 10.1% - 0s\n",
"H 0 0 368600.14450 336134.820 8.81% - 0s\n",
"H 0 0 364721.76610 336134.820 7.84% - 0s\n",
" 0 0 cutoff 0 364721.766 364721.766 0.00% - 0s\n",
" 0 0 199534.058 0 8 282371.352 199534.058 29.3% - 0s\n",
"H 0 0 212978.80635 199534.058 6.31% - 0s\n",
"H 0 0 208079.69920 199534.058 4.11% - 0s\n",
" 0 0 203097.772 0 16 208079.699 203097.772 2.39% - 0s\n",
" 0 0 203097.772 0 4 208079.699 203097.772 2.39% - 0s\n",
" 0 0 203097.772 0 4 208079.699 203097.772 2.39% - 0s\n",
" 0 0 203097.772 0 3 208079.699 203097.772 2.39% - 0s\n",
" 0 0 203097.772 0 3 208079.699 203097.772 2.39% - 0s\n",
" 0 0 203097.772 0 3 208079.699 203097.772 2.39% - 0s\n",
" 0 0 205275.299 0 - 208079.699 205275.299 1.35% - 0s\n",
" 0 0 205777.846 0 2 208079.699 205777.846 1.11% - 0s\n",
" 0 0 205789.407 0 - 208079.699 205789.407 1.10% - 0s\n",
" 0 0 postponed 0 208079.699 205789.407 1.10% - 0s\n",
" 0 0 postponed 0 208079.699 205789.408 1.10% - 0s\n",
" 0 0 205789.408 0 4 208079.699 205789.408 1.10% - 0s\n",
" 0 0 205789.408 0 4 208079.699 205789.408 1.10% - 0s\n",
" 0 0 205789.408 0 1 208079.699 205789.408 1.10% - 0s\n",
" 0 0 205789.408 0 - 208079.699 205789.408 1.10% - 0s\n",
" 0 0 205789.408 0 - 208079.699 205789.408 1.10% - 0s\n",
" 0 0 205789.408 0 - 208079.699 205789.408 1.10% - 0s\n",
" 0 0 205789.408 0 - 208079.699 205789.408 1.10% - 0s\n",
" 0 0 205789.408 0 2 208079.699 205789.408 1.10% - 0s\n",
"H 0 0 207525.63560 205789.408 0.84% - 0s\n",
" 0 0 205789.408 0 2 207525.636 205789.408 0.84% - 0s\n",
" 0 2 205789.408 0 2 207525.636 205789.408 0.84% - 0s\n",
"* 9 0 5 205789.40812 205789.408 0.00% 0.0 0s\n",
"\n",
"Cutting planes:\n",
" Gomory: 3\n",
" Cover: 8\n",
" Implied bound: 29\n",
" Clique: 222\n",
" MIR: 7\n",
" Flow cover: 7\n",
" RLT: 1\n",
" Relax-and-lift: 7\n",
" Cover: 1\n",
" Implied bound: 6\n",
" MIR: 2\n",
" Flow cover: 2\n",
" RLT: 2\n",
"\n",
"Explored 1 nodes (234 simplex iterations) in 0.04 seconds (0.02 work units)\n",
"Explored 11 nodes (621 simplex iterations) in 0.32 seconds (0.19 work units)\n",
"Thread count was 32 (of 32 available processors)\n",
"\n",
"Solution count 5: 364722 368600 374044 ... 440662\n",
"Solution count 5: 205789 207526 208080 ... 282371\n",
"No other solutions better than 205789\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 3.647217661000e+05, best bound 3.647217661000e+05, gap 0.0000%\n"
"Best objective 2.057894080889e+05, best bound 2.057894081187e+05, gap 0.0000%\n",
"\n",
"User-callback calls 650, time in user-callback 0.00 sec\n"
]
}
],
@@ -1353,20 +1426,21 @@
"import random\n",
"import numpy as np\n",
"from scipy.stats import uniform, randint\n",
"from miplearn.problems.uc import UnitCommitmentGenerator, build_uc_model\n",
"from miplearn.problems.uc import UnitCommitmentGenerator, build_uc_model_gurobipy\n",
"\n",
"# Set random seed to make example reproducible\n",
"random.seed(42)\n",
"np.random.seed(42)\n",
"\n",
"# Generate a random instance with 5 generators and 24 time steps\n",
"# Generate a random instance with 5 generators and 8 time steps\n",
"data = UnitCommitmentGenerator(\n",
" n_units=randint(low=5, high=6),\n",
" n_periods=randint(low=24, high=25),\n",
" n_periods=randint(low=8, high=9),\n",
" max_power=uniform(loc=50, scale=450),\n",
" min_power=uniform(loc=0.5, scale=0.25),\n",
" cost_startup=uniform(loc=0, scale=10_000),\n",
" cost_prod=uniform(loc=0, scale=50),\n",
" cost_prod_quad=uniform(loc=0, scale=0.1),\n",
" cost_fixed=uniform(loc=0, scale=1_000),\n",
" min_uptime=randint(low=2, high=8),\n",
" min_downtime=randint(low=2, high=8),\n",
@@ -1384,13 +1458,14 @@
" print(f\"min_power[{i}]\", data[i].min_power)\n",
" print(f\"cost_startup[{i}]\", data[i].cost_startup)\n",
" print(f\"cost_prod[{i}]\", data[i].cost_prod)\n",
" print(f\"cost_prod_quad[{i}]\", data[i].cost_prod_quad)\n",
" print(f\"cost_fixed[{i}]\", data[i].cost_fixed)\n",
" print(f\"demand[{i}]\\n\", data[i].demand)\n",
" print()\n",
"\n",
"# Load and optimize the first instance\n",
"model = build_uc_model(data[0])\n",
"model.optimize()\n"
"model = build_uc_model_gurobipy(data[0])\n",
"model.optimize()"
]
},
{
@@ -1450,7 +1525,12 @@
"cell_type": "code",
"execution_count": 9,
"id": "5fff7afe-5b7a-4889-a502-66751ec979bf",
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-07T16:29:49.075657363Z",
"start_time": "2023-11-07T16:29:49.049561363Z"
}
},
"outputs": [
{
"name": "stdout",
@@ -1460,9 +1540,9 @@
"weights[0] [37.45 95.07 73.2 59.87 15.6 15.6 5.81 86.62 60.11 70.81]\n",
"weights[1] [ 2.06 96.99 83.24 21.23 18.18 18.34 30.42 52.48 43.19 29.12]\n",
"\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.3 build v12.0.3rc0 (linux64 - \"Ubuntu 24.04.3 LTS\")\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"CPU model: AMD Ryzen 9 3950X 16-Core Processor, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 15 rows, 10 columns and 30 nonzeros\n",
@@ -1479,12 +1559,12 @@
"Presolved: 8 rows, 8 columns, 19 nonzeros\n",
"Variable types: 0 continuous, 8 integer (8 binary)\n",
"\n",
"Root relaxation: objective 2.995750e+02, 8 iterations, 0.00 seconds (0.00 work units)\n",
"Root relaxation: cutoff, 8 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 infeasible 0 301.00000 301.00000 0.00% - 0s\n",
" 0 0 cutoff 0 301.00000 301.00000 0.00% - 0s\n",
"\n",
"Explored 1 nodes (8 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 32 (of 32 available processors)\n",
@@ -1492,7 +1572,9 @@
"Solution count 1: 301 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 3.010000000000e+02, best bound 3.010000000000e+02, gap 0.0000%\n"
"Best objective 3.010000000000e+02, best bound 3.010000000000e+02, gap 0.0000%\n",
"\n",
"User-callback calls 333, time in user-callback 0.00 sec\n"
]
}
],
@@ -1502,7 +1584,7 @@
"from scipy.stats import uniform, randint\n",
"from miplearn.problems.vertexcover import (\n",
" MinWeightVertexCoverGenerator,\n",
" build_vertexcover_model,\n",
" build_vertexcover_model_gurobipy,\n",
")\n",
"\n",
"# Set random seed to make example reproducible\n",
@@ -1525,22 +1607,151 @@
"print()\n",
"\n",
"# Load and optimize the first instance\n",
"model = build_vertexcover_model(data[0])\n",
"model.optimize()\n"
"model = build_vertexcover_model_gurobipy(data[0])\n",
"model.optimize()"
]
},
{
"cell_type": "markdown",
"id": "k4ojjjni3z",
"metadata": {},
"source": [
"## Maximum Cut\n",
"\n",
"The **maximum cut problem** is a classical optimization problem in graph theory and combinatorial optimization. Given a graph with weighted edges, the goal is to partition the vertices into two disjoint sets such that the sum of the weights of the edges crossing the partition is maximized. This problem is one of Karp's 21 NP-complete problems and has important applications in theoretical physics, machine learning, and VLSI design."
]
},
{
"cell_type": "markdown",
"id": "kzdqjib7lac",
"metadata": {},
"source": [
"### Formulation\n",
"\n",
"Let $G=(V,E)$ be an undirected graph, and for each edge $e \\in E$, let $w_e$ be its weight. For each vertex $v \\in V$, let $x_v$ be a binary decision variable that equals one if vertex $v$ is assigned to the first partition, and zero if it is assigned to the second partition. The maximum cut problem is formulated as:"
]
},
{
"cell_type": "markdown",
"id": "lnmj134ojad",
"metadata": {},
"source": [
"$$\n",
"\\begin{align*}\n",
"\\text{minimize} \\;\\;\\;\n",
" & -\\sum_{(i,j) \\in E} w_{ij} \\left( x_i + x_j - 2 x_i x_j \\right) \\\\\n",
"\\text{such that} \\;\\;\\;\n",
" & x_v \\in \\{0, 1\\} & \\forall v \\in V\n",
"\\end{align*}\n",
"$$\n",
"\n",
"The objective minimizes the negative of the total weight of edges that cross the cut (i.e., edges whose endpoints lie in different partitions), which is equivalent to maximizing the cut weight."
]
},
{
"cell_type": "markdown",
"id": "j49upfw2o8k",
"metadata": {},
"source": [
"### Random instance generator\n",
"\n",
"The class [MaxCutGenerator][MaxCutGenerator] can be used to generate random instances of this problem. The generator operates in two modes:\n",
"\n",
"When `fix_graph=False`, a new random Erdős-Rényi graph $G_{n,p}$ is generated for each instance, where $n$ (number of vertices) and $p$ (edge probability) are sampled from the provided probability distributions. Each edge is assigned a random weight drawn from the set $\\{-1, +1\\}$ with equal probability.\n",
"\n",
"When `fix_graph=True`, a single random graph is generated during initialization and reused across all instances. To create variations, the generator randomly flips the sign of each edge weight with probability `w_jitter`, allowing for instances with the same graph structure but different edge weight patterns.\n",
"\n",
"[MaxCutGenerator]: ../../api/problems/#miplearn.problems.maxcut.MaxCutGenerator"
]
},
{
"cell_type": "markdown",
"id": "wsd5jlowc4k",
"metadata": {},
"source": [
"### Example"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f12e91f",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
"execution_count": 10,
"id": "uge28hmv3a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"graph edges: [(0, 2), (0, 3), (0, 4), (0, 8), (1, 2), (1, 3), (1, 5), (1, 6), (1, 9), (2, 5), (2, 9), (3, 6), (3, 7), (6, 9), (7, 8), (7, 9), (8, 9)]\n",
"weights[0]: [ 1 1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 1 1 -1 -1]\n",
"weights[1]: [-1 1 -1 -1 -1 1 -1 1 -1 1 -1 1 -1 -1 1 -1 1]\n",
"\n",
"Gurobi Optimizer version 12.0.3 build v12.0.3rc0 (linux64 - \"Ubuntu 24.04.3 LTS\")\n",
"\n",
"CPU model: AMD Ryzen 9 3950X 16-Core Processor, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 0 rows, 10 columns and 0 nonzeros\n",
"Model fingerprint: 0x005f9eac\n",
"Model has 17 quadratic objective terms\n",
"Variable types: 0 continuous, 10 integer (10 binary)\n",
"Coefficient statistics:\n",
" Matrix range [0e+00, 0e+00]\n",
" Objective range [1e+00, 5e+00]\n",
" QObjective range [2e+00, 2e+00]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [0e+00, 0e+00]\n",
"Found heuristic solution: objective 0.0000000\n",
"Found heuristic solution: objective -3.0000000\n",
"Presolve removed 0 rows and 10 columns\n",
"Presolve time: 0.00s\n",
"Presolve: All rows and columns removed\n",
"\n",
"Explored 0 nodes (0 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 1 (of 32 available processors)\n",
"\n",
"Solution count 2: -3 0 \n",
"No other solutions better than -3\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective -3.000000000000e+00, best bound -3.000000000000e+00, gap 0.0000%\n",
"\n",
"User-callback calls 86, time in user-callback 0.00 sec\n"
]
}
],
"source": [
"import random\n",
"import numpy as np\n",
"from scipy.stats import uniform, randint\n",
"from miplearn.problems.maxcut import (\n",
" MaxCutGenerator,\n",
" build_maxcut_model_gurobipy,\n",
")\n",
"\n",
"# Set random seed to make example reproducible\n",
"random.seed(42)\n",
"np.random.seed(42)\n",
"\n",
"# Generate random instances with a fixed 10-node graph,\n",
"# 30% edge probability, and random weight jittering\n",
"data = MaxCutGenerator(\n",
" n=randint(low=10, high=11),\n",
" p=uniform(loc=0.3, scale=0.0),\n",
" w_jitter=0.2,\n",
" fix_graph=True,\n",
").generate(10)\n",
"\n",
"# Print the graph and weights for two instances\n",
"print(\"graph edges:\", list(data[0].graph.edges()))\n",
"print(\"weights[0]:\", data[0].weights)\n",
"print(\"weights[1]:\", data[1].weights)\n",
"print()\n",
"\n",
"# Build and optimize the first instance\n",
"model = build_maxcut_model_gurobipy(data[0])\n",
"model.optimize()"
]
}
],
"metadata": {
@@ -1559,7 +1770,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,


@@ -57,7 +57,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"id": "92b09b98",
"metadata": {
"collapsed": false,
@@ -70,10 +70,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Restricted license - for non-production use only - expires 2026-11-23\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
"\n",
"Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
"Model fingerprint: 0x6ddcd141\n",
@@ -89,13 +90,20 @@
" 0 6.3600000e+02 1.700000e+01 0.000000e+00 0s\n",
" 15 2.7610000e+03 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 15 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 2.761000000e+03\n",
"User-callback calls 56, time in user-callback 0.00 sec\n",
"Set parameter PreCrush to value 1\n",
"Set parameter LazyConstraints to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
"\n",
"Non-default parameters:\n",
"PreCrush 1\n",
"LazyConstraints 1\n",
"\n",
"Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
"Model fingerprint: 0x74ca3d0a\n",
@@ -119,31 +127,20 @@
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 2761.00000 0 - 2796.00000 2761.00000 1.25% - 0s\n",
" 0 0 cutoff 0 2796.00000 2796.00000 0.00% - 0s\n",
"\n",
"Cutting planes:\n",
" Lazy constraints: 3\n",
"\n",
"Explored 1 nodes (14 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 20 (of 20 available processors)\n",
"\n",
"Solution count 1: 2796 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 2.796000000000e+03, best bound 2.796000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 114, time in user-callback 0.00 sec\n"
]
}
],
"source": [
@@ -162,7 +159,7 @@
"from miplearn.io import write_pkl_gz\n",
"from miplearn.problems.tsp import (\n",
" TravelingSalesmanGenerator,\n",
" build_tsp_model_gurobipy,\n",
")\n",
"from miplearn.solvers.learning import LearningSolver\n",
"\n",
@@ -189,7 +186,7 @@
"\n",
"# Collect training data\n",
"bc = BasicCollector()\n",
"bc.collect(train_data, build_tsp_model_gurobipy, n_jobs=4)\n",
"\n",
"# Build learning solver\n",
"solver = LearningSolver(\n",
@@ -211,7 +208,7 @@
"solver.fit(train_data)\n",
"\n",
"# Solve a test instance\n",
"solver.optimize(test_data[0], build_tsp_model_gurobipy);"
]
},
{
@@ -239,7 +236,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,


@@ -1,6 +1,6 @@
MIPLearn
========
**MIPLearn** is an extensible framework for solving discrete optimization problems using a combination of Mixed-Integer Programming (MIP) and Machine Learning (ML). MIPLearn uses ML methods to automatically identify patterns in previously solved instances of the problem, then uses these patterns to accelerate the performance of conventional state-of-the-art MIP solvers such as CPLEX, Gurobi or XPRESS.
Unlike pure ML methods, MIPLearn is not only able to find high-quality solutions to discrete optimization problems, but it can also prove the optimality and feasibility of these solutions. Unlike conventional MIP solvers, MIPLearn can take full advantage of very specific observations that happen to be true in a particular family of instances (such as the observation that a particular constraint is typically redundant, or that a particular variable typically assumes a certain value). For certain classes of problems, this approach may provide significant performance benefits.
@@ -16,6 +16,7 @@ Contents
tutorials/getting-started-pyomo
tutorials/getting-started-gurobipy
tutorials/getting-started-jump
tutorials/cuts-gurobipy
.. toctree::
:maxdepth: 2
@@ -60,7 +61,7 @@ Citing MIPLearn
If you use MIPLearn in your research (either the solver or the included problem generators), we kindly request that you cite the package as follows:
* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.4)*. Zenodo (2024). DOI: https://doi.org/10.5281/zenodo.4287567
If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed:


@@ -0,0 +1,571 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b4bd8bd6-3ce9-4932-852f-f98a44120a3e",
"metadata": {},
"source": [
"# User cuts and lazy constraints\n",
"\n",
"User cuts and lazy constraints are two advanced mixed-integer programming techniques that can accelerate solver performance. User cuts are additional constraints, derived from the constraints already in the model, that can tighten the feasible region and eliminate fractional solutions, thus reducing the size of the branch-and-bound tree. Lazy constraints, on the other hand, are constraints that are potentially part of the problem formulation but are omitted from the initial model to reduce its size; these constraints are added to the formulation only once the solver finds a solution that violates them. While both techniques have been successful, significant computational effort may still be required to generate strong user cuts and to identify violated lazy constraints, which can reduce their effectiveness.\n",
"\n",
"MIPLearn is able to predict which user cuts and which lazy constraints to enforce at the beginning of the optimization process, using machine learning. In this tutorial, we will use the framework to predict subtour elimination constraints for the **traveling salesman problem** using Gurobipy. We assume that MIPLearn has already been correctly installed.\n",
"\n",
"<div class=\"alert alert-info\">\n",
"\n",
"Solver Compatibility\n",
"\n",
"User cuts and lazy constraints are also supported in the Python/Pyomo and Julia/JuMP versions of the package. See the source code of <code>build_tsp_model_pyomo</code> and <code>build_tsp_model_jump</code> for more details. Note, however, the following limitations:\n",
"\n",
"- Python/Pyomo: Only `gurobi_persistent` is currently supported. PRs implementing callbacks for other persistent solvers are welcome.\n",
"- Julia/JuMP: Only solvers supporting solver-independent callbacks are supported. As of JuMP 1.19, this includes Gurobi, CPLEX, XPRESS, SCIP and GLPK. Note that HiGHS and Cbc are not supported. As newer versions of JuMP implement further callback support, MIPLearn should become automatically compatible with these solvers.\n",
"\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "72229e1f-cbd8-43f0-82ee-17d6ec9c3b7d",
"metadata": {},
"source": [
"## Modeling the traveling salesman problem\n",
"\n",
"Given a list of cities and the distances between them, the **traveling salesman problem (TSP)** asks for the shortest route starting at the first city, visiting each other city exactly once, then returning to the first city. This problem is a generalization of the Hamiltonian path problem, one of Karp's 21 NP-complete problems, and has many practical applications, including routing delivery trucks and scheduling airline routes.\n",
"\n",
"To describe an instance of TSP, we need to specify the number of cities $n$, and an $n \\times n$ matrix of distances. The class `TravelingSalesmanData`, in the `miplearn.problems.tsp` package, can hold this data:"
]
},
{
"cell_type": "markdown",
"id": "4598a1bc-55b6-48cc-a050-2262786c203a",
"metadata": {},
"source": [
"```python\n",
"@dataclass\n",
"class TravelingSalesmanData:\n",
"    n_cities: int\n",
"    distances: np.ndarray\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "3a43cc12-1207-4247-bdb2-69a6a2910738",
"metadata": {},
"source": [
"MIPLearn also provides `TravelingSalesmanGenerator`, a random generator for TSP instances, and `build_tsp_model_gurobipy`, a function which converts `TravelingSalesmanData` into an actual gurobipy optimization model and which uses lazy constraints to enforce subtour elimination.\n",
"\n",
"The example below is a simplified and annotated version of `build_tsp_model_gurobipy`, illustrating the usage of callbacks with MIPLearn. Compared to the previous tutorial examples, note that, in addition to defining the variables, objective function and constraints of our problem, we also define two callback functions: `lazy_separate` and `lazy_enforce`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e4712a85-0327-439c-8889-933e1ff714e7",
"metadata": {},
"outputs": [],
"source": [
"import gurobipy as gp\n",
"from gurobipy import quicksum, GRB, tuplelist\n",
"from miplearn.solvers.gurobi import GurobiModel\n",
"import networkx as nx\n",
"import numpy as np\n",
"from miplearn.problems.tsp import (\n",
" TravelingSalesmanData,\n",
" TravelingSalesmanGenerator,\n",
")\n",
"from scipy.stats import uniform, randint\n",
"from miplearn.io import write_pkl_gz, read_pkl_gz\n",
"from miplearn.collectors.basic import BasicCollector\n",
"from miplearn.solvers.learning import LearningSolver\n",
"from miplearn.components.lazy.mem import MemorizingLazyComponent\n",
"from miplearn.extractors.fields import H5FieldsExtractor\n",
"from sklearn.neighbors import KNeighborsClassifier\n",
"\n",
"# Set up random seed to make example more reproducible\n",
"np.random.seed(42)\n",
"\n",
"# Set up Python logging\n",
"import logging\n",
"\n",
"logging.basicConfig(level=logging.WARNING)\n",
"\n",
"\n",
"def build_tsp_model_gurobipy_simplified(data):\n",
" # Read data from file if a filename is provided\n",
" if isinstance(data, str):\n",
" data = read_pkl_gz(data)\n",
"\n",
" # Create empty gurobipy model\n",
" model = gp.Model()\n",
"\n",
" # Create set of edges between every pair of cities, for convenience\n",
" edges = tuplelist(\n",
" (i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)\n",
" )\n",
"\n",
" # Add binary variable x[e] for each edge e\n",
" x = model.addVars(edges, vtype=GRB.BINARY, name=\"x\")\n",
"\n",
" # Add objective function\n",
" model.setObjective(quicksum(x[(i, j)] * data.distances[i, j] for (i, j) in edges))\n",
"\n",
" # Add constraint: must choose two edges adjacent to each city\n",
" model.addConstrs(\n",
" (\n",
" quicksum(x[min(i, j), max(i, j)] for j in range(data.n_cities) if i != j)\n",
" == 2\n",
" for i in range(data.n_cities)\n",
" ),\n",
" name=\"eq_degree\",\n",
" )\n",
"\n",
" def lazy_separate(m: GurobiModel):\n",
" \"\"\"\n",
" Callback function that finds subtours in the current solution.\n",
" \"\"\"\n",
" # Query current value of the x variables\n",
" x_val = m.inner.cbGetSolution(x)\n",
"\n",
" # Initialize empty set of violations\n",
" violations = []\n",
"\n",
" # Build set of edges we have currently selected\n",
" selected_edges = [e for e in edges if x_val[e] > 0.5]\n",
"\n",
" # Build a graph containing the selected edges, using networkx\n",
" graph = nx.Graph()\n",
" graph.add_edges_from(selected_edges)\n",
"\n",
" # For each component of the graph\n",
" for component in list(nx.connected_components(graph)):\n",
"\n",
" # If the component is not the entire graph, we found a\n",
" # subtour. Add the edge cut to the list of violations.\n",
" if len(component) < data.n_cities:\n",
" cut_edges = [\n",
" [e[0], e[1]]\n",
" for e in edges\n",
" if (e[0] in component and e[1] not in component)\n",
" or (e[0] not in component and e[1] in component)\n",
" ]\n",
" violations.append(cut_edges)\n",
"\n",
" # Return the list of violations\n",
" return violations\n",
"\n",
" def lazy_enforce(m: GurobiModel, violations) -> None:\n",
" \"\"\"\n",
" Callback function that, given a list of subtours, adds lazy\n",
" constraints to remove them from the feasible region.\n",
" \"\"\"\n",
" print(f\"Enforcing {len(violations)} subtour elimination constraints\")\n",
" for violation in violations:\n",
" m.add_constr(quicksum(x[e[0], e[1]] for e in violation) >= 2)\n",
"\n",
" return GurobiModel(\n",
" model,\n",
" lazy_separate=lazy_separate,\n",
" lazy_enforce=lazy_enforce,\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "58875042-d6ac-4f93-b3cc-9a5822b11dad",
"metadata": {},
"source": [
"The `lazy_separate` function starts by querying the values of the decision variables in the current candidate solution through `m.inner.cbGetSolution` (recall that `m.inner` is a regular gurobipy model), then finds the set of violated lazy constraints. Unlike a regular lazy constraint solver callback, note that `lazy_separate` does not add the violated constraints to the model; it simply returns a list of objects that uniquely identifies the set of lazy constraints that should be generated. Enforcing the constraints is the responsibility of the second callback function, `lazy_enforce`. This function takes as input the model and the list of violations found by `lazy_separate`, converts them into actual constraints, and adds them to the model through `m.add_constr`.\n",
"\n",
"During training data generation, MIPLearn calls `lazy_separate` and `lazy_enforce` in sequence, inside a regular solver callback. However, once the machine learning models are trained, MIPLearn calls `lazy_enforce` directly, before the optimization process starts, with a list of **predicted** violations, as we will see in the example below."
]
},
{
"cell_type": "markdown",
"id": "5839728e-406c-4be2-ba81-83f2b873d4b2",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
"\n",
"Constraint Representation\n",
"\n",
"How user cuts and lazy constraints should be represented is a decision left to the user; MIPLearn is representation-agnostic. The objects returned by `lazy_separate`, however, are serialized as JSON and stored in the HDF5 training data files. Therefore, it is recommended to use only simple objects, such as lists, tuples and dictionaries.\n",
"\n",
"</div>"
]
},
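A quick way to check that a chosen representation is safe to store is to round-trip it through JSON. This is plain Python, not MIPLearn code:

```python
import json

# Each violation in this tutorial is a list of [i, j] edge pairs.
violations = [[[0, 2], [1, 3]], [[4, 5]]]

# Lists of numbers survive a JSON round-trip unchanged...
assert json.loads(json.dumps(violations)) == violations

# ...but note that tuples come back as lists, and sets fail outright.
assert json.loads(json.dumps((0, 2))) == [0, 2]
try:
    json.dumps({0, 2})
except TypeError:
    print("sets are not JSON-serializable")
```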
{
"cell_type": "markdown",
"id": "847ae32e-fad7-406a-8797-0d79065a07fd",
"metadata": {},
"source": [
"## Generating training data\n",
"\n",
"To test the callback defined above, we generate a small set of TSP instances, using the provided random instance generator. As in the previous tutorial, we generate some test instances and some training instances, then solve them using `BasicCollector`. Input problem data is stored in `tsp/train/00000.pkl.gz, ...`, whereas solver training data (including list of required lazy constraints) is stored in `tsp/train/00000.h5, ...`."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "eb63154a-1fa6-4eac-aa46-6838b9c201f6",
"metadata": {},
"outputs": [],
"source": [
"# Configure generator to produce instances with 50 cities located\n",
"# in the 1000 x 1000 square, and with slightly perturbed distances.\n",
"gen = TravelingSalesmanGenerator(\n",
" x=uniform(loc=0.0, scale=1000.0),\n",
" y=uniform(loc=0.0, scale=1000.0),\n",
" n=randint(low=50, high=51),\n",
" gamma=uniform(loc=1.0, scale=0.25),\n",
" fix_cities=True,\n",
" round=True,\n",
")\n",
"\n",
"# Generate 500 instances and store input data file to .pkl.gz files\n",
"data = gen.generate(500)\n",
"train_data = write_pkl_gz(data[0:450], \"tsp/train\")\n",
"test_data = write_pkl_gz(data[450:500], \"tsp/test\")\n",
"\n",
"# Solve the training instances in parallel, collecting the required lazy\n",
"# constraints, in addition to other information, such as optimal solution.\n",
"bc = BasicCollector()\n",
"bc.collect(train_data, build_tsp_model_gurobipy_simplified, n_jobs=10)"
]
},
{
"cell_type": "markdown",
"id": "6903c26c-dbe0-4a2e-bced-fdbf93513dde",
"metadata": {},
"source": [
"## Training and solving new instances"
]
},
{
"cell_type": "markdown",
"id": "57cd724a-2d27-4698-a1e6-9ab8345ef31f",
"metadata": {},
"source": [
"After producing the training dataset, we can train the machine learning models to predict which lazy constraints are necessary. In this tutorial, we use the following ML strategy: given a new instance, find the 100 most similar ones in the training dataset and verify how often each lazy constraint was required. If a lazy constraint was required for the majority of these similar instances, enforce it ahead-of-time for the current instance. To measure instance similarity, we use the objective function coefficients only. This ML strategy can be implemented using `MemorizingLazyComponent` with `H5FieldsExtractor` and `KNeighborsClassifier`, as shown below."
]
},
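The majority-vote idea behind this strategy can be illustrated with plain scikit-learn on toy data (a generic sketch, unrelated to MIPLearn internals):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy setup: each instance is described by its objective coefficients
# (features), and the label records whether a given lazy constraint was
# required when that instance was solved.
features = np.array([[1.0], [1.1], [0.9], [5.0], [5.2], [4.8]])
required = np.array([1, 1, 1, 0, 0, 0])

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(features, required)

# For a new instance near the first group, the majority of its neighbors
# required the constraint, so we would enforce it ahead of time.
print(clf.predict([[1.05]]))  # -> [1]
print(clf.predict([[5.1]]))   # -> [0]
```

`MemorizingLazyComponent` applies the same nearest-neighbor vote, with one label per lazy constraint seen in training.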
{
"cell_type": "code",
"execution_count": 3,
"id": "43779e3d-4174-4189-bc75-9f564910e212",
"metadata": {},
"outputs": [],
"source": [
"solver = LearningSolver(\n",
" components=[\n",
" MemorizingLazyComponent(\n",
" extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n",
" clf=KNeighborsClassifier(n_neighbors=100),\n",
" ),\n",
" ],\n",
")\n",
"solver.fit(train_data)"
]
},
{
"cell_type": "markdown",
"id": "12480712-9d3d-4cbc-a6d7-d6c1e2f950f4",
"metadata": {},
"source": [
"Next, we solve one of the test instances using the trained solver. In the run below, we can see that MIPLearn adds many lazy constraints ahead-of-time, before the optimization starts. During the optimization process itself, some additional lazy constraints are required, but very few."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "23f904ad-f1a8-4b5a-81ae-c0b9e813a4b2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Set parameter Threads to value 1\n",
"Read parameters from file gurobi.env\n",
"Restricted license - for non-production use only - expires 2026-11-23\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n",
"Model fingerprint: 0x04d7bec1\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Presolve time: 0.00s\n",
"Presolved: 50 rows, 1225 columns, 2450 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 4.0600000e+02 9.700000e+01 0.000000e+00 0s\n",
" 66 5.5880000e+03 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 66 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 5.588000000e+03\n",
"\n",
"User-callback calls 110, time in user-callback 0.00 sec\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:miplearn.components.cuts.mem:Predicting violated lazy constraints...\n",
"INFO:miplearn.components.lazy.mem:Enforcing 19 constraints ahead-of-time...\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enforcing 19 subtour elimination constraints\n",
"Set parameter PreCrush to value 1\n",
"Set parameter LazyConstraints to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"PreCrush 1\n",
"Threads 1\n",
"LazyConstraints 1\n",
"\n",
"Optimize a model with 69 rows, 1225 columns and 6091 nonzeros\n",
"Model fingerprint: 0x09bd34d6\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Found heuristic solution: objective 29853.000000\n",
"Presolve time: 0.00s\n",
"Presolved: 69 rows, 1225 columns, 6091 nonzeros\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"\n",
"Root relaxation: objective 6.139000e+03, 93 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 6139.00000 0 6 29853.0000 6139.00000 79.4% - 0s\n",
"H 0 0 6390.0000000 6139.00000 3.93% - 0s\n",
" 0 0 6165.50000 0 10 6390.00000 6165.50000 3.51% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
" 0 0 6165.50000 0 6 6390.00000 6165.50000 3.51% - 0s\n",
" 0 0 6198.50000 0 16 6390.00000 6198.50000 3.00% - 0s\n",
" 0 0 6210.50000 0 6 6390.00000 6210.50000 2.81% - 0s\n",
" 0 0 6212.60000 0 31 6390.00000 6212.60000 2.78% - 0s\n",
"H 0 0 6241.0000000 6212.60000 0.46% - 0s\n",
"* 0 0 0 6219.0000000 6219.00000 0.00% - 0s\n",
"\n",
"Cutting planes:\n",
" Gomory: 6\n",
" Clique: 1\n",
" MIR: 1\n",
" StrongCG: 1\n",
" Zero half: 4\n",
" RLT: 1\n",
" Lazy constraints: 3\n",
"\n",
"Explored 1 nodes (219 simplex iterations) in 0.04 seconds (0.03 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 4: 6219 6241 6390 29853 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 163, time in user-callback 0.00 sec\n"
]
}
],
"source": [
"# Increase log verbosity, so that we can see what is MIPLearn doing\n",
"logging.getLogger(\"miplearn\").setLevel(logging.INFO)\n",
"\n",
"# Solve a new test instance\n",
"solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);"
]
},
{
"cell_type": "markdown",
"id": "79cc3e61-ee2b-4f18-82cb-373d55d67de6",
"metadata": {},
"source": [
"Finally, we solve the same instance, but using a regular solver, without ML prediction. We can see that a much larger number of lazy constraints are added during the optimization process itself. Additionally, the solver requires a larger number of iterations to find the optimal solution. There is not a significant difference in running time because of the small size of these instances."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a015c51c-091a-43b6-b761-9f3577fc083e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n",
"Model fingerprint: 0x04d7bec1\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Presolve time: 0.00s\n",
"Presolved: 50 rows, 1225 columns, 2450 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 4.0600000e+02 9.700000e+01 0.000000e+00 0s\n",
" 66 5.5880000e+03 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 66 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 5.588000000e+03\n",
"\n",
"User-callback calls 110, time in user-callback 0.00 sec\n",
"Set parameter PreCrush to value 1\n",
"Set parameter LazyConstraints to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"PreCrush 1\n",
"Threads 1\n",
"LazyConstraints 1\n",
"\n",
"Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n",
"Model fingerprint: 0x77a94572\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Found heuristic solution: objective 29695.000000\n",
"Presolve time: 0.00s\n",
"Presolved: 50 rows, 1225 columns, 2450 nonzeros\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"\n",
"Root relaxation: objective 5.588000e+03, 68 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 5588.00000 0 12 29695.0000 5588.00000 81.2% - 0s\n",
"Enforcing 9 subtour elimination constraints\n",
"Enforcing 9 subtour elimination constraints\n",
"H 0 0 24919.000000 5588.00000 77.6% - 0s\n",
" 0 0 5847.50000 0 14 24919.0000 5847.50000 76.5% - 0s\n",
"Enforcing 5 subtour elimination constraints\n",
"Enforcing 5 subtour elimination constraints\n",
"Enforcing 3 subtour elimination constraints\n",
"Enforcing 2 subtour elimination constraints\n",
"H 0 0 7764.0000000 5847.50000 24.7% - 0s\n",
"H 0 0 6684.0000000 5847.50000 12.5% - 0s\n",
" 0 0 6013.75000 0 11 6684.00000 6013.75000 10.0% - 0s\n",
"H 0 0 6340.0000000 6013.75000 5.15% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6095.00000 0 10 6340.00000 6095.00000 3.86% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6128.00000 0 - 6340.00000 6128.00000 3.34% - 0s\n",
" 0 0 6139.00000 0 6 6340.00000 6139.00000 3.17% - 0s\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6187.25000 0 17 6340.00000 6187.25000 2.41% - 0s\n",
"Enforcing 2 subtour elimination constraints\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6201.00000 0 15 6340.00000 6201.00000 2.19% - 0s\n",
" 0 0 6201.00000 0 15 6340.00000 6201.00000 2.19% - 0s\n",
"H 0 0 6219.0000000 6201.00000 0.29% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
" 0 0 infeasible 0 6219.00000 6219.00000 0.00% - 0s\n",
"\n",
"Cutting planes:\n",
" Lazy constraints: 2\n",
"\n",
"Explored 1 nodes (217 simplex iterations) in 0.12 seconds (0.05 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 6: 6219 6340 6684 ... 29695\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 216, time in user-callback 0.06 sec\n"
]
}
],
"source": [
"solver = LearningSolver(components=[]) # empty set of ML components\n",
"solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);"
]
},
{
"cell_type": "markdown",
"id": "432c99b2-67fe-409b-8224-ccef91de96d1",
"metadata": {},
"source": [
"## Learning user cuts\n",
"\n",
"The example above focused on lazy constraints. To enforce user cuts instead, the procedure is very similar, with the following changes:\n",
"\n",
"- Instead of `lazy_separate` and `lazy_enforce`, use `cuts_separate` and `cuts_enforce`\n",
"- Instead of `m.inner.cbGetSolution`, use `m.inner.cbGetNodeRel`\n",
"\n",
"For a complete example, see `build_stab_model_gurobipy`, `build_stab_model_pyomo` and `build_stab_model_jump`, which solve the maximum-weight stable set problem using user cut callbacks."
]
},
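As a rough sketch of those two changes, here is a hypothetical adaptation of the simplified TSP callbacks from earlier in this tutorial (`x`, `find_violated_cuts` and `build_cut_expr` stand in for model-specific details; this is not the actual `build_stab_model_gurobipy` source):

```python
# Hypothetical user-cut callbacks, mirroring lazy_separate/lazy_enforce above.
# `m` is assumed to follow the GurobiModel interface used in this tutorial.

def cuts_separate(m):
    # User cuts are separated at fractional node relaxations, so we query
    # cbGetNodeRel instead of cbGetSolution.
    x_val = m.inner.cbGetNodeRel(x)
    return find_violated_cuts(x_val)  # model-specific separation routine

def cuts_enforce(m, violations):
    # Convert each compact violation back into an actual cut.
    for violation in violations:
        m.add_constr(build_cut_expr(violation))  # model-specific expression

# The callbacks are then attached in place of the lazy ones:
# GurobiModel(model, cuts_separate=cuts_separate, cuts_enforce=cuts_enforce)
```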
{
"cell_type": "code",
"execution_count": null,
"id": "e6cb694d-8c43-410f-9a13-01bf9e0763b7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -33,6 +33,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "02f0a927",
"metadata": {},
@@ -44,53 +45,11 @@
"- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n",
"- Julia version, compatible with the JuMP modeling language.\n",
"\n",
"In this tutorial, we will demonstrate how to use and install the Python/Gurobipy version of the package. The first step is to install Python 3.8+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cd8a69c1",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:18:02.381829278Z",
"start_time": "2023-06-06T20:18:02.381532300Z"
}
},
"outputs": [],
"source": [
"# !pip install MIPLearn==0.3.0"
]
},
{
"cell_type": "markdown",
"id": "e8274543",
"metadata": {},
"source": [
"In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also install a demo license for Gurobi, which should able to solve the small optimization problems in this tutorial. A license is required for solving larger-scale problems."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dcc8756c",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:18:15.537811992Z",
"start_time": "2023-06-06T20:18:13.449177860Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: gurobipy<10.1,>=10 in /home/axavier/Software/anaconda3/envs/miplearn/lib/python3.8/site-packages (10.0.1)\n"
]
}
],
"source": [
"!pip install 'gurobipy>=10,<10.1'"
"In this tutorial, we will demonstrate how to install and use the Python/Gurobipy version of the package. The first step is to install Python 3.9+ on your computer. See the [official Python website](https://www.python.org/downloads/) for more instructions. After Python is installed, we proceed to install MIPLearn using `pip`:\n",
"\n",
"```\n",
"$ pip install MIPLearn~=0.4\n",
"```"
]
},
{
@@ -162,7 +121,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"id": "22a67170-10b4-43d3-8708-014d91141e73",
"metadata": {
"ExecuteTime": {
@@ -198,7 +157,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "2f67032f-0d74-4317-b45c-19da0ec859e9",
"metadata": {
"ExecuteTime": {
@@ -214,6 +173,7 @@
"from miplearn.io import read_pkl_gz\n",
"from miplearn.solvers.gurobi import GurobiModel\n",
"\n",
"\n",
"def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:\n",
" if isinstance(data, str):\n",
" data = read_pkl_gz(data)\n",
@@ -223,9 +183,7 @@
" x = model._x = model.addVars(n, vtype=GRB.BINARY, name=\"x\")\n",
" y = model._y = model.addVars(n, name=\"y\")\n",
" model.setObjective(\n",
" quicksum(\n",
" data.cfix[i] * x[i] + data.cvar[i] * y[i] for i in range(n)\n",
" )\n",
" quicksum(data.cfix[i] * x[i] + data.cvar[i] * y[i] for i in range(n))\n",
" )\n",
" model.addConstrs(y[i] <= data.pmax[i] * x[i] for i in range(n))\n",
" model.addConstrs(y[i] >= data.pmin[i] * x[i] for i in range(n))\n",
@@ -243,7 +201,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"id": "2a896f47",
"metadata": {
"ExecuteTime": {
@@ -256,11 +214,16 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Restricted license - for non-production use only - expires 2024-10-28\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Set parameter Threads to value 1\n",
"Read parameters from file gurobi.env\n",
"Restricted license - for non-production use only - expires 2026-11-23\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 7 rows, 6 columns and 15 nonzeros\n",
"Model fingerprint: 0x58dfdd53\n",
@@ -270,28 +233,28 @@
" Objective range [2e+00, 7e+02]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [1e+02, 1e+02]\n",
"Presolve removed 2 rows and 1 columns\n",
"Presolve removed 6 rows and 3 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 5 rows, 5 columns, 13 nonzeros\n",
"Variable types: 0 continuous, 5 integer (3 binary)\n",
"Found heuristic solution: objective 1400.0000000\n",
"Presolved: 1 rows, 3 columns, 3 nonzeros\n",
"Variable types: 0 continuous, 3 integer (1 binary)\n",
"Found heuristic solution: objective 1990.0000000\n",
"\n",
"Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)\n",
"Root relaxation: objective 1.320000e+03, 0 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 1035.00000 0 1 1400.00000 1035.00000 26.1% - 0s\n",
" 0 0 1105.71429 0 1 1400.00000 1105.71429 21.0% - 0s\n",
"* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n",
"\n",
"Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 1 nodes (0 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 2: 1320 1400 \n",
"Solution count 2: 1320 1990 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 541, time in user-callback 0.00 sec\n",
"obj = 1320.0\n",
"x = [-0.0, 1.0, 1.0]\n",
"y = [0.0, 60.0, 40.0]\n"
@@ -351,7 +314,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "5eb09fab",
"metadata": {
"ExecuteTime": {
@@ -397,7 +360,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"id": "6156752c",
"metadata": {
"ExecuteTime": {
@@ -424,7 +387,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"id": "7623f002",
"metadata": {
"ExecuteTime": {
@@ -437,7 +400,7 @@
"from miplearn.collectors.basic import BasicCollector\n",
"\n",
"bc = BasicCollector()\n",
"bc.collect(train_data, build_uc_model, n_jobs=4)"
"bc.collect(train_data, build_uc_model)"
]
},
{
@@ -465,7 +428,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 7,
"id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2",
"metadata": {
"ExecuteTime": {
@@ -503,7 +466,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 8,
"id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf",
"metadata": {
"ExecuteTime": {
@@ -516,10 +479,13 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xa8b70287\n",
@@ -536,15 +502,20 @@
" 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n",
" 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Solved in 1 iterations and 0.02 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"User-callback calls 59, time in user-callback 0.00 sec\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x4ccd7ae3\n",
"Model fingerprint: 0x892e56b2\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
@@ -552,35 +523,53 @@
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"\n",
"User MIP start produced solution with objective 8.30129e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29184e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29146e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29146e+09 (0.01s)\n",
"Loaded user MIP start with objective 8.29146e+09\n",
"User MIP start produced solution with objective 8.29824e+09 (0.00s)\n",
"User MIP start produced solution with objective 8.29398e+09 (0.00s)\n",
"User MIP start produced solution with objective 8.29153e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29153e+09 (0.01s)\n",
"Loaded user MIP start with objective 8.29153e+09\n",
"\n",
"Presolve removed 500 rows and 0 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Presolved: 501 rows, 1000 columns, 2000 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n",
"Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n",
"Root relaxation: objective 8.290622e+09, 501 iterations, 0.00 seconds (0.02 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 - 0 8.2915e+09 8.2907e+09 0.01% - 0s\n",
"\n",
"Cutting planes:\n",
" Cover: 1\n",
" Flow cover: 2\n",
" Gomory: 1\n",
" RLT: 2\n",
"\n",
"Explored 1 nodes (512 simplex iterations) in 0.07 seconds (0.01 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 1 nodes (550 simplex iterations) in 0.04 seconds (0.04 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 3: 8.29146e+09 8.29184e+09 8.30129e+09 \n",
"Solution count 4: 8.29153e+09 8.29398e+09 8.29695e+09 8.29824e+09 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291459497797e+09, best bound 8.290645029670e+09, gap 0.0098%\n"
"Best objective 8.291528276179e+09, best bound 8.290709658754e+09, gap 0.0099%\n",
"\n",
"User-callback calls 799, time in user-callback 0.00 sec\n"
]
},
{
"data": {
"text/plain": [
"(<miplearn.solvers.gurobi.GurobiModel at 0x7f2bcd72cfd0>,\n",
" {'WS: Count': 1, 'WS: Number of variables set': 477.0})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
@@ -588,7 +577,7 @@
"\n",
"solver_ml = LearningSolver(components=[comp])\n",
"solver_ml.fit(train_data)\n",
"solver_ml.optimize(test_data[0], build_uc_model);"
"solver_ml.optimize(test_data[0], build_uc_model)"
]
},
{
@@ -601,7 +590,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 9,
"id": "2ff391ed-e855-4228-aa09-a7641d8c2893",
"metadata": {
"ExecuteTime": {
@@ -614,10 +603,13 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xa8b70287\n",
@@ -636,10 +628,15 @@
"\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"User-callback calls 59, time in user-callback 0.00 sec\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x4cbbf7c7\n",
@@ -649,41 +646,52 @@
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"Presolve removed 500 rows and 0 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Presolved: 501 rows, 1000 columns, 2000 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Found heuristic solution: objective 9.757128e+09\n",
"Found heuristic solution: objective 1.729688e+10\n",
"\n",
"Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n",
"Root relaxation: objective 8.290622e+09, 501 iterations, 0.00 seconds (0.02 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 8.2906e+09 0 1 9.7571e+09 8.2906e+09 15.0% - 0s\n",
"H 0 0 8.298273e+09 8.2906e+09 0.09% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2983e+09 8.2907e+09 0.09% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n",
" 0 0 8.2906e+09 0 1 1.7297e+10 8.2906e+09 52.1% - 0s\n",
"H 0 0 8.298243e+09 8.2906e+09 0.09% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2982e+09 8.2907e+09 0.09% - 0s\n",
"H 0 0 8.293980e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 1 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n",
"H 0 0 8.291465e+09 8.2908e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2940e+09 8.2907e+09 0.04% - 0s\n",
"H 0 0 8.291961e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2920e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2920e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2920e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 2 8.2920e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2908e+09 0 3 8.2920e+09 8.2908e+09 0.01% - 0s\n",
" 0 0 8.2908e+09 0 5 8.2920e+09 8.2908e+09 0.01% - 0s\n",
" 0 0 8.2908e+09 0 5 8.2920e+09 8.2908e+09 0.01% - 0s\n",
" 0 2 8.2908e+09 0 5 8.2920e+09 8.2908e+09 0.01% - 0s\n",
"H 9 9 8.291298e+09 8.2908e+09 0.01% 1.4 0s\n",
"\n",
"Cutting planes:\n",
" Gomory: 2\n",
" MIR: 1\n",
" MIR: 2\n",
"\n",
"Explored 1 nodes (1031 simplex iterations) in 0.07 seconds (0.03 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 10 nodes (759 simplex iterations) in 0.09 seconds (0.11 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09 \n",
"Solution count 6: 8.2913e+09 8.29196e+09 8.29398e+09 ... 1.72969e+10\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291465302389e+09, best bound 8.290781665333e+09, gap 0.0082%\n"
"Best objective 8.291298126440e+09, best bound 8.290812450252e+09, gap 0.0059%\n",
"\n",
"User-callback calls 910, time in user-callback 0.00 sec\n"
]
}
],
@@ -710,12 +718,12 @@
"source": [
"## Accessing the solution\n",
"\n",
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. The optimal solutions were saved to HDF5 files in the train/test folders, and could be retrieved by reading theses files, but that is not very convenient. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver."
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver."
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 10,
"id": "67a6cd18",
"metadata": {
"ExecuteTime": {
@@ -728,10 +736,13 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x19042f12\n",
@@ -748,15 +759,20 @@
" 0 6.5917580e+09 5.627453e+04 0.000000e+00 0s\n",
" 1 8.2535968e+09 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Solved in 1 iterations and 0.00 seconds (0.00 work units)\n",
"Optimal objective 8.253596777e+09\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"User-callback calls 59, time in user-callback 0.00 sec\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x8ee64638\n",
"Model fingerprint: 0x6926c32f\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
@@ -764,46 +780,44 @@
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"\n",
"User MIP start produced solution with objective 8.25814e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25512e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.04s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.04s)\n",
"Loaded user MIP start with objective 8.25459e+09\n",
"User MIP start produced solution with objective 8.25989e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25699e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.25678e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.25668e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.2554e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.25448e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.25448e+09 (0.05s)\n",
"Loaded user MIP start with objective 8.25448e+09\n",
"\n",
"Presolve time: 0.01s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Presolve removed 500 rows and 0 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 501 rows, 1000 columns, 2000 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n",
"Root relaxation: objective 8.253597e+09, 512 iterations, 0.00 seconds (0.00 work units)\n",
"Root relaxation: objective 8.253597e+09, 501 iterations, 0.00 seconds (0.02 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 8.2536e+09 0 1 8.2546e+09 8.2536e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 3 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 1 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 4 8.2546e+09 8.2538e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 5 8.2546e+09 8.2538e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 6 8.2546e+09 8.2538e+09 0.01% - 0s\n",
" 0 0 8.2536e+09 0 1 8.2545e+09 8.2536e+09 0.01% - 0s\n",
"H 0 0 8.254435e+09 8.2536e+09 0.01% - 0s\n",
" 0 0 - 0 8.2544e+09 8.2537e+09 0.01% - 0s\n",
"\n",
"Cutting planes:\n",
" Cover: 1\n",
" MIR: 2\n",
" StrongCG: 1\n",
" Flow cover: 1\n",
" RLT: 2\n",
"\n",
"Explored 1 nodes (575 simplex iterations) in 0.12 seconds (0.01 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 1 nodes (503 simplex iterations) in 0.07 seconds (0.03 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 3: 8.25459e+09 8.25512e+09 8.25814e+09 \n",
"Solution count 7: 8.25443e+09 8.25448e+09 8.2554e+09 ... 8.25989e+09\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%\n",
"obj = 8254590409.969726\n",
"Best objective 8.254434593504e+09, best bound 8.253676932849e+09, gap 0.0092%\n",
"\n",
"User-callback calls 787, time in user-callback 0.00 sec\n",
"obj = 8254434593.503945\n",
"x = [1.0, 1.0, 0.0]\n",
"y = [935662.0949263407, 1604270.0218116897, 0.0]\n"
"y = [935662.09492646, 1604270.0218116897, 0.0]\n"
]
}
],
@@ -841,7 +855,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.11.7"
}
},
"nbformat": 4,

View File

@@ -41,7 +41,7 @@
"In this tutorial, we will demonstrate how to install and use the Julia/JuMP version of the package. The first step is to install Julia on your machine. See the [official Julia website](https://julialang.org/downloads/) for more instructions. After Julia is installed, launch the Julia REPL, type `]` to enter package mode, then install MIPLearn:\n",
"\n",
"```\n",
"pkg> add MIPLearn@0.3\n",
"pkg> add MIPLearn@0.4\n",
"```"
]
},
@@ -592,7 +592,7 @@
"source": [
"## Accessing the solution\n",
"\n",
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. The optimal solutions were saved to HDF5 files in the train/test folders, and could be retrieved by reading theses files, but that is not very convenient. In the following example, we show how to build and solve a JuMP model entirely in-memory, using our trained solver."
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a JuMP model entirely in-memory, using our trained solver."
]
},
{

View File

@@ -33,6 +33,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "02f0a927",
"metadata": {},
@@ -44,53 +45,11 @@
"- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n",
"- Julia version, compatible with the JuMP modeling language.\n",
"\n",
"In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Python 3.8+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cd8a69c1",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T19:57:33.202580815Z",
"start_time": "2023-06-06T19:57:33.198341886Z"
}
},
"outputs": [],
"source": [
"# !pip install MIPLearn==0.3.0"
]
},
{
"cell_type": "markdown",
"id": "e8274543",
"metadata": {},
"source": [
"In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also install a demo license for Gurobi, which should able to solve the small optimization problems in this tutorial. A license is required for solving larger-scale problems."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dcc8756c",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T19:57:35.756831801Z",
"start_time": "2023-06-06T19:57:33.201767088Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: gurobipy<10.1,>=10 in /home/axavier/Software/anaconda3/envs/miplearn/lib/python3.8/site-packages (10.0.1)\n"
]
}
],
"source": [
"!pip install 'gurobipy>=10,<10.1'"
"In this tutorial, we will demonstrate how to install and use the Python/Pyomo version of the package. The first step is to install Python 3.9+ on your computer. See the [official Python website](https://www.python.org/downloads/) for more instructions. After Python is installed, we proceed to install MIPLearn using `pip`:\n",
"\n",
"```\n",
"$ pip install MIPLearn~=0.4\n",
"```"
]
},
{
@@ -162,7 +121,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"id": "22a67170-10b4-43d3-8708-014d91141e73",
"metadata": {
"ExecuteTime": {
@@ -198,7 +157,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "2f67032f-0d74-4317-b45c-19da0ec859e9",
"metadata": {
"ExecuteTime": {
@@ -248,7 +207,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"id": "2a896f47",
"metadata": {
"ExecuteTime": {
@@ -261,12 +220,19 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Restricted license - for non-production use only - expires 2024-10-28\n",
"Set parameter Threads to value 1\n",
"Read parameters from file gurobi.env\n",
"Restricted license - for non-production use only - expires 2026-11-23\n",
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 7 rows, 6 columns and 15 nonzeros\n",
"Model fingerprint: 0x15c7a953\n",
@@ -276,25 +242,23 @@
" Objective range [2e+00, 7e+02]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [1e+02, 1e+02]\n",
"Presolve removed 2 rows and 1 columns\n",
"Presolve removed 6 rows and 3 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 5 rows, 5 columns, 13 nonzeros\n",
"Variable types: 0 continuous, 5 integer (3 binary)\n",
"Found heuristic solution: objective 1400.0000000\n",
"Presolved: 1 rows, 3 columns, 3 nonzeros\n",
"Variable types: 0 continuous, 3 integer (1 binary)\n",
"Found heuristic solution: objective 1990.0000000\n",
"\n",
"Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)\n",
"Root relaxation: objective 1.320000e+03, 0 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 1035.00000 0 1 1400.00000 1035.00000 26.1% - 0s\n",
" 0 0 1105.71429 0 1 1400.00000 1105.71429 21.0% - 0s\n",
"* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n",
"\n",
"Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 1 nodes (0 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 2: 1320 1400 \n",
"Solution count 2: 1320 1990 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n",
@@ -359,7 +323,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "5eb09fab",
"metadata": {
"ExecuteTime": {
@@ -405,7 +369,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"id": "6156752c",
"metadata": {
"ExecuteTime": {
@@ -432,7 +396,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"id": "7623f002",
"metadata": {
"ExecuteTime": {
@@ -473,7 +437,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 7,
"id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2",
"metadata": {
"ExecuteTime": {
@@ -511,7 +475,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 8,
"id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf",
"metadata": {
"ExecuteTime": {
@@ -524,11 +488,16 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x5e67c6ee\n",
@@ -547,14 +516,19 @@
"\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n",
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xa4a7961e\n",
"Model fingerprint: 0xff6a55c5\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
@@ -562,37 +536,49 @@
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"\n",
"User MIP start produced solution with objective 8.30129e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29184e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29146e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29146e+09 (0.02s)\n",
"Loaded user MIP start with objective 8.29146e+09\n",
"User MIP start produced solution with objective 8.29153e+09 (0.00s)\n",
"User MIP start produced solution with objective 8.29153e+09 (0.00s)\n",
"Loaded user MIP start with objective 8.29153e+09\n",
"\n",
"Presolve time: 0.01s\n",
"Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n",
"Root relaxation: objective 8.290622e+09, 512 iterations, 0.01 seconds (0.00 work units)\n",
"Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 - 0 8.2915e+09 8.2907e+09 0.01% - 0s\n",
"\n",
"Cutting planes:\n",
" Gomory: 1\n",
" Cover: 1\n",
" Flow cover: 2\n",
"\n",
"Explored 1 nodes (512 simplex iterations) in 0.09 seconds (0.01 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 1 nodes (564 simplex iterations) in 0.03 seconds (0.01 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 3: 8.29146e+09 8.29184e+09 8.30129e+09 \n",
"Solution count 1: 8.29153e+09 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291459497797e+09, best bound 8.290645029670e+09, gap 0.0098%\n",
"Best objective 8.291528276179e+09, best bound 8.290729173948e+09, gap 0.0096%\n",
"WARNING: Cannot get reduced costs for MIP.\n",
"WARNING: Cannot get duals for MIP.\n"
]
},
{
"data": {
"text/plain": [
"(<miplearn.solvers.pyomo.PyomoModel at 0x7fdb38952450>, {})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
@@ -600,7 +586,7 @@
"\n",
"solver_ml = LearningSolver(components=[comp])\n",
"solver_ml.fit(train_data)\n",
"solver_ml.optimize(test_data[0], build_uc_model);"
"solver_ml.optimize(test_data[0], build_uc_model)"
]
},
{
@@ -613,7 +599,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 9,
"id": "2ff391ed-e855-4228-aa09-a7641d8c2893",
"metadata": {
"ExecuteTime": {
@@ -626,11 +612,16 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x5e67c6ee\n",
@@ -640,7 +631,7 @@
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.01s\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
@@ -649,11 +640,16 @@
"\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n",
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x8a0f9587\n",
@@ -682,31 +678,44 @@
" 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 1 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n",
"H 0 0 8.291465e+09 8.2908e+09 0.01% - 0s\n",
" 0 0 8.2908e+09 0 3 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 3 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 2 8.2908e+09 0 3 8.2940e+09 8.2908e+09 0.04% - 0s\n",
"H 9 9 8.292471e+09 8.2908e+09 0.02% 1.3 0s\n",
"* 90 41 44 8.291525e+09 8.2908e+09 0.01% 1.5 0s\n",
"\n",
"Cutting planes:\n",
" Gomory: 2\n",
" MIR: 1\n",
" Gomory: 1\n",
" Cover: 1\n",
" MIR: 2\n",
"\n",
"Explored 1 nodes (1025 simplex iterations) in 0.08 seconds (0.03 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 91 nodes (1166 simplex iterations) in 0.06 seconds (0.05 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09 \n",
"Solution count 7: 8.29152e+09 8.29247e+09 8.29398e+09 ... 1.0319e+10\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291465302389e+09, best bound 8.290781665333e+09, gap 0.0082%\n",
"Best objective 8.291524908632e+09, best bound 8.290823611882e+09, gap 0.0085%\n",
"WARNING: Cannot get reduced costs for MIP.\n",
"WARNING: Cannot get duals for MIP.\n"
]
},
{
"data": {
"text/plain": [
"(<miplearn.solvers.pyomo.PyomoModel at 0x7fdb2f563f50>, {})"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"solver_baseline = LearningSolver(components=[])\n",
"solver_baseline.fit(train_data)\n",
"solver_baseline.optimize(test_data[0], build_uc_model);"
"solver_baseline.optimize(test_data[0], build_uc_model)"
]
},
{
@@ -726,12 +735,12 @@
"source": [
"## Accessing the solution\n",
"\n",
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. The optimal solutions were saved to HDF5 files in the train/test folders, and could be retrieved by reading theses files, but that is not very convenient. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver."
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver."
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 10,
"id": "67a6cd18",
"metadata": {
"ExecuteTime": {
@@ -744,11 +753,16 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x2dfe4e1c\n",
@@ -758,7 +772,7 @@
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.01s\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
@@ -767,14 +781,19 @@
"\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.253596777e+09\n",
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x20637200\n",
"Model fingerprint: 0xd941f1ed\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
@@ -784,11 +803,11 @@
"\n",
"User MIP start produced solution with objective 8.25814e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25512e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.04s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.04s)\n",
"Loaded user MIP start with objective 8.25459e+09\n",
"User MIP start produced solution with objective 8.25448e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25448e+09 (0.02s)\n",
"Loaded user MIP start with objective 8.25448e+09\n",
"\n",
"Presolve time: 0.01s\n",
"Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n",
@@ -797,33 +816,25 @@
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 8.2536e+09 0 1 8.2546e+09 8.2536e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 3 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 1 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 4 8.2546e+09 8.2538e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 5 8.2546e+09 8.2538e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 6 8.2546e+09 8.2538e+09 0.01% - 0s\n",
" 0 0 8.2536e+09 0 1 8.2545e+09 8.2536e+09 0.01% - 0s\n",
" 0 0 - 0 8.2545e+09 8.2537e+09 0.01% - 0s\n",
"\n",
"Cutting planes:\n",
" Cover: 1\n",
" MIR: 2\n",
" StrongCG: 1\n",
" Flow cover: 1\n",
" Flow cover: 2\n",
"\n",
"Explored 1 nodes (575 simplex iterations) in 0.11 seconds (0.01 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 1 nodes (514 simplex iterations) in 0.03 seconds (0.01 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 3: 8.25459e+09 8.25512e+09 8.25814e+09 \n",
"Solution count 3: 8.25448e+09 8.25512e+09 8.25814e+09 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%\n",
"Best objective 8.254479145594e+09, best bound 8.253676932849e+09, gap 0.0097%\n",
"WARNING: Cannot get reduced costs for MIP.\n",
"WARNING: Cannot get duals for MIP.\n",
"obj = 8254590409.96973\n",
"obj = 8254479145.594172\n",
" x = [1.0, 1.0, 0.0, 1.0, 1.0]\n",
" y = [935662.0949263407, 1604270.0218116897, 0.0, 1369560.835229226, 602828.5321028307]\n"
" y = [935662.0949262811, 1604270.0218116897, 0.0, 1369560.835229226, 602828.5321028307]\n"
]
}
],
@@ -861,7 +872,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
"version": "3.11.7"
}
},
"nbformat": 4,


@@ -0,0 +1 @@
Threads 1

BIN
miplearn/.io.py.swp Normal file

Binary file not shown.


@@ -54,7 +54,7 @@ class MinProbabilityClassifier(BaseEstimator):
y_pred = []
for sample_idx in range(n_samples):
yi = float("nan")
for (class_idx, class_val) in enumerate(self.classes_):
for class_idx, class_val in enumerate(self.classes_):
if y_proba[sample_idx, class_idx] >= self.thresholds[class_idx]:
yi = class_val
y_pred.append(yi)
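The threshold loop shown in the `MinProbabilityClassifier` hunk above can be illustrated with a standalone sketch (values are made up; `classes` and `thresholds` stand in for the estimator's `classes_` and `thresholds` attributes):

```python
import math

import numpy as np

# A prediction is emitted only when the predicted probability for some
# class meets that class's threshold; otherwise the sample stays NaN
# ("abstain"), mirroring the loop in MinProbabilityClassifier.
classes = [0.0, 1.0]
thresholds = [0.9, 0.9]
y_proba = np.array([
    [0.95, 0.05],  # confident in class 0.0
    [0.50, 0.50],  # no class meets its threshold -> NaN
    [0.05, 0.95],  # confident in class 1.0
])

y_pred = []
for row in y_proba:
    yi = float("nan")
    for class_idx, class_val in enumerate(classes):
        if row[class_idx] >= thresholds[class_idx]:
            yi = class_val
    y_pred.append(yi)

print(y_pred)  # [0.0, nan, 1.0]
```

Note that the loop does not `break`, so when several classes meet their thresholds the last one wins.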


@@ -4,9 +4,12 @@
import json
import os
import sys
from io import StringIO
from os.path import exists
from typing import Callable, List
from typing import Callable, List, Any
import traceback
from ..h5 import H5File
from ..io import _RedirectOutput, gzip, _to_h5_filename
@@ -14,63 +17,81 @@ from ..parallel import p_umap
class BasicCollector:
def __init__(
self,
skip_lp: bool = False,
write_mps: bool = True,
write_log: bool = True,
) -> None:
self.skip_lp = skip_lp
self.write_mps = write_mps
self.write_log = write_log
def collect(
self,
filenames: List[str],
build_model: Callable,
n_jobs: int = 1,
progress: bool = False,
verbose: bool = False,
) -> None:
def _collect(data_filename):
h5_filename = _to_h5_filename(data_filename)
mps_filename = h5_filename.replace(".h5", ".mps")
def _collect(data_filename: str) -> None:
try:
h5_filename = _to_h5_filename(data_filename)
mps_filename = h5_filename.replace(".h5", ".mps")
log_filename = h5_filename.replace(".h5", ".h5.log")
if exists(h5_filename):
# Try to read optimal solution
mip_var_values = None
try:
with H5File(h5_filename, "r") as h5:
mip_var_values = h5.get_array("mip_var_values")
except:
pass
if exists(h5_filename):
# Try to read optimal solution
mip_var_values = None
try:
with H5File(h5_filename, "r") as h5:
mip_var_values = h5.get_array("mip_var_values")
except:
pass
if mip_var_values is None:
print(f"Removing empty/corrupted h5 file: {h5_filename}")
os.remove(h5_filename)
else:
return
if mip_var_values is None:
print(f"Removing empty/corrupted h5 file: {h5_filename}")
os.remove(h5_filename)
else:
return
with H5File(h5_filename, "w") as h5:
streams = [StringIO()]
with _RedirectOutput(streams):
# Load and extract static features
model = build_model(data_filename)
model.extract_after_load(h5)
with H5File(h5_filename, "w") as h5:
h5.put_scalar("data_filename", data_filename)
streams: List[Any] = [StringIO()]
if verbose:
streams += [sys.stdout]
with _RedirectOutput(streams):
# Load and extract static features
model = build_model(data_filename)
model.extract_after_load(h5)
# Solve LP relaxation
relaxed = model.relax()
relaxed.optimize()
relaxed.extract_after_lp(h5)
if not self.skip_lp:
# Solve LP relaxation
relaxed = model.relax()
relaxed.optimize()
relaxed.extract_after_lp(h5)
# Solve MIP
model.optimize()
model.extract_after_mip(h5)
# Solve MIP
model.optimize()
model.extract_after_mip(h5)
# Add lazy constraints to model
if (
hasattr(model, "fix_violations")
and model.fix_violations is not None
):
model.fix_violations(model, model.violations_, "aot")
h5.put_scalar(
"mip_constr_violations", json.dumps(model.violations_)
)
if self.write_mps:
# Add lazy constraints to model
model._lazy_enforce_collected()
# Save MPS file
model.write(mps_filename)
gzip(mps_filename)
# Save MPS file
model.write(mps_filename)
gzip(mps_filename)
h5.put_scalar("mip_log", streams[0].getvalue())
log = streams[0].getvalue()
h5.put_scalar("mip_log", log)
if self.write_log:
with open(log_filename, "w") as log_file:
log_file.write(log)
except:
print(f"Error processing: data_filename")
traceback.print_exc()
if n_jobs > 1:
p_umap(


@@ -1,117 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from io import StringIO
from typing import Callable
import gurobipy as gp
import numpy as np
from gurobipy import GRB, LinExpr
from ..h5 import H5File
from ..io import _RedirectOutput
class LazyCollector:
def __init__(
self,
min_constrs: int = 100_000,
time_limit: float = 900,
) -> None:
self.min_constrs = min_constrs
self.time_limit = time_limit
def collect(
self, data_filename: str, build_model: Callable, tol: float = 1e-6
) -> None:
h5_filename = f"{data_filename}.h5"
with H5File(h5_filename, "r+") as h5:
streams = [StringIO()]
lazy = None
with _RedirectOutput(streams):
slacks = h5.get_array("mip_constr_slacks")
assert slacks is not None
# Check minimum problem size
if len(slacks) < self.min_constrs:
print("Problem is too small. Skipping.")
h5.put_array("mip_constr_lazy", np.zeros(len(slacks)))
return
# Load model
print("Loading model...")
model = build_model(data_filename)
model.params.LazyConstraints = True
model.params.timeLimit = self.time_limit
gp_constrs = np.array(model.getConstrs())
gp_vars = np.array(model.getVars())
# Load constraints
lhs = h5.get_sparse("static_constr_lhs")
rhs = h5.get_array("static_constr_rhs")
sense = h5.get_array("static_constr_sense")
assert lhs is not None
assert rhs is not None
assert sense is not None
lhs_csr = lhs.tocsr()
lhs_csc = lhs.tocsc()
constr_idx = np.array(range(len(rhs)))
lazy = np.zeros(len(rhs))
# Drop loose constraints
selected = (slacks > 0) & ((sense == b"<") | (sense == b">"))
loose_constrs = gp_constrs[selected]
print(
f"Removing {len(loose_constrs):,d} constraints (out of {len(rhs):,d})..."
)
model.remove(list(loose_constrs))
# Filter to constraints that were dropped
lhs_csr = lhs_csr[selected, :]
lhs_csc = lhs_csc[selected, :]
rhs = rhs[selected]
sense = sense[selected]
constr_idx = constr_idx[selected]
lazy[selected] = 1
# Load warm start
var_names = h5.get_array("static_var_names")
var_values = h5.get_array("mip_var_values")
assert var_values is not None
assert var_names is not None
for (var_idx, var_name) in enumerate(var_names):
var = model.getVarByName(var_name.decode())
var.start = var_values[var_idx]
print("Solving MIP with lazy constraints callback...")
def callback(model: gp.Model, where: int) -> None:
assert rhs is not None
assert lazy is not None
assert sense is not None
if where == GRB.Callback.MIPSOL:
x_val = np.array(model.cbGetSolution(model.getVars()))
slack = lhs_csc * x_val - rhs
slack[sense == b">"] *= -1
is_violated = slack > tol
for (j, rhs_j) in enumerate(rhs):
if is_violated[j]:
lazy[constr_idx[j]] = 0
expr = LinExpr(
lhs_csr[j, :].data, gp_vars[lhs_csr[j, :].indices]
)
if sense[j] == b"<":
model.cbLazy(expr <= rhs_j)
elif sense[j] == b">":
model.cbLazy(expr >= rhs_j)
else:
raise RuntimeError(f"Unknown sense: {sense[j]}")
model.optimize(callback)
print(f"Marking {lazy.sum():,.0f} constraints as lazy...")
h5.put_array("mip_constr_lazy", lazy)
h5.put_scalar("mip_constr_lazy_log", streams[0].getvalue())


@@ -0,0 +1,35 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import json
import logging
from typing import Dict, Any, List
from miplearn.components.cuts.mem import convert_lists_to_tuples
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
class ExpertCutsComponent:
def fit(
self,
_: List[str],
) -> None:
pass
def before_mip(
self,
test_h5: str,
model: AbstractModel,
stats: Dict[str, Any],
) -> None:
with H5File(test_h5, "r") as h5:
cuts_str = h5.get_scalar("mip_cuts")
assert cuts_str is not None
assert isinstance(cuts_str, str)
cuts = list(set(convert_lists_to_tuples(json.loads(cuts_str))))
model.set_cuts(cuts)
stats["Cuts: AOT"] = len(cuts)

View File

@@ -0,0 +1,113 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import json
import logging
from typing import List, Dict, Any, Hashable
import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
def convert_lists_to_tuples(obj: Any) -> Any:
if isinstance(obj, list):
return tuple(convert_lists_to_tuples(item) for item in obj)
elif isinstance(obj, dict):
return {key: convert_lists_to_tuples(value) for key, value in obj.items()}
else:
return obj
class _BaseMemorizingConstrComponent:
def __init__(self, clf: Any, extractor: FeaturesExtractor, field: str) -> None:
self.clf = clf
self.extractor = extractor
self.constrs_: List[Hashable] = []
self.n_features_: int = 0
self.n_targets_: int = 0
self.field = field
def fit(
self,
train_h5: List[str],
) -> None:
logger.info("Reading training data...")
n_samples = len(train_h5)
x, y, constrs, n_features = [], [], [], None
constr_to_idx: Dict[Hashable, int] = {}
for h5_filename in train_h5:
with H5File(h5_filename, "r") as h5:
# Store constraints
sample_constrs_str = h5.get_scalar(self.field)
assert sample_constrs_str is not None
assert isinstance(sample_constrs_str, str)
sample_constrs = convert_lists_to_tuples(json.loads(sample_constrs_str))
y_sample = []
for c in sample_constrs:
if c not in constr_to_idx:
constr_to_idx[c] = len(constr_to_idx)
constrs.append(c)
y_sample.append(constr_to_idx[c])
y.append(y_sample)
# Extract features
x_sample = self.extractor.get_instance_features(h5)
assert len(x_sample.shape) == 1
if n_features is None:
n_features = len(x_sample)
else:
assert len(x_sample) == n_features
x.append(x_sample)
logger.info("Constructing matrices...")
assert n_features is not None
self.n_features_ = n_features
self.constrs_ = constrs
self.n_targets_ = len(constr_to_idx)
x_np = np.vstack(x)
assert x_np.shape == (n_samples, n_features)
y_np = MultiLabelBinarizer().fit_transform(y)
assert y_np.shape == (n_samples, self.n_targets_)
logger.info(
f"Dataset has {n_samples:,d} samples, "
f"{n_features:,d} features and {self.n_targets_:,d} targets"
)
logger.info("Training classifier...")
self.clf.fit(x_np, y_np)
def predict(
self,
msg: str,
test_h5: str,
) -> List[Hashable]:
with H5File(test_h5, "r") as h5:
x_sample = self.extractor.get_instance_features(h5)
assert x_sample.shape == (self.n_features_,)
x_sample = x_sample.reshape(1, -1)
logger.info(msg)
y = self.clf.predict(x_sample)
assert y.shape == (1, self.n_targets_)
y = y.reshape(-1)
return [self.constrs_[i] for (i, yi) in enumerate(y) if yi > 0.5]
class MemorizingCutsComponent(_BaseMemorizingConstrComponent):
def __init__(self, clf: Any, extractor: FeaturesExtractor) -> None:
super().__init__(clf, extractor, "mip_cuts")
def before_mip(
self,
test_h5: str,
model: AbstractModel,
stats: Dict[str, Any],
) -> None:
assert self.constrs_ is not None
cuts = self.predict("Predicting cutting planes...", test_h5)
model.set_cuts(cuts)
stats["Cuts: AOT"] = len(cuts)

View File

@@ -1,43 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import json
from typing import Any, Dict, List
import gurobipy as gp
from ..h5 import H5File
class ExpertLazyComponent:
def __init__(self) -> None:
pass
def fit(self, train_h5: List[str]) -> None:
pass
def before_mip(self, test_h5: str, model: gp.Model, stats: Dict[str, Any]) -> None:
with H5File(test_h5, "r") as h5:
constr_names = h5.get_array("static_constr_names")
constr_lazy = h5.get_array("mip_constr_lazy")
constr_violations = h5.get_scalar("mip_constr_violations")
assert constr_names is not None
assert constr_violations is not None
# Static lazy constraints
n_static_lazy = 0
if constr_lazy is not None:
for (constr_idx, constr_name) in enumerate(constr_names):
if constr_lazy[constr_idx]:
constr = model.getConstrByName(constr_name.decode())
constr.lazy = 3
n_static_lazy += 1
stats.update({"Static lazy constraints": n_static_lazy})
# Dynamic lazy constraints
if hasattr(model, "_fix_violations"):
violations = json.loads(constr_violations)
model._fix_violations(model, violations, "aot")
stats.update({"Dynamic lazy constraints": len(violations)})


@@ -0,0 +1,36 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import json
import logging
from typing import Dict, Any, List
from miplearn.components.cuts.mem import convert_lists_to_tuples
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
class ExpertLazyComponent:
def fit(
self,
_: List[str],
) -> None:
pass
def before_mip(
self,
test_h5: str,
model: AbstractModel,
stats: Dict[str, Any],
) -> None:
with H5File(test_h5, "r") as h5:
violations_str = h5.get_scalar("mip_lazy")
assert violations_str is not None
assert isinstance(violations_str, str)
violations = list(set(convert_lists_to_tuples(json.loads(violations_str))))
logger.info(f"Enforcing {len(violations)} constraints ahead-of-time...")
model.lazy_enforce(violations)
stats["Lazy Constraints: AOT"] = len(violations)


@@ -0,0 +1,31 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
from typing import List, Dict, Any, Hashable
from miplearn.components.cuts.mem import (
_BaseMemorizingConstrComponent,
)
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
class MemorizingLazyComponent(_BaseMemorizingConstrComponent):
def __init__(self, clf: Any, extractor: FeaturesExtractor) -> None:
super().__init__(clf, extractor, "mip_lazy")
def before_mip(
self,
test_h5: str,
model: AbstractModel,
stats: Dict[str, Any],
) -> None:
assert self.constrs_ is not None
violations = self.predict("Predicting violated lazy constraints...", test_h5)
logger.info(f"Enforcing {len(violations)} constraints ahead-of-time...")
model.lazy_enforce(violations)
stats["Lazy Constraints: AOT"] = len(violations)


@@ -1,29 +1,53 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from typing import Tuple
from typing import Tuple, List
import numpy as np
from miplearn.h5 import H5File
def _extract_bin_var_names_values(
def _extract_var_names_values(
h5: H5File,
selected_var_types: List[bytes],
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
bin_var_names, bin_var_indices = _extract_bin_var_names(h5)
bin_var_names, bin_var_indices = _extract_var_names(h5, selected_var_types)
var_values = h5.get_array("mip_var_values")
assert var_values is not None
bin_var_values = var_values[bin_var_indices].astype(int)
return bin_var_names, bin_var_values, bin_var_indices
def _extract_bin_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]:
def _extract_var_names(
h5: H5File,
selected_var_types: List[bytes],
) -> Tuple[np.ndarray, np.ndarray]:
var_types = h5.get_array("static_var_types")
var_names = h5.get_array("static_var_names")
assert var_types is not None
assert var_names is not None
bin_var_indices = np.where(var_types == b"B")[0]
bin_var_indices = np.where(np.isin(var_types, selected_var_types))[0]
bin_var_names = var_names[bin_var_indices]
assert len(bin_var_names.shape) == 1
return bin_var_names, bin_var_indices
def _extract_bin_var_names_values(
h5: H5File,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
return _extract_var_names_values(h5, [b"B"])
def _extract_bin_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]:
return _extract_var_names(h5, [b"B"])
def _extract_int_var_names_values(
h5: H5File,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
return _extract_var_names_values(h5, [b"B", b"I"])
def _extract_int_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]:
return _extract_var_names(h5, [b"B", b"I"])
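The refactor above generalizes the old binary-only filter (`var_types == b"B"`) to an `np.isin` test over a list of type codes, so binary and general-integer variables can be selected in one pass. A small sketch of that selection (made-up variable data; `b"C"` continuous, `b"B"` binary, `b"I"` integer, matching the byte codes stored in the HDF5 files):

```python
import numpy as np

var_types = np.array([b"C", b"B", b"I", b"C", b"B"])
var_names = np.array([b"x0", b"x1", b"x2", b"x3", b"x4"])

# Indices of all binary and general-integer variables:
int_idx = np.where(np.isin(var_types, [b"B", b"I"]))[0]
print(int_idx)             # [1 2 4]
print(var_names[int_idx])  # [b'x1' b'x2' b'x4']
```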


@@ -71,7 +71,7 @@ class EnforceProximity(PrimalComponentAction):
constr_lhs = []
constr_vars = []
constr_rhs = 0.0
for (i, var_name) in enumerate(var_names):
for i, var_name in enumerate(var_names):
if np.isnan(var_values[i]):
continue
constr_lhs.append(1.0 if var_values[i] < 0.5 else -1.0)


@@ -5,7 +5,7 @@
import logging
from typing import Any, Dict, List
from . import _extract_bin_var_names_values
from . import _extract_int_var_names_values
from .actions import PrimalComponentAction
from ...solvers.abstract import AbstractModel
from ...h5 import H5File
@@ -28,5 +28,5 @@ class ExpertPrimalComponent:
self, test_h5: str, model: AbstractModel, stats: Dict[str, Any]
) -> None:
with H5File(test_h5, "r") as h5:
names, values, _ = _extract_bin_var_names_values(h5)
names, values, _ = _extract_int_var_names_values(h5)
self.action.perform(model, names, values.reshape(1, -1), stats)


@@ -91,7 +91,7 @@ class IndependentVarsPrimalComponent:
logger.info(f"Training {n_bin_vars} classifiers...")
self.clf_ = {}
for (var_idx, var_name) in enumerate(self.bin_var_names_):
for var_idx, var_name in enumerate(self.bin_var_names_):
self.clf_[var_name] = self.clone_fn(self.base_clf)
self.clf_[var_name].fit(
x_np[var_idx::n_bin_vars, :], y_np[var_idx::n_bin_vars]
@@ -117,7 +117,7 @@ class IndependentVarsPrimalComponent:
# Predict optimal solution
logger.info("Predicting warm starts...")
y_pred = []
for (var_idx, var_name) in enumerate(self.bin_var_names_):
for var_idx, var_name in enumerate(self.bin_var_names_):
x_var = x_sample[var_idx, :].reshape(1, -1)
y_var = self.clf_[var_name].predict(x_var)
assert y_var.shape == (1,)


@@ -25,7 +25,8 @@ class ExpertBranchPriorityComponent:
assert var_priority is not None
assert var_names is not None
for (var_idx, var_name) in enumerate(var_names):
for var_idx, var_name in enumerate(var_names):
if np.isfinite(var_priority[var_idx]):
var = model.getVarByName(var_name.decode())
var.branchPriority = int(log(1 + var_priority[var_idx]))
assert var is not None, f"unknown var: {var_name}"
var.BranchPriority = int(log(1 + var_priority[var_idx]))


@@ -22,7 +22,7 @@ class AlvLouWeh2017Extractor(FeaturesExtractor):
self.with_m3 = with_m3
def get_instance_features(self, h5: H5File) -> np.ndarray:
raise NotImplemented()
raise NotImplementedError()
def get_var_features(self, h5: H5File) -> np.ndarray:
"""
@@ -197,7 +197,7 @@ class AlvLouWeh2017Extractor(FeaturesExtractor):
return features
def get_constr_features(self, h5: H5File) -> np.ndarray:
raise NotImplemented()
raise NotImplementedError()
def _fix_infinity(m: Optional[np.ndarray]) -> None:


@@ -31,9 +31,9 @@ class H5FieldsExtractor(FeaturesExtractor):
data = h5.get_scalar(field)
assert data is not None
x.append(data)
x = np.hstack(x)
assert len(x.shape) == 1
return x
x_np = np.hstack(x)
assert len(x_np.shape) == 1
return x_np
def get_var_features(self, h5: H5File) -> np.ndarray:
var_types = h5.get_array("static_var_types")
@@ -51,13 +51,14 @@ class H5FieldsExtractor(FeaturesExtractor):
raise Exception("No constr fields provided")
return self._extract(h5, self.constr_fields, n_constr)
def _extract(self, h5, fields, n_expected):
def _extract(self, h5: H5File, fields: List[str], n_expected: int) -> np.ndarray:
x = []
for field in fields:
try:
data = h5.get_array(field)
except ValueError:
v = h5.get_scalar(field)
assert v is not None
data = np.repeat(v, n_expected)
assert data is not None
assert len(data.shape) == 1


@@ -68,7 +68,7 @@ class H5File:
return
self._assert_is_array(value)
if value.dtype.kind == "f":
value = value.astype("float32")
value = value.astype("float64")
if key in self.file:
del self.file[key]
return self.file.create_dataset(key, data=value, compression="gzip")
@@ -111,7 +111,7 @@ class H5File:
), f"bytes expected; found: {value.__class__}" # type: ignore
self.put_array(key, np.frombuffer(value, dtype="uint8"))
def close(self):
def close(self) -> None:
self.file.close()
def __enter__(self) -> "H5File":


@@ -86,7 +86,11 @@ def read_pkl_gz(filename: str) -> Any:
def _to_h5_filename(data_filename: str) -> str:
output = f"{data_filename}.h5"
output = output.replace(".pkl.gz.h5", ".h5")
output = output.replace(".pkl.h5", ".h5")
output = output.replace(".gz.h5", ".h5")
output = output.replace(".csv.h5", ".h5")
output = output.replace(".jld2.h5", ".h5")
output = output.replace(".json.h5", ".h5")
output = output.replace(".lp.h5", ".h5")
output = output.replace(".mps.h5", ".h5")
output = output.replace(".pkl.h5", ".h5")
return output
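The extension rewriting above can be sanity-checked with a standalone sketch (mirroring the replace chain without importing the package; `to_h5_filename` below is an illustrative copy, not the library function):

```python
def to_h5_filename(data_filename: str) -> str:
    # Mirrors the helper above: append ".h5", then collapse known
    # data-file extensions back down to a single ".h5" suffix.
    output = f"{data_filename}.h5"
    for ext in (".pkl.gz", ".pkl", ".gz", ".csv", ".jld2", ".json", ".lp", ".mps"):
        output = output.replace(f"{ext}.h5", ".h5")
    return output

print(to_h5_filename("instances/train-0001.pkl.gz"))  # instances/train-0001.h5
print(to_h5_filename("model.mps"))                    # model.h5
```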


@@ -1,3 +1,28 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from typing import Any, Optional
import gurobipy as gp
from pyomo import environ as pe
def _gurobipy_set_params(model: gp.Model, params: Optional[dict[str, Any]]) -> None:
assert isinstance(model, gp.Model)
if params is not None:
for param_name, param_value in params.items():
setattr(model.params, param_name, param_value)
def _pyomo_set_params(
model: pe.ConcreteModel,
params: Optional[dict[str, Any]],
solver: str,
) -> None:
assert (
solver == "gurobi_persistent"
), "setting parameters is only supported with gurobi_persistent"
if solver == "gurobi_persistent" and params is not None:
for param_name, param_value in params.items():
model.solver.set_gurobi_param(param_name, param_value)
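The gurobipy helper above simply forwards each key/value pair to `model.params`; a minimal stand-in (`FakeModel` below is hypothetical, used only to avoid a Gurobi dependency) shows the effect:

```python
class FakeParams:
    """Stand-in for the attribute bag exposed by gp.Model().params."""

class FakeModel:
    def __init__(self) -> None:
        self.params = FakeParams()

def set_params(model, params):
    # Same logic as _gurobipy_set_params above.
    if params is not None:
        for param_name, param_value in params.items():
            setattr(model.params, param_name, param_value)

m = FakeModel()
set_params(m, {"TimeLimit": 60, "Threads": 1})
print(m.params.TimeLimit, m.params.Threads)  # 60 1
```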


@@ -109,7 +109,7 @@ class BinPackGenerator:
return [_sample() for n in range(n_samples)]
def build_binpack_model(data: Union[str, BinPackData]) -> GurobiModel:
def build_binpack_model_gurobipy(data: Union[str, BinPackData]) -> GurobiModel:
"""Converts bin packing problem data into a concrete Gurobipy model."""
if isinstance(data, str):
data = read_pkl_gz(data)

miplearn/problems/maxcut.py (new file, 163 lines)

@@ -0,0 +1,163 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2025, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from dataclasses import dataclass
from typing import List, Union, Optional, Any
import gurobipy as gp
import networkx as nx
import numpy as np
import pyomo.environ as pe
from networkx import Graph
from scipy.stats.distributions import rv_frozen
from miplearn.io import read_pkl_gz
from miplearn.problems import _gurobipy_set_params, _pyomo_set_params
from miplearn.solvers.gurobi import GurobiModel
from miplearn.solvers.pyomo import PyomoModel
@dataclass
class MaxCutData:
graph: Graph
weights: np.ndarray
class MaxCutGenerator:
"""
Random instance generator for the Maximum Cut Problem.
The generator operates in two modes. When `fix_graph=True`, a single random
Erdős-Rényi graph $G_{n,p}$ is generated during initialization, with parameters
$n$ and $p$ drawn from their respective probability distributions, and each edge
is assigned a random weight drawn from the set {-1, 1} with equal probability.
To generate each instance variation, the generator randomly flips the sign of
each edge weight with probability `w_jitter`; the graph itself remains the same
across all variations.
When `fix_graph=False`, a new random graph is generated for each instance, with
random {-1, 1} edge weights.
"""
def __init__(
self,
n: rv_frozen,
p: rv_frozen,
w_jitter: float = 0.0,
fix_graph: bool = False,
):
"""
Initialize the problem generator.
Parameters
----------
n: rv_discrete
Probability distribution for the number of nodes.
p: rv_continuous
Probability distribution for the graph density.
w_jitter: float
Probability that the sign of each edge weight is flipped. Only applicable if fix_graph is True.
fix_graph: bool
Controls graph generation for instances. If false, a new random graph is
generated for each instance. If true, the same graph is reused across instances.
"""
assert isinstance(n, rv_frozen), "n should be a SciPy probability distribution"
assert isinstance(p, rv_frozen), "p should be a SciPy probability distribution"
self.n = n
self.p = p
self.w_jitter = w_jitter
self.fix_graph = fix_graph
self.graph = None
self.weights = None
if fix_graph:
self.graph = self._generate_graph()
self.weights = self._generate_weights(self.graph)
def generate(self, n_samples: int) -> List[MaxCutData]:
def _sample() -> MaxCutData:
if self.graph is not None:
graph = self.graph
weights = self.weights
jitter = self._generate_jitter(graph)
weights = weights * jitter
else:
graph = self._generate_graph()
weights = self._generate_weights(graph)
return MaxCutData(graph, weights)
return [_sample() for _ in range(n_samples)]
def _generate_graph(self) -> Graph:
return nx.generators.random_graphs.binomial_graph(self.n.rvs(), self.p.rvs())
@staticmethod
def _generate_weights(graph: Graph) -> np.ndarray:
m = graph.number_of_edges()
return np.random.randint(2, size=(m,)) * 2 - 1
def _generate_jitter(self, graph: Graph) -> np.ndarray:
m = graph.number_of_edges()
return (np.random.rand(m) >= self.w_jitter).astype(int) * 2 - 1
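A quick numerical check of the jitter logic above (standalone NumPy with a fixed seed, rather than the class itself): each weight should flip sign with probability roughly `w_jitter`:

```python
import numpy as np

rng = np.random.default_rng(0)
m, w_jitter = 10_000, 0.1
weights = rng.integers(2, size=m) * 2 - 1                 # random {-1, +1}
jitter = (rng.random(m) >= w_jitter).astype(int) * 2 - 1  # -1 w.p. w_jitter
flipped = weights * jitter
frac = np.mean(flipped != weights)
print(frac)  # close to 0.1
```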
def build_maxcut_model_gurobipy(
data: Union[str, MaxCutData],
params: Optional[dict[str, Any]] = None,
) -> GurobiModel:
# Initialize model
model = gp.Model()
_gurobipy_set_params(model, params)
# Read data
data = _maxcut_read(data)
nodes = list(data.graph.nodes())
edges = list(data.graph.edges())
# Add decision variables
x = model.addVars(nodes, vtype=gp.GRB.BINARY, name="x")
# Add the objective function
model.setObjective(
gp.quicksum(
-data.weights[i] * x[e[0]] * (1 - x[e[1]]) for (i, e) in enumerate(edges)
)
)
model.update()
return GurobiModel(model)
def build_maxcut_model_pyomo(
data: Union[str, MaxCutData],
solver: str = "gurobi_persistent",
params: Optional[dict[str, Any]] = None,
) -> PyomoModel:
# Initialize model
model = pe.ConcreteModel()
# Read data
data = _maxcut_read(data)
nodes = pe.Set(initialize=list(data.graph.nodes))
edges = list(data.graph.edges())
# Add decision variables
model.x = pe.Var(nodes, domain=pe.Binary, name="x")
# Add the objective function
model.obj = pe.Objective(
expr=pe.quicksum(
-data.weights[i] * model.x[e[0]]
+ data.weights[i] * model.x[e[0]] * model.x[e[1]]
for (i, e) in enumerate(edges)
),
sense=pe.minimize,
)
pm = PyomoModel(model, solver)
_pyomo_set_params(model, params, solver)
return pm
def _maxcut_read(data: Union[str, MaxCutData]) -> MaxCutData:
if isinstance(data, str):
data = read_pkl_gz(data)
assert isinstance(data, MaxCutData)
return data
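For intuition, a brute-force sketch of the max-cut objective on a tiny graph (plain Python, independent of the solver models above): the symmetric indicator $x_u + x_v - 2 x_u x_v$ equals 1 exactly when edge $(u,v)$ is cut:

```python
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]  # a triangle
weights = [1, 1, -1]

def cut_value(x):
    # x_u + x_v - 2*x_u*x_v is 1 iff the endpoints lie on opposite sides.
    return sum(w * (x[u] + x[v] - 2 * x[u] * x[v])
               for (u, v), w in zip(edges, weights))

# Enumerate all 2^3 partitions and keep the best one.
best = max(product([0, 1], repeat=3), key=cut_value)
print(best, cut_value(best))  # a partition cutting both unit-weight edges
```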


@@ -174,7 +174,9 @@ class MultiKnapsackGenerator:
return [_sample() for _ in range(n_samples)]
def build_multiknapsack_model(data: Union[str, MultiKnapsackData]) -> GurobiModel:
def build_multiknapsack_model_gurobipy(
data: Union[str, MultiKnapsackData]
) -> GurobiModel:
"""Converts multi-knapsack problem data into a concrete Gurobipy model."""
if isinstance(data, str):
data = read_pkl_gz(data)


@@ -141,7 +141,7 @@ class PMedianGenerator:
return [_sample() for _ in range(n_samples)]
def build_pmedian_model(data: Union[str, PMedianData]) -> GurobiModel:
def build_pmedian_model_gurobipy(data: Union[str, PMedianData]) -> GurobiModel:
"""Converts capacitated p-median data into a concrete Gurobipy model."""
if isinstance(data, str):
data = read_pkl_gz(data)


@@ -8,7 +8,7 @@ from typing import List, Union
import gurobipy as gp
import numpy as np
import pyomo.environ as pe
from gurobipy.gurobipy import GRB
from gurobipy import GRB
from scipy.stats import uniform, randint
from scipy.stats.distributions import rv_frozen
@@ -95,7 +95,7 @@ def build_setcover_model_gurobipy(data: Union[str, SetCoverData]) -> GurobiModel
def build_setcover_model_pyomo(
data: Union[str, SetCoverData],
solver="gurobi_persistent",
solver: str = "gurobi_persistent",
) -> PyomoModel:
data = _read_setcover_data(data)
(n_elements, n_sets) = data.incidence_matrix.shape


@@ -7,7 +7,7 @@ from typing import List, Union
import gurobipy as gp
import numpy as np
from gurobipy.gurobipy import GRB
from gurobipy import GRB
from scipy.stats import uniform, randint
from scipy.stats.distributions import rv_frozen
@@ -53,7 +53,7 @@ class SetPackGenerator:
]
def build_setpack_model(data: Union[str, SetPackData]) -> GurobiModel:
def build_setpack_model_gurobipy(data: Union[str, SetPackData]) -> GurobiModel:
if isinstance(data, str):
data = read_pkl_gz(data)
assert isinstance(data, SetPackData)


@@ -2,21 +2,25 @@
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
from dataclasses import dataclass
from typing import List, Union
from typing import List, Union, Any, Hashable, Optional
import gurobipy as gp
import networkx as nx
import numpy as np
import pyomo.environ as pe
from gurobipy import GRB, quicksum
from miplearn.io import read_pkl_gz
from miplearn.solvers.gurobi import GurobiModel
from miplearn.solvers.pyomo import PyomoModel
from networkx import Graph
from scipy.stats import uniform, randint
from scipy.stats.distributions import rv_frozen
from miplearn.io import read_pkl_gz
from miplearn.solvers.gurobi import GurobiModel
from miplearn.solvers.pyomo import PyomoModel
from . import _gurobipy_set_params, _pyomo_set_params
logger = logging.getLogger(__name__)
@dataclass
@@ -82,35 +86,96 @@ class MaxWeightStableSetGenerator:
return nx.generators.random_graphs.binomial_graph(self.n.rvs(), self.p.rvs())
def build_stab_model_gurobipy(data: MaxWeightStableSetData) -> GurobiModel:
data = _read_stab_data(data)
def build_stab_model_gurobipy(
data: Union[str, MaxWeightStableSetData],
params: Optional[dict[str, Any]] = None,
) -> GurobiModel:
model = gp.Model()
_gurobipy_set_params(model, params)
data = _stab_read(data)
nodes = list(data.graph.nodes)
# Variables and objective function
x = model.addVars(nodes, vtype=GRB.BINARY, name="x")
model.setObjective(quicksum(-data.weights[i] * x[i] for i in nodes))
for clique in nx.find_cliques(data.graph):
model.addConstr(quicksum(x[i] for i in clique) <= 1)
# Edge inequalities
for i1, i2 in data.graph.edges:
model.addConstr(x[i1] + x[i2] <= 1)
def cuts_separate(m: GurobiModel) -> List[Hashable]:
x_val_dict = m.inner.cbGetNodeRel(x)
x_val = [x_val_dict[i] for i in nodes]
return _stab_separate(data, x_val)
def cuts_enforce(m: GurobiModel, violations: List[Any]) -> None:
logger.info(f"Adding {len(violations)} clique cuts...")
for clique in violations:
m.add_constr(quicksum(x[i] for i in clique) <= 1)
model.update()
return GurobiModel(model)
return GurobiModel(
model,
cuts_separate=cuts_separate,
cuts_enforce=cuts_enforce,
)
def build_stab_model_pyomo(
data: MaxWeightStableSetData,
solver="gurobi_persistent",
solver: str = "gurobi_persistent",
params: Optional[dict[str, Any]] = None,
) -> PyomoModel:
data = _read_stab_data(data)
data = _stab_read(data)
model = pe.ConcreteModel()
nodes = pe.Set(initialize=list(data.graph.nodes))
# Variables and objective function
model.x = pe.Var(nodes, domain=pe.Boolean, name="x")
model.obj = pe.Objective(expr=sum([-data.weights[i] * model.x[i] for i in nodes]))
# Edge inequalities
model.edge_eqs = pe.ConstraintList()
for i1, i2 in data.graph.edges:
model.edge_eqs.add(model.x[i1] + model.x[i2] <= 1)
# Clique inequalities
model.clique_eqs = pe.ConstraintList()
for clique in nx.find_cliques(data.graph):
model.clique_eqs.add(expr=sum(model.x[i] for i in clique) <= 1)
return PyomoModel(model, solver)
def cuts_separate(m: PyomoModel) -> List[Hashable]:
m.solver.cbGetNodeRel([model.x[i] for i in nodes])
x_val = [model.x[i].value for i in nodes]
return _stab_separate(data, x_val)
def cuts_enforce(m: PyomoModel, violations: List[Any]) -> None:
logger.info(f"Adding {len(violations)} clique cuts...")
for clique in violations:
m.add_constr(model.clique_eqs.add(sum(model.x[i] for i in clique) <= 1))
pm = PyomoModel(
model,
solver,
cuts_separate=cuts_separate,
cuts_enforce=cuts_enforce,
)
_pyomo_set_params(pm, params, solver)
return pm
def _read_stab_data(data: Union[str, MaxWeightStableSetData]) -> MaxWeightStableSetData:
def _stab_read(data: Union[str, MaxWeightStableSetData]) -> MaxWeightStableSetData:
if isinstance(data, str):
data = read_pkl_gz(data)
assert isinstance(data, MaxWeightStableSetData)
return data
def _stab_separate(data: MaxWeightStableSetData, x_val: List[float]) -> List:
# Check that we selected at most one vertex for each
# clique in the graph (sum <= 1)
violations: List[Any] = []
for clique in nx.find_cliques(data.graph):
if sum(x_val[i] for i in clique) > 1.0001:
violations.append(sorted(clique))
return violations
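The separation check above reduces to a simple inequality test over the maximal cliques; a self-contained sketch (cliques hard-coded below instead of calling `nx.find_cliques`):

```python
cliques = [[0, 1, 2], [2, 3]]   # assumed maximal cliques of some graph
x_val = [0.5, 0.5, 0.5, 0.2]    # fractional LP point at the current node

# Same test as _stab_separate: flag cliques whose selected mass
# exceeds 1, with a small numerical tolerance.
violations = [sorted(c) for c in cliques
              if sum(x_val[i] for i in c) > 1.0001]
print(violations)  # only the triangle is violated
```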


@@ -2,20 +2,23 @@
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
from dataclasses import dataclass
from typing import List, Tuple, Optional, Any, Union
import gurobipy as gp
import networkx as nx
import numpy as np
import pyomo.environ as pe
from gurobipy import quicksum, GRB, tuplelist
from miplearn.io import read_pkl_gz
from miplearn.problems import _gurobipy_set_params, _pyomo_set_params
from miplearn.solvers.gurobi import GurobiModel
from scipy.spatial.distance import pdist, squareform
from scipy.stats import uniform, randint
from scipy.stats.distributions import rv_frozen
import logging
from miplearn.io import read_pkl_gz
from miplearn.solvers.gurobi import GurobiModel
from miplearn.solvers.pyomo import PyomoModel
logger = logging.getLogger(__name__)
@@ -112,15 +115,17 @@ class TravelingSalesmanGenerator:
return n, cities
def build_tsp_model(data: Union[str, TravelingSalesmanData]) -> GurobiModel:
if isinstance(data, str):
data = read_pkl_gz(data)
assert isinstance(data, TravelingSalesmanData)
def build_tsp_model_gurobipy(
data: Union[str, TravelingSalesmanData],
params: Optional[dict[str, Any]] = None,
) -> GurobiModel:
model = gp.Model()
_gurobipy_set_params(model, params)
data = _tsp_read(data)
edges = tuplelist(
(i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)
)
model = gp.Model()
# Decision variables
x = model.addVars(edges, vtype=GRB.BINARY, name="x")
@@ -142,36 +147,100 @@ def build_tsp_model(data: Union[str, TravelingSalesmanData]) -> GurobiModel:
name="eq_degree",
)
def find_violations(model: GurobiModel) -> List[Any]:
violations = []
x = model.inner.cbGetSolution(model.inner._x)
selected_edges = [e for e in model.inner._edges if x[e] > 0.5]
graph = nx.Graph()
graph.add_edges_from(selected_edges)
for component in list(nx.connected_components(graph)):
if len(component) < model.inner._n_cities:
cut_edges = [
e
for e in model.inner._edges
if (e[0] in component and e[1] not in component)
or (e[0] not in component and e[1] in component)
]
violations.append(cut_edges)
return violations
def lazy_separate(model: GurobiModel) -> List[Any]:
x_val = model.inner.cbGetSolution(model.inner._x)
return _tsp_separate(x_val, edges, data.n_cities)
def fix_violations(model: GurobiModel, violations: List[Any], where: str) -> None:
def lazy_enforce(model: GurobiModel, violations: List[Any]) -> None:
for violation in violations:
constr = quicksum(model.inner._x[e[0], e[1]] for e in violation) >= 2
if where == "cb":
model.inner.cbLazy(constr)
else:
model.inner.addConstr(constr)
model.add_constr(
quicksum(model.inner._x[e[0], e[1]] for e in violation) >= 2
)
logger.info(f"tsp: added {len(violations)} subtour elimination constraints")
model.update()
return GurobiModel(
model,
find_violations=find_violations,
fix_violations=fix_violations,
lazy_separate=lazy_separate,
lazy_enforce=lazy_enforce,
)
def build_tsp_model_pyomo(
data: Union[str, TravelingSalesmanData],
solver: str = "gurobi_persistent",
params: Optional[dict[str, Any]] = None,
) -> PyomoModel:
model = pe.ConcreteModel()
data = _tsp_read(data)
edges = tuplelist(
(i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)
)
# Decision variables
model.x = pe.Var(edges, domain=pe.Boolean, name="x")
model.obj = pe.Objective(
expr=sum(model.x[i, j] * data.distances[i, j] for (i, j) in edges)
)
# Eq: Must choose two edges adjacent to each node
model.degree_eqs = pe.ConstraintList()
for i in range(data.n_cities):
model.degree_eqs.add(
sum(model.x[min(i, j), max(i, j)] for j in range(data.n_cities) if i != j)
== 2
)
# Eq: Subtour elimination
model.subtour_eqs = pe.ConstraintList()
def lazy_separate(m: PyomoModel) -> List[Any]:
m.solver.cbGetSolution([model.x[e] for e in edges])
x_val = {e: model.x[e].value for e in edges}
return _tsp_separate(x_val, edges, data.n_cities)
def lazy_enforce(m: PyomoModel, violations: List[Any]) -> None:
logger.warning(f"Adding {len(violations)} subtour elimination constraints...")
for violation in violations:
m.add_constr(
model.subtour_eqs.add(sum(model.x[e[0], e[1]] for e in violation) >= 2)
)
pm = PyomoModel(
model,
solver,
lazy_separate=lazy_separate,
lazy_enforce=lazy_enforce,
)
_pyomo_set_params(pm, params, solver)
return pm
def _tsp_read(data: Union[str, TravelingSalesmanData]) -> TravelingSalesmanData:
if isinstance(data, str):
data = read_pkl_gz(data)
assert isinstance(data, TravelingSalesmanData)
return data
def _tsp_separate(
x_val: dict[Tuple[int, int], float],
edges: List[Tuple[int, int]],
n_cities: int,
) -> List:
violations = []
selected_edges = [e for e in edges if x_val[e] > 0.5]
graph = nx.Graph()
graph.add_edges_from(selected_edges)
for component in list(nx.connected_components(graph)):
if len(component) < n_cities:
cut_edges = [
[e[0], e[1]]
for e in edges
if (e[0] in component and e[1] not in component)
or (e[0] not in component and e[1] in component)
]
violations.append(cut_edges)
return violations
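A dependency-free sketch of the subtour separation above, with a BFS standing in for `nx.connected_components` but the same cut-set construction (`tsp_separate` here is an illustrative copy, not the library function):

```python
def tsp_separate(x_val, edges, n_cities):
    # Build the graph of edges selected in the incumbent solution,
    # then emit the cut set of every component smaller than n_cities.
    selected = [e for e in edges if x_val[e] > 0.5]
    adj = {i: [] for i in range(n_cities)}
    for u, v in selected:
        adj[u].append(v)
        adj[v].append(u)
    seen, violations = set(), []
    for start in range(n_cities):
        if start in seen or not adj[start]:
            continue
        component, stack = set(), [start]
        while stack:
            u = stack.pop()
            if u in component:
                continue
            component.add(u)
            seen.add(u)
            stack.extend(adj[u])
        if len(component) < n_cities:
            violations.append([[u, v] for (u, v) in edges
                               if (u in component) != (v in component)])
    return violations

edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
x_val = {e: 0.0 for e in edges}
x_val[(0, 1)] = x_val[(2, 3)] = 1.0  # two disjoint 2-node subtours
print(len(tsp_separate(x_val, edges, 4)))  # 2
```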


@@ -25,6 +25,7 @@ class UnitCommitmentData:
min_downtime: np.ndarray
cost_startup: np.ndarray
cost_prod: np.ndarray
cost_prod_quad: np.ndarray
cost_fixed: np.ndarray
@@ -37,6 +38,7 @@ class UnitCommitmentGenerator:
min_power: rv_frozen = uniform(loc=0.5, scale=0.25),
cost_startup: rv_frozen = uniform(loc=0, scale=10_000),
cost_prod: rv_frozen = uniform(loc=0, scale=50),
cost_prod_quad: rv_frozen = uniform(loc=0, scale=0),
cost_fixed: rv_frozen = uniform(loc=0, scale=1_000),
min_uptime: rv_frozen = randint(low=2, high=8),
min_downtime: rv_frozen = randint(low=2, high=8),
@@ -50,6 +52,7 @@ class UnitCommitmentGenerator:
self.min_power = min_power
self.cost_startup = cost_startup
self.cost_prod = cost_prod
self.cost_prod_quad = cost_prod_quad
self.cost_fixed = cost_fixed
self.min_uptime = min_uptime
self.min_downtime = min_downtime
@@ -72,6 +75,7 @@ class UnitCommitmentGenerator:
min_downtime = self.min_downtime.rvs(G)
cost_startup = self.cost_startup.rvs(G)
cost_prod = self.cost_prod.rvs(G)
cost_prod_quad = self.cost_prod_quad.rvs(G)
cost_fixed = self.cost_fixed.rvs(G)
capacity = max_power.sum()
@@ -91,6 +95,7 @@ class UnitCommitmentGenerator:
min_downtime = self.ref_data.min_downtime
cost_startup = self.ref_data.cost_startup * self.cost_jitter.rvs(G)
cost_prod = self.ref_data.cost_prod * self.cost_jitter.rvs(G)
cost_prod_quad = self.ref_data.cost_prod_quad * self.cost_jitter.rvs(G)
cost_fixed = self.ref_data.cost_fixed * self.cost_jitter.rvs(G)
data = UnitCommitmentData(
@@ -101,6 +106,7 @@ class UnitCommitmentGenerator:
min_downtime,
cost_startup.round(2),
cost_prod.round(2),
cost_prod_quad.round(4),
cost_fixed.round(2),
)
@@ -112,7 +118,7 @@ class UnitCommitmentGenerator:
return [_sample() for _ in range(n_samples)]
def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:
def build_uc_model_gurobipy(data: Union[str, UnitCommitmentData]) -> GurobiModel:
"""
Models the unit commitment problem according to equations (1)-(5) of:
@@ -143,6 +149,7 @@ def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:
is_on[g, t] * data.cost_fixed[g]
+ switch_on[g, t] * data.cost_startup[g]
+ prod[g, t] * data.cost_prod[g]
+ prod[g, t] * prod[g, t] * data.cost_prod_quad[g]
for g in range(G)
for t in range(T)
)
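The new `cost_prod_quad` term makes the per-generator cost convex quadratic in the production level; a numeric sketch of the cost for one generator that is on for a period (coefficients below are made up for illustration):

```python
# For a committed generator at production level p, the objective above charges:
#   cost(p) = cost_fixed + cost_prod * p + cost_prod_quad * p**2
cost_fixed, cost_prod, cost_prod_quad = 500.0, 25.0, 0.01

def production_cost(p: float) -> float:
    return cost_fixed + cost_prod * p + cost_prod_quad * p ** 2

print(production_cost(100.0))  # 500 + 2500 + 100 = 3100.0
print(production_cost(200.0))  # 500 + 5000 + 400 = 5900.0
```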


@@ -40,7 +40,9 @@ class MinWeightVertexCoverGenerator:
]
def build_vertexcover_model(data: Union[str, MinWeightVertexCoverData]) -> GurobiModel:
def build_vertexcover_model_gurobipy(
data: Union[str, MinWeightVertexCoverData]
) -> GurobiModel:
if isinstance(data, str):
data = read_pkl_gz(data)
assert isinstance(data, MinWeightVertexCoverData)
@@ -48,7 +50,7 @@ def build_vertexcover_model(data: Union[str, MinWeightVertexCoverData]) -> Gurob
nodes = list(data.graph.nodes)
x = model.addVars(nodes, vtype=GRB.BINARY, name="x")
model.setObjective(quicksum(data.weights[i] * x[i] for i in nodes))
for (v1, v2) in data.graph.edges:
for v1, v2 in data.graph.edges:
model.addConstr(x[v1] + x[v2] >= 1)
model.update()
return GurobiModel(model)


@@ -3,7 +3,7 @@
# Released under the modified BSD license. See COPYING.md for more details.
from abc import ABC, abstractmethod
from typing import Optional, Dict
from typing import Optional, Dict, Callable, Hashable, List, Any
import numpy as np
@@ -16,6 +16,20 @@ class AbstractModel(ABC):
_supports_node_count = False
_supports_solution_pool = False
WHERE_DEFAULT = "default"
WHERE_CUTS = "cuts"
WHERE_LAZY = "lazy"
def __init__(self) -> None:
self._lazy_enforce: Optional[Callable] = None
self._lazy_separate: Optional[Callable] = None
self._lazy: Optional[List[Any]] = None
self._cuts_enforce: Optional[Callable] = None
self._cuts_separate: Optional[Callable] = None
self._cuts: Optional[List[Any]] = None
self._cuts_aot: Optional[List[Any]] = None
self._where = self.WHERE_DEFAULT
@abstractmethod
def add_constrs(
self,
@@ -68,3 +82,16 @@ class AbstractModel(ABC):
@abstractmethod
def write(self, filename: str) -> None:
pass
def set_cuts(self, cuts: List) -> None:
self._cuts_aot = cuts
def lazy_enforce(self, violations: List[Any]) -> None:
if self._lazy_enforce is not None:
self._lazy_enforce(self, violations)
def _lazy_enforce_collected(self) -> None:
"""Adds all lazy constraints identified in the callback as actual model constraints. Useful for generating
a final MPS file with the constraints that were required in this run."""
if self._lazy_enforce is not None:
self._lazy_enforce(self, self._lazy)
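The `_where` field drives how `add_constr` is routed during a solve; a minimal stand-in (`FakeModel` below is hypothetical) illustrates the state machine that `_gurobi_add_constr` implements against a real `gp.Model`:

```python
WHERE_DEFAULT, WHERE_CUTS, WHERE_LAZY = "default", "cuts", "lazy"

class FakeModel:
    """Hypothetical stand-in mimicking the dispatch in _gurobi_add_constr."""

    def __init__(self) -> None:
        self._where = WHERE_DEFAULT
        self.log = []

    def add_constr(self, constr) -> None:
        if self._where == WHERE_LAZY:
            self.log.append(("cbLazy", constr))      # inside a MIPSOL callback
        elif self._where == WHERE_CUTS:
            self.log.append(("cbCut", constr))       # inside a MIPNODE callback
        else:
            self.log.append(("addConstr", constr))   # outside any callback

m = FakeModel()
m.add_constr("c1")
m._where = WHERE_LAZY
m.add_constr("c2")
m._where = WHERE_CUTS
m.add_constr("c3")
print(m.log)
```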


@@ -1,17 +1,78 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from typing import Dict, Optional, Callable, Any, List
import logging
import json
from typing import Dict, Optional, Callable, Any, List, Sequence
import gurobipy as gp
from gurobipy import GRB, GurobiError
from gurobipy import GRB, GurobiError, Var
import numpy as np
from scipy.sparse import lil_matrix
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
class GurobiModel:
def _gurobi_callback(model: AbstractModel, gp_model: gp.Model, where: int) -> None:
assert isinstance(gp_model, gp.Model)
# Lazy constraints
if model._lazy_separate is not None:
assert model._lazy_enforce is not None
assert model._lazy is not None
if where == GRB.Callback.MIPSOL:
model._where = model.WHERE_LAZY
violations = model._lazy_separate(model)
if len(violations) > 0:
model._lazy.extend(violations)
model._lazy_enforce(model, violations)
# User cuts
if model._cuts_separate is not None:
assert model._cuts_enforce is not None
assert model._cuts is not None
if where == GRB.Callback.MIPNODE:
status = gp_model.cbGet(GRB.Callback.MIPNODE_STATUS)
if status == GRB.OPTIMAL:
model._where = model.WHERE_CUTS
if model._cuts_aot is not None:
violations = model._cuts_aot
model._cuts_aot = None
logger.info(f"Enforcing {len(violations)} cuts ahead-of-time...")
else:
violations = model._cuts_separate(model)
if len(violations) > 0:
model._cuts.extend(violations)
model._cuts_enforce(model, violations)
# Cleanup
model._where = model.WHERE_DEFAULT
def _gurobi_add_constr(gp_model: gp.Model, where: str, constr: Any) -> None:
if where == AbstractModel.WHERE_LAZY:
gp_model.cbLazy(constr)
elif where == AbstractModel.WHERE_CUTS:
gp_model.cbCut(constr)
else:
gp_model.addConstr(constr)
def _gurobi_set_required_params(model: AbstractModel, gp_model: gp.Model) -> None:
# Required parameters for lazy constraints
if model._lazy_enforce is not None:
gp_model.setParam("PreCrush", 1)
gp_model.setParam("LazyConstraints", 1)
# Required parameters for user cuts
if model._cuts_enforce is not None:
gp_model.setParam("PreCrush", 1)
class GurobiModel(AbstractModel):
_supports_basis_status = True
_supports_sensitivity_analysis = True
_supports_node_count = True
@@ -20,13 +81,17 @@ class GurobiModel:
def __init__(
self,
inner: gp.Model,
find_violations: Optional[Callable] = None,
fix_violations: Optional[Callable] = None,
lazy_separate: Optional[Callable] = None,
lazy_enforce: Optional[Callable] = None,
cuts_separate: Optional[Callable] = None,
cuts_enforce: Optional[Callable] = None,
) -> None:
self.fix_violations = fix_violations
self.find_violations = find_violations
super().__init__()
self._lazy_separate = lazy_separate
self._lazy_enforce = lazy_enforce
self._cuts_separate = cuts_separate
self._cuts_enforce = cuts_enforce
self.inner = inner
self.violations_: Optional[List[Any]] = None
def add_constrs(
self,
@@ -44,7 +109,11 @@ class GurobiModel:
assert constrs_sense.shape == (nconstrs,)
assert constrs_rhs.shape == (nconstrs,)
gp_vars = [self.inner.getVarByName(var_name.decode()) for var_name in var_names]
gp_vars: list[Var] = []
for var_name in var_names:
v = self.inner.getVarByName(var_name.decode())
assert v is not None, f"unknown var: {var_name}"
gp_vars.append(v)
self.inner.addMConstr(constrs_lhs, gp_vars, constrs_sense, constrs_rhs)
if stats is not None:
@@ -52,6 +121,9 @@ class GurobiModel:
stats["Added constraints"] = 0
stats["Added constraints"] += nconstrs
def add_constr(self, constr: Any) -> None:
_gurobi_add_constr(self.inner, self._where, constr)
def extract_after_load(self, h5: H5File) -> None:
"""
Given a model that has just been loaded, extracts static problem
@@ -100,6 +172,10 @@ class GurobiModel:
except AttributeError:
pass
self._extract_after_mip_solution_pool(h5)
if self._lazy is not None:
h5.put_scalar("mip_lazy", json.dumps(self._lazy))
if self._cuts is not None:
h5.put_scalar("mip_cuts", json.dumps(self._cuts))
def fix_variables(
self,
@@ -112,31 +188,28 @@ class GurobiModel:
assert var_names.shape == var_values.shape
n_fixed = 0
for (var_idx, var_name) in enumerate(var_names):
for var_idx, var_name in enumerate(var_names):
var_val = var_values[var_idx]
if np.isfinite(var_val):
var = self.inner.getVarByName(var_name.decode())
var.vtype = "C"
var.lb = var_val
var.ub = var_val
assert var is not None, f"unknown var: {var_name}"
var.VType = "C"
var.LB = var_val
var.UB = var_val
n_fixed += 1
if stats is not None:
stats["Fixed variables"] = n_fixed
def optimize(self) -> None:
self.violations_ = []
self._lazy = []
self._cuts = []
def callback(m: gp.Model, where: int) -> None:
assert self.find_violations is not None
assert self.violations_ is not None
assert self.fix_violations is not None
if where == GRB.Callback.MIPSOL:
violations = self.find_violations(self)
self.violations_.extend(violations)
self.fix_violations(self, violations, "cb")
def callback(_: gp.Model, where: int) -> None:
_gurobi_callback(self, self.inner, where)
if self.fix_violations is not None:
self.inner.Params.lazyConstraints = 1
_gurobi_set_required_params(self, self.inner)
if self.lazy_enforce is not None or self.cuts_enforce is not None:
self.inner.optimize(callback)
else:
self.inner.optimize()
@@ -145,7 +218,7 @@ class GurobiModel:
return GurobiModel(self.inner.relax())
def set_time_limit(self, time_limit_sec: float) -> None:
self.inner.params.timeLimit = time_limit_sec
self.inner.params.TimeLimit = time_limit_sec
def set_warm_starts(
self,
@@ -160,12 +233,13 @@ class GurobiModel:
self.inner.numStart = n_starts
for start_idx in range(n_starts):
self.inner.params.startNumber = start_idx
for (var_idx, var_name) in enumerate(var_names):
self.inner.params.StartNumber = start_idx
for var_idx, var_name in enumerate(var_names):
var_val = var_values[start_idx, var_idx]
if np.isfinite(var_val):
var = self.inner.getVarByName(var_name.decode())
var.start = var_val
assert var is not None, f"unknown var: {var_name}"
var.Start = var_val
if stats is not None:
stats["WS: Count"] = n_starts
@@ -175,14 +249,14 @@ class GurobiModel:
def _extract_after_load_vars(self, h5: H5File) -> None:
gp_vars = self.inner.getVars()
for (h5_field, gp_field) in {
for h5_field, gp_field in {
"static_var_names": "varName",
"static_var_types": "vtype",
}.items():
h5.put_array(
h5_field, np.array(self.inner.getAttr(gp_field, gp_vars), dtype="S")
)
for (h5_field, gp_field) in {
for h5_field, gp_field in {
"static_var_upper_bounds": "ub",
"static_var_lower_bounds": "lb",
"static_var_obj_coeffs": "obj",
@@ -190,6 +264,13 @@ class GurobiModel:
h5.put_array(
h5_field, np.array(self.inner.getAttr(gp_field, gp_vars), dtype=float)
)
obj = self.inner.getObjective()
if isinstance(obj, gp.QuadExpr):
nvars = len(self.inner.getVars())
obj_q = np.zeros((nvars, nvars))
for i in range(obj.size()):
obj_q[obj.getVar1(i).index, obj.getVar2(i).index] = obj.getCoeff(i)
h5.put_array("static_var_obj_coeffs_quad", obj_q)
def _extract_after_load_constrs(self, h5: H5File) -> None:
gp_constrs = self.inner.getConstrs()
@@ -199,7 +280,7 @@ class GurobiModel:
names = np.array(self.inner.getAttr("constrName", gp_constrs), dtype="S")
nrows, ncols = len(gp_constrs), len(gp_vars)
tmp = lil_matrix((nrows, ncols), dtype=float)
for (i, gp_constr) in enumerate(gp_constrs):
for i, gp_constr in enumerate(gp_constrs):
expr = self.inner.getRow(gp_constr)
for j in range(expr.size()):
tmp[i, expr.getVar(j).index] = expr.getCoeff(j)
@@ -234,7 +315,7 @@ class GurobiModel:
dtype="S",
),
)
for (h5_field, gp_field) in {
for h5_field, gp_field in {
"lp_var_reduced_costs": "rc",
"lp_var_sa_obj_up": "saobjUp",
"lp_var_sa_obj_down": "saobjLow",
@@ -268,7 +349,7 @@ class GurobiModel:
dtype="S",
),
)
for (h5_field, gp_field) in {
for h5_field, gp_field in {
"lp_constr_dual_values": "pi",
"lp_constr_sa_rhs_up": "saRhsUp",
"lp_constr_sa_rhs_down": "saRhsLow",


@@ -3,32 +3,43 @@
# Released under the modified BSD license. See COPYING.md for more details.
from os.path import exists
from tempfile import NamedTemporaryFile
from typing import List, Any, Union
from typing import List, Any, Union, Dict, Callable, Optional, Tuple
from miplearn.h5 import H5File
from miplearn.io import _to_h5_filename
from miplearn.solvers.abstract import AbstractModel
import shutil
class LearningSolver:
def __init__(self, components: List[Any], skip_lp=False):
def __init__(self, components: List[Any], skip_lp: bool = False) -> None:
self.components = components
self.skip_lp = skip_lp
def fit(self, data_filenames):
def fit(self, data_filenames: List[str]) -> None:
h5_filenames = [_to_h5_filename(f) for f in data_filenames]
for comp in self.components:
comp.fit(h5_filenames)
def optimize(self, model: Union[str, AbstractModel], build_model=None):
def optimize(
self,
model: Union[str, AbstractModel],
build_model: Optional[Callable] = None,
) -> Tuple[AbstractModel, Dict[str, Any]]:
h5_filename, mode = NamedTemporaryFile().name, "w"
if isinstance(model, str):
h5_filename = _to_h5_filename(model)
assert build_model is not None
old_h5_filename = _to_h5_filename(model)
model = build_model(model)
else:
h5_filename = NamedTemporaryFile().name
stats = {}
mode = "r+" if exists(h5_filename) else "w"
assert isinstance(model, AbstractModel)
# If the instance has an associated H5 file, we make a temporary copy
# of it and work on that copy, keeping the original file unmodified.
if exists(old_h5_filename):
shutil.copy(old_h5_filename, h5_filename)
mode = "r+"
stats: Dict[str, Any] = {}
with H5File(h5_filename, mode) as h5:
model.extract_after_load(h5)
if not self.skip_lp:
@@ -36,8 +47,10 @@ class LearningSolver:
relaxed.optimize()
relaxed.extract_after_lp(h5)
for comp in self.components:
comp.before_mip(h5_filename, model, stats)
comp_stats = comp.before_mip(h5_filename, model, stats)
if comp_stats is not None:
stats.update(comp_stats)
model.optimize()
model.extract_after_mip(h5)
return stats
return model, stats
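The commit above changes the component protocol: `before_mip` may either mutate `stats` in place or return a dict of new entries, which `LearningSolver` merges via `stats.update(...)` (added for compatibility with Julia). A minimal, self-contained sketch of that contract, with hypothetical component names:

```python
from typing import Any, Dict, List, Optional

# Hypothetical components illustrating the two supported styles: one mutates
# `stats` in place and returns None, the other returns a dict of new entries.
class InPlaceComponent:
    def before_mip(self, h5_filename: str, model: Any, stats: Dict[str, Any]) -> None:
        stats["Warm start: vars set"] = 10

class ReturningComponent:
    def before_mip(self, h5_filename: str, model: Any, stats: Dict[str, Any]) -> Optional[Dict[str, Any]]:
        return {"Cuts: AOT": 256}

def run_components(components: List[Any], h5_filename: str, model: Any) -> Dict[str, Any]:
    stats: Dict[str, Any] = {}
    for comp in components:
        comp_stats = comp.before_mip(h5_filename, model, stats)
        if comp_stats is not None:
            stats.update(comp_stats)  # merge returned stats, as LearningSolver does
    return stats

stats = run_components([InPlaceComponent(), ReturningComponent()], "x.h5", None)
```

Both styles end up in the same `stats` dict, so existing in-place components keep working unchanged.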


@@ -2,35 +2,65 @@
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from numbers import Number
from typing import Optional, Dict, List, Any
from typing import Optional, Dict, List, Any, Tuple, Callable
import numpy as np
import pyomo
import pyomo.environ as pe
from pyomo.core import Objective, Var, Suffix
from pyomo.core.base import _GeneralVarData
from pyomo.core.base import VarData
from pyomo.core.expr import ProductExpression
from pyomo.core.expr.numeric_expr import SumExpression, MonomialTermExpression
from scipy.sparse import coo_matrix
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
import pyomo.environ as pe
from miplearn.solvers.gurobi import (
_gurobi_callback,
_gurobi_add_constr,
_gurobi_set_required_params,
)
class PyomoModel(AbstractModel):
def __init__(self, model: pe.ConcreteModel, solver_name: str = "gurobi_persistent"):
def __init__(
self,
model: pe.ConcreteModel,
solver_name: str = "gurobi_persistent",
lazy_separate: Optional[Callable] = None,
lazy_enforce: Optional[Callable] = None,
cuts_separate: Optional[Callable] = None,
cuts_enforce: Optional[Callable] = None,
):
super().__init__()
self.inner = model
self.solver_name = solver_name
self._lazy_separate = lazy_separate
self._lazy_enforce = lazy_enforce
self._cuts_separate = cuts_separate
self._cuts_enforce = cuts_enforce
self.solver = pe.SolverFactory(solver_name)
self.is_persistent = hasattr(self.solver, "set_instance")
if self.is_persistent:
self.solver.set_instance(model)
self.results = None
self.results: Optional[Dict] = None
self._is_warm_start_available = False
if not hasattr(self.inner, "dual"):
self.inner.dual = Suffix(direction=Suffix.IMPORT)
self.inner.rc = Suffix(direction=Suffix.IMPORT)
self.inner.slack = Suffix(direction=Suffix.IMPORT)
def add_constr(self, constr: Any) -> None:
assert (
self.solver_name == "gurobi_persistent"
), "Callbacks are currently only supported on gurobi_persistent"
if self._where in [AbstractModel.WHERE_CUTS, AbstractModel.WHERE_LAZY]:
_gurobi_add_constr(self.solver, self._where, constr)
else:
# outside callbacks, add_constr shouldn't do anything, as the constraint
# has already been added to the ConstraintList object
pass
def add_constrs(
self,
var_names: np.ndarray,
@@ -56,7 +86,7 @@ class PyomoModel(AbstractModel):
raise Exception(f"Unknown sense: {sense}")
self.solver.add_constraint(eq)
def _var_names_to_vars(self, var_names):
def _var_names_to_vars(self, var_names: np.ndarray) -> List[Any]:
varname_to_var = {}
for var in self.inner.component_objects(Var):
for idx in var:
@@ -70,12 +100,14 @@ class PyomoModel(AbstractModel):
h5.put_scalar("static_sense", self._get_sense())
def extract_after_lp(self, h5: H5File) -> None:
assert self.results is not None
self._extract_after_lp_vars(h5)
self._extract_after_lp_constrs(h5)
h5.put_scalar("lp_obj_value", self.results["Problem"][0]["Lower bound"])
h5.put_scalar("lp_wallclock_time", self._get_runtime())
def _get_runtime(self):
def _get_runtime(self) -> float:
assert self.results is not None
solver_dict = self.results["Solver"][0]
for key in ["Wallclock time", "User time"]:
if isinstance(solver_dict[key], Number):
@@ -83,6 +115,7 @@ class PyomoModel(AbstractModel):
raise Exception("Time unavailable")
def extract_after_mip(self, h5: H5File) -> None:
assert self.results is not None
h5.put_scalar("mip_wallclock_time", self._get_runtime())
if self.results["Solver"][0]["Termination condition"] == "infeasible":
return
@@ -97,6 +130,10 @@ class PyomoModel(AbstractModel):
h5.put_scalar("mip_obj_value", obj_value)
h5.put_scalar("mip_obj_bound", obj_bound)
h5.put_scalar("mip_gap", self._gap(obj_value, obj_bound))
if self._lazy is not None:
h5.put_scalar("mip_lazy", repr(self._lazy))
if self._cuts is not None:
h5.put_scalar("mip_cuts", repr(self._cuts))
def fix_variables(
self,
@@ -105,12 +142,26 @@ class PyomoModel(AbstractModel):
stats: Optional[Dict] = None,
) -> None:
variables = self._var_names_to_vars(var_names)
for (var, val) in zip(variables, var_values):
for var, val in zip(variables, var_values):
if np.isfinite(val):
var.fix(val)
self.solver.update_var(var)
def optimize(self) -> None:
self._lazy = []
self._cuts = []
if self._lazy_enforce is not None or self._cuts_enforce is not None:
assert (
self.solver_name == "gurobi_persistent"
), "Callbacks are currently only supported on gurobi_persistent"
_gurobi_set_required_params(self, self.solver._solver_model)
def callback(_: Any, __: Any, where: int) -> None:
_gurobi_callback(self, self.solver._solver_model, where)
self.solver.set_callback(callback)
if self.is_persistent:
self.results = self.solver.solve(
tee=True,
@@ -145,31 +196,35 @@ class PyomoModel(AbstractModel):
assert var_names.shape[0] == n_vars
assert n_starts == 1, "Pyomo does not support multiple warm starts"
variables = self._var_names_to_vars(var_names)
for (var, val) in zip(variables, var_values[0, :]):
for var, val in zip(variables, var_values[0, :]):
if np.isfinite(val):
var.value = val
self._is_warm_start_available = True
def _extract_after_load_vars(self, h5):
def _extract_after_load_vars(self, h5: H5File) -> None:
names: List[str] = []
types: List[str] = []
upper_bounds: List[float] = []
lower_bounds: List[float] = []
obj_coeffs: List[float] = []
obj = None
obj_quad, obj_linear = None, None
obj_offset = 0.0
obj_count = 0
for obj in self.inner.component_objects(Objective):
obj, obj_offset = self._parse_pyomo_expr(obj.expr)
obj_quad, obj_linear, obj_offset = self._parse_obj_expr(obj.expr)
obj_count += 1
assert obj_count == 1, f"One objective function expected; found {obj_count}"
assert obj_quad is not None
assert obj_linear is not None
for (i, var) in enumerate(self.inner.component_objects(pyomo.core.Var)):
varname_to_idx: Dict[str, int] = {}
for i, var in enumerate(self.inner.component_objects(pyomo.core.Var)):
for idx in var:
v = var[idx]
# Variable name
varname_to_idx[v.name] = len(names)
if idx is None:
names.append(var.name)
else:
@@ -199,11 +254,22 @@ class PyomoModel(AbstractModel):
lower_bounds.append(float(lb))
# Objective coefficients
if v.name in obj:
obj_coeffs.append(obj[v.name])
if v.name in obj_linear:
obj_coeffs.append(obj_linear[v.name])
else:
obj_coeffs.append(0.0)
if len(obj_quad) > 0:
nvars = len(names)
matrix = np.zeros((nvars, nvars))
for ((left_varname, right_varname), coeff) in obj_quad.items():
assert left_varname in varname_to_idx
assert right_varname in varname_to_idx
left_idx = varname_to_idx[left_varname]
right_idx = varname_to_idx[right_varname]
matrix[left_idx, right_idx] = coeff
h5.put_array("static_var_obj_coeffs_quad", matrix)
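The quadratic coefficients extracted above are laid out as a dense `nvars × nvars` matrix indexed by variable position. A minimal sketch of that layout, using plain lists instead of numpy and illustrative variable names:

```python
# Hypothetical parse results: varname_to_idx maps variable names to column
# positions; obj_quad maps (left_name, right_name) pairs to coefficients.
varname_to_idx = {"x[0]": 0, "x[1]": 1, "x[2]": 2}
obj_quad = {("x[0]", "x[1]"): 2.0, ("x[2]", "x[2]"): -1.0}

nvars = len(varname_to_idx)
matrix = [[0.0] * nvars for _ in range(nvars)]
for (left, right), coeff in obj_quad.items():
    matrix[varname_to_idx[left]][varname_to_idx[right]] = coeff

# Row i, column j holds the coefficient of x_i * x_j; diagonal entries
# correspond to squared terms such as x[2] * x[2].
```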
h5.put_array("static_var_names", np.array(names, dtype="S"))
h5.put_array("static_var_types", np.array(types, dtype="S"))
h5.put_array("static_var_lower_bounds", np.array(lower_bounds))
@@ -211,7 +277,7 @@ class PyomoModel(AbstractModel):
h5.put_array("static_var_obj_coeffs", np.array(obj_coeffs))
h5.put_scalar("static_obj_offset", obj_offset)
def _extract_after_load_constrs(self, h5):
def _extract_after_load_constrs(self, h5: H5File) -> None:
names: List[str] = []
rhs: List[float] = []
senses: List[str] = []
@@ -219,7 +285,7 @@ class PyomoModel(AbstractModel):
lhs_col: List[int] = []
lhs_data: List[float] = []
varname_to_idx = {}
varname_to_idx: Dict[str, int] = {}
for var in self.inner.component_objects(Var):
for idx in var:
varname = var.name
@@ -252,13 +318,13 @@ class PyomoModel(AbstractModel):
lhs_row.append(row)
lhs_col.append(varname_to_idx[term._args_[1].name])
lhs_data.append(float(term._args_[0]))
elif isinstance(term, _GeneralVarData):
elif isinstance(term, VarData):
lhs_row.append(row)
lhs_col.append(varname_to_idx[term.name])
lhs_data.append(1.0)
else:
raise Exception(f"Unknown term type: {term.__class__.__name__}")
elif isinstance(expr, _GeneralVarData):
elif isinstance(expr, VarData):
lhs_row.append(row)
lhs_col.append(varname_to_idx[expr.name])
lhs_data.append(1.0)
@@ -266,26 +332,25 @@ class PyomoModel(AbstractModel):
raise Exception(f"Unknown expression type: {expr.__class__.__name__}")
curr_row = 0
for (i, constr) in enumerate(
self.inner.component_objects(pyomo.core.Constraint)
):
if len(constr) > 0:
for i, constr in enumerate(self.inner.component_objects(pyomo.core.Constraint)):
if len(constr) > 1:
for idx in constr:
names.append(constr[idx].name)
_parse_constraint(constr[idx], curr_row)
curr_row += 1
else:
elif len(constr) == 1:
names.append(constr.name)
_parse_constraint(constr, curr_row)
curr_row += 1
lhs = coo_matrix((lhs_data, (lhs_row, lhs_col))).tocoo()
h5.put_sparse("static_constr_lhs", lhs)
if len(lhs_data) > 0:
lhs = coo_matrix((lhs_data, (lhs_row, lhs_col))).tocoo()
h5.put_sparse("static_constr_lhs", lhs)
h5.put_array("static_constr_names", np.array(names, dtype="S"))
h5.put_array("static_constr_rhs", np.array(rhs))
h5.put_array("static_constr_sense", np.array(senses, dtype="S"))
def _extract_after_lp_vars(self, h5):
def _extract_after_lp_vars(self, h5: H5File) -> None:
rc = []
values = []
for var in self.inner.component_objects(Var):
@@ -296,7 +361,7 @@ class PyomoModel(AbstractModel):
h5.put_array("lp_var_reduced_costs", np.array(rc))
h5.put_array("lp_var_values", np.array(values))
def _extract_after_lp_constrs(self, h5):
def _extract_after_lp_constrs(self, h5: H5File) -> None:
dual = []
slacks = []
for constr in self.inner.component_objects(pyomo.core.Constraint):
@@ -307,7 +372,7 @@ class PyomoModel(AbstractModel):
h5.put_array("lp_constr_dual_values", np.array(dual))
h5.put_array("lp_constr_slacks", np.array(slacks))
def _extract_after_mip_vars(self, h5):
def _extract_after_mip_vars(self, h5: H5File) -> None:
values = []
for var in self.inner.component_objects(Var):
for idx in var:
@@ -315,34 +380,58 @@ class PyomoModel(AbstractModel):
values.append(v.value)
h5.put_array("mip_var_values", np.array(values))
def _extract_after_mip_constrs(self, h5):
def _extract_after_mip_constrs(self, h5: H5File) -> None:
slacks = []
for constr in self.inner.component_objects(pyomo.core.Constraint):
for idx in constr:
c = constr[idx]
slacks.append(abs(self.inner.slack[c]))
if c in self.inner.slack:
slacks.append(abs(self.inner.slack[c]))
h5.put_array("mip_constr_slacks", np.array(slacks))
def _parse_pyomo_expr(self, expr: Any):
lhs = {}
offset = 0.0
def _parse_term(self, t: Any) -> Tuple[str, float]:
if isinstance(t, MonomialTermExpression):
return t._args_[1].name, float(t._args_[0])
elif isinstance(t, VarData):
return t.name, 1.0
else:
raise Exception(f"Unknown term type: {t.__class__.__name__}")
def _parse_obj_expr(
self, expr: Any
) -> Tuple[Dict[Tuple[str, str], float], Dict[str, float], float]:
obj_coeff_linear = {}
obj_coeff_quadratic = {}
obj_offset = 0.0
if isinstance(expr, SumExpression):
for term in expr._args_:
if isinstance(term, MonomialTermExpression):
lhs[term._args_[1].name] = float(term._args_[0])
elif isinstance(term, _GeneralVarData):
lhs[term.name] = 1.0
elif isinstance(term, Number):
offset += term
if isinstance(term, (int, float)):
# Constant term
obj_offset += term
elif isinstance(term, (MonomialTermExpression, VarData)):
# Linear term
var_name, var_coeff = self._parse_term(term)
if var_name not in obj_coeff_linear:
obj_coeff_linear[var_name] = 0.0
obj_coeff_linear[var_name] += var_coeff
elif isinstance(term, ProductExpression):
# Quadratic terms
left_var_name, left_coeff = self._parse_term(term._args_[0])
right_var_name, right_coeff = self._parse_term(term._args_[1])
if (left_var_name, right_var_name) not in obj_coeff_quadratic:
obj_coeff_quadratic[(left_var_name, right_var_name)] = 0.0
obj_coeff_quadratic[(left_var_name, right_var_name)] += (
left_coeff * right_coeff
)
else:
raise Exception(f"Unknown term type: {term.__class__.__name__}")
elif isinstance(expr, _GeneralVarData):
lhs[expr.name] = 1.0
elif isinstance(expr, VarData):
obj_coeff_linear[expr.name] = 1.0
else:
raise Exception(f"Unknown expression type: {expr.__class__.__name__}")
return lhs, offset
return obj_coeff_quadratic, obj_coeff_linear, obj_offset
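Stripped of the Pyomo expression tree, `_parse_obj_expr` is an accumulation over terms: repeated linear terms sum, and a quadratic term's coefficient is the product of the coefficients of its two factors. A sketch with pre-parsed `(name, coeff)` tuples standing in for Pyomo nodes (names illustrative):

```python
# Represents the objective 3 + 2*x + x + 4*x*y after term parsing.
linear_terms = [("x", 2.0), ("x", 1.0)]       # 2*x and x
quadratic_terms = [(("x", 4.0), ("y", 1.0))]  # (4*x) * (1*y)
offset = 3.0

obj_linear: dict = {}
for name, coeff in linear_terms:
    # Repeated occurrences of the same variable accumulate.
    obj_linear[name] = obj_linear.get(name, 0.0) + coeff

obj_quad: dict = {}
for (lname, lcoeff), (rname, rcoeff) in quadratic_terms:
    key = (lname, rname)
    # The quadratic coefficient is the product of the factor coefficients.
    obj_quad[key] = obj_quad.get(key, 0.0) + lcoeff * rcoeff
```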
def _gap(self, zp, zd, tol=1e-6):
def _gap(self, zp: float, zd: float, tol: float = 1e-6) -> float:
# Reference: https://www.gurobi.com/documentation/9.5/refman/mipgap2.html
if abs(zp) < tol:
if abs(zd) < tol:
@@ -352,7 +441,7 @@ class PyomoModel(AbstractModel):
else:
return abs(zp - zd) / abs(zp)
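As the comment notes, `_gap` follows Gurobi's MIPGap definition, `|zp - zd| / |zp|`. The branch elided from the diff presumably returns 0 when both bounds are near zero and infinity when only the primal is, matching that convention; a self-contained sketch under that assumption:

```python
import math

def gap(zp: float, zd: float, tol: float = 1e-6) -> float:
    # zp: primal objective value; zd: dual bound.
    if abs(zp) < tol:
        # Assumed handling of the near-zero primal case (Gurobi convention):
        # gap is zero if both bounds vanish, infinite otherwise.
        return 0.0 if abs(zd) < tol else math.inf
    return abs(zp - zd) / abs(zp)

print(gap(100.0, 90.0))  # 0.1
```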
def _get_sense(self):
def _get_sense(self) -> str:
for obj in self.inner.component_objects(Objective):
sense = obj.sense
if sense == pyomo.core.kernel.objective.minimize:
@@ -361,6 +450,7 @@ class PyomoModel(AbstractModel):
return "max"
else:
raise Exception(f"Unknown sense: {sense}")
raise Exception("No objective function found")
def write(self, filename: str) -> None:
self.inner.write(filename, io_options={"symbolic_solver_labels": True})


@@ -6,7 +6,7 @@ from setuptools import setup, find_namespace_packages
setup(
name="miplearn",
version="0.3.0.dev1",
version="0.4.3",
author="Alinson S. Xavier",
author_email="axavier@anl.gov",
description="Extensible Framework for Learning-Enhanced Mixed-Integer Optimization",
@@ -14,12 +14,11 @@ setup(
packages=find_namespace_packages(),
python_requires=">=3.9",
install_requires=[
"Jinja2<3.1",
"gurobipy>=10,<11",
"gurobipy>=12,<13",
"h5py>=3,<4",
"networkx>=2,<3",
"numpy>=1,<2",
"pandas>=1,<2",
"pandas>=2,<3",
"pathos>=0.2,<0.3",
"pyomo>=6,<7",
"scikit-learn>=1,<2",
@@ -28,17 +27,17 @@ setup(
],
extras_require={
"dev": [
"Sphinx>=3,<4",
"Sphinx>=8,<9",
"black==22.6.0",
"mypy==0.971",
"myst-parser==0.14.0",
"mypy==1.8",
"myst-parser>=4,<5",
"nbsphinx>=0.9,<0.10",
"pyflakes==2.5.0",
"pytest>=7,<8",
"sphinx-book-theme==0.1.0",
"sphinx-multitoc-numbering>=0.1,<0.2",
"twine>=4,<5"
"sphinx-book-theme>=1,<2",
"sphinx-multitoc-numbering==0.1.3",
"twine>=6,<7",
"ipython>=9,<10",
]
},
)


@@ -0,0 +1,75 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2023, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from typing import Any, List, Dict
from unittest.mock import Mock
from miplearn.components.cuts.mem import MemorizingCutsComponent
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.problems.stab import build_stab_model_gurobipy, build_stab_model_pyomo
from miplearn.solvers.learning import LearningSolver
from sklearn.dummy import DummyClassifier
from sklearn.neighbors import KNeighborsClassifier
from typing import Callable
def test_mem_component_gp(
stab_gp_h5: List[str],
stab_pyo_h5: List[str],
default_extractor: FeaturesExtractor,
) -> None:
for h5 in [stab_pyo_h5, stab_gp_h5]:
clf = Mock(wraps=DummyClassifier())
comp = MemorizingCutsComponent(clf=clf, extractor=default_extractor)
comp.fit(h5)
# Should call fit method with correct arguments
clf.fit.assert_called()
x, y = clf.fit.call_args.args
assert x.shape == (3, 50)
assert y.shape == (3, 412)
y = y.tolist()
assert y[0][40:50] == [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
assert y[1][40:50] == [1, 1, 0, 1, 1, 1, 1, 1, 1, 1]
assert y[2][40:50] == [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
# Should store violations
assert comp.constrs_ is not None
assert comp.n_features_ == 50
assert comp.n_targets_ == 412
assert len(comp.constrs_) == 412
# Call before-mip
stats: Dict[str, Any] = {}
model = Mock()
comp.before_mip(h5[0], model, stats)
# Should call predict with correct args
clf.predict.assert_called()
(x_test,) = clf.predict.call_args.args
assert x_test.shape == (1, 50)
# Should call set_cuts
model.set_cuts.assert_called()
(cuts_aot_,) = model.set_cuts.call_args.args
assert cuts_aot_ is not None
assert len(cuts_aot_) == 256
def test_usage_stab(
stab_gp_h5: List[str],
stab_pyo_h5: List[str],
default_extractor: FeaturesExtractor,
) -> None:
for h5, build_model in [
(stab_pyo_h5, build_stab_model_pyomo),
(stab_gp_h5, build_stab_model_gurobipy),
]:
data_filenames = [f.replace(".h5", ".pkl.gz") for f in h5]
clf = KNeighborsClassifier(n_neighbors=1)
comp = MemorizingCutsComponent(clf=clf, extractor=default_extractor)
solver = LearningSolver(components=[comp])
solver.fit(data_filenames)
model, stats = solver.optimize(data_filenames[0], build_model) # type: ignore
assert stats["Cuts: AOT"] > 0


@@ -0,0 +1,69 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from typing import List, Dict, Any
from unittest.mock import Mock
from sklearn.dummy import DummyClassifier
from sklearn.neighbors import KNeighborsClassifier
from miplearn.components.lazy.mem import MemorizingLazyComponent
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.problems.tsp import build_tsp_model_gurobipy, build_tsp_model_pyomo
from miplearn.solvers.learning import LearningSolver
def test_mem_component(
tsp_gp_h5: List[str],
tsp_pyo_h5: List[str],
default_extractor: FeaturesExtractor,
) -> None:
for h5 in [tsp_gp_h5, tsp_pyo_h5]:
clf = Mock(wraps=DummyClassifier())
comp = MemorizingLazyComponent(clf=clf, extractor=default_extractor)
comp.fit(h5)
# Should call fit method with correct arguments
clf.fit.assert_called()
x, y = clf.fit.call_args.args
assert x.shape == (3, 190)
assert y.tolist() == [
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
[1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0],
[1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1],
]
# Should store violations
assert comp.constrs_ is not None
assert comp.n_features_ == 190
assert comp.n_targets_ == 20
assert len(comp.constrs_) == 20
# Call before-mip
stats: Dict[str, Any] = {}
model = Mock()
comp.before_mip(h5[0], model, stats)
# Should call predict with correct args
clf.predict.assert_called()
(x_test,) = clf.predict.call_args.args
assert x_test.shape == (1, 190)
def test_usage_tsp(
tsp_gp_h5: List[str],
tsp_pyo_h5: List[str],
default_extractor: FeaturesExtractor,
) -> None:
for h5, build_model in [
(tsp_pyo_h5, build_tsp_model_pyomo),
(tsp_gp_h5, build_tsp_model_gurobipy),
]:
data_filenames = [f.replace(".h5", ".pkl.gz") for f in h5]
clf = KNeighborsClassifier(n_neighbors=1)
comp = MemorizingLazyComponent(clf=clf, extractor=default_extractor)
solver = LearningSolver(components=[comp])
solver.fit(data_filenames)
model, stats = solver.optimize(data_filenames[0], build_model) # type: ignore
assert stats["Lazy Constraints: AOT"] > 0


@@ -20,7 +20,8 @@ logger = logging.getLogger(__name__)
def test_mem_component(
multiknapsack_h5: List[str], default_extractor: FeaturesExtractor
multiknapsack_h5: List[str],
default_extractor: FeaturesExtractor,
) -> None:
# Create mock classifier
clf = Mock(wraps=DummyClassifier())


@@ -1,20 +1,69 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import os
import shutil
import tempfile
from glob import glob
from os.path import dirname
from typing import List
from os.path import dirname, basename, isfile
from tempfile import NamedTemporaryFile
from typing import List, Any
import pytest
from miplearn.extractors.fields import H5FieldsExtractor
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.extractors.fields import H5FieldsExtractor
def _h5_fixture(pattern: str, request: Any) -> List[str]:
"""
Copy the matching .h5 fixture files, along with their companion
.pkl.gz files, into a temporary folder and return the paths of the
copied .h5 files. Also register a finalizer, so that the temporary
folder is removed after the tests.
"""
filenames = glob(f"{dirname(__file__)}/fixtures/{pattern}")
tmpdir = tempfile.mkdtemp()
def cleanup() -> None:
shutil.rmtree(tmpdir)
request.addfinalizer(cleanup)
for f in filenames:
fbase, _ = os.path.splitext(f)
for ext in [".h5", ".pkl.gz"]:
dest = os.path.join(tmpdir, f"{basename(fbase)}{ext}")
shutil.copy(f"{fbase}{ext}", dest)
assert isfile(dest)
return sorted(glob(f"{tmpdir}/*.h5"))
@pytest.fixture()
def multiknapsack_h5() -> List[str]:
return sorted(glob(f"{dirname(__file__)}/fixtures/multiknapsack*.h5"))
def multiknapsack_h5(request: Any) -> List[str]:
return _h5_fixture("multiknapsack*.h5", request)
@pytest.fixture()
def tsp_gp_h5(request: Any) -> List[str]:
return _h5_fixture("tsp-gp*.h5", request)
@pytest.fixture()
def tsp_pyo_h5(request: Any) -> List[str]:
return _h5_fixture("tsp-pyo*.h5", request)
@pytest.fixture()
def stab_gp_h5(request: Any) -> List[str]:
return _h5_fixture("stab-gp*.h5", request)
@pytest.fixture()
def stab_pyo_h5(request: Any) -> List[str]:
return _h5_fixture("stab-pyo*.h5", request)
@pytest.fixture()

tests/fixtures/gen_stab.py (vendored, new file, 44 lines)

@@ -0,0 +1,44 @@
from os.path import dirname
import numpy as np
from scipy.stats import uniform, randint
from miplearn.collectors.basic import BasicCollector
from miplearn.io import write_pkl_gz
from miplearn.problems.stab import (
MaxWeightStableSetGenerator,
build_stab_model_gurobipy,
build_stab_model_pyomo,
)
np.random.seed(42)
gen = MaxWeightStableSetGenerator(
w=uniform(10.0, scale=1.0),
n=randint(low=50, high=51),
p=uniform(loc=0.5, scale=0.0),
fix_graph=True,
)
data = gen.generate(3)
params = {"seed": 42, "threads": 1}
# Gurobipy
data_filenames = write_pkl_gz(data, dirname(__file__), prefix="stab-gp-n50-")
collector = BasicCollector()
collector.collect(
data_filenames,
lambda data: build_stab_model_gurobipy(data, params=params),
progress=True,
verbose=True,
)
# Pyomo
data_filenames = write_pkl_gz(data, dirname(__file__), prefix="stab-pyo-n50-")
collector = BasicCollector()
collector.collect(
data_filenames,
lambda model: build_stab_model_pyomo(model, params=params),
progress=True,
verbose=True,
)

tests/fixtures/gen_tsp.py (vendored, new file, 46 lines)

@@ -0,0 +1,46 @@
from os.path import dirname
import numpy as np
from scipy.stats import uniform, randint
from miplearn.collectors.basic import BasicCollector
from miplearn.io import write_pkl_gz
from miplearn.problems.tsp import (
TravelingSalesmanGenerator,
build_tsp_model_gurobipy,
build_tsp_model_pyomo,
)
np.random.seed(42)
gen = TravelingSalesmanGenerator(
x=uniform(loc=0.0, scale=1000.0),
y=uniform(loc=0.0, scale=1000.0),
n=randint(low=20, high=21),
gamma=uniform(loc=1.0, scale=0.25),
fix_cities=True,
round=True,
)
data = gen.generate(3)
params = {"seed": 42, "threads": 1}
# Gurobipy
data_filenames = write_pkl_gz(data, dirname(__file__), prefix="tsp-gp-n20-")
collector = BasicCollector()
collector.collect(
data_filenames,
lambda d: build_tsp_model_gurobipy(d, params=params),
progress=True,
verbose=True,
)
# Pyomo
data_filenames = write_pkl_gz(data, dirname(__file__), prefix="tsp-pyo-n20-")
collector = BasicCollector()
collector.collect(
data_filenames,
lambda d: build_tsp_model_pyomo(d, params=params),
progress=True,
verbose=True,
)

New binary fixtures (contents not shown in this diff): tests/fixtures/stab-gp-n50-0000{0,1,2} and tests/fixtures/stab-pyo-n50-0000{0,1,2}, each with .h5, .mps.gz, and .pkl.gz files, plus tests/fixtures/tsp-gp-n20-0000{0,1,2} and tests/fixtures/tsp-pyo-n20-00000. Nine further binary files, whose names are not shown, were also changed, and some files were omitted because too many files changed in this diff.