68 Commits
v0.3.0 ... dev

SHA1 Message Date
9f0fa0e500 problems: Allow correlated arguments in random problem generators 2025-12-08 16:08:05 -06:00
485625e07f Implement MinWeightVertexCoverPerturber 2025-12-08 15:37:16 -06:00
146fb6b615 Implement UnitCommitmentPerturber 2025-12-08 15:31:22 -06:00
1d44980a7b Implement TravelingSalesmanPerturber 2025-12-08 15:10:24 -06:00
4137378bb8 Implement SetPackPerturber and SetCoverPerturber 2025-12-08 13:47:33 -06:00
427bd1d806 Implement PMedianPerturber 2025-12-08 13:36:49 -06:00
14e2fe331d Implement MultiKnapsackPerturber 2025-12-08 13:31:52 -06:00
15cdb7e679 Implement MaxCutPerturber 2025-12-08 13:21:04 -06:00
9192bb02eb Implement BinPackPerturber 2025-12-08 13:16:23 -06:00
a4cb46f73e stab: Implement MaxWeightStableSetPerturber; update tests and docs 2025-12-08 10:54:19 -06:00
7fd88b0a3d BasicCollector: Always use p_umap, so that progress bar is visible 2025-12-08 10:32:44 -06:00
1f59ed4065 Fix failing tests 2025-12-08 10:31:58 -06:00
aa291410d8 docs: Minor updates 2025-09-24 10:31:08 -05:00
ca05429203 uc: Add quadratic terms 2025-09-23 11:39:39 -05:00
4eeb1c1ab3 Add maxcut to problems.ipynb 2025-09-23 11:27:03 -05:00
bfaae7c005 BasicCollector: Make log file optional 2025-07-22 12:25:47 -05:00
596f41c477 BasicCollector: save solver log to file 2025-06-12 11:16:16 -05:00
19e1f52b4f BasicCollector: store data_filename in HDF5 file 2025-06-12 11:15:09 -05:00
7ed213d4ce MaxCut: add w_jitter parameter to control edge weight randomization 2025-06-12 10:55:40 -05:00
daa801b5e9 Pyomo: implement build_maxcut_model; add support for quadratic objectives 2025-06-11 14:23:10 -05:00
2ca2794457 GurobiModel: Capture static_var_obj_coeffs_quad 2025-06-11 13:19:36 -05:00
1c6912cc51 Add MaxCut problem 2025-06-11 11:58:57 -05:00
eb914a4bdd Replace NamedTemporaryFile with TemporaryDirectory in tests for better compatibility 2025-06-11 11:14:34 -05:00
a306f0df26 Update docs dependencies; re-run notebooks 2025-06-10 12:28:39 -05:00
e0b4181579 Fix pyomo warning 2025-06-10 11:48:37 -05:00
332b2b9fca Update CHANGELOG 2025-06-10 11:31:32 -05:00
af65069202 Bump version to 0.4.3 2025-06-10 11:29:03 -05:00
dadd2216f1 Make compatible with Gurobi 12 2025-06-10 11:27:02 -05:00
5fefb49566 Update to Gurobi 11 2025-06-10 11:27:02 -05:00
3775c3f780 Update docs; fix Sphinx deps; bump to 0.4.2 2024-12-10 12:15:24 -06:00
e66e6d7660 Update CHANGELOG 2024-12-10 11:04:40 -06:00
8e05a69351 Update dependency: Gurobi 11 2024-12-10 10:58:15 -06:00
7ccb7875b9 Allow components to return stats, instead of modifying in-place
Added for compatibility with Julia.
2024-08-20 16:46:20 -05:00
f085ab538b LearningSolver: return model 2024-05-31 11:53:56 -05:00
7f273ebb70 expert primal: Set value for int variables 2024-05-31 11:48:41 -05:00
26cfab0ebd h5: Store values using float64 2024-05-31 11:16:47 -05:00
52ed34784d Docs: Use single-thread example 2024-05-08 09:19:52 -05:00
0534d50af3 BasicCollector: Do not crash on exception 2024-02-26 16:41:50 -06:00
8a02e22a35 Update docs 2024-02-07 09:17:09 -06:00
702824a3b5 Bump version to 0.4 2024-02-06 16:17:27 -06:00
752885660d Update CHANGELOG 2024-02-06 16:10:22 -06:00
b55554d410 Add _gurobipy suffix to all build_model functions 2024-02-06 16:08:24 -06:00
fb3f219ea8 Add tutorial: Cuts and lazy constraints 2024-02-06 15:59:11 -06:00
714904ea35 Implement ExpertCutsComponent and ExpertLazyComponent 2024-02-06 11:57:11 -06:00
cec56cbd7b AbstractSolver: Fix field name 2024-02-06 11:56:54 -06:00
e75850fab8 LearningSolver: Keep original H5 file unmodified 2024-02-02 14:37:53 -06:00
687c271d4d Bump version to 0.4.0 2024-02-02 10:19:44 -06:00
60d9a68485 Solver: Make attributes private; ensure we're not calling them directly
Helps with Julia/JuMP integration.
2024-02-02 10:15:06 -06:00
33f2cb3d9e Cuts: Do not access attributes directly 2024-02-01 12:02:39 -06:00
5b28595b0b BasicCollector: Make LP and MPS optional 2024-02-01 12:02:23 -06:00
60c7222fbe Cuts: Call set_cuts instead of setting cuts_aot_ directly 2024-02-01 10:18:24 -06:00
281508f44c Store cuts and lazy constraints as JSON in H5 2024-02-01 10:06:21 -06:00
2774edae8c tsp: Remove some code duplication 2024-01-30 16:32:39 -06:00
25bbe20748 Make lazy constr component compatible with Pyomo+Gurobi 2024-01-30 16:25:46 -06:00
c9eef36c4e Make cuts component compatible with Pyomo+Gurobi 2024-01-29 00:41:29 -06:00
d2faa15079 Reformat; remove unused imports 2024-01-28 20:47:16 -06:00
8c2c45417b Update mypy 2024-01-28 20:30:18 -06:00
8805a83c1c Implement MemorizingCutsComponent; STAB: switch to edge formulation 2023-11-07 15:36:31 -06:00
b81815d35b Lazy: Minor fixes; make it compatible with Pyomo 2023-10-27 10:44:21 -05:00
a42cd5ae35 Lazy: Simplify method signature; switch to AbstractModel 2023-10-27 09:14:51 -05:00
7079a36203 Lazy: Rename fields 2023-10-27 08:53:38 -05:00
c1adc0b79e Implement MemorizingLazyConstrComponent 2023-10-26 15:37:05 -05:00
2d07a44f7d Fix mypy errors 2023-10-26 13:41:50 -05:00
e555dffc0c Reformat source code 2023-10-26 13:40:09 -05:00
cd32b0e70d Add test fixtures 2023-10-26 13:39:39 -05:00
40c7f2ffb5 io: Simplify more extensions 2023-06-09 10:57:54 -05:00
25728f5512 Small updates to Makefile 2023-06-09 10:57:41 -05:00
8dd5bb416b Minor fixes to docs and setup.py 2023-06-08 12:37:11 -05:00
119 changed files with 4019 additions and 1658 deletions


@@ -3,31 +3,68 @@
 All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
-and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+and this project adheres to
+[Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [0.4.3] - 2025-05-10
+
+### Changed
+
+- Update dependency: Gurobi 12
+
+## [0.4.2] - 2024-12-10
+
+### Changed
+
+- H5File: Use float64 precision instead of float32
+- LearningSolver: optimize now returns (model, stats) instead of just stats
+- Update dependency: Gurobi 11
+
+## [0.4.0] - 2024-02-06
+
+### Added
+
+- Add ML strategies for user cuts
+- Add ML strategies for lazy constraints
+
+### Changed
+
+- LearningSolver.solve no longer generates HDF5 files; use a collector instead.
+- Add `_gurobipy` suffix to all `build_model` functions; implement some `_pyomo`
+  and `_jump` functions.
 
 ## [0.3.0] - 2023-06-08
 
-This is a complete rewrite of the original prototype package, with an entirely new API, focused on performance, scalability and flexibility.
+This is a complete rewrite of the original prototype package, with an entirely
+new API, focused on performance, scalability and flexibility.
 
 ### Added
 
-- Add support for Python/Gurobipy and Julia/JuMP, in addition to the existing Python/Pyomo interface.
-- Add six new random instance generators (bin packing, capacitated p-median, set cover, set packing, unit commitment, vertex cover), in addition to the three existing generators (multiknapsack, stable set, tsp).
-- Collect some additional raw training data (e.g. basis status, reduced costs, etc)
-- Add new primal solution ML strategies (memorizing, independent vars and joint vars)
-- Add new primal solution actions (set warm start, fix variables, enforce proximity)
+- Add support for Python/Gurobipy and Julia/JuMP, in addition to the existing
+  Python/Pyomo interface.
+- Add six new random instance generators (bin packing, capacitated p-median, set
+  cover, set packing, unit commitment, vertex cover), in addition to the three
+  existing generators (multiknapsack, stable set, tsp).
+- Collect some additional raw training data (e.g. basis status, reduced costs,
+  etc.)
+- Add new primal solution ML strategies (memorizing, independent vars and joint
+  vars)
+- Add new primal solution actions (set warm start, fix variables, enforce
+  proximity)
 - Add runnable tutorials and user guides to the documentation.
 
 ### Changed
 
-- To support large-scale problems and datasets, switch from an in-memory architecture to a file-based architecture, using HDF5 files.
-- To accelerate development cycle, split training data collection from feature extraction.
+- To support large-scale problems and datasets, switch from an in-memory
+  architecture to a file-based architecture, using HDF5 files.
+- To accelerate the development cycle, split training data collection from
+  feature extraction.
 
 ### Removed
 
 - Temporarily remove ML strategies for lazy constraints
-- Remove benchmarks from documentation. These will be published in a separate paper.
+- Remove benchmarks from documentation. These will be published in a separate
+  paper.
 
 ## [0.1.0] - 2020-11-23
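One API change noted above deserves emphasis: since 0.4.2, `LearningSolver.optimize` returns a `(model, stats)` pair instead of just `stats`. A minimal sketch of the new calling convention, reusing names from the tutorial diffs further down this page (`solver`, `test_data`, `build_tsp_model_gurobipy`), offered as an illustration rather than the package's documented example:

```python
# Assumes `solver` is a fitted LearningSolver and `test_data` holds the
# .pkl.gz instance filenames, as in the tutorial notebooks below.
model, stats = solver.optimize(test_data[0], build_tsp_model_gurobipy)
print(stats)  # solver statistics, e.g. warm-start counts
```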


@@ -3,10 +3,14 @@ PYTEST := pytest
 PIP := $(PYTHON) -m pip
 MYPY := $(PYTHON) -m mypy
 PYTEST_ARGS := -W ignore::DeprecationWarning -vv --log-level=DEBUG
-VERSION := 0.3
+VERSION := 0.4
 
 all: docs test
 
+conda-create:
+	conda env remove -n miplearn
+	conda create -n miplearn python=3.12
+
 clean:
 	rm -rf build/* dist/*
 
@@ -21,8 +25,8 @@ dist-upload:
 docs:
 	rm -rf ../docs/$(VERSION)
-	cd docs; make clean; make dirhtml
-	rsync -avP --delete-after docs/_build/dirhtml/ ../docs/$(VERSION)
+	cd docs; make dirhtml
+	rsync -avP --delete-after docs/_build/dirhtml/ ../docs/$(VERSION)/
 
 install-deps:
 	$(PIP) install --upgrade pip
 
@@ -43,6 +47,6 @@ test:
 	# rm -rf .mypy_cache
 	$(MYPY) -p miplearn
 	$(MYPY) -p tests
-	$(PYTEST) $(PYTEST_ARGS)
+	$(PYTEST) $(PYTEST_ARGS) .
 
 .PHONY: test test-watch docs install dist


@@ -14,7 +14,7 @@
 </a>
 </p>
 
-**MIPLearn** is an extensible framework for solving discrete optimization problems using a combination of Mixed-Integer Linear Programming (MIP) and Machine Learning (ML). MIPLearn uses ML methods to automatically identify patterns in previously solved instances of the problem, then uses these patterns to accelerate the performance of conventional state-of-the-art MIP solvers such as CPLEX, Gurobi or XPRESS.
+**MIPLearn** is an extensible framework for solving discrete optimization problems using a combination of Mixed-Integer Programming (MIP) and Machine Learning (ML). MIPLearn uses ML methods to automatically identify patterns in previously solved instances of the problem, then uses these patterns to accelerate the performance of conventional state-of-the-art MIP solvers such as CPLEX, Gurobi or XPRESS.
 
 Unlike pure ML methods, MIPLearn is not only able to find high-quality solutions to discrete optimization problems, but it can also prove the optimality and feasibility of these solutions. Unlike conventional MIP solvers, MIPLearn can take full advantage of very specific observations that happen to be true in a particular family of instances (such as the observation that a particular constraint is typically redundant, or that a particular variable typically assumes a certain value). For certain classes of problems, this approach may provide significant performance benefits.
 
@@ -22,21 +22,22 @@ Documentation
 -------------
 
 - Tutorials:
-    1. [Getting started (Pyomo)](https://anl-ceeesa.github.io/MIPLearn/0.3/tutorials/getting-started-pyomo/)
-    2. [Getting started (Gurobipy)](https://anl-ceeesa.github.io/MIPLearn/0.3/tutorials/getting-started-gurobipy/)
-    3. [Getting started (JuMP)](https://anl-ceeesa.github.io/MIPLearn/0.3/tutorials/getting-started-jump/)
+    1. [Getting started (Pyomo)](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/getting-started-pyomo/)
+    2. [Getting started (Gurobipy)](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/getting-started-gurobipy/)
+    3. [Getting started (JuMP)](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/getting-started-jump/)
+    4. [User cuts and lazy constraints](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/cuts-gurobipy/)
 - User Guide
-    1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/problems/)
-    2. [Training data collectors](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/collectors/)
-    3. [Feature extractors](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/features/)
-    4. [Primal components](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/primal/)
-    5. [Learning solver](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/solvers/)
+    1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/problems/)
+    2. [Training data collectors](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/collectors/)
+    3. [Feature extractors](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/features/)
+    4. [Primal components](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/primal/)
+    5. [Learning solver](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/solvers/)
 - Python API Reference
-    1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.3/api/problems/)
-    2. [Collectors & extractors](https://anl-ceeesa.github.io/MIPLearn/0.3/api/collectors/)
-    3. [Components](https://anl-ceeesa.github.io/MIPLearn/0.3/api/components/)
-    4. [Solvers](https://anl-ceeesa.github.io/MIPLearn/0.3/api/solvers/)
-    5. [Helpers](https://anl-ceeesa.github.io/MIPLearn/0.3/api/helpers/)
+    1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.4/api/problems/)
+    2. [Collectors & extractors](https://anl-ceeesa.github.io/MIPLearn/0.4/api/collectors/)
+    3. [Components](https://anl-ceeesa.github.io/MIPLearn/0.4/api/components/)
+    4. [Solvers](https://anl-ceeesa.github.io/MIPLearn/0.4/api/solvers/)
+    5. [Helpers](https://anl-ceeesa.github.io/MIPLearn/0.4/api/helpers/)
 
 Authors
 -------
 
@@ -58,7 +59,7 @@ Citing MIPLearn
 
 If you use MIPLearn in your research (either the solver or the included problem generators), we kindly request that you cite the package as follows:
 
-* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.3)*. Zenodo (2023). DOI: [10.5281/zenodo.4287567](https://doi.org/10.5281/zenodo.4287567)
+* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.4)*. Zenodo (2024). DOI: [10.5281/zenodo.4287567](https://doi.org/10.5281/zenodo.4287567)
 
 If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed:


@@ -118,3 +118,13 @@ table tr:last-child {
 border-bottom: 0;
 }
+
+@media (min-width: 960px) {
+  .bd-page-width {
+    max-width: 100rem;
+  }
+}
+
+.bd-sidebar-primary .sidebar-primary-items__end {
+  margin-bottom: 0;
+  margin-top: 0;
+}


@@ -55,3 +55,9 @@ miplearn.problems.vertexcover
 .. automodule:: miplearn.problems.vertexcover
    :members:
+
+miplearn.problems.maxcut
+-----------------------------
+
+.. automodule:: miplearn.problems.maxcut
+   :members:


@@ -1,7 +1,7 @@
 project = "MIPLearn"
 copyright = "2020-2023, UChicago Argonne, LLC"
 author = ""
-release = "0.3"
+release = "0.4"
 extensions = [
     "myst_parser",
     "nbsphinx",


@@ -14,7 +14,7 @@
     "\n",
     "## HDF5 Format\n",
     "\n",
-    "MIPLearn stores all training data in [HDF5](HDF5) (Hierarchical Data Format, Version 5) files. The HDF format was originally developed by the [National Center for Supercomputing Applications][NCSA] (NCSA) for storing and organizing large amounts of data, and supports a variety of data types, including integers, floating-point numbers, strings, and arrays. Compared to other formats, such as CSV, JSON or SQLite, the HDF5 format provides several advantages for MIPLearn, including:\n",
+    "MIPLearn stores all training data in [HDF5][HDF5] (Hierarchical Data Format, Version 5) files. The HDF format was originally developed by the [National Center for Supercomputing Applications][NCSA] (NCSA) for storing and organizing large amounts of data, and supports a variety of data types, including integers, floating-point numbers, strings, and arrays. Compared to other formats, such as CSV, JSON or SQLite, the HDF5 format provides several advantages for MIPLearn, including:\n",
     "\n",
     "- *Storage of multiple scalars, vectors and matrices in a single file* --- This allows MIPLearn to store all training data related to a given problem instance in a single file, which makes training data easier to store, organize and transfer.\n",
     "- *High-performance partial I/O* --- Partial I/O allows MIPLearn to read a single element from the training data (e.g. value of the optimal solution) without loading the entire file to memory or reading it from beginning to end, which dramatically improves performance and reduces memory requirements. This is especially important when processing a large number of training data files.\n",
@@ -38,9 +38,13 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": 1,
    "id": "f906fe9c",
    "metadata": {
+    "ExecuteTime": {
+     "end_time": "2024-01-30T22:19:30.826123021Z",
+     "start_time": "2024-01-30T22:19:30.766066926Z"
+    },
     "collapsed": false,
     "jupyter": {
      "outputs_hidden": false
@@ -54,21 +58,21 @@
     "x1 = 1\n",
     "x2 = hello world\n",
     "x3 = [1 2 3]\n",
-    "x4 = [[0.37454012 0.9507143  0.7319939 ]\n",
-    " [0.5986585  0.15601864 0.15599452]\n",
-    " [0.05808361 0.8661761  0.601115  ]]\n",
-    "x5 = (2, 3)\t0.68030757\n",
-    "  (3, 2)\t0.45049927\n",
-    "  (4, 0)\t0.013264962\n",
-    "  (0, 2)\t0.94220173\n",
-    "  (4, 2)\t0.5632882\n",
-    "  (2, 1)\t0.3854165\n",
-    "  (1, 1)\t0.015966251\n",
-    "  (3, 0)\t0.23089382\n",
-    "  (4, 4)\t0.24102546\n",
-    "  (1, 3)\t0.68326354\n",
-    "  (3, 1)\t0.6099967\n",
-    "  (0, 3)\t0.8331949\n"
+    "x4 = [[0.37454012 0.95071431 0.73199394]\n",
+    " [0.59865848 0.15601864 0.15599452]\n",
+    " [0.05808361 0.86617615 0.60111501]]\n",
+    "x5 = (3, 2)\t0.6803075385877797\n",
+    "  (2, 3)\t0.450499251969543\n",
+    "  (0, 4)\t0.013264961159866528\n",
+    "  (2, 0)\t0.9422017556848528\n",
+    "  (2, 4)\t0.5632882178455393\n",
+    "  (1, 2)\t0.3854165025399161\n",
+    "  (1, 1)\t0.015966252220214194\n",
+    "  (0, 3)\t0.230893825622149\n",
+    "  (4, 4)\t0.24102546602601171\n",
+    "  (3, 1)\t0.6832635188254582\n",
+    "  (1, 3)\t0.6099966577826209\n",
+    "  (3, 0)\t0.8331949117361643\n"
    ]
   }
  ],
@@ -104,12 +108,6 @@
     "    print(\"x5 =\", h5.get_sparse(\"x5\"))"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "id": "50441907",
-   "metadata": {},
-   "source": []
-  },
   {
    "cell_type": "markdown",
    "id": "d0000c8d",
@@ -179,9 +177,13 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 4,
+   "execution_count": 2,
    "id": "ac6f8c6f",
    "metadata": {
+    "ExecuteTime": {
+     "end_time": "2024-01-30T22:19:30.826707866Z",
+     "start_time": "2024-01-30T22:19:30.825940503Z"
+    },
     "collapsed": false,
     "jupyter": {
      "outputs_hidden": false
@@ -205,7 +207,7 @@
     "\n",
     "from miplearn.problems.tsp import (\n",
     "    TravelingSalesmanGenerator,\n",
-    "    build_tsp_model,\n",
+    "    build_tsp_model_gurobipy,\n",
     ")\n",
     "from miplearn.io import write_pkl_gz\n",
     "from miplearn.h5 import H5File\n",
@@ -231,7 +233,7 @@
     "# Solve all instances and collect basic solution information.\n",
     "# Process at most four instances in parallel.\n",
     "bc = BasicCollector()\n",
-    "bc.collect(glob(\"data/tsp/*.pkl.gz\"), build_tsp_model, n_jobs=4)\n",
+    "bc.collect(glob(\"data/tsp/*.pkl.gz\"), build_tsp_model_gurobipy, n_jobs=4)\n",
     "\n",
     "# Read and print some training data for the first instance.\n",
     "with H5File(\"data/tsp/00000.h5\", \"r\") as h5:\n",
@@ -244,6 +246,9 @@
    "execution_count": null,
    "id": "78f0b07a",
    "metadata": {
+    "ExecuteTime": {
+     "start_time": "2024-01-30T22:19:30.826179789Z"
+    },
     "collapsed": false,
     "jupyter": {
      "outputs_hidden": false
@@ -269,7 +274,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.16"
+   "version": "3.11.7"
   }
  },
 "nbformat": 4,


@@ -51,7 +51,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 5,
+   "execution_count": 1,
    "id": "ed9a18c8",
    "metadata": {
     "collapsed": false,
@@ -69,22 +69,22 @@
     "  -709.  -605.  -543.  -321.\n",
     "  -674.  -571.  -341. ]\n",
     "variable features (10, 4) \n",
-    " [[-1.53124309e+03 -3.50000000e+02  0.00000000e+00  9.43468018e+01]\n",
-    " [-1.53124309e+03 -6.92000000e+02  2.51703322e-01  0.00000000e+00]\n",
-    " [-1.53124309e+03 -4.54000000e+02  0.00000000e+00  8.25504150e+01]\n",
-    " [-1.53124309e+03 -7.09000000e+02  1.11373022e-01  0.00000000e+00]\n",
-    " [-1.53124309e+03 -6.05000000e+02  1.00000000e+00 -1.26055283e+02]\n",
-    " [-1.53124309e+03 -5.43000000e+02  0.00000000e+00  1.68693771e+02]\n",
-    " [-1.53124309e+03 -3.21000000e+02  1.07488781e-01  0.00000000e+00]\n",
-    " [-1.53124309e+03 -6.74000000e+02  8.82293701e-01  0.00000000e+00]\n",
-    " [-1.53124309e+03 -5.71000000e+02  0.00000000e+00  1.41129074e+02]\n",
-    " [-1.53124309e+03 -3.41000000e+02  1.28830120e-01  0.00000000e+00]]\n",
+    " [[-1.53124309e+03 -3.50000000e+02  0.00000000e+00  9.43467993e+01]\n",
+    " [-1.53124309e+03 -6.92000000e+02  2.51703329e-01  0.00000000e+00]\n",
+    " [-1.53124309e+03 -4.54000000e+02  0.00000000e+00  8.25504181e+01]\n",
+    " [-1.53124309e+03 -7.09000000e+02  1.11373019e-01  0.00000000e+00]\n",
+    " [-1.53124309e+03 -6.05000000e+02  1.00000000e+00 -1.26055279e+02]\n",
+    " [-1.53124309e+03 -5.43000000e+02  0.00000000e+00  1.68693775e+02]\n",
+    " [-1.53124309e+03 -3.21000000e+02  1.07488781e-01  0.00000000e+00]\n",
+    " [-1.53124309e+03 -6.74000000e+02  8.82293687e-01  0.00000000e+00]\n",
+    " [-1.53124309e+03 -5.71000000e+02  0.00000000e+00  1.41129074e+02]\n",
+    " [-1.53124309e+03 -3.41000000e+02  1.28830116e-01  0.00000000e+00]]\n",
     "constraint features (5, 3) \n",
-    " [[ 1.3100000e+03 -1.5978307e-01  0.0000000e+00]\n",
-    " [ 9.8800000e+02 -3.2881632e-01  0.0000000e+00]\n",
-    " [ 1.0040000e+03 -4.0601316e-01  0.0000000e+00]\n",
-    " [ 1.2690000e+03 -1.3659772e-01  0.0000000e+00]\n",
-    " [ 1.0070000e+03 -2.8800571e-01  0.0000000e+00]]\n"
+    " [[ 1.31000000e+03 -1.59783068e-01  0.00000000e+00]\n",
+    " [ 9.88000000e+02 -3.28816327e-01  0.00000000e+00]\n",
+    " [ 1.00400000e+03 -4.06013164e-01  0.00000000e+00]\n",
+    " [ 1.26900000e+03 -1.36597720e-01  0.00000000e+00]\n",
+    " [ 1.00700000e+03 -2.88005696e-01  0.00000000e+00]]\n"
    ]
   }
  ],
@@ -101,7 +101,7 @@
     "from miplearn.io import write_pkl_gz\n",
     "from miplearn.problems.multiknapsack import (\n",
     "    MultiKnapsackGenerator,\n",
-    "    build_multiknapsack_model,\n",
+    "    build_multiknapsack_model_gurobipy,\n",
     ")\n",
     "\n",
     "# Set random seed to make example reproducible\n",
@@ -127,7 +127,7 @@
     "# Run the basic collector\n",
     "BasicCollector().collect(\n",
     "    glob(\"data/multiknapsack/*\"),\n",
-    "    build_multiknapsack_model,\n",
+    "    build_multiknapsack_model_gurobipy,\n",
     "    n_jobs=4,\n",
     ")\n",
     "\n",
@@ -166,7 +166,7 @@
     "\n",
     "    # Extract and print constraint features\n",
     "    x3 = ext.get_constr_features(h5)\n",
-    "    print(\"constraint features\", x3.shape, \"\\n\", x3)\n"
+    "    print(\"constraint features\", x3.shape, \"\\n\", x3)"
    ]
   },
   {
@@ -204,7 +204,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 6,
+   "execution_count": 2,
    "id": "a1bc38fe",
    "metadata": {
     "collapsed": false,
@@ -326,7 +326,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.16"
+   "version": "3.11.7"
   }
  },
 "nbformat": 4,


@@ -15,7 +15,7 @@
     "\n",
     "Before presenting the primal components themselves, we briefly discuss the three ways a solution may be provided to the solver. Each approach has benefits and limitations, which we also discuss in this section. All primal components can be configured to use any of the following approaches.\n",
     "\n",
-    "The first approach is to provide the solution to the solver as a **warm start**. This is implemented by the class [SetWarmStart](SetWarmStart). The main advantage is that this method maintains all optimality and feasibility guarantees of the MIP solver, while still providing significant performance benefits for various classes of problems. If the machine learning model is able to predict multiple solutions, it is also possible to set multiple warm starts. In this case, the solver evaluates each warm start, discards the infeasible ones, then proceeds with the one that has the best objective value. The main disadvantage of this approach, compared to the next two, is that it provides relatively modest speedups for most problem classes, and no speedup at all for many others, even when the machine learning predictions are 100% accurate.\n",
+    "The first approach is to provide the solution to the solver as a **warm start**. This is implemented by the class [SetWarmStart][SetWarmStart]. The main advantage is that this method maintains all optimality and feasibility guarantees of the MIP solver, while still providing significant performance benefits for various classes of problems. If the machine learning model is able to predict multiple solutions, it is also possible to set multiple warm starts. In this case, the solver evaluates each warm start, discards the infeasible ones, then proceeds with the one that has the best objective value. The main disadvantage of this approach, compared to the next two, is that it provides relatively modest speedups for most problem classes, and no speedup at all for many others, even when the machine learning predictions are 100% accurate.\n",
     "\n",
     "[SetWarmStart]: ../../api/components/#miplearn.components.primal.actions.SetWarmStart\n",
     "\n",
@@ -120,7 +120,7 @@
     "    extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n",
     "    constructor=MergeTopSolutions(k=3, thresholds=[0.25, 0.75]),\n",
     "    action=EnforceProximity(3),\n",
-    ")\n"
+    ")"
    ]
   },
   {
@@ -175,7 +175,7 @@
     "    ),\n",
     "    extractor=AlvLouWeh2017Extractor(),\n",
     "    action=SetWarmStart(),\n",
-    ")\n"
+    ")"
    ]
   },
   {
@@ -230,7 +230,7 @@
     "        instance_fields=[\"static_var_obj_coeffs\"],\n",
     "    ),\n",
     "    action=SetWarmStart(),\n",
-    ")\n"
+    ")"
    ]
   },
   {
@@ -263,7 +263,7 @@
     "# Configures an expert primal component, which reads a pre-computed\n",
     "# optimal solution from the HDF5 file and provides it to the solver\n",
     "# as warm start.\n",
-    "comp = ExpertPrimalComponent(action=SetWarmStart())\n"
+    "comp = ExpertPrimalComponent(action=SetWarmStart())"
    ]
   }
  ],
@@ -283,7 +283,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.16"
+   "version": "3.11.7"
  }
 },
 "nbformat": 4,

File diff suppressed because it is too large.


@@ -57,7 +57,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": 1,
    "id": "92b09b98",
    "metadata": {
     "collapsed": false,
@@ -70,10 +70,11 @@
    "name": "stdout",
    "output_type": "stream",
    "text": [
-    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
+    "Restricted license - for non-production use only - expires 2026-11-23\n",
+    "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
     "\n",
-    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
-    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
+    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
+    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
     "\n",
     "Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
     "Model fingerprint: 0x6ddcd141\n",
@@ -89,13 +90,20 @@
     "       0    6.3600000e+02   1.700000e+01   0.000000e+00      0s\n",
     "      15    2.7610000e+03   0.000000e+00   0.000000e+00      0s\n",
     "\n",
-    "Solved in 15 iterations and 0.00 seconds (0.00 work units)\n",
+    "Solved in 15 iterations and 0.01 seconds (0.00 work units)\n",
     "Optimal objective  2.761000000e+03\n",
-    "Set parameter LazyConstraints to value 1\n",
-    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
     "\n",
-    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
-    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
+    "User-callback calls 56, time in user-callback 0.00 sec\n",
+    "Set parameter PreCrush to value 1\n",
+    "Set parameter LazyConstraints to value 1\n",
+    "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
+    "\n",
+    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
+    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
+    "\n",
+    "Non-default parameters:\n",
+    "PreCrush  1\n",
+    "LazyConstraints  1\n",
     "\n",
     "Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
     "Model fingerprint: 0x74ca3d0a\n",
@@ -119,31 +127,20 @@
     " Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time\n",
     "\n",
     "     0     0 2761.00000    0    - 2796.00000 2761.00000  1.25%     -    0s\n",
-    "     0     0     cutoff    0      2796.00000 2796.00000  0.00%     -    0s\n",
     "\n",
     "Cutting planes:\n",
     "  Lazy constraints: 3\n",
     "\n",
-    "Explored 1 nodes (16 simplex iterations) in 0.01 seconds (0.00 work units)\n",
-    "Thread count was 32 (of 32 available processors)\n",
+    "Explored 1 nodes (14 simplex iterations) in 0.01 seconds (0.00 work units)\n",
+    "Thread count was 20 (of 20 available processors)\n",
     "\n",
     "Solution count 1: 2796 \n",
     "\n",
     "Optimal solution found (tolerance 1.00e-04)\n",
     "Best objective 2.796000000000e+03, best bound 2.796000000000e+03, gap 0.0000%\n",
     "\n",
-    "User-callback calls 110, time in user-callback 0.00 sec\n"
+    "User-callback calls 114, time in user-callback 0.00 sec\n"
    ]
-  },
-  {
-   "data": {
-    "text/plain": [
-     "{'WS: Count': 1, 'WS: Number of variables set': 41.0}"
-    ]
-   },
-   "execution_count": 3,
-   "metadata": {},
-   "output_type": "execute_result"
   }
  ],
 "source": [
@@ -162,7 +159,7 @@
     "from miplearn.io import write_pkl_gz\n",
     "from miplearn.problems.tsp import (\n",
     "    TravelingSalesmanGenerator,\n",
-    "    build_tsp_model,\n",
+    "    build_tsp_model_gurobipy,\n",
     ")\n",
     "from miplearn.solvers.learning import LearningSolver\n",
     "\n",
@@ -189,7 +186,7 @@
     "\n",
     "# Collect training data\n",
     "bc = BasicCollector()\n",
-    "bc.collect(train_data, build_tsp_model, n_jobs=4)\n",
+    "bc.collect(train_data, build_tsp_model_gurobipy, n_jobs=4)\n",
     "\n",
     "# Build learning solver\n",
     "solver = LearningSolver(\n",
@@ -211,7 +208,7 @@
     "solver.fit(train_data)\n",
     "\n",
     "# Solve a test instance\n",
-    "solver.optimize(test_data[0], build_tsp_model)"
+    "solver.optimize(test_data[0], build_tsp_model_gurobipy);"
    ]
   },
   {
@@ -239,7 +236,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.12"
+   "version": "3.11.7"
  }
 },
 "nbformat": 4,


@@ -1,6 +1,6 @@
 MIPLearn
 ========
 
-**MIPLearn** is an extensible framework for solving discrete optimization problems using a combination of Mixed-Integer Linear Programming (MIP) and Machine Learning (ML). MIPLearn uses ML methods to automatically identify patterns in previously solved instances of the problem, then uses these patterns to accelerate the performance of conventional state-of-the-art MIP solvers such as CPLEX, Gurobi or XPRESS.
+**MIPLearn** is an extensible framework for solving discrete optimization problems using a combination of Mixed-Integer Programming (MIP) and Machine Learning (ML). MIPLearn uses ML methods to automatically identify patterns in previously solved instances of the problem, then uses these patterns to accelerate the performance of conventional state-of-the-art MIP solvers such as CPLEX, Gurobi or XPRESS.
 
 Unlike pure ML methods, MIPLearn is not only able to find high-quality solutions to discrete optimization problems, but it can also prove the optimality and feasibility of these solutions. Unlike conventional MIP solvers, MIPLearn can take full advantage of very specific observations that happen to be true in a particular family of instances (such as the observation that a particular constraint is typically redundant, or that a particular variable typically assumes a certain value). For certain classes of problems, this approach may provide significant performance benefits.
 
@@ -16,6 +16,7 @@ Contents
    tutorials/getting-started-pyomo
    tutorials/getting-started-gurobipy
    tutorials/getting-started-jump
+   tutorials/cuts-gurobipy
 
 .. toctree::
    :maxdepth: 2
 
@@ -60,7 +61,7 @@ Citing MIPLearn
 
 If you use MIPLearn in your research (either the solver or the included problem generators), we kindly request that you cite the package as follows:
 
-* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.3)*. Zenodo (2023). DOI: https://doi.org/10.5281/zenodo.4287567
+* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.4)*. Zenodo (2024). DOI: https://doi.org/10.5281/zenodo.4287567
 
 If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed:


@@ -0,0 +1,571 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b4bd8bd6-3ce9-4932-852f-f98a44120a3e",
"metadata": {},
"source": [
"# User cuts and lazy constraints\n",
"\n",
"User cuts and lazy constraints are two advanced mixed-integer programming techniques that can accelerate solver performance. User cuts are additional constraints, derived from the constraints already in the model, that can tighten the feasible region and eliminate fractional solutions, thus reducing the size of the branch-and-bound tree. Lazy constraints, on the other hand, are constraints that are potentially part of the problem formulation but are omitted from the initial model to reduce its size; these constraints are added to the formulation only once the solver finds a solution that violates them. While both techniques have been successful, significant computational effort may still be required to generate strong user cuts and to identify violated lazy constraints, which can reduce their effectiveness.\n",
"\n",
"MIPLearn is able to predict which user cuts and which lazy constraints to enforce at the beginning of the optimization process, using machine learning. In this tutorial, we will use the framework to predict subtour elimination constraints for the **traveling salesman problem** using Gurobipy. We assume that MIPLearn has already been correctly installed.\n",
"\n",
"<div class=\"alert alert-info\">\n",
"\n",
"Solver Compatibility\n",
"\n",
"User cuts and lazy constraints are also supported in the Python/Pyomo and Julia/JuMP versions of the package. See the source code of <code>build_tsp_model_pyomo</code> and <code>build_tsp_model_jump</code> for more details. Note, however, the following limitations:\n",
"\n",
"- Python/Pyomo: Only `gurobi_persistent` is currently supported. PRs implementing callbacks for other persistent solvers are welcome.\n",
"- Julia/JuMP: Only solvers supporting solver-independent callbacks are supported. As of JuMP 1.19, this includes Gurobi, CPLEX, XPRESS, SCIP and GLPK. Note that HiGHS and Cbc are not supported. As newer versions of JuMP implement further callback support, MIPLearn should become automatically compatible with these solvers.\n",
"\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "72229e1f-cbd8-43f0-82ee-17d6ec9c3b7d",
"metadata": {},
"source": [
"## Modeling the traveling salesman problem\n",
"\n",
"Given a list of cities and the distances between them, the **traveling salesman problem (TSP)** asks for the shortest route starting at the first city, visiting each other city exactly once, then returning to the first city. This problem is a generalization of the Hamiltonian path problem, one of Karp's 21 NP-complete problems, and has many practical applications, including routing delivery trucks and scheduling airline routes.\n",
"\n",
"To describe an instance of TSP, we need to specify the number of cities $n$, and an $n \\times n$ matrix of distances. The class `TravelingSalesmanData`, in the `miplearn.problems.tsp` package, can hold this data:"
]
},
{
"cell_type": "markdown",
"id": "4598a1bc-55b6-48cc-a050-2262786c203a",
"metadata": {},
"source": [
"```python\n",
"@dataclass\r\n",
"class TravelingSalesmanData:\r\n",
" n_cities: int\r\n",
" distances: np.ndarray\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "3a43cc12-1207-4247-bdb2-69a6a2910738",
"metadata": {},
"source": [
"MIPLearn also provides `TravelingSalesmandGenerator`, a random generator for TSP instances, and `build_tsp_model_gurobipy`, a function which converts `TravelingSalesmanData` into an actual gurobipy optimization model, and which uses lazy constraints to enforce subtour elimination.\n",
"\n",
"The example below is a simplified and annotated version of `build_tsp_model_gurobipy`, illustrating the usage of callbacks with MIPLearn. Compared the the previous tutorial examples, note that, in addition to defining the variables, objective function and constraints of our problem, we also define two callback functions `lazy_separate` and `lazy_enforce`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e4712a85-0327-439c-8889-933e1ff714e7",
"metadata": {},
"outputs": [],
"source": [
"import gurobipy as gp\n",
"from gurobipy import quicksum, GRB, tuplelist\n",
"from miplearn.solvers.gurobi import GurobiModel\n",
"import networkx as nx\n",
"import numpy as np\n",
"from miplearn.problems.tsp import (\n",
" TravelingSalesmanData,\n",
" TravelingSalesmanGenerator,\n",
")\n",
"from scipy.stats import uniform, randint\n",
"from miplearn.io import write_pkl_gz, read_pkl_gz\n",
"from miplearn.collectors.basic import BasicCollector\n",
"from miplearn.solvers.learning import LearningSolver\n",
"from miplearn.components.lazy.mem import MemorizingLazyComponent\n",
"from miplearn.extractors.fields import H5FieldsExtractor\n",
"from sklearn.neighbors import KNeighborsClassifier\n",
"\n",
"# Set up random seed to make example more reproducible\n",
"np.random.seed(42)\n",
"\n",
"# Set up Python logging\n",
"import logging\n",
"\n",
"logging.basicConfig(level=logging.WARNING)\n",
"\n",
"\n",
"def build_tsp_model_gurobipy_simplified(data):\n",
" # Read data from file if a filename is provided\n",
" if isinstance(data, str):\n",
" data = read_pkl_gz(data)\n",
"\n",
" # Create empty gurobipy model\n",
" model = gp.Model()\n",
"\n",
" # Create set of edges between every pair of cities, for convenience\n",
" edges = tuplelist(\n",
" (i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)\n",
" )\n",
"\n",
" # Add binary variable x[e] for each edge e\n",
" x = model.addVars(edges, vtype=GRB.BINARY, name=\"x\")\n",
"\n",
" # Add objective function\n",
" model.setObjective(quicksum(x[(i, j)] * data.distances[i, j] for (i, j) in edges))\n",
"\n",
" # Add constraint: must choose two edges adjacent to each city\n",
" model.addConstrs(\n",
" (\n",
" quicksum(x[min(i, j), max(i, j)] for j in range(data.n_cities) if i != j)\n",
" == 2\n",
" for i in range(data.n_cities)\n",
" ),\n",
" name=\"eq_degree\",\n",
" )\n",
"\n",
" def lazy_separate(m: GurobiModel):\n",
" \"\"\"\n",
" Callback function that finds subtours in the current solution.\n",
" \"\"\"\n",
" # Query current value of the x variables\n",
" x_val = m.inner.cbGetSolution(x)\n",
"\n",
" # Initialize empty set of violations\n",
" violations = []\n",
"\n",
" # Build set of edges we have currently selected\n",
" selected_edges = [e for e in edges if x_val[e] > 0.5]\n",
"\n",
" # Build a graph containing the selected edges, using networkx\n",
" graph = nx.Graph()\n",
" graph.add_edges_from(selected_edges)\n",
"\n",
" # For each component of the graph\n",
" for component in list(nx.connected_components(graph)):\n",
"\n",
" # If the component is not the entire graph, we found a\n",
" # subtour. Add the edge cut to the list of violations.\n",
" if len(component) < data.n_cities:\n",
" cut_edges = [\n",
" [e[0], e[1]]\n",
" for e in edges\n",
" if (e[0] in component and e[1] not in component)\n",
" or (e[0] not in component and e[1] in component)\n",
" ]\n",
" violations.append(cut_edges)\n",
"\n",
" # Return the list of violations\n",
" return violations\n",
"\n",
" def lazy_enforce(m: GurobiModel, violations) -> None:\n",
" \"\"\"\n",
" Callback function that, given a list of subtours, adds lazy\n",
" constraints to remove them from the feasible region.\n",
" \"\"\"\n",
" print(f\"Enforcing {len(violations)} subtour elimination constraints\")\n",
" for violation in violations:\n",
" m.add_constr(quicksum(x[e[0], e[1]] for e in violation) >= 2)\n",
"\n",
" return GurobiModel(\n",
" model,\n",
" lazy_separate=lazy_separate,\n",
" lazy_enforce=lazy_enforce,\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "58875042-d6ac-4f93-b3cc-9a5822b11dad",
"metadata": {},
"source": [
"The `lazy_separate` function starts by querying the current fractional solution value through `m.inner.cbGetSolution` (recall that `m.inner` is a regular gurobipy model), then finds the set of violated lazy constraints. Unlike a regular lazy constraint solver callback, note that `lazy_separate` does not add the violated constraints to the model; it simply returns a list of objects that uniquely identifies the set of lazy constraints that should be generated. Enforcing the constraints is the responsbility of the second callback function, `lazy_enforce`. This function takes as input the model and the list of violations found by `lazy_separate`, converts them into actual constraints, and adds them to the model through `m.add_constr`.\n",
"\n",
"During training data generation, MIPLearn calls `lazy_separate` and `lazy_enforce` in sequence, inside a regular solver callback. However, once the machine learning models are trained, MIPLearn calls `lazy_enforce` directly, before the optimization process starts, with a list of **predicted** violations, as we will see in the example below."
]
},
{
"cell_type": "markdown",
"id": "5839728e-406c-4be2-ba81-83f2b873d4b2",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
"\n",
"Constraint Representation\n",
"\n",
"How should user cuts and lazy constraints be represented is a decision that the user can make; MIPLearn is representation agnostic. The objects returned by `lazy_separate`, however, are serialized as JSON and stored in the HDF5 training data files. Therefore, it is recommended to use only simple objects, such as lists, tuples and dictionaries.\n",
"\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "847ae32e-fad7-406a-8797-0d79065a07fd",
"metadata": {},
"source": [
"## Generating training data\n",
"\n",
"To test the callback defined above, we generate a small set of TSP instances, using the provided random instance generator. As in the previous tutorial, we generate some test instances and some training instances, then solve them using `BasicCollector`. Input problem data is stored in `tsp/train/00000.pkl.gz, ...`, whereas solver training data (including list of required lazy constraints) is stored in `tsp/train/00000.h5, ...`."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "eb63154a-1fa6-4eac-aa46-6838b9c201f6",
"metadata": {},
"outputs": [],
"source": [
"# Configure generator to produce instances with 50 cities located\n",
"# in the 1000 x 1000 square, and with slightly perturbed distances.\n",
"gen = TravelingSalesmanGenerator(\n",
" x=uniform(loc=0.0, scale=1000.0),\n",
" y=uniform(loc=0.0, scale=1000.0),\n",
" n=randint(low=50, high=51),\n",
" gamma=uniform(loc=1.0, scale=0.25),\n",
" fix_cities=True,\n",
" round=True,\n",
")\n",
"\n",
"# Generate 500 instances and store input data file to .pkl.gz files\n",
"data = gen.generate(500)\n",
"train_data = write_pkl_gz(data[0:450], \"tsp/train\")\n",
"test_data = write_pkl_gz(data[450:500], \"tsp/test\")\n",
"\n",
"# Solve the training instances in parallel, collecting the required lazy\n",
"# constraints, in addition to other information, such as optimal solution.\n",
"bc = BasicCollector()\n",
"bc.collect(train_data, build_tsp_model_gurobipy_simplified, n_jobs=10)"
]
},
{
"cell_type": "markdown",
"id": "6903c26c-dbe0-4a2e-bced-fdbf93513dde",
"metadata": {},
"source": [
"## Training and solving new instances"
]
},
{
"cell_type": "markdown",
"id": "57cd724a-2d27-4698-a1e6-9ab8345ef31f",
"metadata": {},
"source": [
"After producing the training dataset, we can train the machine learning models to predict which lazy constraints are necessary. In this tutorial, we use the following ML strategy: given a new instance, find the 50 most similar ones in the training dataset and verify how often each lazy constraint was required. If a lazy constraint was required for the majority of the 50 most-similar instances, enforce it ahead-of-time for the current instance. To measure instance similarity, use the objective function only. This ML strategy can be implemented using `MemorizingLazyComponent` with `H5FieldsExtractor` and `KNeighborsClassifier`, as shown below."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "43779e3d-4174-4189-bc75-9f564910e212",
"metadata": {},
"outputs": [],
"source": [
"solver = LearningSolver(\n",
" components=[\n",
" MemorizingLazyComponent(\n",
" extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n",
" clf=KNeighborsClassifier(n_neighbors=100),\n",
" ),\n",
" ],\n",
")\n",
"solver.fit(train_data)"
]
},
{
"cell_type": "markdown",
"id": "12480712-9d3d-4cbc-a6d7-d6c1e2f950f4",
"metadata": {},
"source": [
"Next, we solve one of the test instances using the trained solver. In the run below, we can see that MIPLearn adds many lazy constraints ahead-of-time, before the optimization starts. During the optimization process itself, some additional lazy constraints are required, but very few."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "23f904ad-f1a8-4b5a-81ae-c0b9e813a4b2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Set parameter Threads to value 1\n",
"Read parameters from file gurobi.env\n",
"Restricted license - for non-production use only - expires 2026-11-23\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n",
"Model fingerprint: 0x04d7bec1\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Presolve time: 0.00s\n",
"Presolved: 50 rows, 1225 columns, 2450 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 4.0600000e+02 9.700000e+01 0.000000e+00 0s\n",
" 66 5.5880000e+03 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 66 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 5.588000000e+03\n",
"\n",
"User-callback calls 110, time in user-callback 0.00 sec\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:miplearn.components.cuts.mem:Predicting violated lazy constraints...\n",
"INFO:miplearn.components.lazy.mem:Enforcing 19 constraints ahead-of-time...\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enforcing 19 subtour elimination constraints\n",
"Set parameter PreCrush to value 1\n",
"Set parameter LazyConstraints to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"PreCrush 1\n",
"Threads 1\n",
"LazyConstraints 1\n",
"\n",
"Optimize a model with 69 rows, 1225 columns and 6091 nonzeros\n",
"Model fingerprint: 0x09bd34d6\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Found heuristic solution: objective 29853.000000\n",
"Presolve time: 0.00s\n",
"Presolved: 69 rows, 1225 columns, 6091 nonzeros\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"\n",
"Root relaxation: objective 6.139000e+03, 93 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 6139.00000 0 6 29853.0000 6139.00000 79.4% - 0s\n",
"H 0 0 6390.0000000 6139.00000 3.93% - 0s\n",
" 0 0 6165.50000 0 10 6390.00000 6165.50000 3.51% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
" 0 0 6165.50000 0 6 6390.00000 6165.50000 3.51% - 0s\n",
" 0 0 6198.50000 0 16 6390.00000 6198.50000 3.00% - 0s\n",
" 0 0 6210.50000 0 6 6390.00000 6210.50000 2.81% - 0s\n",
" 0 0 6212.60000 0 31 6390.00000 6212.60000 2.78% - 0s\n",
"H 0 0 6241.0000000 6212.60000 0.46% - 0s\n",
"* 0 0 0 6219.0000000 6219.00000 0.00% - 0s\n",
"\n",
"Cutting planes:\n",
" Gomory: 6\n",
" Clique: 1\n",
" MIR: 1\n",
" StrongCG: 1\n",
" Zero half: 4\n",
" RLT: 1\n",
" Lazy constraints: 3\n",
"\n",
"Explored 1 nodes (219 simplex iterations) in 0.04 seconds (0.03 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 4: 6219 6241 6390 29853 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 163, time in user-callback 0.00 sec\n"
]
}
],
"source": [
"# Increase log verbosity, so that we can see what is MIPLearn doing\n",
"logging.getLogger(\"miplearn\").setLevel(logging.INFO)\n",
"\n",
"# Solve a new test instance\n",
"solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);"
]
},
{
"cell_type": "markdown",
"id": "79cc3e61-ee2b-4f18-82cb-373d55d67de6",
"metadata": {},
"source": [
"Finally, we solve the same instance, but using a regular solver, without ML prediction. We can see that a much larger number of lazy constraints are added during the optimization process itself. Additionally, the solver requires a larger number of iterations to find the optimal solution. There is not a significant difference in running time because of the small size of these instances."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a015c51c-091a-43b6-b761-9f3577fc083e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n",
"Model fingerprint: 0x04d7bec1\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Presolve time: 0.00s\n",
"Presolved: 50 rows, 1225 columns, 2450 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 4.0600000e+02 9.700000e+01 0.000000e+00 0s\n",
" 66 5.5880000e+03 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 66 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 5.588000000e+03\n",
"\n",
"User-callback calls 110, time in user-callback 0.00 sec\n",
"Set parameter PreCrush to value 1\n",
"Set parameter LazyConstraints to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"PreCrush 1\n",
"Threads 1\n",
"LazyConstraints 1\n",
"\n",
"Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n",
"Model fingerprint: 0x77a94572\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Found heuristic solution: objective 29695.000000\n",
"Presolve time: 0.00s\n",
"Presolved: 50 rows, 1225 columns, 2450 nonzeros\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"\n",
"Root relaxation: objective 5.588000e+03, 68 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 5588.00000 0 12 29695.0000 5588.00000 81.2% - 0s\n",
"Enforcing 9 subtour elimination constraints\n",
"Enforcing 9 subtour elimination constraints\n",
"H 0 0 24919.000000 5588.00000 77.6% - 0s\n",
" 0 0 5847.50000 0 14 24919.0000 5847.50000 76.5% - 0s\n",
"Enforcing 5 subtour elimination constraints\n",
"Enforcing 5 subtour elimination constraints\n",
"Enforcing 3 subtour elimination constraints\n",
"Enforcing 2 subtour elimination constraints\n",
"H 0 0 7764.0000000 5847.50000 24.7% - 0s\n",
"H 0 0 6684.0000000 5847.50000 12.5% - 0s\n",
" 0 0 6013.75000 0 11 6684.00000 6013.75000 10.0% - 0s\n",
"H 0 0 6340.0000000 6013.75000 5.15% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6095.00000 0 10 6340.00000 6095.00000 3.86% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6128.00000 0 - 6340.00000 6128.00000 3.34% - 0s\n",
" 0 0 6139.00000 0 6 6340.00000 6139.00000 3.17% - 0s\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6187.25000 0 17 6340.00000 6187.25000 2.41% - 0s\n",
"Enforcing 2 subtour elimination constraints\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6201.00000 0 15 6340.00000 6201.00000 2.19% - 0s\n",
" 0 0 6201.00000 0 15 6340.00000 6201.00000 2.19% - 0s\n",
"H 0 0 6219.0000000 6201.00000 0.29% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
" 0 0 infeasible 0 6219.00000 6219.00000 0.00% - 0s\n",
"\n",
"Cutting planes:\n",
" Lazy constraints: 2\n",
"\n",
"Explored 1 nodes (217 simplex iterations) in 0.12 seconds (0.05 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 6: 6219 6340 6684 ... 29695\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 216, time in user-callback 0.06 sec\n"
]
}
],
"source": [
"solver = LearningSolver(components=[]) # empty set of ML components\n",
"solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);"
]
},
{
"cell_type": "markdown",
"id": "432c99b2-67fe-409b-8224-ccef91de96d1",
"metadata": {},
"source": [
"## Learning user cuts\n",
"\n",
"The example above focused on lazy constraints. To enforce user cuts instead, the procedure is very similar, with the following changes:\n",
"\n",
"- Instead of `lazy_separate` and `lazy_enforce`, use `cuts_separate` and `cuts_enforce`\n",
"- Instead of `m.inner.cbGetSolution`, use `m.inner.cbGetNodeRel`\n",
"\n",
"For a complete example, see `build_stab_model_gurobipy`, `build_stab_model_pyomo` and `build_stab_model_jump`, which solves the maximum-weight stable set problem using user cut callbacks."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e6cb694d-8c43-410f-9a13-01bf9e0763b7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -33,6 +33,7 @@
] ]
}, },
{ {
"attachments": {},
"cell_type": "markdown", "cell_type": "markdown",
"id": "02f0a927", "id": "02f0a927",
"metadata": {}, "metadata": {},
@@ -44,53 +45,11 @@
"- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n", "- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n",
"- Julia version, compatible with the JuMP modeling language.\n", "- Julia version, compatible with the JuMP modeling language.\n",
"\n", "\n",
"In this tutorial, we will demonstrate how to use and install the Python/Gurobipy version of the package. The first step is to install Python 3.8+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:" "In this tutorial, we will demonstrate how to use and install the Python/Gurobipy version of the package. The first step is to install Python 3.9+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n",
] "\n",
}, "```\n",
{ "$ pip install MIPLearn~=0.4\n",
"cell_type": "code", "```"
"execution_count": 1,
"id": "cd8a69c1",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:18:02.381829278Z",
"start_time": "2023-06-06T20:18:02.381532300Z"
}
},
"outputs": [],
"source": [
"# !pip install MIPLearn==0.3.0"
]
},
{
"cell_type": "markdown",
"id": "e8274543",
"metadata": {},
"source": [
"In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also install a demo license for Gurobi, which should able to solve the small optimization problems in this tutorial. A license is required for solving larger-scale problems."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dcc8756c",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:18:15.537811992Z",
"start_time": "2023-06-06T20:18:13.449177860Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: gurobipy<10.1,>=10 in /home/axavier/Software/anaconda3/envs/miplearn/lib/python3.8/site-packages (10.0.1)\n"
]
}
],
"source": [
"!pip install 'gurobipy>=10,<10.1'"
] ]
}, },
{ {
@@ -162,7 +121,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 3, "execution_count": 1,
"id": "22a67170-10b4-43d3-8708-014d91141e73", "id": "22a67170-10b4-43d3-8708-014d91141e73",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -198,7 +157,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 4, "execution_count": 2,
"id": "2f67032f-0d74-4317-b45c-19da0ec859e9", "id": "2f67032f-0d74-4317-b45c-19da0ec859e9",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -214,6 +173,7 @@
"from miplearn.io import read_pkl_gz\n", "from miplearn.io import read_pkl_gz\n",
"from miplearn.solvers.gurobi import GurobiModel\n", "from miplearn.solvers.gurobi import GurobiModel\n",
"\n", "\n",
"\n",
"def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:\n", "def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:\n",
" if isinstance(data, str):\n", " if isinstance(data, str):\n",
" data = read_pkl_gz(data)\n", " data = read_pkl_gz(data)\n",
@@ -223,9 +183,7 @@
" x = model._x = model.addVars(n, vtype=GRB.BINARY, name=\"x\")\n", " x = model._x = model.addVars(n, vtype=GRB.BINARY, name=\"x\")\n",
" y = model._y = model.addVars(n, name=\"y\")\n", " y = model._y = model.addVars(n, name=\"y\")\n",
" model.setObjective(\n", " model.setObjective(\n",
" quicksum(\n", " quicksum(data.cfix[i] * x[i] + data.cvar[i] * y[i] for i in range(n))\n",
" data.cfix[i] * x[i] + data.cvar[i] * y[i] for i in range(n)\n",
" )\n",
" )\n", " )\n",
" model.addConstrs(y[i] <= data.pmax[i] * x[i] for i in range(n))\n", " model.addConstrs(y[i] <= data.pmax[i] * x[i] for i in range(n))\n",
" model.addConstrs(y[i] >= data.pmin[i] * x[i] for i in range(n))\n", " model.addConstrs(y[i] >= data.pmin[i] * x[i] for i in range(n))\n",
@@ -243,7 +201,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 5, "execution_count": 3,
"id": "2a896f47", "id": "2a896f47",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -256,11 +214,16 @@
"name": "stdout", "name": "stdout",
"output_type": "stream", "output_type": "stream",
"text": [ "text": [
"Restricted license - for non-production use only - expires 2024-10-28\n", "Set parameter Threads to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", "Read parameters from file gurobi.env\n",
"Restricted license - for non-production use only - expires 2026-11-23\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 7 rows, 6 columns and 15 nonzeros\n", "Optimize a model with 7 rows, 6 columns and 15 nonzeros\n",
"Model fingerprint: 0x58dfdd53\n", "Model fingerprint: 0x58dfdd53\n",
@@ -270,28 +233,28 @@
" Objective range [2e+00, 7e+02]\n", " Objective range [2e+00, 7e+02]\n",
" Bounds range [1e+00, 1e+00]\n", " Bounds range [1e+00, 1e+00]\n",
" RHS range [1e+02, 1e+02]\n", " RHS range [1e+02, 1e+02]\n",
"Presolve removed 2 rows and 1 columns\n", "Presolve removed 6 rows and 3 columns\n",
"Presolve time: 0.00s\n", "Presolve time: 0.00s\n",
"Presolved: 5 rows, 5 columns, 13 nonzeros\n", "Presolved: 1 rows, 3 columns, 3 nonzeros\n",
"Variable types: 0 continuous, 5 integer (3 binary)\n", "Variable types: 0 continuous, 3 integer (1 binary)\n",
"Found heuristic solution: objective 1400.0000000\n", "Found heuristic solution: objective 1990.0000000\n",
"\n", "\n",
"Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)\n", "Root relaxation: objective 1.320000e+03, 0 iterations, 0.00 seconds (0.00 work units)\n",
"\n", "\n",
" Nodes | Current Node | Objective Bounds | Work\n", " Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n", "\n",
" 0 0 1035.00000 0 1 1400.00000 1035.00000 26.1% - 0s\n",
" 0 0 1105.71429 0 1 1400.00000 1105.71429 21.0% - 0s\n",
"* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n", "* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n",
"\n", "\n",
"Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n", "Explored 1 nodes (0 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 12 (of 12 available processors)\n", "Thread count was 1 (of 20 available processors)\n",
"\n", "\n",
"Solution count 2: 1320 1400 \n", "Solution count 2: 1320 1990 \n",
"\n", "\n",
"Optimal solution found (tolerance 1.00e-04)\n", "Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n", "Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 541, time in user-callback 0.00 sec\n",
"obj = 1320.0\n", "obj = 1320.0\n",
"x = [-0.0, 1.0, 1.0]\n", "x = [-0.0, 1.0, 1.0]\n",
"y = [0.0, 60.0, 40.0]\n" "y = [0.0, 60.0, 40.0]\n"
@@ -351,7 +314,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 6, "execution_count": 4,
"id": "5eb09fab", "id": "5eb09fab",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -397,7 +360,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 7, "execution_count": 5,
"id": "6156752c", "id": "6156752c",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -424,7 +387,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 8, "execution_count": 6,
"id": "7623f002", "id": "7623f002",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -437,7 +400,7 @@
"from miplearn.collectors.basic import BasicCollector\n", "from miplearn.collectors.basic import BasicCollector\n",
"\n", "\n",
"bc = BasicCollector()\n", "bc = BasicCollector()\n",
"bc.collect(train_data, build_uc_model, n_jobs=4)" "bc.collect(train_data, build_uc_model)"
] ]
}, },
{ {
@@ -465,7 +428,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 9, "execution_count": 7,
"id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2", "id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -503,7 +466,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 10, "execution_count": 8,
"id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf", "id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -516,10 +479,13 @@
"name": "stdout", "name": "stdout",
"output_type": "stream", "output_type": "stream",
"text": [ "text": [
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xa8b70287\n", "Model fingerprint: 0xa8b70287\n",
@@ -536,15 +502,20 @@
" 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n", " 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n",
" 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n", " 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n",
"\n", "\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", "Solved in 1 iterations and 0.02 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n", "Optimal objective 8.290621916e+09\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "User-callback calls 59, time in user-callback 0.00 sec\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x4ccd7ae3\n", "Model fingerprint: 0x892e56b2\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n", "Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n", "Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n", " Matrix range [1e+00, 2e+06]\n",
@@ -552,35 +523,53 @@
" Bounds range [1e+00, 1e+00]\n", " Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n", " RHS range [3e+08, 3e+08]\n",
"\n", "\n",
"User MIP start produced solution with objective 8.30129e+09 (0.01s)\n", "User MIP start produced solution with objective 8.29824e+09 (0.00s)\n",
"User MIP start produced solution with objective 8.29184e+09 (0.01s)\n", "User MIP start produced solution with objective 8.29398e+09 (0.00s)\n",
"User MIP start produced solution with objective 8.29146e+09 (0.01s)\n", "User MIP start produced solution with objective 8.29153e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29146e+09 (0.01s)\n", "User MIP start produced solution with objective 8.29153e+09 (0.01s)\n",
"Loaded user MIP start with objective 8.29146e+09\n", "Loaded user MIP start with objective 8.29153e+09\n",
"\n", "\n",
"Presolve removed 500 rows and 0 columns\n",
"Presolve time: 0.00s\n", "Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", "Presolved: 501 rows, 1000 columns, 2000 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n", "Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n", "\n",
"Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", "Root relaxation: objective 8.290622e+09, 501 iterations, 0.00 seconds (0.02 work units)\n",
"\n", "\n",
" Nodes | Current Node | Objective Bounds | Work\n", " Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n", "\n",
" 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n", " 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 - 0 8.2915e+09 8.2907e+09 0.01% - 0s\n",
"\n", "\n",
"Cutting planes:\n", "Cutting planes:\n",
" Cover: 1\n", " Gomory: 1\n",
" Flow cover: 2\n", " RLT: 2\n",
"\n", "\n",
"Explored 1 nodes (512 simplex iterations) in 0.07 seconds (0.01 work units)\n", "Explored 1 nodes (550 simplex iterations) in 0.04 seconds (0.04 work units)\n",
"Thread count was 12 (of 12 available processors)\n", "Thread count was 1 (of 20 available processors)\n",
"\n", "\n",
"Solution count 3: 8.29146e+09 8.29184e+09 8.30129e+09 \n", "Solution count 4: 8.29153e+09 8.29398e+09 8.29695e+09 8.29824e+09 \n",
"\n", "\n",
"Optimal solution found (tolerance 1.00e-04)\n", "Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291459497797e+09, best bound 8.290645029670e+09, gap 0.0098%\n" "Best objective 8.291528276179e+09, best bound 8.290709658754e+09, gap 0.0099%\n",
"\n",
"User-callback calls 799, time in user-callback 0.00 sec\n"
] ]
},
{
"data": {
"text/plain": [
"(<miplearn.solvers.gurobi.GurobiModel at 0x7f2bcd72cfd0>,\n",
" {'WS: Count': 1, 'WS: Number of variables set': 477.0})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
} }
], ],
"source": [ "source": [
@@ -588,7 +577,7 @@
"\n", "\n",
"solver_ml = LearningSolver(components=[comp])\n", "solver_ml = LearningSolver(components=[comp])\n",
"solver_ml.fit(train_data)\n", "solver_ml.fit(train_data)\n",
"solver_ml.optimize(test_data[0], build_uc_model);" "solver_ml.optimize(test_data[0], build_uc_model)"
] ]
}, },
{ {
@@ -601,7 +590,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 11, "execution_count": 9,
"id": "2ff391ed-e855-4228-aa09-a7641d8c2893", "id": "2ff391ed-e855-4228-aa09-a7641d8c2893",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -614,10 +603,13 @@
"name": "stdout", "name": "stdout",
"output_type": "stream", "output_type": "stream",
"text": [ "text": [
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xa8b70287\n", "Model fingerprint: 0xa8b70287\n",
@@ -636,10 +628,15 @@
"\n", "\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n", "Optimal objective 8.290621916e+09\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "User-callback calls 59, time in user-callback 0.00 sec\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x4cbbf7c7\n", "Model fingerprint: 0x4cbbf7c7\n",
@@ -649,41 +646,52 @@
" Objective range [1e+00, 6e+07]\n", " Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n", " Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n", " RHS range [3e+08, 3e+08]\n",
"Presolve removed 500 rows and 0 columns\n",
"Presolve time: 0.00s\n", "Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", "Presolved: 501 rows, 1000 columns, 2000 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n", "Variable types: 500 continuous, 500 integer (500 binary)\n",
"Found heuristic solution: objective 9.757128e+09\n", "Found heuristic solution: objective 1.729688e+10\n",
"\n", "\n",
"Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", "Root relaxation: objective 8.290622e+09, 501 iterations, 0.00 seconds (0.02 work units)\n",
"\n", "\n",
" Nodes | Current Node | Objective Bounds | Work\n", " Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n", "\n",
" 0 0 8.2906e+09 0 1 9.7571e+09 8.2906e+09 15.0% - 0s\n", " 0 0 8.2906e+09 0 1 1.7297e+10 8.2906e+09 52.1% - 0s\n",
"H 0 0 8.298273e+09 8.2906e+09 0.09% - 0s\n", "H 0 0 8.298243e+09 8.2906e+09 0.09% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n", " 0 0 8.2907e+09 0 3 8.2982e+09 8.2907e+09 0.09% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2983e+09 8.2907e+09 0.09% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n",
"H 0 0 8.293980e+09 8.2907e+09 0.04% - 0s\n", "H 0 0 8.293980e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n", " 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n", " 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 1 8.2940e+09 8.2908e+09 0.04% - 0s\n", " 0 0 8.2907e+09 0 3 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", " 0 0 8.2907e+09 0 3 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", " 0 0 8.2907e+09 0 4 8.2940e+09 8.2907e+09 0.04% - 0s\n",
"H 0 0 8.291465e+09 8.2908e+09 0.01% - 0s\n", " 0 0 8.2907e+09 0 3 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2940e+09 8.2907e+09 0.04% - 0s\n",
"H 0 0 8.291961e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2920e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2920e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2920e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 2 8.2920e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2908e+09 0 3 8.2920e+09 8.2908e+09 0.01% - 0s\n",
" 0 0 8.2908e+09 0 5 8.2920e+09 8.2908e+09 0.01% - 0s\n",
" 0 0 8.2908e+09 0 5 8.2920e+09 8.2908e+09 0.01% - 0s\n",
" 0 2 8.2908e+09 0 5 8.2920e+09 8.2908e+09 0.01% - 0s\n",
"H 9 9 8.291298e+09 8.2908e+09 0.01% 1.4 0s\n",
"\n", "\n",
"Cutting planes:\n", "Cutting planes:\n",
" Gomory: 2\n", " MIR: 2\n",
" MIR: 1\n",
"\n", "\n",
"Explored 1 nodes (1031 simplex iterations) in 0.07 seconds (0.03 work units)\n", "Explored 10 nodes (759 simplex iterations) in 0.09 seconds (0.11 work units)\n",
"Thread count was 12 (of 12 available processors)\n", "Thread count was 1 (of 20 available processors)\n",
"\n", "\n",
"Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09 \n", "Solution count 6: 8.2913e+09 8.29196e+09 8.29398e+09 ... 1.72969e+10\n",
"\n", "\n",
"Optimal solution found (tolerance 1.00e-04)\n", "Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291465302389e+09, best bound 8.290781665333e+09, gap 0.0082%\n" "Best objective 8.291298126440e+09, best bound 8.290812450252e+09, gap 0.0059%\n",
"\n",
"User-callback calls 910, time in user-callback 0.00 sec\n"
] ]
} }
], ],
@@ -710,12 +718,12 @@
"source": [ "source": [
"## Accessing the solution\n", "## Accessing the solution\n",
"\n", "\n",
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. The optimal solutions were saved to HDF5 files in the train/test folders, and could be retrieved by reading theses files, but that is not very convenient. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver." "In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 12, "execution_count": 10,
"id": "67a6cd18", "id": "67a6cd18",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -728,10 +736,13 @@
"name": "stdout", "name": "stdout",
"output_type": "stream", "output_type": "stream",
"text": [ "text": [
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x19042f12\n", "Model fingerprint: 0x19042f12\n",
@@ -748,15 +759,20 @@
" 0 6.5917580e+09 5.627453e+04 0.000000e+00 0s\n", " 0 6.5917580e+09 5.627453e+04 0.000000e+00 0s\n",
" 1 8.2535968e+09 0.000000e+00 0.000000e+00 0s\n", " 1 8.2535968e+09 0.000000e+00 0.000000e+00 0s\n",
"\n", "\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", "Solved in 1 iterations and 0.00 seconds (0.00 work units)\n",
"Optimal objective 8.253596777e+09\n", "Optimal objective 8.253596777e+09\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "User-callback calls 59, time in user-callback 0.00 sec\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x8ee64638\n", "Model fingerprint: 0x6926c32f\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n", "Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n", "Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n", " Matrix range [1e+00, 2e+06]\n",
@@ -764,46 +780,44 @@
" Bounds range [1e+00, 1e+00]\n", " Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n", " RHS range [3e+08, 3e+08]\n",
"\n", "\n",
"User MIP start produced solution with objective 8.25814e+09 (0.01s)\n", "User MIP start produced solution with objective 8.25989e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25512e+09 (0.01s)\n", "User MIP start produced solution with objective 8.25699e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.04s)\n", "User MIP start produced solution with objective 8.25678e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.04s)\n", "User MIP start produced solution with objective 8.25668e+09 (0.05s)\n",
"Loaded user MIP start with objective 8.25459e+09\n", "User MIP start produced solution with objective 8.2554e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.25448e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.25448e+09 (0.05s)\n",
"Loaded user MIP start with objective 8.25448e+09\n",
"\n", "\n",
"Presolve time: 0.01s\n", "Presolve removed 500 rows and 0 columns\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", "Presolve time: 0.00s\n",
"Presolved: 501 rows, 1000 columns, 2000 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n", "Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n", "\n",
"Root relaxation: objective 8.253597e+09, 512 iterations, 0.00 seconds (0.00 work units)\n", "Root relaxation: objective 8.253597e+09, 501 iterations, 0.00 seconds (0.02 work units)\n",
"\n", "\n",
" Nodes | Current Node | Objective Bounds | Work\n", " Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n", "\n",
" 0 0 8.2536e+09 0 1 8.2546e+09 8.2536e+09 0.01% - 0s\n", " 0 0 8.2536e+09 0 1 8.2545e+09 8.2536e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 3 8.2546e+09 8.2537e+09 0.01% - 0s\n", "H 0 0 8.254435e+09 8.2536e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 1 8.2546e+09 8.2537e+09 0.01% - 0s\n", " 0 0 - 0 8.2544e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 4 8.2546e+09 8.2538e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 5 8.2546e+09 8.2538e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 6 8.2546e+09 8.2538e+09 0.01% - 0s\n",
"\n", "\n",
"Cutting planes:\n", "Cutting planes:\n",
" Cover: 1\n", " RLT: 2\n",
" MIR: 2\n",
" StrongCG: 1\n",
" Flow cover: 1\n",
"\n", "\n",
"Explored 1 nodes (575 simplex iterations) in 0.12 seconds (0.01 work units)\n", "Explored 1 nodes (503 simplex iterations) in 0.07 seconds (0.03 work units)\n",
"Thread count was 12 (of 12 available processors)\n", "Thread count was 1 (of 20 available processors)\n",
"\n", "\n",
"Solution count 3: 8.25459e+09 8.25512e+09 8.25814e+09 \n", "Solution count 7: 8.25443e+09 8.25448e+09 8.2554e+09 ... 8.25989e+09\n",
"\n", "\n",
"Optimal solution found (tolerance 1.00e-04)\n", "Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%\n", "Best objective 8.254434593504e+09, best bound 8.253676932849e+09, gap 0.0092%\n",
"obj = 8254590409.969726\n", "\n",
"User-callback calls 787, time in user-callback 0.00 sec\n",
"obj = 8254434593.503945\n",
"x = [1.0, 1.0, 0.0]\n", "x = [1.0, 1.0, 0.0]\n",
"y = [935662.0949263407, 1604270.0218116897, 0.0]\n" "y = [935662.09492646, 1604270.0218116897, 0.0]\n"
] ]
} }
], ],
@@ -841,7 +855,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.9.16" "version": "3.11.7"
} }
}, },
"nbformat": 4, "nbformat": 4,

View File

@@ -41,7 +41,7 @@
"In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Julia in your machine. See the [official Julia website for more instructions](https://julialang.org/downloads/). After Julia is installed, launch the Julia REPL, type `]` to enter package mode, then install MIPLearn:\n", "In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Julia in your machine. See the [official Julia website for more instructions](https://julialang.org/downloads/). After Julia is installed, launch the Julia REPL, type `]` to enter package mode, then install MIPLearn:\n",
"\n", "\n",
"```\n", "```\n",
"pkg> add MIPLearn@0.3\n", "pkg> add MIPLearn@0.4\n",
"```" "```"
] ]
}, },
@@ -592,7 +592,7 @@
"source": [ "source": [
"## Accessing the solution\n", "## Accessing the solution\n",
"\n", "\n",
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. The optimal solutions were saved to HDF5 files in the train/test folders, and could be retrieved by reading theses files, but that is not very convenient. In the following example, we show how to build and solve a JuMP model entirely in-memory, using our trained solver." "In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a JuMP model entirely in-memory, using our trained solver."
] ]
}, },
{ {

View File

@@ -33,6 +33,7 @@
] ]
}, },
{ {
"attachments": {},
"cell_type": "markdown", "cell_type": "markdown",
"id": "02f0a927", "id": "02f0a927",
"metadata": {}, "metadata": {},
@@ -44,53 +45,11 @@
"- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n", "- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n",
"- Julia version, compatible with the JuMP modeling language.\n", "- Julia version, compatible with the JuMP modeling language.\n",
"\n", "\n",
"In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Python 3.8+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:" "In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Python 3.9+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n",
] "\n",
}, "```\n",
{ "$ pip install MIPLearn~=0.4\n",
"cell_type": "code", "```"
"execution_count": 1,
"id": "cd8a69c1",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T19:57:33.202580815Z",
"start_time": "2023-06-06T19:57:33.198341886Z"
}
},
"outputs": [],
"source": [
"# !pip install MIPLearn==0.3.0"
]
},
{
"cell_type": "markdown",
"id": "e8274543",
"metadata": {},
"source": [
"In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also install a demo license for Gurobi, which should able to solve the small optimization problems in this tutorial. A license is required for solving larger-scale problems."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dcc8756c",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T19:57:35.756831801Z",
"start_time": "2023-06-06T19:57:33.201767088Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: gurobipy<10.1,>=10 in /home/axavier/Software/anaconda3/envs/miplearn/lib/python3.8/site-packages (10.0.1)\n"
]
}
],
"source": [
"!pip install 'gurobipy>=10,<10.1'"
] ]
}, },
{ {
@@ -162,7 +121,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 3, "execution_count": 1,
"id": "22a67170-10b4-43d3-8708-014d91141e73", "id": "22a67170-10b4-43d3-8708-014d91141e73",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -198,7 +157,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 4, "execution_count": 2,
"id": "2f67032f-0d74-4317-b45c-19da0ec859e9", "id": "2f67032f-0d74-4317-b45c-19da0ec859e9",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -248,7 +207,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 5, "execution_count": 3,
"id": "2a896f47", "id": "2a896f47",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -261,12 +220,19 @@
"name": "stdout", "name": "stdout",
"output_type": "stream", "output_type": "stream",
"text": [ "text": [
"Restricted license - for non-production use only - expires 2024-10-28\n", "Set parameter Threads to value 1\n",
"Read parameters from file gurobi.env\n",
"Restricted license - for non-production use only - expires 2026-11-23\n",
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n", "Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 7 rows, 6 columns and 15 nonzeros\n", "Optimize a model with 7 rows, 6 columns and 15 nonzeros\n",
"Model fingerprint: 0x15c7a953\n", "Model fingerprint: 0x15c7a953\n",
@@ -276,25 +242,23 @@
" Objective range [2e+00, 7e+02]\n", " Objective range [2e+00, 7e+02]\n",
" Bounds range [1e+00, 1e+00]\n", " Bounds range [1e+00, 1e+00]\n",
" RHS range [1e+02, 1e+02]\n", " RHS range [1e+02, 1e+02]\n",
"Presolve removed 2 rows and 1 columns\n", "Presolve removed 6 rows and 3 columns\n",
"Presolve time: 0.00s\n", "Presolve time: 0.00s\n",
"Presolved: 5 rows, 5 columns, 13 nonzeros\n", "Presolved: 1 rows, 3 columns, 3 nonzeros\n",
"Variable types: 0 continuous, 5 integer (3 binary)\n", "Variable types: 0 continuous, 3 integer (1 binary)\n",
"Found heuristic solution: objective 1400.0000000\n", "Found heuristic solution: objective 1990.0000000\n",
"\n", "\n",
"Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)\n", "Root relaxation: objective 1.320000e+03, 0 iterations, 0.00 seconds (0.00 work units)\n",
"\n", "\n",
" Nodes | Current Node | Objective Bounds | Work\n", " Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n", "\n",
" 0 0 1035.00000 0 1 1400.00000 1035.00000 26.1% - 0s\n",
" 0 0 1105.71429 0 1 1400.00000 1105.71429 21.0% - 0s\n",
"* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n", "* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n",
"\n", "\n",
"Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n", "Explored 1 nodes (0 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 12 (of 12 available processors)\n", "Thread count was 1 (of 20 available processors)\n",
"\n", "\n",
"Solution count 2: 1320 1400 \n", "Solution count 2: 1320 1990 \n",
"\n", "\n",
"Optimal solution found (tolerance 1.00e-04)\n", "Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n", "Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n",
@@ -359,7 +323,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 6, "execution_count": 4,
"id": "5eb09fab", "id": "5eb09fab",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -405,7 +369,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 7, "execution_count": 5,
"id": "6156752c", "id": "6156752c",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -432,7 +396,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 8, "execution_count": 6,
"id": "7623f002", "id": "7623f002",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -473,7 +437,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 9, "execution_count": 7,
"id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2", "id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -511,7 +475,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 10, "execution_count": 8,
"id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf", "id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -524,11 +488,16 @@
"name": "stdout", "name": "stdout",
"output_type": "stream", "output_type": "stream",
"text": [ "text": [
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n", "Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x5e67c6ee\n", "Model fingerprint: 0x5e67c6ee\n",
@@ -547,14 +516,19 @@
"\n", "\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n", "Optimal objective 8.290621916e+09\n",
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n", "Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xa4a7961e\n", "Model fingerprint: 0xff6a55c5\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n", "Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n", "Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n", " Matrix range [1e+00, 2e+06]\n",
@@ -562,37 +536,49 @@
" Bounds range [1e+00, 1e+00]\n", " Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n", " RHS range [3e+08, 3e+08]\n",
"\n", "\n",
"User MIP start produced solution with objective 8.30129e+09 (0.01s)\n", "User MIP start produced solution with objective 8.29153e+09 (0.00s)\n",
"User MIP start produced solution with objective 8.29184e+09 (0.01s)\n", "User MIP start produced solution with objective 8.29153e+09 (0.00s)\n",
"User MIP start produced solution with objective 8.29146e+09 (0.01s)\n", "Loaded user MIP start with objective 8.29153e+09\n",
"User MIP start produced solution with objective 8.29146e+09 (0.02s)\n",
"Loaded user MIP start with objective 8.29146e+09\n",
"\n", "\n",
"Presolve time: 0.01s\n", "Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n", "Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n", "\n",
"Root relaxation: objective 8.290622e+09, 512 iterations, 0.01 seconds (0.00 work units)\n", "Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n",
"\n", "\n",
" Nodes | Current Node | Objective Bounds | Work\n", " Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n", "\n",
" 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n", " 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 - 0 8.2915e+09 8.2907e+09 0.01% - 0s\n",
"\n", "\n",
"Cutting planes:\n", "Cutting planes:\n",
" Gomory: 1\n",
" Cover: 1\n", " Cover: 1\n",
" Flow cover: 2\n", " Flow cover: 2\n",
"\n", "\n",
"Explored 1 nodes (512 simplex iterations) in 0.09 seconds (0.01 work units)\n", "Explored 1 nodes (564 simplex iterations) in 0.03 seconds (0.01 work units)\n",
"Thread count was 12 (of 12 available processors)\n", "Thread count was 1 (of 20 available processors)\n",
"\n", "\n",
"Solution count 3: 8.29146e+09 8.29184e+09 8.30129e+09 \n", "Solution count 1: 8.29153e+09 \n",
"\n", "\n",
"Optimal solution found (tolerance 1.00e-04)\n", "Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291459497797e+09, best bound 8.290645029670e+09, gap 0.0098%\n", "Best objective 8.291528276179e+09, best bound 8.290729173948e+09, gap 0.0096%\n",
"WARNING: Cannot get reduced costs for MIP.\n", "WARNING: Cannot get reduced costs for MIP.\n",
"WARNING: Cannot get duals for MIP.\n" "WARNING: Cannot get duals for MIP.\n"
] ]
},
{
"data": {
"text/plain": [
"(<miplearn.solvers.pyomo.PyomoModel at 0x7fdb38952450>, {})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
} }
], ],
"source": [ "source": [
@@ -600,7 +586,7 @@
"\n", "\n",
"solver_ml = LearningSolver(components=[comp])\n", "solver_ml = LearningSolver(components=[comp])\n",
"solver_ml.fit(train_data)\n", "solver_ml.fit(train_data)\n",
"solver_ml.optimize(test_data[0], build_uc_model);" "solver_ml.optimize(test_data[0], build_uc_model)"
] ]
}, },
{ {
@@ -613,7 +599,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 11, "execution_count": 9,
"id": "2ff391ed-e855-4228-aa09-a7641d8c2893", "id": "2ff391ed-e855-4228-aa09-a7641d8c2893",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -626,11 +612,16 @@
"name": "stdout", "name": "stdout",
"output_type": "stream", "output_type": "stream",
"text": [ "text": [
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n", "Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x5e67c6ee\n", "Model fingerprint: 0x5e67c6ee\n",
@@ -640,7 +631,7 @@
" Bounds range [1e+00, 1e+00]\n", " Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n", " RHS range [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n", "Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.01s\n", "Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n", "Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n", "\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n", "Iteration Objective Primal Inf. Dual Inf. Time\n",
@@ -649,11 +640,16 @@
"\n", "\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n", "Optimal objective 8.290621916e+09\n",
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n", "Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x8a0f9587\n", "Model fingerprint: 0x8a0f9587\n",
@@ -682,31 +678,44 @@
" 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n", " 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n", " 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n", " 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 1 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", " 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n", " 0 0 8.2908e+09 0 3 8.2940e+09 8.2908e+09 0.04% - 0s\n",
"H 0 0 8.291465e+09 8.2908e+09 0.01% - 0s\n", " 0 0 8.2908e+09 0 3 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 2 8.2908e+09 0 3 8.2940e+09 8.2908e+09 0.04% - 0s\n",
"H 9 9 8.292471e+09 8.2908e+09 0.02% 1.3 0s\n",
"* 90 41 44 8.291525e+09 8.2908e+09 0.01% 1.5 0s\n",
"\n", "\n",
"Cutting planes:\n", "Cutting planes:\n",
" Gomory: 2\n", " Gomory: 1\n",
" MIR: 1\n", " Cover: 1\n",
" MIR: 2\n",
"\n", "\n",
"Explored 1 nodes (1025 simplex iterations) in 0.08 seconds (0.03 work units)\n", "Explored 91 nodes (1166 simplex iterations) in 0.06 seconds (0.05 work units)\n",
"Thread count was 12 (of 12 available processors)\n", "Thread count was 1 (of 20 available processors)\n",
"\n", "\n",
"Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09 \n", "Solution count 7: 8.29152e+09 8.29247e+09 8.29398e+09 ... 1.0319e+10\n",
"\n", "\n",
"Optimal solution found (tolerance 1.00e-04)\n", "Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291465302389e+09, best bound 8.290781665333e+09, gap 0.0082%\n", "Best objective 8.291524908632e+09, best bound 8.290823611882e+09, gap 0.0085%\n",
"WARNING: Cannot get reduced costs for MIP.\n", "WARNING: Cannot get reduced costs for MIP.\n",
"WARNING: Cannot get duals for MIP.\n" "WARNING: Cannot get duals for MIP.\n"
] ]
},
{
"data": {
"text/plain": [
"(<miplearn.solvers.pyomo.PyomoModel at 0x7fdb2f563f50>, {})"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
} }
], ],
"source": [ "source": [
"solver_baseline = LearningSolver(components=[])\n", "solver_baseline = LearningSolver(components=[])\n",
"solver_baseline.fit(train_data)\n", "solver_baseline.fit(train_data)\n",
"solver_baseline.optimize(test_data[0], build_uc_model);" "solver_baseline.optimize(test_data[0], build_uc_model)"
] ]
}, },
{ {
@@ -726,12 +735,12 @@
"source": [ "source": [
"## Accessing the solution\n", "## Accessing the solution\n",
"\n", "\n",
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. The optimal solutions were saved to HDF5 files in the train/test folders, and could be retrieved by reading theses files, but that is not very convenient. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver." "In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 12, "execution_count": 10,
"id": "67a6cd18", "id": "67a6cd18",
"metadata": { "metadata": {
"ExecuteTime": { "ExecuteTime": {
@@ -744,11 +753,16 @@
"name": "stdout", "name": "stdout",
"output_type": "stream", "output_type": "stream",
"text": [ "text": [
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n", "Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x2dfe4e1c\n", "Model fingerprint: 0x2dfe4e1c\n",
@@ -758,7 +772,7 @@
" Bounds range [1e+00, 1e+00]\n", " Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n", " RHS range [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n", "Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.01s\n", "Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n", "Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n", "\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n", "Iteration Objective Primal Inf. Dual Inf. Time\n",
@@ -767,14 +781,19 @@
"\n", "\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n", "Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.253596777e+09\n", "Optimal objective 8.253596777e+09\n",
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n", "Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n", "Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n", "\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n", "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n", "\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n", "Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x20637200\n", "Model fingerprint: 0xd941f1ed\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n", "Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n", "Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n", " Matrix range [1e+00, 2e+06]\n",
@@ -784,11 +803,11 @@
"\n", "\n",
"User MIP start produced solution with objective 8.25814e+09 (0.01s)\n", "User MIP start produced solution with objective 8.25814e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25512e+09 (0.01s)\n", "User MIP start produced solution with objective 8.25512e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.04s)\n", "User MIP start produced solution with objective 8.25448e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.04s)\n", "User MIP start produced solution with objective 8.25448e+09 (0.02s)\n",
"Loaded user MIP start with objective 8.25459e+09\n", "Loaded user MIP start with objective 8.25448e+09\n",
"\n", "\n",
"Presolve time: 0.01s\n", "Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n", "Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n", "Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n", "\n",
@@ -797,33 +816,25 @@
" Nodes | Current Node | Objective Bounds | Work\n", " Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n", " Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n", "\n",
" 0 0 8.2536e+09 0 1 8.2546e+09 8.2536e+09 0.01% - 0s\n", " 0 0 8.2536e+09 0 1 8.2545e+09 8.2536e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 3 8.2546e+09 8.2537e+09 0.01% - 0s\n", " 0 0 - 0 8.2545e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 1 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2537e+09 0 4 8.2546e+09 8.2537e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 4 8.2546e+09 8.2538e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 5 8.2546e+09 8.2538e+09 0.01% - 0s\n",
" 0 0 8.2538e+09 0 6 8.2546e+09 8.2538e+09 0.01% - 0s\n",
"\n", "\n",
"Cutting planes:\n", "Cutting planes:\n",
" Cover: 1\n", " Cover: 1\n",
" MIR: 2\n", " Flow cover: 2\n",
" StrongCG: 1\n",
" Flow cover: 1\n",
"\n", "\n",
"Explored 1 nodes (575 simplex iterations) in 0.11 seconds (0.01 work units)\n", "Explored 1 nodes (514 simplex iterations) in 0.03 seconds (0.01 work units)\n",
"Thread count was 12 (of 12 available processors)\n", "Thread count was 1 (of 20 available processors)\n",
"\n", "\n",
"Solution count 3: 8.25459e+09 8.25512e+09 8.25814e+09 \n", "Solution count 3: 8.25448e+09 8.25512e+09 8.25814e+09 \n",
"\n", "\n",
"Optimal solution found (tolerance 1.00e-04)\n", "Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%\n", "Best objective 8.254479145594e+09, best bound 8.253676932849e+09, gap 0.0097%\n",
"WARNING: Cannot get reduced costs for MIP.\n", "WARNING: Cannot get reduced costs for MIP.\n",
"WARNING: Cannot get duals for MIP.\n", "WARNING: Cannot get duals for MIP.\n",
"obj = 8254590409.96973\n", "obj = 8254479145.594172\n",
" x = [1.0, 1.0, 0.0, 1.0, 1.0]\n", " x = [1.0, 1.0, 0.0, 1.0, 1.0]\n",
" y = [935662.0949263407, 1604270.0218116897, 0.0, 1369560.835229226, 602828.5321028307]\n" " y = [935662.0949262811, 1604270.0218116897, 0.0, 1369560.835229226, 602828.5321028307]\n"
] ]
} }
], ],
@@ -861,7 +872,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.8.13" "version": "3.11.7"
} }
}, },
"nbformat": 4, "nbformat": 4,


@@ -0,0 +1 @@
Threads 1

BIN
miplearn/.io.py.swp Normal file

Binary file not shown.


@@ -54,7 +54,7 @@ class MinProbabilityClassifier(BaseEstimator):
y_pred = [] y_pred = []
for sample_idx in range(n_samples): for sample_idx in range(n_samples):
yi = float("nan") yi = float("nan")
for (class_idx, class_val) in enumerate(self.classes_): for class_idx, class_val in enumerate(self.classes_):
if y_proba[sample_idx, class_idx] >= self.thresholds[class_idx]: if y_proba[sample_idx, class_idx] >= self.thresholds[class_idx]:
yi = class_val yi = class_val
y_pred.append(yi) y_pred.append(yi)


@@ -4,9 +4,12 @@
import json import json
import os import os
import sys
from io import StringIO from io import StringIO
from os.path import exists from os.path import exists
from typing import Callable, List from typing import Callable, List, Any
import traceback
from ..h5 import H5File from ..h5 import H5File
from ..io import _RedirectOutput, gzip, _to_h5_filename from ..io import _RedirectOutput, gzip, _to_h5_filename
@@ -14,73 +17,87 @@ from ..parallel import p_umap
class BasicCollector: class BasicCollector:
def __init__(
self,
skip_lp: bool = False,
write_mps: bool = True,
write_log: bool = True,
) -> None:
self.skip_lp = skip_lp
self.write_mps = write_mps
self.write_log = write_log
def collect( def collect(
self, self,
filenames: List[str], filenames: List[str],
build_model: Callable, build_model: Callable,
n_jobs: int = 1, n_jobs: int = 1,
progress: bool = False, progress: bool = False,
verbose: bool = False,
) -> None: ) -> None:
def _collect(data_filename): def _collect(data_filename: str) -> None:
h5_filename = _to_h5_filename(data_filename) try:
mps_filename = h5_filename.replace(".h5", ".mps") h5_filename = _to_h5_filename(data_filename)
mps_filename = h5_filename.replace(".h5", ".mps")
log_filename = h5_filename.replace(".h5", ".h5.log")
if exists(h5_filename): if exists(h5_filename):
# Try to read optimal solution # Try to read optimal solution
mip_var_values = None mip_var_values = None
try: try:
with H5File(h5_filename, "r") as h5: with H5File(h5_filename, "r") as h5:
mip_var_values = h5.get_array("mip_var_values") mip_var_values = h5.get_array("mip_var_values")
except: except:
pass pass
if mip_var_values is None: if mip_var_values is None:
print(f"Removing empty/corrupted h5 file: {h5_filename}") print(f"Removing empty/corrupted h5 file: {h5_filename}")
os.remove(h5_filename) os.remove(h5_filename)
else: else:
return return
with H5File(h5_filename, "w") as h5: with H5File(h5_filename, "w") as h5:
streams = [StringIO()] h5.put_scalar("data_filename", data_filename)
with _RedirectOutput(streams): streams: List[Any] = [StringIO()]
# Load and extract static features if verbose:
model = build_model(data_filename) streams += [sys.stdout]
model.extract_after_load(h5) with _RedirectOutput(streams):
# Load and extract static features
model = build_model(data_filename)
model.extract_after_load(h5)
# Solve LP relaxation if not self.skip_lp:
relaxed = model.relax() # Solve LP relaxation
relaxed.optimize() relaxed = model.relax()
relaxed.extract_after_lp(h5) relaxed.optimize()
relaxed.extract_after_lp(h5)
# Solve MIP # Solve MIP
model.optimize() model.optimize()
model.extract_after_mip(h5) model.extract_after_mip(h5)
# Add lazy constraints to model if self.write_mps:
if ( # Add lazy constraints to model
hasattr(model, "fix_violations") model._lazy_enforce_collected()
and model.fix_violations is not None
):
model.fix_violations(model, model.violations_, "aot")
h5.put_scalar(
"mip_constr_violations", json.dumps(model.violations_)
)
# Save MPS file # Save MPS file
model.write(mps_filename) model.write(mps_filename)
gzip(mps_filename) gzip(mps_filename)
h5.put_scalar("mip_log", streams[0].getvalue()) log = streams[0].getvalue()
h5.put_scalar("mip_log", log)
if self.write_log:
with open(log_filename, "w") as log_file:
log_file.write(log)
except:
print(f"Error processing: data_filename")
traceback.print_exc()
if n_jobs > 1: p_umap(
p_umap( _collect,
_collect, filenames,
filenames, num_cpus=n_jobs,
num_cpus=n_jobs, desc="collect",
desc="collect", smoothing=0,
smoothing=0, disable=not progress,
disable=not progress, )
)
else:
for filename in filenames:
_collect(filename)
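For reference, a minimal usage sketch of the options introduced above (skip_lp, write_mps, write_log, verbose). The instance filename and model builder are placeholders; the import path is assumed from the package layout visible in this diff.

    from miplearn.collectors.basic import BasicCollector

    # Skip the LP relaxation and MPS export; still write the solver log
    # next to each HDF5 file.
    collector = BasicCollector(skip_lp=True, write_mps=False, write_log=True)
    collector.collect(
        ["uc/train/00000.pkl.gz"],  # placeholder instance files
        build_uc_model,             # placeholder model builder
        n_jobs=4,
        progress=True,
        verbose=False,
    )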


@@ -1,117 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from io import StringIO
from typing import Callable
import gurobipy as gp
import numpy as np
from gurobipy import GRB, LinExpr
from ..h5 import H5File
from ..io import _RedirectOutput
class LazyCollector:
def __init__(
self,
min_constrs: int = 100_000,
time_limit: float = 900,
) -> None:
self.min_constrs = min_constrs
self.time_limit = time_limit
def collect(
self, data_filename: str, build_model: Callable, tol: float = 1e-6
) -> None:
h5_filename = f"{data_filename}.h5"
with H5File(h5_filename, "r+") as h5:
streams = [StringIO()]
lazy = None
with _RedirectOutput(streams):
slacks = h5.get_array("mip_constr_slacks")
assert slacks is not None
# Check minimum problem size
if len(slacks) < self.min_constrs:
print("Problem is too small. Skipping.")
h5.put_array("mip_constr_lazy", np.zeros(len(slacks)))
return
# Load model
print("Loading model...")
model = build_model(data_filename)
model.params.LazyConstraints = True
model.params.timeLimit = self.time_limit
gp_constrs = np.array(model.getConstrs())
gp_vars = np.array(model.getVars())
# Load constraints
lhs = h5.get_sparse("static_constr_lhs")
rhs = h5.get_array("static_constr_rhs")
sense = h5.get_array("static_constr_sense")
assert lhs is not None
assert rhs is not None
assert sense is not None
lhs_csr = lhs.tocsr()
lhs_csc = lhs.tocsc()
constr_idx = np.array(range(len(rhs)))
lazy = np.zeros(len(rhs))
# Drop loose constraints
selected = (slacks > 0) & ((sense == b"<") | (sense == b">"))
loose_constrs = gp_constrs[selected]
print(
f"Removing {len(loose_constrs):,d} constraints (out of {len(rhs):,d})..."
)
model.remove(list(loose_constrs))
# Filter to constraints that were dropped
lhs_csr = lhs_csr[selected, :]
lhs_csc = lhs_csc[selected, :]
rhs = rhs[selected]
sense = sense[selected]
constr_idx = constr_idx[selected]
lazy[selected] = 1
# Load warm start
var_names = h5.get_array("static_var_names")
var_values = h5.get_array("mip_var_values")
assert var_values is not None
assert var_names is not None
for (var_idx, var_name) in enumerate(var_names):
var = model.getVarByName(var_name.decode())
var.start = var_values[var_idx]
print("Solving MIP with lazy constraints callback...")
def callback(model: gp.Model, where: int) -> None:
assert rhs is not None
assert lazy is not None
assert sense is not None
if where == GRB.Callback.MIPSOL:
x_val = np.array(model.cbGetSolution(model.getVars()))
slack = lhs_csc * x_val - rhs
slack[sense == b">"] *= -1
is_violated = slack > tol
for (j, rhs_j) in enumerate(rhs):
if is_violated[j]:
lazy[constr_idx[j]] = 0
expr = LinExpr(
lhs_csr[j, :].data, gp_vars[lhs_csr[j, :].indices]
)
if sense[j] == b"<":
model.cbLazy(expr <= rhs_j)
elif sense[j] == b">":
model.cbLazy(expr >= rhs_j)
else:
raise RuntimeError(f"Unknown sense: {sense[j]}")
model.optimize(callback)
print(f"Marking {lazy.sum():,.0f} constraints as lazy...")
h5.put_array("mip_constr_lazy", lazy)
h5.put_scalar("mip_constr_lazy_log", streams[0].getvalue())


@@ -0,0 +1,35 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import json
import logging
from typing import Dict, Any, List
from miplearn.components.cuts.mem import convert_lists_to_tuples
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
class ExpertCutsComponent:
def fit(
self,
_: List[str],
) -> None:
pass
def before_mip(
self,
test_h5: str,
model: AbstractModel,
stats: Dict[str, Any],
) -> None:
with H5File(test_h5, "r") as h5:
cuts_str = h5.get_scalar("mip_cuts")
assert cuts_str is not None
assert isinstance(cuts_str, str)
cuts = list(set(convert_lists_to_tuples(json.loads(cuts_str))))
model.set_cuts(cuts)
stats["Cuts: AOT"] = len(cuts)


@@ -0,0 +1,113 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import json
import logging
from typing import List, Dict, Any, Hashable
import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
def convert_lists_to_tuples(obj: Any) -> Any:
if isinstance(obj, list):
return tuple(convert_lists_to_tuples(item) for item in obj)
elif isinstance(obj, dict):
return {key: convert_lists_to_tuples(value) for key, value in obj.items()}
else:
return obj
class _BaseMemorizingConstrComponent:
def __init__(self, clf: Any, extractor: FeaturesExtractor, field: str) -> None:
self.clf = clf
self.extractor = extractor
self.constrs_: List[Hashable] = []
self.n_features_: int = 0
self.n_targets_: int = 0
self.field = field
def fit(
self,
train_h5: List[str],
) -> None:
logger.info("Reading training data...")
n_samples = len(train_h5)
x, y, constrs, n_features = [], [], [], None
constr_to_idx: Dict[Hashable, int] = {}
for h5_filename in train_h5:
with H5File(h5_filename, "r") as h5:
# Store constraints
sample_constrs_str = h5.get_scalar(self.field)
assert sample_constrs_str is not None
assert isinstance(sample_constrs_str, str)
sample_constrs = convert_lists_to_tuples(json.loads(sample_constrs_str))
y_sample = []
for c in sample_constrs:
if c not in constr_to_idx:
constr_to_idx[c] = len(constr_to_idx)
constrs.append(c)
y_sample.append(constr_to_idx[c])
y.append(y_sample)
# Extract features
x_sample = self.extractor.get_instance_features(h5)
assert len(x_sample.shape) == 1
if n_features is None:
n_features = len(x_sample)
else:
assert len(x_sample) == n_features
x.append(x_sample)
logger.info("Constructing matrices...")
assert n_features is not None
self.n_features_ = n_features
self.constrs_ = constrs
self.n_targets_ = len(constr_to_idx)
x_np = np.vstack(x)
assert x_np.shape == (n_samples, n_features)
y_np = MultiLabelBinarizer().fit_transform(y)
assert y_np.shape == (n_samples, self.n_targets_)
logger.info(
f"Dataset has {n_samples:,d} samples, "
f"{n_features:,d} features and {self.n_targets_:,d} targets"
)
logger.info("Training classifier...")
self.clf.fit(x_np, y_np)
def predict(
self,
msg: str,
test_h5: str,
) -> List[Hashable]:
with H5File(test_h5, "r") as h5:
x_sample = self.extractor.get_instance_features(h5)
assert x_sample.shape == (self.n_features_,)
x_sample = x_sample.reshape(1, -1)
logger.info(msg)
y = self.clf.predict(x_sample)
assert y.shape == (1, self.n_targets_)
y = y.reshape(-1)
return [self.constrs_[i] for (i, yi) in enumerate(y) if yi > 0.5]
class MemorizingCutsComponent(_BaseMemorizingConstrComponent):
def __init__(self, clf: Any, extractor: FeaturesExtractor) -> None:
super().__init__(clf, extractor, "mip_cuts")
def before_mip(
self,
test_h5: str,
model: AbstractModel,
stats: Dict[str, Any],
) -> None:
assert self.constrs_ is not None
cuts = self.predict("Predicting cutting planes...", test_h5)
model.set_cuts(cuts)
stats["Cuts: AOT"] = len(cuts)


@@ -1,43 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import json
from typing import Any, Dict, List
import gurobipy as gp
from ..h5 import H5File
class ExpertLazyComponent:
def __init__(self) -> None:
pass
def fit(self, train_h5: List[str]) -> None:
pass
def before_mip(self, test_h5: str, model: gp.Model, stats: Dict[str, Any]) -> None:
with H5File(test_h5, "r") as h5:
constr_names = h5.get_array("static_constr_names")
constr_lazy = h5.get_array("mip_constr_lazy")
constr_violations = h5.get_scalar("mip_constr_violations")
assert constr_names is not None
assert constr_violations is not None
# Static lazy constraints
n_static_lazy = 0
if constr_lazy is not None:
for (constr_idx, constr_name) in enumerate(constr_names):
if constr_lazy[constr_idx]:
constr = model.getConstrByName(constr_name.decode())
constr.lazy = 3
n_static_lazy += 1
stats.update({"Static lazy constraints": n_static_lazy})
# Dynamic lazy constraints
if hasattr(model, "_fix_violations"):
violations = json.loads(constr_violations)
model._fix_violations(model, violations, "aot")
stats.update({"Dynamic lazy constraints": len(violations)})


@@ -0,0 +1,36 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import json
import logging
from typing import Dict, Any, List
from miplearn.components.cuts.mem import convert_lists_to_tuples
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
class ExpertLazyComponent:
def fit(
self,
_: List[str],
) -> None:
pass
def before_mip(
self,
test_h5: str,
model: AbstractModel,
stats: Dict[str, Any],
) -> None:
with H5File(test_h5, "r") as h5:
violations_str = h5.get_scalar("mip_lazy")
assert violations_str is not None
assert isinstance(violations_str, str)
violations = list(set(convert_lists_to_tuples(json.loads(violations_str))))
logger.info(f"Enforcing {len(violations)} constraints ahead-of-time...")
model.lazy_enforce(violations)
stats["Lazy Constraints: AOT"] = len(violations)


@@ -0,0 +1,31 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
from typing import List, Dict, Any, Hashable
from miplearn.components.cuts.mem import (
_BaseMemorizingConstrComponent,
)
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
class MemorizingLazyComponent(_BaseMemorizingConstrComponent):
def __init__(self, clf: Any, extractor: FeaturesExtractor) -> None:
super().__init__(clf, extractor, "mip_lazy")
def before_mip(
self,
test_h5: str,
model: AbstractModel,
stats: Dict[str, Any],
) -> None:
assert self.constrs_ is not None
violations = self.predict("Predicting violated lazy constraints...", test_h5)
logger.info(f"Enforcing {len(violations)} constraints ahead-of-time...")
model.lazy_enforce(violations)
stats["Lazy Constraints: AOT"] = len(violations)


@@ -1,29 +1,53 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization # MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved. # Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details. # Released under the modified BSD license. See COPYING.md for more details.
from typing import Tuple from typing import Tuple, List
import numpy as np import numpy as np
from miplearn.h5 import H5File from miplearn.h5 import H5File
def _extract_bin_var_names_values( def _extract_var_names_values(
h5: H5File, h5: H5File,
selected_var_types: List[bytes],
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]: ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
bin_var_names, bin_var_indices = _extract_bin_var_names(h5) bin_var_names, bin_var_indices = _extract_var_names(h5, selected_var_types)
var_values = h5.get_array("mip_var_values") var_values = h5.get_array("mip_var_values")
assert var_values is not None assert var_values is not None
bin_var_values = var_values[bin_var_indices].astype(int) bin_var_values = var_values[bin_var_indices].astype(int)
return bin_var_names, bin_var_values, bin_var_indices return bin_var_names, bin_var_values, bin_var_indices
def _extract_bin_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]: def _extract_var_names(
h5: H5File,
selected_var_types: List[bytes],
) -> Tuple[np.ndarray, np.ndarray]:
var_types = h5.get_array("static_var_types") var_types = h5.get_array("static_var_types")
var_names = h5.get_array("static_var_names") var_names = h5.get_array("static_var_names")
assert var_types is not None assert var_types is not None
assert var_names is not None assert var_names is not None
bin_var_indices = np.where(var_types == b"B")[0] bin_var_indices = np.where(np.isin(var_types, selected_var_types))[0]
bin_var_names = var_names[bin_var_indices] bin_var_names = var_names[bin_var_indices]
assert len(bin_var_names.shape) == 1 assert len(bin_var_names.shape) == 1
return bin_var_names, bin_var_indices return bin_var_names, bin_var_indices
def _extract_bin_var_names_values(
h5: H5File,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
return _extract_var_names_values(h5, [b"B"])
def _extract_bin_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]:
return _extract_var_names(h5, [b"B"])
def _extract_int_var_names_values(
h5: H5File,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
return _extract_var_names_values(h5, [b"B", b"I"])
def _extract_int_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]:
return _extract_var_names(h5, [b"B", b"I"])
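An illustrative read through the generalized helpers above, pulling the names and MIP values of all integral variables (types B and I) from a solved instance. The helper is module-private; its import path and the filename are assumptions.

    from miplearn.components.primal import _extract_int_var_names_values  # assumed path
    from miplearn.h5 import H5File

    with H5File("uc/train/00000.h5", "r") as h5:  # placeholder file
        names, values, indices = _extract_int_var_names_values(h5)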


@@ -71,7 +71,7 @@ class EnforceProximity(PrimalComponentAction):
constr_lhs = [] constr_lhs = []
constr_vars = [] constr_vars = []
constr_rhs = 0.0 constr_rhs = 0.0
for (i, var_name) in enumerate(var_names): for i, var_name in enumerate(var_names):
if np.isnan(var_values[i]): if np.isnan(var_values[i]):
continue continue
constr_lhs.append(1.0 if var_values[i] < 0.5 else -1.0) constr_lhs.append(1.0 if var_values[i] < 0.5 else -1.0)


@@ -5,7 +5,7 @@
import logging import logging
from typing import Any, Dict, List from typing import Any, Dict, List
from . import _extract_bin_var_names_values from . import _extract_int_var_names_values
from .actions import PrimalComponentAction from .actions import PrimalComponentAction
from ...solvers.abstract import AbstractModel from ...solvers.abstract import AbstractModel
from ...h5 import H5File from ...h5 import H5File
@@ -28,5 +28,5 @@ class ExpertPrimalComponent:
self, test_h5: str, model: AbstractModel, stats: Dict[str, Any] self, test_h5: str, model: AbstractModel, stats: Dict[str, Any]
) -> None: ) -> None:
with H5File(test_h5, "r") as h5: with H5File(test_h5, "r") as h5:
names, values, _ = _extract_bin_var_names_values(h5) names, values, _ = _extract_int_var_names_values(h5)
self.action.perform(model, names, values.reshape(1, -1), stats) self.action.perform(model, names, values.reshape(1, -1), stats)


@@ -91,7 +91,7 @@ class IndependentVarsPrimalComponent:
logger.info(f"Training {n_bin_vars} classifiers...") logger.info(f"Training {n_bin_vars} classifiers...")
self.clf_ = {} self.clf_ = {}
for (var_idx, var_name) in enumerate(self.bin_var_names_): for var_idx, var_name in enumerate(self.bin_var_names_):
self.clf_[var_name] = self.clone_fn(self.base_clf) self.clf_[var_name] = self.clone_fn(self.base_clf)
self.clf_[var_name].fit( self.clf_[var_name].fit(
x_np[var_idx::n_bin_vars, :], y_np[var_idx::n_bin_vars] x_np[var_idx::n_bin_vars, :], y_np[var_idx::n_bin_vars]
@@ -117,7 +117,7 @@ class IndependentVarsPrimalComponent:
# Predict optimal solution # Predict optimal solution
logger.info("Predicting warm starts...") logger.info("Predicting warm starts...")
y_pred = [] y_pred = []
for (var_idx, var_name) in enumerate(self.bin_var_names_): for var_idx, var_name in enumerate(self.bin_var_names_):
x_var = x_sample[var_idx, :].reshape(1, -1) x_var = x_sample[var_idx, :].reshape(1, -1)
y_var = self.clf_[var_name].predict(x_var) y_var = self.clf_[var_name].predict(x_var)
assert y_var.shape == (1,) assert y_var.shape == (1,)


@@ -25,7 +25,8 @@ class ExpertBranchPriorityComponent:
assert var_priority is not None assert var_priority is not None
assert var_names is not None assert var_names is not None
for (var_idx, var_name) in enumerate(var_names): for var_idx, var_name in enumerate(var_names):
if np.isfinite(var_priority[var_idx]): if np.isfinite(var_priority[var_idx]):
var = model.getVarByName(var_name.decode()) var = model.getVarByName(var_name.decode())
var.branchPriority = int(log(1 + var_priority[var_idx])) assert var is not None, f"unknown var: {var_name}"
var.BranchPriority = int(log(1 + var_priority[var_idx]))


@@ -22,7 +22,7 @@ class AlvLouWeh2017Extractor(FeaturesExtractor):
self.with_m3 = with_m3 self.with_m3 = with_m3
def get_instance_features(self, h5: H5File) -> np.ndarray: def get_instance_features(self, h5: H5File) -> np.ndarray:
raise NotImplemented() raise NotImplementedError()
def get_var_features(self, h5: H5File) -> np.ndarray: def get_var_features(self, h5: H5File) -> np.ndarray:
""" """
@@ -197,7 +197,7 @@ class AlvLouWeh2017Extractor(FeaturesExtractor):
return features return features
def get_constr_features(self, h5: H5File) -> np.ndarray: def get_constr_features(self, h5: H5File) -> np.ndarray:
raise NotImplemented() raise NotImplementedError()
def _fix_infinity(m: Optional[np.ndarray]) -> None: def _fix_infinity(m: Optional[np.ndarray]) -> None:


@@ -31,9 +31,9 @@ class H5FieldsExtractor(FeaturesExtractor):
data = h5.get_scalar(field) data = h5.get_scalar(field)
assert data is not None assert data is not None
x.append(data) x.append(data)
x = np.hstack(x) x_np = np.hstack(x)
assert len(x.shape) == 1 assert len(x_np.shape) == 1
return x return x_np
def get_var_features(self, h5: H5File) -> np.ndarray: def get_var_features(self, h5: H5File) -> np.ndarray:
var_types = h5.get_array("static_var_types") var_types = h5.get_array("static_var_types")
@@ -51,13 +51,14 @@ class H5FieldsExtractor(FeaturesExtractor):
raise Exception("No constr fields provided") raise Exception("No constr fields provided")
return self._extract(h5, self.constr_fields, n_constr) return self._extract(h5, self.constr_fields, n_constr)
def _extract(self, h5, fields, n_expected): def _extract(self, h5: H5File, fields: List[str], n_expected: int) -> np.ndarray:
x = [] x = []
for field in fields: for field in fields:
try: try:
data = h5.get_array(field) data = h5.get_array(field)
except ValueError: except ValueError:
v = h5.get_scalar(field) v = h5.get_scalar(field)
assert v is not None
data = np.repeat(v, n_expected) data = np.repeat(v, n_expected)
assert data is not None assert data is not None
assert len(data.shape) == 1 assert len(data.shape) == 1


@@ -68,7 +68,7 @@ class H5File:
return return
self._assert_is_array(value) self._assert_is_array(value)
if value.dtype.kind == "f": if value.dtype.kind == "f":
value = value.astype("float32") value = value.astype("float64")
if key in self.file: if key in self.file:
del self.file[key] del self.file[key]
return self.file.create_dataset(key, data=value, compression="gzip") return self.file.create_dataset(key, data=value, compression="gzip")
@@ -111,7 +111,7 @@ class H5File:
), f"bytes expected; found: {value.__class__}" # type: ignore ), f"bytes expected; found: {value.__class__}" # type: ignore
self.put_array(key, np.frombuffer(value, dtype="uint8")) self.put_array(key, np.frombuffer(value, dtype="uint8"))
def close(self): def close(self) -> None:
self.file.close() self.file.close()
def __enter__(self) -> "H5File": def __enter__(self) -> "H5File":


@@ -86,7 +86,11 @@ def read_pkl_gz(filename: str) -> Any:
def _to_h5_filename(data_filename: str) -> str: def _to_h5_filename(data_filename: str) -> str:
output = f"{data_filename}.h5" output = f"{data_filename}.h5"
output = output.replace(".pkl.gz.h5", ".h5") output = output.replace(".gz.h5", ".h5")
output = output.replace(".pkl.h5", ".h5") output = output.replace(".csv.h5", ".h5")
output = output.replace(".jld2.h5", ".h5") output = output.replace(".jld2.h5", ".h5")
output = output.replace(".json.h5", ".h5")
output = output.replace(".lp.h5", ".h5")
output = output.replace(".mps.h5", ".h5")
output = output.replace(".pkl.h5", ".h5")
return output return output
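The effect of the expanded suffix list above, traced on a few sample paths. A sketch only: _to_h5_filename is a private helper of miplearn.io, per the relative imports shown earlier in this diff.

    from miplearn.io import _to_h5_filename

    # ".pkl.gz" is stripped in two passes: first ".gz.h5", then ".pkl.h5".
    assert _to_h5_filename("uc/train/00000.pkl.gz") == "uc/train/00000.h5"
    assert _to_h5_filename("instances/model.mps") == "instances/model.h5"
    assert _to_h5_filename("instances/data.json") == "instances/data.h5"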


@@ -1,3 +1,28 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization # MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved. # Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details. # Released under the modified BSD license. See COPYING.md for more details.
from typing import Any, Optional
import gurobipy as gp
from pyomo import environ as pe
def _gurobipy_set_params(model: gp.Model, params: Optional[dict[str, Any]]) -> None:
assert isinstance(model, gp.Model)
if params is not None:
for param_name, param_value in params.items():
setattr(model.params, param_name, param_value)
def _pyomo_set_params(
model: pe.ConcreteModel,
params: Optional[dict[str, Any]],
solver: str,
) -> None:
assert (
solver == "gurobi_persistent"
), "setting parameters is only supported with gurobi_persistent"
if solver == "gurobi_persistent" and params is not None:
for param_name, param_value in params.items():
model.solver.set_gurobi_param(param_name, param_value)
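A small sketch of how the gurobipy helper above consumes a params dict; Threads and TimeLimit are standard Gurobi parameter names, and the helper simply assigns them via setattr as shown.

    import gurobipy as gp

    model = gp.Model()
    _gurobipy_set_params(model, {"Threads": 1, "TimeLimit": 3600.0})
    assert model.params.Threads == 1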


@@ -34,19 +34,10 @@ class BinPackData:
class BinPackGenerator: class BinPackGenerator:
"""Random instance generator for the bin packing problem. """Random instance generator for the bin packing problem.
If `fix_items=False`, the class samples the user-provided probability distributions Generates instances by sampling the user-provided probability distributions
n, sizes and capacity to decide, respectively, the number of items, the sizes of n, sizes and capacity to decide, respectively, the number of items, the sizes of
the items and capacity of the bin. All values are sampled independently. the items and capacity of the bin. All values are sampled independently.
If `fix_items=True`, the class creates a reference instance, using the method
previously described, then generates additional instances by perturbing its item
sizes and bin capacity. More specifically, the sizes of the items are set to `s_i
* gamma_i` where `s_i` is the size of the i-th item in the reference instance and
`gamma_i` is sampled from `sizes_jitter`. Similarly, the bin capacity is set to `B *
beta`, where `B` is the reference bin capacity and `beta` is sampled from
`capacity_jitter`. The number of items remains the same across all generated
instances.
Args Args
---- ----
n n
@@ -55,13 +46,6 @@ class BinPackGenerator:
Probability distribution for the item sizes. Probability distribution for the item sizes.
capacity capacity
Probability distribution for the bin capacity. Probability distribution for the bin capacity.
sizes_jitter
Probability distribution for the item size randomization.
capacity_jitter
Probability distribution for the bin capacity.
fix_items
If `True`, generates a reference instance, then applies some perturbation to it.
If `False`, generates completely different instances.
""" """
def __init__( def __init__(
@@ -69,17 +53,10 @@ class BinPackGenerator:
n: rv_frozen, n: rv_frozen,
sizes: rv_frozen, sizes: rv_frozen,
capacity: rv_frozen, capacity: rv_frozen,
sizes_jitter: rv_frozen,
capacity_jitter: rv_frozen,
fix_items: bool,
) -> None: ) -> None:
self.n = n self.n = n
self.sizes = sizes self.sizes = sizes
self.capacity = capacity self.capacity = capacity
self.sizes_jitter = sizes_jitter
self.capacity_jitter = capacity_jitter
self.fix_items = fix_items
self.ref_data: Optional[BinPackData] = None
def generate(self, n_samples: int) -> List[BinPackData]: def generate(self, n_samples: int) -> List[BinPackData]:
"""Generates random instances. """Generates random instances.
@@ -91,25 +68,65 @@ class BinPackGenerator:
""" """
def _sample() -> BinPackData: def _sample() -> BinPackData:
if self.ref_data is None: n = self.n.rvs()
n = self.n.rvs() sizes = self.sizes.rvs(n)
sizes = self.sizes.rvs(n) capacity = self.capacity.rvs()
capacity = self.capacity.rvs()
if self.fix_items:
self.ref_data = BinPackData(sizes, capacity)
else:
n = self.ref_data.sizes.shape[0]
sizes = self.ref_data.sizes
capacity = self.ref_data.capacity
sizes = sizes * self.sizes_jitter.rvs(n)
capacity = capacity * self.capacity_jitter.rvs()
return BinPackData(sizes.round(2), capacity.round(2)) return BinPackData(sizes.round(2), capacity.round(2))
return [_sample() for n in range(n_samples)] return [_sample() for _ in range(n_samples)]
def build_binpack_model(data: Union[str, BinPackData]) -> GurobiModel: class BinPackPerturber:
"""Perturbation generator for existing bin packing instances.
Takes an existing BinPackData instance and generates new instances by perturbing
its item sizes and bin capacity. The sizes of the items are set to `s_i * gamma_i`
where `s_i` is the size of the i-th item in the reference instance and `gamma_i`
is sampled from `sizes_jitter`. Similarly, the bin capacity is set to `B * beta`,
where `B` is the reference bin capacity and `beta` is sampled from `capacity_jitter`.
The number of items remains the same across all generated instances.
Args
----
sizes_jitter
Probability distribution for the item size randomization.
capacity_jitter
Probability distribution for the bin capacity randomization.
"""
def __init__(
self,
sizes_jitter: rv_frozen,
capacity_jitter: rv_frozen,
) -> None:
self.sizes_jitter = sizes_jitter
self.capacity_jitter = capacity_jitter
def perturb(
self,
instance: BinPackData,
n_samples: int,
) -> List[BinPackData]:
"""Generates perturbed instances.
Parameters
----------
instance
The reference instance to perturb.
n_samples
Number of samples to generate.
"""
def _sample() -> BinPackData:
n = instance.sizes.shape[0]
sizes = instance.sizes * self.sizes_jitter.rvs(n)
capacity = instance.capacity * self.capacity_jitter.rvs()
return BinPackData(sizes.round(2), capacity.round(2))
return [_sample() for _ in range(n_samples)]
def build_binpack_model_gurobipy(data: Union[str, BinPackData]) -> GurobiModel:
"""Converts bin packing problem data into a concrete Gurobipy model.""" """Converts bin packing problem data into a concrete Gurobipy model."""
if isinstance(data, str): if isinstance(data, str):
data = read_pkl_gz(data) data = read_pkl_gz(data)
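An end-to-end sketch of the split API above: the generator draws an independent reference instance, and the perturber jitters its sizes and capacity. Distributions are arbitrary examples.

    from scipy.stats import randint, uniform

    gen = BinPackGenerator(
        n=randint(low=10, high=11),
        sizes=uniform(loc=0, scale=25),
        capacity=uniform(loc=100, scale=0),
    )
    ref = gen.generate(1)[0]
    variants = BinPackPerturber(
        sizes_jitter=uniform(loc=0.9, scale=0.2),
        capacity_jitter=uniform(loc=0.9, scale=0.2),
    ).perturb(ref, n_samples=5)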

174
miplearn/problems/maxcut.py Normal file

@@ -0,0 +1,174 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2025, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from dataclasses import dataclass
from typing import List, Union, Optional, Any
import gurobipy as gp
import networkx as nx
import numpy as np
import pyomo.environ as pe
from networkx import Graph
from scipy.stats.distributions import rv_frozen
from miplearn.io import read_pkl_gz
from miplearn.problems import _gurobipy_set_params, _pyomo_set_params
from miplearn.solvers.gurobi import GurobiModel
from miplearn.solvers.pyomo import PyomoModel
@dataclass
class MaxCutData:
graph: Graph
weights: np.ndarray
class MaxCutGenerator:
"""Random instance generator for the Maximum Cut Problem.
Generates instances by creating a new random Erdős-Rényi graph $G_{n,p}$ for each
instance, where $n$ and $p$ are sampled from user-provided probability distributions.
For each instance, the generator assigns random edge weights drawn from the set {-1, 1}
with equal probability.
"""
def __init__(
self,
n: rv_frozen,
p: rv_frozen,
):
"""
Initialize the problem generator.
Parameters
----------
n: rv_discrete
Probability distribution for the number of nodes.
p: rv_continuous
Probability distribution for the graph density.
"""
assert isinstance(n, rv_frozen), "n should be a SciPy probability distribution"
assert isinstance(p, rv_frozen), "p should be a SciPy probability distribution"
self.n = n
self.p = p
def generate(self, n_samples: int) -> List[MaxCutData]:
def _sample() -> MaxCutData:
graph = self._generate_graph()
weights = self._generate_weights(graph)
return MaxCutData(graph, weights)
return [_sample() for _ in range(n_samples)]
def _generate_graph(self) -> Graph:
return nx.generators.random_graphs.binomial_graph(self.n.rvs(), self.p.rvs())
@staticmethod
def _generate_weights(graph: Graph) -> np.ndarray:
m = graph.number_of_edges()
return np.random.randint(2, size=(m,)) * 2 - 1
class MaxCutPerturber:
"""Perturbation generator for existing Maximum Cut instances.
Takes an existing MaxCutData instance and generates new instances by randomly
flipping the sign of each edge weight with a given probability while keeping
the graph structure fixed.
"""
def __init__(
self,
w_jitter: float = 0.05,
):
"""Initialize the perturbation generator.
Parameters
----------
w_jitter: float
Probability that each edge weight flips sign (from -1 to 1 or vice versa).
"""
assert 0.0 <= w_jitter <= 1.0, "w_jitter should be between 0.0 and 1.0"
self.w_jitter = w_jitter
def perturb(
self,
instance: MaxCutData,
n_samples: int,
) -> List[MaxCutData]:
def _sample() -> MaxCutData:
jitter = self._generate_jitter(instance.graph)
weights = instance.weights * jitter
return MaxCutData(instance.graph, weights)
return [_sample() for _ in range(n_samples)]
def _generate_jitter(self, graph: Graph) -> np.ndarray:
m = graph.number_of_edges()
return (np.random.rand(m) >= self.w_jitter).astype(int) * 2 - 1
def build_maxcut_model_gurobipy(
data: Union[str, MaxCutData],
params: Optional[dict[str, Any]] = None,
) -> GurobiModel:
# Initialize model
model = gp.Model()
_gurobipy_set_params(model, params)
# Read data
data = _maxcut_read(data)
nodes = list(data.graph.nodes())
edges = list(data.graph.edges())
# Add decision variables
x = model.addVars(nodes, vtype=gp.GRB.BINARY, name="x")
# Add the objective function
model.setObjective(
gp.quicksum(
-data.weights[i] * x[e[0]] * (1 - x[e[1]]) for (i, e) in enumerate(edges)
)
)
model.update()
return GurobiModel(model)
def build_maxcut_model_pyomo(
data: Union[str, MaxCutData],
solver: str = "gurobi_persistent",
params: Optional[dict[str, Any]] = None,
) -> PyomoModel:
# Initialize model
model = pe.ConcreteModel()
# Read data
data = _maxcut_read(data)
nodes = pe.Set(initialize=list(data.graph.nodes))
edges = list(data.graph.edges())
# Add decision variables
model.x = pe.Var(nodes, domain=pe.Binary, name="x")
# Add the objective function
model.obj = pe.Objective(
expr=pe.quicksum(
-data.weights[i] * model.x[e[0]]
+ data.weights[i] * model.x[e[0]] * model.x[e[1]]
for (i, e) in enumerate(edges)
),
sense=pe.minimize,
)
model.pprint()
pm = PyomoModel(model, solver)
_pyomo_set_params(model, params, solver)
return pm
def _maxcut_read(data: Union[str, MaxCutData]) -> MaxCutData:
if isinstance(data, str):
data = read_pkl_gz(data)
assert isinstance(data, MaxCutData)
return data
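Putting the pieces of this new file together, a hedged usage sketch; graph size, density, jitter probability, and solver parameters are arbitrary examples.

    from scipy.stats import randint, uniform

    gen = MaxCutGenerator(n=randint(low=50, high=51), p=uniform(loc=0.25, scale=0.0))
    ref = gen.generate(1)[0]
    # Flip each edge weight's sign with 5% probability, keeping the graph fixed.
    variants = MaxCutPerturber(w_jitter=0.05).perturb(ref, n_samples=10)
    model = build_maxcut_model_gurobipy(variants[0], params={"Threads": 1})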


@@ -3,7 +3,7 @@
# Released under the modified BSD license. See COPYING.md for more details. # Released under the modified BSD license. See COPYING.md for more details.
from dataclasses import dataclass from dataclasses import dataclass
from typing import List, Optional, Union from typing import List, Optional, Union, Callable
import gurobipy as gp import gurobipy as gp
import numpy as np import numpy as np
@@ -38,43 +38,19 @@ class MultiKnapsackData:
class MultiKnapsackGenerator: class MultiKnapsackGenerator:
"""Random instance generator for the multi-dimensional knapsack problem. """Random instance generator for the multi-dimensional knapsack problem.
Instances have a random number of items (or variables) and a random number of Generates new instances by creating random items and knapsacks according to the
knapsacks (or constraints), as specified by the provided probability provided probability distributions. Each instance has a random number of items
distributions `n` and `m`, respectively. The weight of each item `i` on knapsack (variables) and knapsacks (constraints), with weights, prices, and capacities
`j` is sampled independently from the provided distribution `w`. The capacity of sampled independently.
knapsack `j` is set to ``alpha_j * sum(w[i,j] for i in range(n))``,
where `alpha_j`, the tightness ratio, is sampled from the provided probability
distribution `alpha`.
To make the instances more challenging, the costs of the items are linearly
correlated to their average weights. More specifically, the weight of each item
`i` is set to ``sum(w[i,j]/m for j in range(m)) + K * u_i``, where `K`,
the correlation coefficient, and `u_i`, the correlation multiplier, are sampled
from the provided probability distributions. Note that `K` is only sample once
for the entire instance.
If `fix_w=True`, then `weights[i,j]` are kept the same in all generated
instances. This also implies that n and m are kept fixed. Although the prices and
capacities are derived from `weights[i,j]`, as long as `u` and `K` are not
constants, the generated instances will still not be completely identical.
If a probability distribution `w_jitter` is provided, then item weights will be
set to ``w[i,j] * gamma[i,j]`` where `gamma[i,j]` is sampled from `w_jitter`.
When combined with `fix_w=True`, this argument may be used to generate instances
where the weight of each item is roughly the same, but not exactly identical,
across all instances. The prices of the items and the capacities of the knapsacks
will be calculated as above, but using these perturbed weights instead.
By default, all generated prices, weights and capacities are rounded to the
nearest integer number. If `round=False` is provided, this rounding will be
disabled.
Parameters Parameters
---------- ----------
n: rv_discrete n: rv_discrete
Probability distribution for the number of items (or variables). Probability distribution for the number of items (or variables).
m: rv_discrete m: rv_discrete or callable
Probability distribution for the number of knapsacks (or constraints). Probability distribution for the number of knapsacks (or constraints), or a
callable that takes the numer of items and returns the number of knapsacks
(e.g., lambda n: n//3).
w: rv_continuous w: rv_continuous
Probability distribution for the item weights. Probability distribution for the item weights.
K: rv_continuous K: rv_continuous
@@ -83,11 +59,6 @@ class MultiKnapsackGenerator:
Probability distribution for the profit multiplier. Probability distribution for the profit multiplier.
alpha: rv_continuous alpha: rv_continuous
Probability distribution for the tightness ratio. Probability distribution for the tightness ratio.
fix_w: boolean
If true, weights are kept the same (minus the noise from w_jitter) in all
instances.
w_jitter: rv_continuous
Probability distribution for random noise added to the weights.
round: boolean round: boolean
If true, all prices, weights and capacities are rounded to the nearest If true, all prices, weights and capacities are rounded to the nearest
integer. integer.
@@ -96,28 +67,23 @@ class MultiKnapsackGenerator:
def __init__( def __init__(
self, self,
n: rv_frozen = randint(low=100, high=101), n: rv_frozen = randint(low=100, high=101),
m: rv_frozen = randint(low=30, high=31), m: Union[rv_frozen, Callable] = randint(low=30, high=31),
w: rv_frozen = randint(low=0, high=1000), w: rv_frozen = randint(low=0, high=1000),
K: rv_frozen = randint(low=500, high=501), K: rv_frozen = randint(low=500, high=501),
u: rv_frozen = uniform(loc=0.0, scale=1.0), u: rv_frozen = uniform(loc=0.0, scale=1.0),
alpha: rv_frozen = uniform(loc=0.25, scale=0.0), alpha: rv_frozen = uniform(loc=0.25, scale=0.0),
fix_w: bool = False,
w_jitter: rv_frozen = uniform(loc=1.0, scale=0.0),
p_jitter: rv_frozen = uniform(loc=1.0, scale=0.0),
round: bool = True, round: bool = True,
): ):
assert isinstance(n, rv_frozen), "n should be a SciPy probability distribution" assert isinstance(n, rv_frozen), "n should be a SciPy probability distribution"
assert isinstance(m, rv_frozen), "m should be a SciPy probability distribution" assert isinstance(m, rv_frozen) or callable(
m
), "m should be a SciPy probability distribution or callable"
assert isinstance(w, rv_frozen), "w should be a SciPy probability distribution" assert isinstance(w, rv_frozen), "w should be a SciPy probability distribution"
assert isinstance(K, rv_frozen), "K should be a SciPy probability distribution" assert isinstance(K, rv_frozen), "K should be a SciPy probability distribution"
assert isinstance(u, rv_frozen), "u should be a SciPy probability distribution" assert isinstance(u, rv_frozen), "u should be a SciPy probability distribution"
assert isinstance( assert isinstance(
alpha, rv_frozen alpha, rv_frozen
), "alpha should be a SciPy probability distribution" ), "alpha should be a SciPy probability distribution"
assert isinstance(fix_w, bool), "fix_w should be boolean"
assert isinstance(
w_jitter, rv_frozen
), "w_jitter should be a SciPy probability distribution"
self.n = n self.n = n
self.m = m self.m = m
@@ -125,45 +91,20 @@ class MultiKnapsackGenerator:
self.u = u self.u = u
self.K = K self.K = K
self.alpha = alpha self.alpha = alpha
self.w_jitter = w_jitter
self.p_jitter = p_jitter
self.round = round self.round = round
self.fix_n: Optional[int] = None
self.fix_m: Optional[int] = None
self.fix_w: Optional[np.ndarray] = None
self.fix_u: Optional[np.ndarray] = None
self.fix_K: Optional[float] = None
if fix_w:
self.fix_n = self.n.rvs()
self.fix_m = self.m.rvs()
self.fix_w = np.array([self.w.rvs(self.fix_n) for _ in range(self.fix_m)])
self.fix_u = self.u.rvs(self.fix_n)
self.fix_K = self.K.rvs()
def generate(self, n_samples: int) -> List[MultiKnapsackData]: def generate(self, n_samples: int) -> List[MultiKnapsackData]:
def _sample() -> MultiKnapsackData: def _sample() -> MultiKnapsackData:
if self.fix_w is not None: n = self.n.rvs()
assert self.fix_m is not None if callable(self.m):
assert self.fix_n is not None m = self.m(n)
assert self.fix_u is not None
assert self.fix_K is not None
n = self.fix_n
m = self.fix_m
w = self.fix_w
u = self.fix_u
K = self.fix_K
else: else:
n = self.n.rvs()
m = self.m.rvs() m = self.m.rvs()
w = np.array([self.w.rvs(n) for _ in range(m)]) w = np.array([self.w.rvs(n) for _ in range(m)])
u = self.u.rvs(n) u = self.u.rvs(n)
K = self.K.rvs() K = self.K.rvs()
w = w * np.array([self.w_jitter.rvs(n) for _ in range(m)])
alpha = self.alpha.rvs(m) alpha = self.alpha.rvs(m)
p = np.array( p = np.array([w[:, j].sum() / m + K * u[j] for j in range(n)])
[w[:, j].sum() / m + K * u[j] for j in range(n)]
) * self.p_jitter.rvs(n)
b = np.array([w[i, :].sum() * alpha[i] for i in range(m)]) b = np.array([w[i, :].sum() * alpha[i] for i in range(m)])
if self.round: if self.round:
p = p.round() p = p.round()
@@ -174,7 +115,75 @@ class MultiKnapsackGenerator:
return [_sample() for _ in range(n_samples)] return [_sample() for _ in range(n_samples)]
def build_multiknapsack_model(data: Union[str, MultiKnapsackData]) -> GurobiModel: class MultiKnapsackPerturber:
"""Perturbation generator for existing multi-dimensional knapsack instances.
Takes an existing MultiKnapsackData instance and generates new instances by
applying randomization factors to the existing weights and prices while keeping
the structure (number of items and knapsacks) fixed.
Parameters
----------
w_jitter: rv_continuous
Probability distribution for randomization factors applied to item weights.
p_jitter: rv_continuous
Probability distribution for randomization factors applied to item prices.
alpha_jitter: rv_continuous
Probability distribution for randomization factors applied to knapsack capacities.
round: boolean
If true, all perturbed prices, weights and capacities are rounded to the
nearest integer.
"""
def __init__(
self,
w_jitter: rv_frozen = uniform(loc=0.9, scale=0.2),
p_jitter: rv_frozen = uniform(loc=0.9, scale=0.2),
alpha_jitter: rv_frozen = uniform(loc=0.9, scale=0.2),
round: bool = True,
):
assert isinstance(
w_jitter, rv_frozen
), "w_jitter should be a SciPy probability distribution"
assert isinstance(
p_jitter, rv_frozen
), "p_jitter should be a SciPy probability distribution"
assert isinstance(
alpha_jitter, rv_frozen
), "alpha_jitter should be a SciPy probability distribution"
self.w_jitter = w_jitter
self.p_jitter = p_jitter
self.alpha_jitter = alpha_jitter
self.round = round
def perturb(
self,
instance: MultiKnapsackData,
n_samples: int,
) -> List[MultiKnapsackData]:
def _sample() -> MultiKnapsackData:
m, n = instance.weights.shape
w_factors = np.array([self.w_jitter.rvs(n) for _ in range(m)])
p_factors = self.p_jitter.rvs(n)
alpha_factors = self.alpha_jitter.rvs(m)
w = instance.weights * w_factors
p = instance.prices * p_factors
b = instance.capacities * alpha_factors
if self.round:
p = p.round()
b = b.round()
w = w.round()
return MultiKnapsackData(p, b, w)
return [_sample() for _ in range(n_samples)]
def build_multiknapsack_model_gurobipy(
data: Union[str, MultiKnapsackData]
) -> GurobiModel:
"""Converts multi-knapsack problem data into a concrete Gurobipy model.""" """Converts multi-knapsack problem data into a concrete Gurobipy model."""
if isinstance(data, str): if isinstance(data, str):
data = read_pkl_gz(data) data = read_pkl_gz(data)
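A sketch of the new callable form of `m`, which ties the number of knapsacks to the sampled number of items, followed by the perturber; all values are illustrative.

    from scipy.stats import randint, uniform

    gen = MultiKnapsackGenerator(
        n=randint(low=90, high=111),
        m=lambda n: n // 3,  # example from the docstring above
    )
    ref = gen.generate(1)[0]
    variants = MultiKnapsackPerturber(
        w_jitter=uniform(loc=0.95, scale=0.1),
        p_jitter=uniform(loc=0.95, scale=0.1),
        alpha_jitter=uniform(loc=0.95, scale=0.1),
    ).perturb(ref, n_samples=10)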


@@ -3,7 +3,7 @@
# Released under the modified BSD license. See COPYING.md for more details. # Released under the modified BSD license. See COPYING.md for more details.
from dataclasses import dataclass from dataclasses import dataclass
from typing import List, Optional, Union from typing import List, Optional, Union, Callable
import gurobipy as gp import gurobipy as gp
import numpy as np import numpy as np
@@ -49,15 +49,6 @@ class PMedianGenerator:
`demands` and `capacities`, respectively. Finally, the costs `w[i,j]` are set to `demands` and `capacities`, respectively. Finally, the costs `w[i,j]` are set to
the Euclidean distance between the locations of customers `i` and `j`. the Euclidean distance between the locations of customers `i` and `j`.
If `fixed=True`, then the number of customers, their locations, the parameter
`p`, the demands and the capacities are only sampled from their respective
distributions exactly once, to build a reference instance which is then
perturbed. Specifically, for each perturbation, the distances, demands and
capacities are multiplied by factors sampled from the distributions
`distances_jitter`, `demands_jitter` and `capacities_jitter`, respectively. The
result is a list of instances that have the same set of customers, but slightly
different demands, capacities and distances.
Parameters Parameters
---------- ----------
x x
@@ -67,19 +58,12 @@ class PMedianGenerator:
n n
Probability distribution for the number of customers. Probability distribution for the number of customers.
p p
Probability distribution for the number of medians. Probability distribution for the number of medians, or a callable that takes
the number of customers and returns the number of medians (e.g., lambda n: n//10).
demands demands
Probability distribution for the customer demands. Probability distribution for the customer demands.
capacities capacities
Probability distribution for the facility capacities. Probability distribution for the facility capacities.
distances_jitter
Probability distribution for the random scaling factor applied to distances.
demands_jitter
Probability distribution for the random scaling factor applied to demands.
capacities_jitter
-        Probability distribution for the random scaling factor applied to capacities.
-    fixed
-        If `True`, then customer are kept the same across instances.
     """

     def __init__(
@@ -87,44 +71,41 @@ class PMedianGenerator:
         x: rv_frozen = uniform(loc=0.0, scale=100.0),
         y: rv_frozen = uniform(loc=0.0, scale=100.0),
         n: rv_frozen = randint(low=100, high=101),
-        p: rv_frozen = randint(low=10, high=11),
+        p: Union[rv_frozen, Callable] = randint(low=10, high=11),
         demands: rv_frozen = uniform(loc=0, scale=20),
         capacities: rv_frozen = uniform(loc=0, scale=100),
-        distances_jitter: rv_frozen = uniform(loc=1.0, scale=0.0),
-        demands_jitter: rv_frozen = uniform(loc=1.0, scale=0.0),
-        capacities_jitter: rv_frozen = uniform(loc=1.0, scale=0.0),
-        fixed: bool = True,
     ):
+        assert isinstance(x, rv_frozen), "x should be a SciPy probability distribution"
+        assert isinstance(y, rv_frozen), "y should be a SciPy probability distribution"
+        assert isinstance(n, rv_frozen), "n should be a SciPy probability distribution"
+        assert isinstance(p, rv_frozen) or callable(
+            p
+        ), "p should be a SciPy probability distribution or callable"
+        assert isinstance(
+            demands, rv_frozen
+        ), "demands should be a SciPy probability distribution"
+        assert isinstance(
+            capacities, rv_frozen
+        ), "capacities should be a SciPy probability distribution"
         self.x = x
         self.y = y
         self.n = n
         self.p = p
         self.demands = demands
         self.capacities = capacities
-        self.distances_jitter = distances_jitter
-        self.demands_jitter = demands_jitter
-        self.capacities_jitter = capacities_jitter
-        self.fixed = fixed
-        self.ref_data: Optional[PMedianData] = None

     def generate(self, n_samples: int) -> List[PMedianData]:
         def _sample() -> PMedianData:
-            if self.ref_data is None:
-                n = self.n.rvs()
-                p = self.p.rvs()
-                loc = np.array([(self.x.rvs(), self.y.rvs()) for _ in range(n)])
-                distances = squareform(pdist(loc))
-                demands = self.demands.rvs(n)
-                capacities = self.capacities.rvs(n)
-            else:
-                n = self.ref_data.demands.shape[0]
-                distances = self.ref_data.distances * self.distances_jitter.rvs(
-                    size=(n, n)
-                )
-                distances = np.tril(distances) + np.triu(distances.T, 1)
-                demands = self.ref_data.demands * self.demands_jitter.rvs(n)
-                capacities = self.ref_data.capacities * self.capacities_jitter.rvs(n)
-                p = self.ref_data.p
+            n = self.n.rvs()
+            if callable(self.p):
+                p = self.p(n)
+            else:
+                p = self.p.rvs()
+            loc = np.array([(self.x.rvs(), self.y.rvs()) for _ in range(n)])
+            distances = squareform(pdist(loc))
+            demands = self.demands.rvs(n)
+            capacities = self.capacities.rvs(n)
             data = PMedianData(
                 distances=distances.round(2),
@@ -133,15 +114,63 @@ class PMedianGenerator:
                 capacities=capacities.round(2),
             )
-            if self.fixed and self.ref_data is None:
-                self.ref_data = data
             return data

         return [_sample() for _ in range(n_samples)]


-def build_pmedian_model(data: Union[str, PMedianData]) -> GurobiModel:
+class PMedianPerturber:
+    """Perturbation generator for existing p-median instances.
+
+    Takes an existing PMedianData instance and generates new instances by applying
+    randomization factors to the existing distances, demands, and capacities while
+    keeping the graph structure and parameter p fixed.
+    """
+
+    def __init__(
+        self,
+        distances_jitter: rv_frozen = uniform(loc=1.0, scale=0.0),
+        demands_jitter: rv_frozen = uniform(loc=1.0, scale=0.0),
+        capacities_jitter: rv_frozen = uniform(loc=1.0, scale=0.0),
+    ):
+        """Initialize the perturbation generator.
+
+        Parameters
+        ----------
+        distances_jitter
+            Probability distribution for randomization factors applied to distances.
+        demands_jitter
+            Probability distribution for randomization factors applied to demands.
+        capacities_jitter
+            Probability distribution for randomization factors applied to capacities.
+        """
+        self.distances_jitter = distances_jitter
+        self.demands_jitter = demands_jitter
+        self.capacities_jitter = capacities_jitter
+
+    def perturb(
+        self,
+        instance: PMedianData,
+        n_samples: int,
+    ) -> List[PMedianData]:
+        def _sample() -> PMedianData:
+            n = instance.demands.shape[0]
+            distances = instance.distances * self.distances_jitter.rvs(size=(n, n))
+            distances = np.tril(distances) + np.triu(distances.T, 1)
+            demands = instance.demands * self.demands_jitter.rvs(n)
+            capacities = instance.capacities * self.capacities_jitter.rvs(n)
+            return PMedianData(
+                distances=distances.round(2),
+                demands=demands.round(2),
+                p=instance.p,
+                capacities=capacities.round(2),
+            )
+
+        return [_sample() for _ in range(n_samples)]
+
+
+def build_pmedian_model_gurobipy(data: Union[str, PMedianData]) -> GurobiModel:
     """Converts capacitated p-median data into a concrete Gurobipy model."""
     if isinstance(data, str):
         data = read_pkl_gz(data)
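Note: the jitter parameters and the `fixed` flag move out of PMedianGenerator and into the new PMedianPerturber, so producing structurally identical training instances becomes a two-step process. A minimal usage sketch (the module path is assumed; this page shows file contents but not their locations):

    from scipy.stats import uniform
    from miplearn.problems.pmedian import PMedianGenerator, PMedianPerturber  # path assumed

    # Sample one reference instance, then derive ten correlated variations
    # that share its customer locations and parameter p.
    reference = PMedianGenerator().generate(1)[0]
    perturber = PMedianPerturber(
        distances_jitter=uniform(loc=0.95, scale=0.10),  # factors in [0.95, 1.05]
        demands_jitter=uniform(loc=0.90, scale=0.20),    # factors in [0.90, 1.10]
    )
    training_instances = perturber.perturb(reference, n_samples=10)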


@@ -3,12 +3,12 @@
 # Released under the modified BSD license. See COPYING.md for more details.
 from dataclasses import dataclass
-from typing import List, Union
+from typing import List, Union, Callable

 import gurobipy as gp
 import numpy as np
 import pyomo.environ as pe
-from gurobipy.gurobipy import GRB
+from gurobipy import GRB
 from scipy.stats import uniform, randint
 from scipy.stats.distributions import rv_frozen
@@ -24,56 +24,79 @@ class SetCoverData:

 class SetCoverGenerator:
+    """Random instance generator for the Set Cover Problem.
+
+    Generates instances by creating a new random incidence matrix for each
+    instance, where the number of elements, sets, density, and costs are sampled
+    from user-provided probability distributions.
+    """
+
     def __init__(
         self,
         n_elements: rv_frozen = randint(low=50, high=51),
-        n_sets: rv_frozen = randint(low=100, high=101),
+        n_sets: Union[rv_frozen, Callable] = randint(low=100, high=101),
         costs: rv_frozen = uniform(loc=0.0, scale=100.0),
-        costs_jitter: rv_frozen = uniform(loc=-5.0, scale=10.0),
         K: rv_frozen = uniform(loc=25.0, scale=0.0),
         density: rv_frozen = uniform(loc=0.02, scale=0.00),
-        fix_sets: bool = True,
     ):
+        """Initialize the problem generator.
+
+        Parameters
+        ----------
+        n_elements: rv_discrete
+            Probability distribution for number of elements.
+        n_sets: rv_discrete or callable
+            Probability distribution for number of sets, or a callable that takes
+            the number of elements and returns the number of sets.
+        costs: rv_continuous
+            Probability distribution for base set costs.
+        K: rv_continuous
+            Probability distribution for cost scaling factor based on set size.
+        density: rv_continuous
+            Probability distribution for incidence matrix density.
+        """
+        assert isinstance(
+            n_elements, rv_frozen
+        ), "n_elements should be a SciPy probability distribution"
+        assert isinstance(n_sets, rv_frozen) or callable(
+            n_sets
+        ), "n_sets should be a SciPy probability distribution or callable"
+        assert isinstance(
+            costs, rv_frozen
+        ), "costs should be a SciPy probability distribution"
+        assert isinstance(K, rv_frozen), "K should be a SciPy probability distribution"
+        assert isinstance(
+            density, rv_frozen
+        ), "density should be a SciPy probability distribution"
         self.n_elements = n_elements
         self.n_sets = n_sets
         self.costs = costs
-        self.costs_jitter = costs_jitter
         self.density = density
         self.K = K
-        self.fix_sets = fix_sets
-        self.fixed_costs = None
-        self.fixed_matrix = None

     def generate(self, n_samples: int) -> List[SetCoverData]:
         def _sample() -> SetCoverData:
-            if self.fixed_matrix is None:
-                n_sets = self.n_sets.rvs()
-                n_elements = self.n_elements.rvs()
-                density = self.density.rvs()
-                incidence_matrix = np.random.rand(n_elements, n_sets) < density
-                incidence_matrix = incidence_matrix.astype(int)
-                # Ensure each element belongs to at least one set
-                for j in range(n_elements):
-                    if incidence_matrix[j, :].sum() == 0:
-                        incidence_matrix[j, randint(low=0, high=n_sets).rvs()] = 1
-                # Ensure each set contains at least one element
-                for i in range(n_sets):
-                    if incidence_matrix[:, i].sum() == 0:
-                        incidence_matrix[randint(low=0, high=n_elements).rvs(), i] = 1
-                costs = self.costs.rvs(n_sets) + self.K.rvs() * incidence_matrix.sum(
-                    axis=0
-                )
-                if self.fix_sets:
-                    self.fixed_matrix = incidence_matrix
-                    self.fixed_costs = costs
-            else:
-                incidence_matrix = self.fixed_matrix
-                (_, n_sets) = incidence_matrix.shape
-                costs = self.fixed_costs * self.costs_jitter.rvs(n_sets)
+            n_elements = self.n_elements.rvs()
+            if callable(self.n_sets):
+                n_sets = self.n_sets(n_elements)
+            else:
+                n_sets = self.n_sets.rvs()
+            density = self.density.rvs()
+            incidence_matrix = np.random.rand(n_elements, n_sets) < density
+            incidence_matrix = incidence_matrix.astype(int)
+            # Ensure each element belongs to at least one set
+            for j in range(n_elements):
+                if incidence_matrix[j, :].sum() == 0:
+                    incidence_matrix[j, randint(low=0, high=n_sets).rvs()] = 1
+            # Ensure each set contains at least one element
+            for i in range(n_sets):
+                if incidence_matrix[:, i].sum() == 0:
+                    incidence_matrix[randint(low=0, high=n_elements).rvs(), i] = 1
+            costs = self.costs.rvs(n_sets) + self.K.rvs() * incidence_matrix.sum(axis=0)
             return SetCoverData(
                 costs=costs.round(2),
                 incidence_matrix=incidence_matrix,
@@ -82,6 +105,47 @@ class SetCoverGenerator:
         return [_sample() for _ in range(n_samples)]


+class SetCoverPerturber:
+    """Perturbation generator for existing Set Cover instances.
+
+    Takes an existing SetCoverData instance and generates new instances
+    by applying randomization factors to the existing costs while keeping the
+    incidence matrix fixed.
+    """
+
+    def __init__(
+        self,
+        costs_jitter: rv_frozen = uniform(loc=0.9, scale=0.2),
+    ):
+        """Initialize the perturbation generator.
+
+        Parameters
+        ----------
+        costs_jitter: rv_continuous
+            Probability distribution for randomization factors applied to set costs.
+        """
+        assert isinstance(
+            costs_jitter, rv_frozen
+        ), "costs_jitter should be a SciPy probability distribution"
+        self.costs_jitter = costs_jitter
+
+    def perturb(
+        self,
+        instance: SetCoverData,
+        n_samples: int,
+    ) -> List[SetCoverData]:
+        def _sample() -> SetCoverData:
+            (_, n_sets) = instance.incidence_matrix.shape
+            jitter_factors = self.costs_jitter.rvs(n_sets)
+            costs = np.round(instance.costs * jitter_factors, 2)
+            return SetCoverData(
+                costs=costs,
+                incidence_matrix=instance.incidence_matrix,
+            )
+
+        return [_sample() for _ in range(n_samples)]
+
+
 def build_setcover_model_gurobipy(data: Union[str, SetCoverData]) -> GurobiModel:
     data = _read_setcover_data(data)
     (n_elements, n_sets) = data.incidence_matrix.shape
@@ -95,7 +159,7 @@ def build_setcover_model_gurobipy(data: Union[str, SetCoverData]) -> GurobiModel

 def build_setcover_model_pyomo(
     data: Union[str, SetCoverData],
-    solver="gurobi_persistent",
+    solver: str = "gurobi_persistent",
 ) -> PyomoModel:
     data = _read_setcover_data(data)
     (n_elements, n_sets) = data.incidence_matrix.shape
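Note: `n_sets` may now be a callable instead of a frozen distribution, which lets the number of sets be correlated with the sampled number of elements. A brief sketch (module path assumed):

    from scipy.stats import randint, uniform
    from miplearn.problems.setcover import SetCoverGenerator, SetCoverPerturber  # path assumed

    # Correlated dimensions: always twice as many sets as elements.
    gen = SetCoverGenerator(
        n_elements=randint(low=50, high=101),
        n_sets=lambda n_elements: 2 * n_elements,
    )
    base = gen.generate(1)[0]

    # Rescale each set cost by an independent factor drawn from [0.9, 1.1].
    variants = SetCoverPerturber(costs_jitter=uniform(loc=0.9, scale=0.2)).perturb(base, 5)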


@@ -3,15 +3,15 @@
 # Released under the modified BSD license. See COPYING.md for more details.
 from dataclasses import dataclass
-from typing import List, Union
+from typing import List, Union, Callable

 import gurobipy as gp
 import numpy as np
-from gurobipy.gurobipy import GRB
+from gurobipy import GRB
 from scipy.stats import uniform, randint
 from scipy.stats.distributions import rv_frozen

-from .setcover import SetCoverGenerator
+from .setcover import SetCoverGenerator, SetCoverPerturber
 from miplearn.solvers.gurobi import GurobiModel
 from ..io import read_pkl_gz
@@ -23,24 +23,56 @@ class SetPackData:

 class SetPackGenerator:
+    """Random instance generator for the Set Packing Problem.
+
+    Generates instances by creating a new random incidence matrix for each
+    instance, where the number of elements, sets, density, and costs are sampled
+    from user-provided probability distributions.
+    """
+
     def __init__(
         self,
         n_elements: rv_frozen = randint(low=50, high=51),
-        n_sets: rv_frozen = randint(low=100, high=101),
+        n_sets: Union[rv_frozen, Callable] = randint(low=100, high=101),
         costs: rv_frozen = uniform(loc=0.0, scale=100.0),
-        costs_jitter: rv_frozen = uniform(loc=-5.0, scale=10.0),
         K: rv_frozen = uniform(loc=25.0, scale=0.0),
         density: rv_frozen = uniform(loc=0.02, scale=0.00),
-        fix_sets: bool = True,
     ) -> None:
+        """Initialize the problem generator.
+
+        Parameters
+        ----------
+        n_elements: rv_discrete
+            Probability distribution for number of elements.
+        n_sets: rv_discrete or callable
+            Probability distribution for number of sets, or a callable that takes
+            the number of elements and returns the number of sets.
+        costs: rv_continuous
+            Probability distribution for base set costs.
+        K: rv_continuous
+            Probability distribution for cost scaling factor based on set size.
+        density: rv_continuous
+            Probability distribution for incidence matrix density.
+        """
+        assert isinstance(
+            n_elements, rv_frozen
+        ), "n_elements should be a SciPy probability distribution"
+        assert isinstance(n_sets, rv_frozen) or callable(
+            n_sets
+        ), "n_sets should be a SciPy probability distribution or callable"
+        assert isinstance(
+            costs, rv_frozen
+        ), "costs should be a SciPy probability distribution"
+        assert isinstance(K, rv_frozen), "K should be a SciPy probability distribution"
+        assert isinstance(
+            density, rv_frozen
+        ), "density should be a SciPy probability distribution"
         self.gen = SetCoverGenerator(
             n_elements=n_elements,
             n_sets=n_sets,
             costs=costs,
-            costs_jitter=costs_jitter,
             K=K,
             density=density,
-            fix_sets=fix_sets,
         )

     def generate(self, n_samples: int) -> List[SetPackData]:
@@ -53,7 +85,48 @@ class SetPackGenerator:
         ]


-def build_setpack_model(data: Union[str, SetPackData]) -> GurobiModel:
+class SetPackPerturber:
+    """Perturbation generator for existing Set Packing instances.
+
+    Takes an existing SetPackData instance and generates new instances
+    by applying randomization factors to the existing costs while keeping the
+    incidence matrix fixed.
+    """
+
+    def __init__(
+        self,
+        costs_jitter: rv_frozen = uniform(loc=0.9, scale=0.2),
+    ):
+        """Initialize the perturbation generator.
+
+        Parameters
+        ----------
+        costs_jitter: rv_continuous
+            Probability distribution for randomization factors applied to set costs.
+        """
+        assert isinstance(
+            costs_jitter, rv_frozen
+        ), "costs_jitter should be a SciPy probability distribution"
+        self.costs_jitter = costs_jitter
+
+    def perturb(
+        self,
+        instance: SetPackData,
+        n_samples: int,
+    ) -> List[SetPackData]:
+        def _sample() -> SetPackData:
+            (_, n_sets) = instance.incidence_matrix.shape
+            jitter_factors = self.costs_jitter.rvs(n_sets)
+            costs = np.round(instance.costs * jitter_factors, 2)
+            return SetPackData(
+                costs=costs,
+                incidence_matrix=instance.incidence_matrix,
+            )
+
+        return [_sample() for _ in range(n_samples)]
+
+
+def build_setpack_model_gurobipy(data: Union[str, SetPackData]) -> GurobiModel:
     if isinstance(data, str):
         data = read_pkl_gz(data)
     assert isinstance(data, SetPackData)
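Note: SetPackGenerator still delegates instance construction to SetCoverGenerator, so the two problem families stay consistent, while SetPackPerturber applies the cost jitter directly. Usage mirrors the set cover case (module path assumed):

    from scipy.stats import uniform
    from miplearn.problems.setpack import SetPackGenerator, SetPackPerturber  # path assumed

    base = SetPackGenerator().generate(1)[0]
    variants = SetPackPerturber(costs_jitter=uniform(loc=0.9, scale=0.2)).perturb(base, 5)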


@@ -2,21 +2,25 @@
 # Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
 # Released under the modified BSD license. See COPYING.md for more details.
+import logging
 from dataclasses import dataclass
-from typing import List, Union
+from typing import List, Union, Any, Hashable, Optional

 import gurobipy as gp
 import networkx as nx
 import numpy as np
 import pyomo.environ as pe
 from gurobipy import GRB, quicksum
+from miplearn.io import read_pkl_gz
+from miplearn.solvers.gurobi import GurobiModel
+from miplearn.solvers.pyomo import PyomoModel
 from networkx import Graph
 from scipy.stats import uniform, randint
 from scipy.stats.distributions import rv_frozen

-from miplearn.io import read_pkl_gz
-from miplearn.solvers.gurobi import GurobiModel
-from miplearn.solvers.pyomo import PyomoModel
+from . import _gurobipy_set_params, _pyomo_set_params
+
+logger = logging.getLogger(__name__)

 @dataclass
@@ -28,14 +32,10 @@ class MaxWeightStableSetData:

 class MaxWeightStableSetGenerator:
     """Random instance generator for the Maximum-Weight Stable Set Problem.

-    The generator has two modes of operation. When `fix_graph=True` is provided,
-    one random Erdős-Rényi graph $G_{n,p}$ is generated in the constructor, where $n$
-    and $p$ are sampled from user-provided probability distributions `n` and `p`. To
-    generate each instance, the generator independently samples each $w_v$ from the
-    user-provided probability distribution `w`.
-
-    When `fix_graph=False`, a new random graph is generated for each instance; the
-    remaining parameters are sampled in the same way.
+    Generates instances by creating a new random Erdős-Rényi graph $G_{n,p}$ for each
+    instance, where $n$ and $p$ are sampled from user-provided probability distributions
+    `n` and `p`. For each instance, the generator independently samples each $w_v$ from
+    the user-provided probability distribution `w`.
     """

     def __init__(
@@ -43,7 +43,6 @@ class MaxWeightStableSetGenerator:
         w: rv_frozen = uniform(loc=10.0, scale=1.0),
         n: rv_frozen = randint(low=250, high=251),
         p: rv_frozen = uniform(loc=0.05, scale=0.0),
-        fix_graph: bool = True,
     ):
         """Initialize the problem generator.
@@ -62,17 +61,10 @@ class MaxWeightStableSetGenerator:
         self.w = w
         self.n = n
         self.p = p
-        self.fix_graph = fix_graph
-        self.graph = None
-        if fix_graph:
-            self.graph = self._generate_graph()

     def generate(self, n_samples: int) -> List[MaxWeightStableSetData]:
         def _sample() -> MaxWeightStableSetData:
-            if self.graph is not None:
-                graph = self.graph
-            else:
-                graph = self._generate_graph()
+            graph = self._generate_graph()
             weights = np.round(self.w.rvs(graph.number_of_nodes()), 2)
             return MaxWeightStableSetData(graph, weights)
@@ -82,35 +74,132 @@ class MaxWeightStableSetGenerator:
         return nx.generators.random_graphs.binomial_graph(self.n.rvs(), self.p.rvs())


-def build_stab_model_gurobipy(data: MaxWeightStableSetData) -> GurobiModel:
-    data = _read_stab_data(data)
+class MaxWeightStableSetPerturber:
+    """Perturbation generator for existing Maximum-Weight Stable Set instances.
+
+    Takes an existing MaxWeightStableSetData instance and generates new instances
+    by applying randomization factors to the existing weights while keeping the graph fixed.
+    """
+
+    def __init__(
+        self,
+        w_jitter: rv_frozen = uniform(loc=0.9, scale=0.2),
+    ):
+        """Initialize the perturbation generator.
+
+        Parameters
+        ----------
+        w_jitter: rv_continuous
+            Probability distribution for randomization factors applied to vertex weights.
+        """
+        assert isinstance(
+            w_jitter, rv_frozen
+        ), "w_jitter should be a SciPy probability distribution"
+        self.w_jitter = w_jitter
+
+    def perturb(
+        self,
+        instance: MaxWeightStableSetData,
+        n_samples: int,
+    ) -> List[MaxWeightStableSetData]:
+        def _sample() -> MaxWeightStableSetData:
+            jitter_factors = self.w_jitter.rvs(instance.graph.number_of_nodes())
+            weights = np.round(instance.weights * jitter_factors, 2)
+            return MaxWeightStableSetData(instance.graph, weights)
+
+        return [_sample() for _ in range(n_samples)]
+
+
+def build_stab_model_gurobipy(
+    data: Union[str, MaxWeightStableSetData],
+    params: Optional[dict[str, Any]] = None,
+) -> GurobiModel:
     model = gp.Model()
+    _gurobipy_set_params(model, params)
+    data = _stab_read(data)
     nodes = list(data.graph.nodes)
+
+    # Variables and objective function
     x = model.addVars(nodes, vtype=GRB.BINARY, name="x")
     model.setObjective(quicksum(-data.weights[i] * x[i] for i in nodes))
-    for clique in nx.find_cliques(data.graph):
-        model.addConstr(quicksum(x[i] for i in clique) <= 1)
+
+    # Edge inequalities
+    for i1, i2 in data.graph.edges:
+        model.addConstr(x[i1] + x[i2] <= 1)
+
+    def cuts_separate(m: GurobiModel) -> List[Hashable]:
+        x_val_dict = m.inner.cbGetNodeRel(x)
+        x_val = [x_val_dict[i] for i in nodes]
+        return _stab_separate(data, x_val)
+
+    def cuts_enforce(m: GurobiModel, violations: List[Any]) -> None:
+        logger.info(f"Adding {len(violations)} clique cuts...")
+        for clique in violations:
+            m.add_constr(quicksum(x[i] for i in clique) <= 1)
+
     model.update()
-    return GurobiModel(model)
+    return GurobiModel(
+        model,
+        cuts_separate=cuts_separate,
+        cuts_enforce=cuts_enforce,
+    )


 def build_stab_model_pyomo(
     data: MaxWeightStableSetData,
-    solver="gurobi_persistent",
+    solver: str = "gurobi_persistent",
+    params: Optional[dict[str, Any]] = None,
 ) -> PyomoModel:
-    data = _read_stab_data(data)
+    data = _stab_read(data)
     model = pe.ConcreteModel()
     nodes = pe.Set(initialize=list(data.graph.nodes))
+
+    # Variables and objective function
     model.x = pe.Var(nodes, domain=pe.Boolean, name="x")
     model.obj = pe.Objective(expr=sum([-data.weights[i] * model.x[i] for i in nodes]))
+
+    # Edge inequalities
+    model.edge_eqs = pe.ConstraintList()
+    for i1, i2 in data.graph.edges:
+        model.edge_eqs.add(model.x[i1] + model.x[i2] <= 1)
+
+    # Clique inequalities
     model.clique_eqs = pe.ConstraintList()
-    for clique in nx.find_cliques(data.graph):
-        model.clique_eqs.add(expr=sum(model.x[i] for i in clique) <= 1)
-    return PyomoModel(model, solver)
+
+    def cuts_separate(m: PyomoModel) -> List[Hashable]:
+        m.solver.cbGetNodeRel([model.x[i] for i in nodes])
+        x_val = [model.x[i].value for i in nodes]
+        return _stab_separate(data, x_val)
+
+    def cuts_enforce(m: PyomoModel, violations: List[Any]) -> None:
+        logger.info(f"Adding {len(violations)} clique cuts...")
+        for clique in violations:
+            m.add_constr(model.clique_eqs.add(sum(model.x[i] for i in clique) <= 1))
+
+    pm = PyomoModel(
+        model,
+        solver,
+        cuts_separate=cuts_separate,
+        cuts_enforce=cuts_enforce,
+    )
+    _pyomo_set_params(pm, params, solver)
+    return pm


-def _read_stab_data(data: Union[str, MaxWeightStableSetData]) -> MaxWeightStableSetData:
+def _stab_read(data: Union[str, MaxWeightStableSetData]) -> MaxWeightStableSetData:
     if isinstance(data, str):
         data = read_pkl_gz(data)
     assert isinstance(data, MaxWeightStableSetData)
     return data
+
+
+def _stab_separate(data: MaxWeightStableSetData, x_val: List[float]) -> List:
+    # Check that we selected at most one vertex for each
+    # clique in the graph (sum <= 1)
+    violations: List[Any] = []
+    for clique in nx.find_cliques(data.graph):
+        if sum(x_val[i] for i in clique) > 1.0001:
+            violations.append(sorted(clique))
+    return violations
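Note: the clique inequalities, previously enumerated up front with nx.find_cliques, are now generated on demand. The model starts with only the edge inequalities, and the `cuts_separate`/`cuts_enforce` pair adds a clique cut whenever a fractional node solution selects more than one vertex of a clique (the 1.0001 threshold guards against floating-point noise). A sketch of how this is driven (module path assumed):

    from miplearn.problems.stab import (  # path assumed
        MaxWeightStableSetGenerator,
        build_stab_model_gurobipy,
    )

    data = MaxWeightStableSetGenerator().generate(1)[0]
    model = build_stab_model_gurobipy(data)
    model.optimize()  # clique cuts are separated inside the MIPNODE callback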


@@ -2,20 +2,23 @@
 # Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
 # Released under the modified BSD license. See COPYING.md for more details.
+import logging
 from dataclasses import dataclass
 from typing import List, Tuple, Optional, Any, Union

 import gurobipy as gp
 import networkx as nx
 import numpy as np
+import pyomo.environ as pe
 from gurobipy import quicksum, GRB, tuplelist
+from miplearn.io import read_pkl_gz
+from miplearn.problems import _gurobipy_set_params, _pyomo_set_params
+from miplearn.solvers.gurobi import GurobiModel
 from scipy.spatial.distance import pdist, squareform
 from scipy.stats import uniform, randint
 from scipy.stats.distributions import rv_frozen

-import logging
-from miplearn.io import read_pkl_gz
-from miplearn.solvers.gurobi import GurobiModel
+from miplearn.solvers.pyomo import PyomoModel

 logger = logging.getLogger(__name__)
@@ -24,10 +27,21 @@ logger = logging.getLogger(__name__)
 class TravelingSalesmanData:
     n_cities: int
     distances: np.ndarray
+    cities: np.ndarray


 class TravelingSalesmanGenerator:
-    """Random generator for the Traveling Salesman Problem."""
+    """Random instance generator for the Traveling Salesman Problem.
+
+    Generates instances by creating n cities (x_1,y_1),...,(x_n,y_n) where n,
+    x_i and y_i are sampled independently from the provided probability
+    distributions `n`, `x` and `y`. For each (unordered) pair of cities (i,j),
+    the distance d[i,j] between them is set to:
+
+        d[i,j] = gamma[i,j] \\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}
+
+    where gamma is sampled from the provided probability distribution `gamma`.
+    """

     def __init__(
         self,
@@ -35,27 +49,10 @@ class TravelingSalesmanGenerator:
         y: rv_frozen = uniform(loc=0.0, scale=1000.0),
         n: rv_frozen = randint(low=100, high=101),
         gamma: rv_frozen = uniform(loc=1.0, scale=0.0),
-        fix_cities: bool = True,
         round: bool = True,
     ) -> None:
         """Initializes the problem generator.

-        Initially, the generator creates n cities (x_1,y_1),...,(x_n,y_n) where n,
-        x_i and y_i are sampled independently from the provided probability
-        distributions `n`, `x` and `y`. For each (unordered) pair of cities (i,j),
-        the distance d[i,j] between them is set to:
-
-            d[i,j] = gamma[i,j] \\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}
-
-        where gamma is sampled from the provided probability distribution `gamma`.
-
-        If fix_cities=True, the list of cities is kept the same for all generated
-        instances. The gamma values, and therefore also the distances, are still
-        different.
-
-        By default, all distances d[i,j] are rounded to the nearest integer. If
-        `round=False` is provided, this rounding will be disabled.
-
         Arguments
         ---------
         x: rv_continuous
@@ -64,9 +61,8 @@ class TravelingSalesmanGenerator:
             Probability distribution for the y-coordinate of each city.
         n: rv_discrete
             Probability distribution for the number of cities.
-        fix_cities: bool
-            If False, cities will be resampled for every generated instance. Otherwise, list
-            of cities will be computed once, during the constructor.
+        gamma: rv_continuous
+            Probability distribution for distance perturbation factors.
         round: bool
             If True, distances are rounded to the nearest integer.
         """
@@ -83,26 +79,11 @@ class TravelingSalesmanGenerator:
         self.gamma = gamma
         self.round = round

-        if fix_cities:
-            self.fixed_n: Optional[int]
-            self.fixed_cities: Optional[np.ndarray]
-            self.fixed_n, self.fixed_cities = self._generate_cities()
-        else:
-            self.fixed_n = None
-            self.fixed_cities = None
-
     def generate(self, n_samples: int) -> List[TravelingSalesmanData]:
         def _sample() -> TravelingSalesmanData:
-            if self.fixed_cities is not None:
-                assert self.fixed_n is not None
-                n, cities = self.fixed_n, self.fixed_cities
-            else:
-                n, cities = self._generate_cities()
-            distances = squareform(pdist(cities)) * self.gamma.rvs(size=(n, n))
-            distances = np.tril(distances) + np.triu(distances.T, 1)
-            if self.round:
-                distances = distances.round()
-            return TravelingSalesmanData(n, distances)
+            n, cities = self._generate_cities()
+            distances = self._compute_distances(cities, self.gamma, self.round)
+            return TravelingSalesmanData(n, distances, cities)

         return [_sample() for _ in range(n_samples)]
@@ -111,16 +92,74 @@ class TravelingSalesmanGenerator:
         cities = np.array([(self.x.rvs(), self.y.rvs()) for _ in range(n)])
         return n, cities

+    @staticmethod
+    def _compute_distances(
+        cities: np.ndarray, gamma: rv_frozen, round: bool
+    ) -> np.ndarray:
+        n = len(cities)
+        distances = squareform(pdist(cities)) * gamma.rvs(size=(n, n))
+        distances = np.tril(distances) + np.triu(distances.T, 1)
+        if round:
+            distances = distances.round()
+        return distances

-def build_tsp_model(data: Union[str, TravelingSalesmanData]) -> GurobiModel:
-    if isinstance(data, str):
-        data = read_pkl_gz(data)
-    assert isinstance(data, TravelingSalesmanData)

+class TravelingSalesmanPerturber:
+    """Perturbation generator for existing Traveling Salesman Problem instances.
+
+    Takes an existing TravelingSalesmanData instance and generates new instances
+    by applying randomization factors to the distances computed from the original cities.
+    """
+
+    def __init__(
+        self,
+        gamma: rv_frozen = uniform(loc=1.0, scale=0.0),
+        round: bool = True,
+    ) -> None:
+        """Initialize the perturbation generator.
+
+        Parameters
+        ----------
+        gamma: rv_continuous
+            Probability distribution for randomization factors applied to distances.
+        round: bool
+            If True, perturbed distances are rounded to the nearest integer.
+        """
+        assert isinstance(
+            gamma, rv_frozen
+        ), "gamma should be a SciPy probability distribution"
+        self.gamma = gamma
+        self.round = round
+
+    def perturb(
+        self,
+        instance: TravelingSalesmanData,
+        n_samples: int,
+    ) -> List[TravelingSalesmanData]:
+        def _sample() -> TravelingSalesmanData:
+            new_distances = TravelingSalesmanGenerator._compute_distances(
+                instance.cities,
+                self.gamma,
+                self.round,
+            )
+            return TravelingSalesmanData(
+                instance.n_cities, new_distances, instance.cities
+            )
+
+        return [_sample() for _ in range(n_samples)]
+
+
+def build_tsp_model_gurobipy(
+    data: Union[str, TravelingSalesmanData],
+    params: Optional[dict[str, Any]] = None,
+) -> GurobiModel:
+    model = gp.Model()
+    _gurobipy_set_params(model, params)
+    data = _tsp_read(data)
     edges = tuplelist(
         (i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)
     )
-    model = gp.Model()

     # Decision variables
     x = model.addVars(edges, vtype=GRB.BINARY, name="x")
@@ -142,36 +181,100 @@ def build_tsp_model(data: Union[str, TravelingSalesmanData]) -> GurobiModel:
         name="eq_degree",
     )

-    def find_violations(model: GurobiModel) -> List[Any]:
-        violations = []
-        x = model.inner.cbGetSolution(model.inner._x)
-        selected_edges = [e for e in model.inner._edges if x[e] > 0.5]
-        graph = nx.Graph()
-        graph.add_edges_from(selected_edges)
-        for component in list(nx.connected_components(graph)):
-            if len(component) < model.inner._n_cities:
-                cut_edges = [
-                    e
-                    for e in model.inner._edges
-                    if (e[0] in component and e[1] not in component)
-                    or (e[0] not in component and e[1] in component)
-                ]
-                violations.append(cut_edges)
-        return violations
+    def lazy_separate(model: GurobiModel) -> List[Any]:
+        x_val = model.inner.cbGetSolution(model.inner._x)
+        return _tsp_separate(x_val, edges, data.n_cities)

-    def fix_violations(model: GurobiModel, violations: List[Any], where: str) -> None:
+    def lazy_enforce(model: GurobiModel, violations: List[Any]) -> None:
         for violation in violations:
-            constr = quicksum(model.inner._x[e[0], e[1]] for e in violation) >= 2
-            if where == "cb":
-                model.inner.cbLazy(constr)
-            else:
-                model.inner.addConstr(constr)
+            model.add_constr(
+                quicksum(model.inner._x[e[0], e[1]] for e in violation) >= 2
+            )
         logger.info(f"tsp: added {len(violations)} subtour elimination constraints")

     model.update()
     return GurobiModel(
         model,
-        find_violations=find_violations,
-        fix_violations=fix_violations,
+        lazy_separate=lazy_separate,
+        lazy_enforce=lazy_enforce,
     )
+
+
+def build_tsp_model_pyomo(
+    data: Union[str, TravelingSalesmanData],
+    solver: str = "gurobi_persistent",
+    params: Optional[dict[str, Any]] = None,
+) -> PyomoModel:
+    model = pe.ConcreteModel()
+    data = _tsp_read(data)
+    edges = tuplelist(
+        (i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)
+    )
+
+    # Decision variables
+    model.x = pe.Var(edges, domain=pe.Boolean, name="x")
+    model.obj = pe.Objective(
+        expr=sum(model.x[i, j] * data.distances[i, j] for (i, j) in edges)
+    )
+
+    # Eq: Must choose two edges adjacent to each node
+    model.degree_eqs = pe.ConstraintList()
+    for i in range(data.n_cities):
+        model.degree_eqs.add(
+            sum(model.x[min(i, j), max(i, j)] for j in range(data.n_cities) if i != j)
+            == 2
+        )
+
+    # Eq: Subtour elimination
+    model.subtour_eqs = pe.ConstraintList()
+
+    def lazy_separate(m: PyomoModel) -> List[Any]:
+        m.solver.cbGetSolution([model.x[e] for e in edges])
+        x_val = {e: model.x[e].value for e in edges}
+        return _tsp_separate(x_val, edges, data.n_cities)
+
+    def lazy_enforce(m: PyomoModel, violations: List[Any]) -> None:
+        logger.warning(f"Adding {len(violations)} subtour elimination constraints...")
+        for violation in violations:
+            m.add_constr(
+                model.subtour_eqs.add(sum(model.x[e[0], e[1]] for e in violation) >= 2)
+            )
+
+    pm = PyomoModel(
+        model,
+        solver,
+        lazy_separate=lazy_separate,
+        lazy_enforce=lazy_enforce,
+    )
+    _pyomo_set_params(pm, params, solver)
+    return pm
+
+
+def _tsp_read(data: Union[str, TravelingSalesmanData]) -> TravelingSalesmanData:
+    if isinstance(data, str):
+        data = read_pkl_gz(data)
+    assert isinstance(data, TravelingSalesmanData)
+    return data
+
+
+def _tsp_separate(
+    x_val: dict[Tuple[int, int], float],
+    edges: List[Tuple[int, int]],
+    n_cities: int,
+) -> List:
+    violations = []
+    selected_edges = [e for e in edges if x_val[e] > 0.5]
+    graph = nx.Graph()
+    graph.add_edges_from(selected_edges)
+    for component in list(nx.connected_components(graph)):
+        if len(component) < n_cities:
+            cut_edges = [
+                [e[0], e[1]]
+                for e in edges
+                if (e[0] in component and e[1] not in component)
+                or (e[0] not in component and e[1] in component)
+            ]
+            violations.append(cut_edges)
+    return violations
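Note: subtour elimination follows the same pattern with lazy constraints. `lazy_separate` inspects each new incumbent at MIPSOL, finds connected components that do not span all cities, and `lazy_enforce` adds a cut forcing at least two edges to cross each such component; the shared `_tsp_separate` helper lets the gurobipy and pyomo builders reuse one separation routine. Sketch (module path assumed):

    from scipy.stats import uniform
    from miplearn.problems.tsp import (  # path assumed
        TravelingSalesmanGenerator,
        TravelingSalesmanPerturber,
        build_tsp_model_gurobipy,
    )

    base = TravelingSalesmanGenerator().generate(1)[0]
    # Same cities, distances rescaled by factors in [0.95, 1.05]:
    variants = TravelingSalesmanPerturber(gamma=uniform(loc=0.95, scale=0.1)).perturb(base, 3)
    model = build_tsp_model_gurobipy(base)
    model.optimize()  # subtour cuts are added lazily at each incumbent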


@@ -4,7 +4,7 @@
 from dataclasses import dataclass
 from math import pi
-from typing import List, Optional, Union
+from typing import List, Optional, Union, Callable

 import gurobipy as gp
 import numpy as np
@@ -25,75 +25,102 @@ class UnitCommitmentData:
     min_downtime: np.ndarray
     cost_startup: np.ndarray
     cost_prod: np.ndarray
+    cost_prod_quad: np.ndarray
     cost_fixed: np.ndarray


 class UnitCommitmentGenerator:
+    """Random instance generator for the Unit Commitment Problem.
+
+    Generates instances by creating new random unit commitment problems with
+    parameters sampled from user-provided probability distributions.
+    """
+
     def __init__(
         self,
         n_units: rv_frozen = randint(low=1_000, high=1_001),
-        n_periods: rv_frozen = randint(low=72, high=73),
+        n_periods: Union[rv_frozen, Callable] = randint(low=72, high=73),
         max_power: rv_frozen = uniform(loc=50, scale=450),
         min_power: rv_frozen = uniform(loc=0.5, scale=0.25),
         cost_startup: rv_frozen = uniform(loc=0, scale=10_000),
         cost_prod: rv_frozen = uniform(loc=0, scale=50),
+        cost_prod_quad: rv_frozen = uniform(loc=0, scale=0),
         cost_fixed: rv_frozen = uniform(loc=0, scale=1_000),
         min_uptime: rv_frozen = randint(low=2, high=8),
         min_downtime: rv_frozen = randint(low=2, high=8),
-        cost_jitter: rv_frozen = uniform(loc=0.75, scale=0.5),
-        demand_jitter: rv_frozen = uniform(loc=0.9, scale=0.2),
-        fix_units: bool = False,
     ) -> None:
+        """Initialize the problem generator.
+
+        Parameters
+        ----------
+        n_units: rv_frozen
+            Probability distribution for number of units.
+        n_periods: rv_frozen or callable
+            Probability distribution for number of periods, or a callable that takes
+            the number of units and returns the number of periods.
+        max_power: rv_frozen
+            Probability distribution for maximum power output.
+        min_power: rv_frozen
+            Probability distribution for minimum power output (as fraction of max_power).
+        cost_startup: rv_frozen
+            Probability distribution for startup costs.
+        cost_prod: rv_frozen
+            Probability distribution for production costs.
+        cost_prod_quad: rv_frozen
+            Probability distribution for quadratic production costs.
+        cost_fixed: rv_frozen
+            Probability distribution for fixed costs.
+        min_uptime: rv_frozen
+            Probability distribution for minimum uptime.
+        min_downtime: rv_frozen
+            Probability distribution for minimum downtime.
+        """
+        assert isinstance(
+            n_units, rv_frozen
+        ), "n_units should be a SciPy probability distribution"
+        assert isinstance(n_periods, rv_frozen) or callable(
+            n_periods
+        ), "n_periods should be a SciPy probability distribution or callable"
         self.n_units = n_units
         self.n_periods = n_periods
         self.max_power = max_power
         self.min_power = min_power
         self.cost_startup = cost_startup
         self.cost_prod = cost_prod
+        self.cost_prod_quad = cost_prod_quad
         self.cost_fixed = cost_fixed
         self.min_uptime = min_uptime
         self.min_downtime = min_downtime
-        self.cost_jitter = cost_jitter
-        self.demand_jitter = demand_jitter
-        self.fix_units = fix_units
-        self.ref_data: Optional[UnitCommitmentData] = None

     def generate(self, n_samples: int) -> List[UnitCommitmentData]:
         def _sample() -> UnitCommitmentData:
-            if self.ref_data is None:
-                T = self.n_periods.rvs()
-                G = self.n_units.rvs()
-
-                # Generate unit parameteres
-                max_power = self.max_power.rvs(G)
-                min_power = max_power * self.min_power.rvs(G)
-                max_power = max_power
-                min_uptime = self.min_uptime.rvs(G)
-                min_downtime = self.min_downtime.rvs(G)
-                cost_startup = self.cost_startup.rvs(G)
-                cost_prod = self.cost_prod.rvs(G)
-                cost_fixed = self.cost_fixed.rvs(G)
-                capacity = max_power.sum()
-
-                # Generate periodic demand in the range [0.4, 0.8] * capacity, with a peak every 12 hours.
-                demand = np.sin([i / 6 * pi for i in range(T)])
-                demand *= uniform(loc=0, scale=1).rvs(T)
-                demand -= demand.min()
-                demand /= demand.max() / 0.4
-                demand += 0.4
-                demand *= capacity
-            else:
-                T, G = len(self.ref_data.demand), len(self.ref_data.max_power)
-                demand = self.ref_data.demand * self.demand_jitter.rvs(T)
-                min_power = self.ref_data.min_power
-                max_power = self.ref_data.max_power
-                min_uptime = self.ref_data.min_uptime
-                min_downtime = self.ref_data.min_downtime
-                cost_startup = self.ref_data.cost_startup * self.cost_jitter.rvs(G)
-                cost_prod = self.ref_data.cost_prod * self.cost_jitter.rvs(G)
-                cost_fixed = self.ref_data.cost_fixed * self.cost_jitter.rvs(G)
-
-            data = UnitCommitmentData(
+            G = self.n_units.rvs()
+            if callable(self.n_periods):
+                T = self.n_periods(G)
+            else:
+                T = self.n_periods.rvs()
+
+            # Generate unit parameteres
+            max_power = self.max_power.rvs(G)
+            min_power = max_power * self.min_power.rvs(G)
+            max_power = max_power
+            min_uptime = self.min_uptime.rvs(G)
+            min_downtime = self.min_downtime.rvs(G)
+            cost_startup = self.cost_startup.rvs(G)
+            cost_prod = self.cost_prod.rvs(G)
+            cost_prod_quad = self.cost_prod_quad.rvs(G)
+            cost_fixed = self.cost_fixed.rvs(G)
+            capacity = max_power.sum()
+
+            # Generate periodic demand in the range [0.4, 0.8] * capacity, with a peak every 12 hours.
+            demand = np.sin([i / 6 * pi for i in range(T)])
+            demand *= uniform(loc=0, scale=1).rvs(T)
+            demand -= demand.min()
+            demand /= demand.max() / 0.4
+            demand += 0.4
+            demand *= capacity
+
+            return UnitCommitmentData(
                 demand.round(2),
                 min_power.round(2),
                 max_power.round(2),
@@ -101,18 +128,73 @@ class UnitCommitmentGenerator:
                 min_downtime,
                 cost_startup.round(2),
                 cost_prod.round(2),
+                cost_prod_quad.round(4),
                 cost_fixed.round(2),
             )
-            if self.ref_data is None and self.fix_units:
-                self.ref_data = data
-            return data

         return [_sample() for _ in range(n_samples)]


-def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:
+class UnitCommitmentPerturber:
+    """Perturbation generator for existing Unit Commitment instances.
+
+    Takes an existing UnitCommitmentData instance and generates new instances
+    by applying randomization factors to the existing costs and demand while
+    keeping the unit structure fixed.
+    """
+
+    def __init__(
+        self,
+        cost_jitter: rv_frozen = uniform(loc=0.75, scale=0.5),
+        demand_jitter: rv_frozen = uniform(loc=0.9, scale=0.2),
+    ) -> None:
+        """Initialize the perturbation generator.
+
+        Parameters
+        ----------
+        cost_jitter: rv_frozen
+            Probability distribution for randomization factors applied to costs.
+        demand_jitter: rv_frozen
+            Probability distribution for randomization factors applied to demand.
+        """
+        assert isinstance(
+            cost_jitter, rv_frozen
+        ), "cost_jitter should be a SciPy probability distribution"
+        assert isinstance(
+            demand_jitter, rv_frozen
+        ), "demand_jitter should be a SciPy probability distribution"
+        self.cost_jitter = cost_jitter
+        self.demand_jitter = demand_jitter
+
+    def perturb(
+        self,
+        instance: UnitCommitmentData,
+        n_samples: int,
+    ) -> List[UnitCommitmentData]:
+        def _sample() -> UnitCommitmentData:
+            T, G = len(instance.demand), len(instance.max_power)
+            demand = instance.demand * self.demand_jitter.rvs(T)
+            cost_startup = instance.cost_startup * self.cost_jitter.rvs(G)
+            cost_prod = instance.cost_prod * self.cost_jitter.rvs(G)
+            cost_prod_quad = instance.cost_prod_quad * self.cost_jitter.rvs(G)
+            cost_fixed = instance.cost_fixed * self.cost_jitter.rvs(G)
+            return UnitCommitmentData(
+                demand.round(2),
+                instance.min_power,
+                instance.max_power,
+                instance.min_uptime,
+                instance.min_downtime,
+                cost_startup.round(2),
+                cost_prod.round(2),
+                cost_prod_quad.round(4),
+                cost_fixed.round(2),
+            )
+
+        return [_sample() for _ in range(n_samples)]
+
+
+def build_uc_model_gurobipy(data: Union[str, UnitCommitmentData]) -> GurobiModel:
     """
     Models the unit commitment problem according to equations (1)-(5) of:
@@ -143,6 +225,7 @@ def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:
                 is_on[g, t] * data.cost_fixed[g]
                 + switch_on[g, t] * data.cost_startup[g]
                 + prod[g, t] * data.cost_prod[g]
+                + prod[g, t] * prod[g, t] * data.cost_prod_quad[g]
                 for g in range(G)
                 for t in range(T)
             )
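Note: with the new quadratic term, the objective contribution of each unit g in period t becomes cost_fixed[g]*is_on + cost_startup[g]*switch_on + cost_prod[g]*prod + cost_prod_quad[g]*prod^2; the default distribution uniform(loc=0, scale=0) keeps the quadratic coefficients at zero, preserving the old linear behavior. Sketch of generating a quadratic-cost family (module path assumed):

    from scipy.stats import uniform
    from miplearn.problems.uc import UnitCommitmentGenerator, UnitCommitmentPerturber  # path assumed

    gen = UnitCommitmentGenerator(cost_prod_quad=uniform(loc=0.0, scale=0.01))
    base = gen.generate(1)[0]
    variants = UnitCommitmentPerturber(
        cost_jitter=uniform(loc=0.75, scale=0.5),
        demand_jitter=uniform(loc=0.9, scale=0.2),
    ).perturb(base, 10)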


@@ -12,7 +12,11 @@ from networkx import Graph
 from scipy.stats import uniform, randint
 from scipy.stats.distributions import rv_frozen

-from .stab import MaxWeightStableSetGenerator
+from .stab import (
+    MaxWeightStableSetGenerator,
+    MaxWeightStableSetPerturber,
+    MaxWeightStableSetData,
+)
 from miplearn.solvers.gurobi import GurobiModel
 from ..io import read_pkl_gz
@@ -24,14 +28,35 @@ class MinWeightVertexCoverData:

 class MinWeightVertexCoverGenerator:
+    """Random instance generator for the Minimum-Weight Vertex Cover Problem.
+
+    Generates instances by creating a new random Erdős-Rényi graph $G_{n,p}$ for each
+    instance, where $n$ and $p$ are sampled from user-provided probability distributions
+    `n` and `p`. For each instance, the generator independently samples each $w_v$ from
+    the user-provided probability distribution `w`.
+    """
+
     def __init__(
         self,
         w: rv_frozen = uniform(loc=10.0, scale=1.0),
         n: rv_frozen = randint(low=250, high=251),
         p: rv_frozen = uniform(loc=0.05, scale=0.0),
-        fix_graph: bool = True,
     ):
-        self._generator = MaxWeightStableSetGenerator(w, n, p, fix_graph)
+        """Initialize the problem generator.
+
+        Parameters
+        ----------
+        w: rv_continuous
+            Probability distribution for vertex weights.
+        n: rv_discrete
+            Probability distribution for parameter $n$ in Erdős-Rényi model.
+        p: rv_continuous
+            Probability distribution for parameter $p$ in Erdős-Rényi model.
+        """
+        assert isinstance(w, rv_frozen), "w should be a SciPy probability distribution"
+        assert isinstance(n, rv_frozen), "n should be a SciPy probability distribution"
+        assert isinstance(p, rv_frozen), "p should be a SciPy probability distribution"
+        self._generator = MaxWeightStableSetGenerator(w, n, p)

     def generate(self, n_samples: int) -> List[MinWeightVertexCoverData]:
         return [
@@ -40,7 +65,41 @@ class MinWeightVertexCoverGenerator:
         ]


-def build_vertexcover_model(data: Union[str, MinWeightVertexCoverData]) -> GurobiModel:
+class MinWeightVertexCoverPerturber:
+    """Perturbation generator for existing Minimum-Weight Vertex Cover instances.
+
+    Takes an existing MinWeightVertexCoverData instance and generates new instances
+    by applying randomization factors to the existing weights while keeping the graph fixed.
+    """
+
+    def __init__(
+        self,
+        w_jitter: rv_frozen = uniform(loc=0.9, scale=0.2),
+    ):
+        """Initialize the perturbation generator.
+
+        Parameters
+        ----------
+        w_jitter: rv_continuous
+            Probability distribution for randomization factors applied to vertex weights.
+        """
+        self._perturber = MaxWeightStableSetPerturber(w_jitter)
+
+    def perturb(
+        self,
+        instance: MinWeightVertexCoverData,
+        n_samples: int,
+    ) -> List[MinWeightVertexCoverData]:
+        stab_instance = MaxWeightStableSetData(instance.graph, instance.weights)
+        perturbed_instances = self._perturber.perturb(stab_instance, n_samples)
+        return [
+            MinWeightVertexCoverData(s.graph, s.weights) for s in perturbed_instances
+        ]
+
+
+def build_vertexcover_model_gurobipy(
+    data: Union[str, MinWeightVertexCoverData]
+) -> GurobiModel:
     if isinstance(data, str):
         data = read_pkl_gz(data)
     assert isinstance(data, MinWeightVertexCoverData)
@@ -48,7 +107,7 @@ def build_vertexcover_model(data: Union[str, MinWeightVertexCoverData]) -> Gurob
     nodes = list(data.graph.nodes)
     x = model.addVars(nodes, vtype=GRB.BINARY, name="x")
     model.setObjective(quicksum(data.weights[i] * x[i] for i in nodes))
-    for (v1, v2) in data.graph.edges:
+    for v1, v2 in data.graph.edges:
         model.addConstr(x[v1] + x[v2] >= 1)
     model.update()
     return GurobiModel(model)
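Note: the vertex cover perturber round-trips through the stable set classes, since both problems share the same (graph, weights) data layout; this keeps a single jitter implementation for both families. Sketch (module path assumed):

    from scipy.stats import uniform
    from miplearn.problems.vertexcover import (  # path assumed
        MinWeightVertexCoverGenerator,
        MinWeightVertexCoverPerturber,
    )

    base = MinWeightVertexCoverGenerator().generate(1)[0]
    variants = MinWeightVertexCoverPerturber(w_jitter=uniform(loc=0.9, scale=0.2)).perturb(base, 5)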


@@ -3,7 +3,7 @@
 # Released under the modified BSD license. See COPYING.md for more details.
 from abc import ABC, abstractmethod
-from typing import Optional, Dict
+from typing import Optional, Dict, Callable, Hashable, List, Any

 import numpy as np
@@ -16,6 +16,20 @@ class AbstractModel(ABC):
     _supports_node_count = False
     _supports_solution_pool = False

+    WHERE_DEFAULT = "default"
+    WHERE_CUTS = "cuts"
+    WHERE_LAZY = "lazy"
+
+    def __init__(self) -> None:
+        self._lazy_enforce: Optional[Callable] = None
+        self._lazy_separate: Optional[Callable] = None
+        self._lazy: Optional[List[Any]] = None
+        self._cuts_enforce: Optional[Callable] = None
+        self._cuts_separate: Optional[Callable] = None
+        self._cuts: Optional[List[Any]] = None
+        self._cuts_aot: Optional[List[Any]] = None
+        self._where = self.WHERE_DEFAULT
+
     @abstractmethod
     def add_constrs(
         self,
@@ -68,3 +82,16 @@
     @abstractmethod
     def write(self, filename: str) -> None:
         pass
+
+    def set_cuts(self, cuts: List) -> None:
+        self._cuts_aot = cuts
+
+    def lazy_enforce(self, violations: List[Any]) -> None:
+        if self._lazy_enforce is not None:
+            self._lazy_enforce(self, violations)
+
+    def _lazy_enforce_collected(self) -> None:
+        """Adds all lazy constraints identified in the callback as actual model
+        constraints. Useful for generating a final MPS file with the constraints
+        that were required in this run."""
+        if self._lazy_enforce is not None:
+            self._lazy_enforce(self, self._lazy)
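Note: the `_where` flag introduced here is what lets problem code call a single `add_constr` method and have it routed to the right solver primitive (cbLazy during lazy separation, cbCut during cut separation, addConstr otherwise). A hypothetical toy example of the contract from the user's side, under the GurobiModel wiring shown in the next file (the model and callbacks below are illustrative, not part of this diff):

    import gurobipy as gp
    from miplearn.solvers.gurobi import GurobiModel

    m = gp.Model()
    x = m.addVars(2, vtype="B", obj=-1.0)  # toy model: pick both items if allowed

    def lazy_separate(model):
        # Called at each incumbent; report a violation if both items are chosen.
        vals = model.inner.cbGetSolution([x[0], x[1]])
        return [[0, 1]] if vals[0] + vals[1] > 1.5 else []

    def lazy_enforce(model, violations):
        for _ in violations:
            model.add_constr(x[0] + x[1] <= 1)  # routed to cbLazy via model._where

    model = GurobiModel(m, lazy_separate=lazy_separate, lazy_enforce=lazy_enforce)
    model.optimize()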


@@ -1,17 +1,78 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization # MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved. # Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details. # Released under the modified BSD license. See COPYING.md for more details.
from typing import Dict, Optional, Callable, Any, List
import logging
import json
from typing import Dict, Optional, Callable, Any, List, Sequence
import gurobipy as gp import gurobipy as gp
from gurobipy import GRB, GurobiError from gurobipy import GRB, GurobiError, Var
import numpy as np import numpy as np
from scipy.sparse import lil_matrix from scipy.sparse import lil_matrix
from miplearn.h5 import H5File from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
class GurobiModel: def _gurobi_callback(model: AbstractModel, gp_model: gp.Model, where: int) -> None:
assert isinstance(gp_model, gp.Model)
# Lazy constraints
if model._lazy_separate is not None:
assert model._lazy_enforce is not None
assert model._lazy is not None
if where == GRB.Callback.MIPSOL:
model._where = model.WHERE_LAZY
violations = model._lazy_separate(model)
if len(violations) > 0:
model._lazy.extend(violations)
model._lazy_enforce(model, violations)
# User cuts
if model._cuts_separate is not None:
assert model._cuts_enforce is not None
assert model._cuts is not None
if where == GRB.Callback.MIPNODE:
status = gp_model.cbGet(GRB.Callback.MIPNODE_STATUS)
if status == GRB.OPTIMAL:
model._where = model.WHERE_CUTS
if model._cuts_aot is not None:
violations = model._cuts_aot
model._cuts_aot = None
logger.info(f"Enforcing {len(violations)} cuts ahead-of-time...")
else:
violations = model._cuts_separate(model)
if len(violations) > 0:
model._cuts.extend(violations)
model._cuts_enforce(model, violations)
# Cleanup
model._where = model.WHERE_DEFAULT
def _gurobi_add_constr(gp_model: gp.Model, where: str, constr: Any) -> None:
if where == AbstractModel.WHERE_LAZY:
gp_model.cbLazy(constr)
elif where == AbstractModel.WHERE_CUTS:
gp_model.cbCut(constr)
else:
gp_model.addConstr(constr)
def _gurobi_set_required_params(model: AbstractModel, gp_model: gp.Model) -> None:
# Required parameters for lazy constraints
if model._lazy_enforce is not None:
gp_model.setParam("PreCrush", 1)
gp_model.setParam("LazyConstraints", 1)
# Required parameters for user cuts
if model._cuts_enforce is not None:
gp_model.setParam("PreCrush", 1)
class GurobiModel(AbstractModel):
_supports_basis_status = True _supports_basis_status = True
_supports_sensitivity_analysis = True _supports_sensitivity_analysis = True
_supports_node_count = True _supports_node_count = True
@@ -20,13 +81,17 @@ class GurobiModel:
def __init__( def __init__(
self, self,
inner: gp.Model, inner: gp.Model,
find_violations: Optional[Callable] = None, lazy_separate: Optional[Callable] = None,
fix_violations: Optional[Callable] = None, lazy_enforce: Optional[Callable] = None,
cuts_separate: Optional[Callable] = None,
cuts_enforce: Optional[Callable] = None,
) -> None: ) -> None:
self.fix_violations = fix_violations super().__init__()
self.find_violations = find_violations self._lazy_separate = lazy_separate
self._lazy_enforce = lazy_enforce
self._cuts_separate = cuts_separate
self._cuts_enforce = cuts_enforce
self.inner = inner self.inner = inner
self.violations_: Optional[List[Any]] = None
def add_constrs( def add_constrs(
self, self,
@@ -44,7 +109,11 @@ class GurobiModel:
assert constrs_sense.shape == (nconstrs,) assert constrs_sense.shape == (nconstrs,)
assert constrs_rhs.shape == (nconstrs,) assert constrs_rhs.shape == (nconstrs,)
gp_vars = [self.inner.getVarByName(var_name.decode()) for var_name in var_names] gp_vars: list[Var] = []
for var_name in var_names:
v = self.inner.getVarByName(var_name.decode())
assert v is not None, f"unknown var: {var_name}"
gp_vars.append(v)
self.inner.addMConstr(constrs_lhs, gp_vars, constrs_sense, constrs_rhs) self.inner.addMConstr(constrs_lhs, gp_vars, constrs_sense, constrs_rhs)
if stats is not None: if stats is not None:
@@ -52,6 +121,9 @@ class GurobiModel:
stats["Added constraints"] = 0 stats["Added constraints"] = 0
stats["Added constraints"] += nconstrs stats["Added constraints"] += nconstrs
def add_constr(self, constr: Any) -> None:
_gurobi_add_constr(self.inner, self._where, constr)
def extract_after_load(self, h5: H5File) -> None: def extract_after_load(self, h5: H5File) -> None:
""" """
Given a model that has just been loaded, extracts static problem Given a model that has just been loaded, extracts static problem
@@ -100,6 +172,10 @@ class GurobiModel:
except AttributeError: except AttributeError:
pass pass
self._extract_after_mip_solution_pool(h5) self._extract_after_mip_solution_pool(h5)
if self._lazy is not None:
h5.put_scalar("mip_lazy", json.dumps(self._lazy))
if self._cuts is not None:
h5.put_scalar("mip_cuts", json.dumps(self._cuts))
def fix_variables( def fix_variables(
self, self,
@@ -112,31 +188,28 @@ class GurobiModel:
assert var_names.shape == var_values.shape assert var_names.shape == var_values.shape
n_fixed = 0 n_fixed = 0
for (var_idx, var_name) in enumerate(var_names): for var_idx, var_name in enumerate(var_names):
var_val = var_values[var_idx] var_val = var_values[var_idx]
if np.isfinite(var_val): if np.isfinite(var_val):
var = self.inner.getVarByName(var_name.decode()) var = self.inner.getVarByName(var_name.decode())
var.vtype = "C" assert var is not None, f"unknown var: {var_name}"
var.lb = var_val var.VType = "c"
var.ub = var_val var.LB = var_val
var.UB = var_val
n_fixed += 1 n_fixed += 1
if stats is not None: if stats is not None:
stats["Fixed variables"] = n_fixed stats["Fixed variables"] = n_fixed
def optimize(self) -> None: def optimize(self) -> None:
self.violations_ = [] self._lazy = []
self._cuts = []
def callback(m: gp.Model, where: int) -> None: def callback(_: gp.Model, where: int) -> None:
assert self.find_violations is not None _gurobi_callback(self, self.inner, where)
assert self.violations_ is not None
assert self.fix_violations is not None
if where == GRB.Callback.MIPSOL:
violations = self.find_violations(self)
self.violations_.extend(violations)
self.fix_violations(self, violations, "cb")
if self.fix_violations is not None: _gurobi_set_required_params(self, self.inner)
self.inner.Params.lazyConstraints = 1
if self.lazy_enforce is not None or self.cuts_enforce is not None:
self.inner.optimize(callback) self.inner.optimize(callback)
else: else:
self.inner.optimize() self.inner.optimize()
@@ -145,7 +218,7 @@ class GurobiModel:
return GurobiModel(self.inner.relax()) return GurobiModel(self.inner.relax())
def set_time_limit(self, time_limit_sec: float) -> None: def set_time_limit(self, time_limit_sec: float) -> None:
self.inner.params.timeLimit = time_limit_sec self.inner.params.TimeLimit = time_limit_sec
def set_warm_starts( def set_warm_starts(
self, self,
@@ -160,12 +233,13 @@ class GurobiModel:
self.inner.numStart = n_starts self.inner.numStart = n_starts
for start_idx in range(n_starts): for start_idx in range(n_starts):
self.inner.params.startNumber = start_idx self.inner.params.StartNumber = start_idx
for (var_idx, var_name) in enumerate(var_names): for var_idx, var_name in enumerate(var_names):
var_val = var_values[start_idx, var_idx] var_val = var_values[start_idx, var_idx]
if np.isfinite(var_val): if np.isfinite(var_val):
var = self.inner.getVarByName(var_name.decode()) var = self.inner.getVarByName(var_name.decode())
var.start = var_val assert var is not None, f"unknown var: {var_name}"
var.Start = var_val
if stats is not None: if stats is not None:
stats["WS: Count"] = n_starts stats["WS: Count"] = n_starts
@@ -175,14 +249,14 @@ class GurobiModel:
    def _extract_after_load_vars(self, h5: H5File) -> None:
        gp_vars = self.inner.getVars()
        for h5_field, gp_field in {
            "static_var_names": "varName",
            "static_var_types": "vtype",
        }.items():
            h5.put_array(
                h5_field, np.array(self.inner.getAttr(gp_field, gp_vars), dtype="S")
            )
        for h5_field, gp_field in {
            "static_var_upper_bounds": "ub",
            "static_var_lower_bounds": "lb",
            "static_var_obj_coeffs": "obj",
@@ -190,6 +264,13 @@ class GurobiModel:
            h5.put_array(
                h5_field, np.array(self.inner.getAttr(gp_field, gp_vars), dtype=float)
            )
        obj = self.inner.getObjective()
        if isinstance(obj, gp.QuadExpr):
            nvars = len(self.inner.getVars())
            obj_q = np.zeros((nvars, nvars))
            for i in range(obj.size()):
                obj_q[obj.getVar1(i).index, obj.getVar2(i).index] = obj.getCoeff(i)
            h5.put_array("static_var_obj_coeffs_quad", obj_q)
    def _extract_after_load_constrs(self, h5: H5File) -> None:
        gp_constrs = self.inner.getConstrs()
@@ -199,7 +280,7 @@ class GurobiModel:
        names = np.array(self.inner.getAttr("constrName", gp_constrs), dtype="S")
        nrows, ncols = len(gp_constrs), len(gp_vars)
        tmp = lil_matrix((nrows, ncols), dtype=float)
        for i, gp_constr in enumerate(gp_constrs):
            expr = self.inner.getRow(gp_constr)
            for j in range(expr.size()):
                tmp[i, expr.getVar(j).index] = expr.getCoeff(j)
@@ -234,7 +315,7 @@ class GurobiModel:
                dtype="S",
            ),
        )
        for h5_field, gp_field in {
            "lp_var_reduced_costs": "rc",
            "lp_var_sa_obj_up": "saobjUp",
            "lp_var_sa_obj_down": "saobjLow",
@@ -268,7 +349,7 @@ class GurobiModel:
                dtype="S",
            ),
        )
        for h5_field, gp_field in {
            "lp_constr_dual_values": "pi",
            "lp_constr_sa_rhs_up": "saRhsUp",
            "lp_constr_sa_rhs_down": "saRhsLow",


@@ -3,32 +3,43 @@
# Released under the modified BSD license. See COPYING.md for more details.
from os.path import exists
from tempfile import NamedTemporaryFile
from typing import List, Any, Union, Dict, Callable, Optional, Tuple

from miplearn.h5 import H5File
from miplearn.io import _to_h5_filename
from miplearn.solvers.abstract import AbstractModel
import shutil
class LearningSolver:
    def __init__(self, components: List[Any], skip_lp: bool = False) -> None:
        self.components = components
        self.skip_lp = skip_lp

    def fit(self, data_filenames: List[str]) -> None:
        h5_filenames = [_to_h5_filename(f) for f in data_filenames]
        for comp in self.components:
            comp.fit(h5_filenames)

    def optimize(
        self,
        model: Union[str, AbstractModel],
        build_model: Optional[Callable] = None,
    ) -> Tuple[AbstractModel, Dict[str, Any]]:
        h5_filename, mode = NamedTemporaryFile().name, "w"
        if isinstance(model, str):
            assert build_model is not None
            old_h5_filename = _to_h5_filename(model)
            model = build_model(model)
            assert isinstance(model, AbstractModel)
            # If the instance has an associated H5 file, we make a temporary copy
            # of it, then work on that copy, keeping the original file unmodified.
            if exists(old_h5_filename):
                shutil.copy(old_h5_filename, h5_filename)
                mode = "r+"
        stats: Dict[str, Any] = {}
        with H5File(h5_filename, mode) as h5:
            model.extract_after_load(h5)
            if not self.skip_lp:
@@ -36,8 +47,10 @@ class LearningSolver:
                relaxed.optimize()
                relaxed.extract_after_lp(h5)
            for comp in self.components:
                comp_stats = comp.before_mip(h5_filename, model, stats)
                if comp_stats is not None:
                    stats.update(comp_stats)
            model.optimize()
            model.extract_after_mip(h5)
        return model, stats
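
An end-to-end sketch of the updated signature (filenames are hypothetical, `comp` is any MIPLearn component, and build_model is a problem-specific builder such as build_tsp_model_gurobipy): optimize() now returns the built model alongside the statistics dictionary, while the instance's original H5 file on disk stays untouched.

    solver = LearningSolver(components=[comp])
    solver.fit(["data/inst-0001.pkl.gz", "data/inst-0002.pkl.gz"])
    model, stats = solver.optimize("data/inst-0001.pkl.gz", build_model)
    print(stats)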


@@ -2,35 +2,65 @@
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from numbers import Number
from typing import Optional, Dict, List, Any, Tuple, Callable

import numpy as np
import pyomo
import pyomo.environ as pe
from pyomo.core import Objective, Var, Suffix
from pyomo.core.base import VarData
from pyomo.core.expr import ProductExpression
from pyomo.core.expr.numeric_expr import SumExpression, MonomialTermExpression
from scipy.sparse import coo_matrix

from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
from miplearn.solvers.gurobi import (
    _gurobi_callback,
    _gurobi_add_constr,
    _gurobi_set_required_params,
)
class PyomoModel(AbstractModel):
    def __init__(
        self,
        model: pe.ConcreteModel,
        solver_name: str = "gurobi_persistent",
        lazy_separate: Optional[Callable] = None,
        lazy_enforce: Optional[Callable] = None,
        cuts_separate: Optional[Callable] = None,
        cuts_enforce: Optional[Callable] = None,
    ):
        super().__init__()
        self.inner = model
        self.solver_name = solver_name
        self._lazy_separate = lazy_separate
        self._lazy_enforce = lazy_enforce
        self._cuts_separate = cuts_separate
        self._cuts_enforce = cuts_enforce
        self.solver = pe.SolverFactory(solver_name)
        self.is_persistent = hasattr(self.solver, "set_instance")
        if self.is_persistent:
            self.solver.set_instance(model)
        self.results: Optional[Dict] = None
        self._is_warm_start_available = False
        if not hasattr(self.inner, "dual"):
            self.inner.dual = Suffix(direction=Suffix.IMPORT)
            self.inner.rc = Suffix(direction=Suffix.IMPORT)
            self.inner.slack = Suffix(direction=Suffix.IMPORT)
    def add_constr(self, constr: Any) -> None:
        assert (
            self.solver_name == "gurobi_persistent"
        ), "Callbacks are currently only supported on gurobi_persistent"
        if self._where in [AbstractModel.WHERE_CUTS, AbstractModel.WHERE_LAZY]:
            _gurobi_add_constr(self.solver, self._where, constr)
        else:
            # Outside callbacks, add_constr shouldn't do anything, as the
            # constraint has already been added to the ConstraintList object.
            pass
    def add_constrs(
        self,
        var_names: np.ndarray,
@@ -56,7 +86,7 @@ class PyomoModel(AbstractModel):
            raise Exception(f"Unknown sense: {sense}")
        self.solver.add_constraint(eq)

    def _var_names_to_vars(self, var_names: np.ndarray) -> List[Any]:
        varname_to_var = {}
        for var in self.inner.component_objects(Var):
            for idx in var:
@@ -70,12 +100,14 @@ class PyomoModel(AbstractModel):
        h5.put_scalar("static_sense", self._get_sense())

    def extract_after_lp(self, h5: H5File) -> None:
        assert self.results is not None
        self._extract_after_lp_vars(h5)
        self._extract_after_lp_constrs(h5)
        h5.put_scalar("lp_obj_value", self.results["Problem"][0]["Lower bound"])
        h5.put_scalar("lp_wallclock_time", self._get_runtime())

    def _get_runtime(self) -> float:
        assert self.results is not None
        solver_dict = self.results["Solver"][0]
        for key in ["Wallclock time", "User time"]:
            if isinstance(solver_dict[key], Number):
@@ -83,6 +115,7 @@ class PyomoModel(AbstractModel):
        raise Exception("Time unavailable")

    def extract_after_mip(self, h5: H5File) -> None:
        assert self.results is not None
        h5.put_scalar("mip_wallclock_time", self._get_runtime())
        if self.results["Solver"][0]["Termination condition"] == "infeasible":
            return
@@ -97,6 +130,10 @@ class PyomoModel(AbstractModel):
        h5.put_scalar("mip_obj_value", obj_value)
        h5.put_scalar("mip_obj_bound", obj_bound)
        h5.put_scalar("mip_gap", self._gap(obj_value, obj_bound))
        if self._lazy is not None:
            h5.put_scalar("mip_lazy", repr(self._lazy))
        if self._cuts is not None:
            h5.put_scalar("mip_cuts", repr(self._cuts))
    def fix_variables(
        self,
@@ -105,12 +142,26 @@ class PyomoModel(AbstractModel):
        stats: Optional[Dict] = None,
    ) -> None:
        variables = self._var_names_to_vars(var_names)
        for var, val in zip(variables, var_values):
            if np.isfinite(val):
                var.fix(val)
                self.solver.update_var(var)

    def optimize(self) -> None:
        self._lazy = []
        self._cuts = []
        if self._lazy_enforce is not None or self._cuts_enforce is not None:
            assert (
                self.solver_name == "gurobi_persistent"
            ), "Callbacks are currently only supported on gurobi_persistent"
            _gurobi_set_required_params(self, self.solver._solver_model)

            def callback(_: Any, __: Any, where: int) -> None:
                _gurobi_callback(self, self.solver._solver_model, where)

            self.solver.set_callback(callback)
        if self.is_persistent:
            self.results = self.solver.solve(
                tee=True,
@@ -145,31 +196,35 @@ class PyomoModel(AbstractModel):
        assert var_names.shape[0] == n_vars
        assert n_starts == 1, "Pyomo does not support multiple warm starts"
        variables = self._var_names_to_vars(var_names)
        for var, val in zip(variables, var_values[0, :]):
            if np.isfinite(val):
                var.value = val
        self._is_warm_start_available = True
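
A hedged sketch of how the lazy-constraint hooks wired into optimize() above might be used from Pyomo. The separate/enforce signatures shown here are assumptions based on that wiring, not a documented contract, and callbacks require gurobi_persistent:

    import pyomo.environ as pe

    def lazy_separate(model):
        # Inspect the candidate solution (e.g. through model.inner) and
        # return a list describing the violated constraints; [] if none.
        return []

    def lazy_enforce(model, violations):
        # Translate each violation into a Pyomo constraint and submit it;
        # inside a callback, add_constr forwards it to the solver.
        for v in violations:
            pass  # model.add_constr(...)

    m = pe.ConcreteModel()
    # ... variables, objective, constraints ...
    model = PyomoModel(
        m,
        solver_name="gurobi_persistent",
        lazy_separate=lazy_separate,
        lazy_enforce=lazy_enforce,
    )
    model.optimize()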
    def _extract_after_load_vars(self, h5: H5File) -> None:
        names: List[str] = []
        types: List[str] = []
        upper_bounds: List[float] = []
        lower_bounds: List[float] = []
        obj_coeffs: List[float] = []
        obj_quad, obj_linear = None, None
        obj_offset = 0.0
        obj_count = 0
        for obj in self.inner.component_objects(Objective):
            obj_quad, obj_linear, obj_offset = self._parse_obj_expr(obj.expr)
            obj_count += 1
        assert obj_count == 1, f"One objective function expected; found {obj_count}"
        assert obj_quad is not None
        assert obj_linear is not None
        varname_to_idx: Dict[str, int] = {}
        for i, var in enumerate(self.inner.component_objects(pyomo.core.Var)):
            for idx in var:
                v = var[idx]
                # Variable name
                varname_to_idx[v.name] = len(names)
                if idx is None:
                    names.append(var.name)
                else:
@@ -199,11 +254,22 @@ class PyomoModel(AbstractModel):
                lower_bounds.append(float(lb))
                # Objective coefficients
                if v.name in obj_linear:
                    obj_coeffs.append(obj_linear[v.name])
                else:
                    obj_coeffs.append(0.0)
        if len(obj_quad) > 0:
            nvars = len(names)
            matrix = np.zeros((nvars, nvars))
            for (left_varname, right_varname), coeff in obj_quad.items():
                assert left_varname in varname_to_idx
                assert right_varname in varname_to_idx
                left_idx = varname_to_idx[left_varname]
                right_idx = varname_to_idx[right_varname]
                matrix[left_idx, right_idx] = coeff
            h5.put_array("static_var_obj_coeffs_quad", matrix)
        h5.put_array("static_var_names", np.array(names, dtype="S"))
        h5.put_array("static_var_types", np.array(types, dtype="S"))
        h5.put_array("static_var_lower_bounds", np.array(lower_bounds))
@@ -211,7 +277,7 @@ class PyomoModel(AbstractModel):
        h5.put_array("static_var_obj_coeffs", np.array(obj_coeffs))
        h5.put_scalar("static_obj_offset", obj_offset)
    def _extract_after_load_constrs(self, h5: H5File) -> None:
        names: List[str] = []
        rhs: List[float] = []
        senses: List[str] = []
@@ -219,7 +285,7 @@ class PyomoModel(AbstractModel):
        lhs_col: List[int] = []
        lhs_data: List[float] = []
        varname_to_idx: Dict[str, int] = {}
        for var in self.inner.component_objects(Var):
            for idx in var:
                varname = var.name
@@ -252,13 +318,13 @@ class PyomoModel(AbstractModel):
                        lhs_row.append(row)
                        lhs_col.append(varname_to_idx[term._args_[1].name])
                        lhs_data.append(float(term._args_[0]))
                    elif isinstance(term, VarData):
                        lhs_row.append(row)
                        lhs_col.append(varname_to_idx[term.name])
                        lhs_data.append(1.0)
                    else:
                        raise Exception(f"Unknown term type: {term.__class__.__name__}")
            elif isinstance(expr, VarData):
                lhs_row.append(row)
                lhs_col.append(varname_to_idx[expr.name])
                lhs_data.append(1.0)
@@ -266,26 +332,25 @@ class PyomoModel(AbstractModel):
                raise Exception(f"Unknown expression type: {expr.__class__.__name__}")

        curr_row = 0
        for i, constr in enumerate(self.inner.component_objects(pyomo.core.Constraint)):
            if len(constr) > 1:
                for idx in constr:
                    names.append(constr[idx].name)
                    _parse_constraint(constr[idx], curr_row)
                    curr_row += 1
            elif len(constr) == 1:
                names.append(constr.name)
                _parse_constraint(constr, curr_row)
                curr_row += 1
        if len(lhs_data) > 0:
            lhs = coo_matrix((lhs_data, (lhs_row, lhs_col))).tocoo()
            h5.put_sparse("static_constr_lhs", lhs)
        h5.put_array("static_constr_names", np.array(names, dtype="S"))
        h5.put_array("static_constr_rhs", np.array(rhs))
        h5.put_array("static_constr_sense", np.array(senses, dtype="S"))
    def _extract_after_lp_vars(self, h5: H5File) -> None:
        rc = []
        values = []
        for var in self.inner.component_objects(Var):
@@ -296,7 +361,7 @@ class PyomoModel(AbstractModel):
        h5.put_array("lp_var_reduced_costs", np.array(rc))
        h5.put_array("lp_var_values", np.array(values))

    def _extract_after_lp_constrs(self, h5: H5File) -> None:
        dual = []
        slacks = []
        for constr in self.inner.component_objects(pyomo.core.Constraint):
@@ -307,7 +372,7 @@ class PyomoModel(AbstractModel):
        h5.put_array("lp_constr_dual_values", np.array(dual))
        h5.put_array("lp_constr_slacks", np.array(slacks))

    def _extract_after_mip_vars(self, h5: H5File) -> None:
        values = []
        for var in self.inner.component_objects(Var):
            for idx in var:
@@ -315,34 +380,58 @@ class PyomoModel(AbstractModel):
                values.append(v.value)
        h5.put_array("mip_var_values", np.array(values))

    def _extract_after_mip_constrs(self, h5: H5File) -> None:
        slacks = []
        for constr in self.inner.component_objects(pyomo.core.Constraint):
            for idx in constr:
                c = constr[idx]
                if c in self.inner.slack:
                    slacks.append(abs(self.inner.slack[c]))
        h5.put_array("mip_constr_slacks", np.array(slacks))
    def _parse_term(self, t: Any) -> Tuple[str, float]:
        if isinstance(t, MonomialTermExpression):
            return t._args_[1].name, float(t._args_[0])
        elif isinstance(t, VarData):
            return t.name, 1.0
        else:
            raise Exception(f"Unknown term type: {t.__class__.__name__}")

    def _parse_obj_expr(
        self, expr: Any
    ) -> Tuple[Dict[Tuple[str, str], float], Dict[str, float], float]:
        obj_coeff_linear: Dict[str, float] = {}
        obj_coeff_quadratic: Dict[Tuple[str, str], float] = {}
        obj_offset = 0.0
        if isinstance(expr, SumExpression):
            for term in expr._args_:
                if isinstance(term, (int, float)):
                    # Constant term
                    obj_offset += term
                elif isinstance(term, (MonomialTermExpression, VarData)):
                    # Linear term
                    var_name, var_coeff = self._parse_term(term)
                    if var_name not in obj_coeff_linear:
                        obj_coeff_linear[var_name] = 0.0
                    obj_coeff_linear[var_name] += var_coeff
                elif isinstance(term, ProductExpression):
                    # Quadratic term
                    left_var_name, left_coeff = self._parse_term(term._args_[0])
                    right_var_name, right_coeff = self._parse_term(term._args_[1])
                    if (left_var_name, right_var_name) not in obj_coeff_quadratic:
                        obj_coeff_quadratic[(left_var_name, right_var_name)] = 0.0
                    obj_coeff_quadratic[(left_var_name, right_var_name)] += (
                        left_coeff * right_coeff
                    )
                else:
                    raise Exception(f"Unknown term type: {term.__class__.__name__}")
        elif isinstance(expr, VarData):
            obj_coeff_linear[expr.name] = 1.0
        else:
            raise Exception(f"Unknown expression type: {expr.__class__.__name__}")
        return obj_coeff_quadratic, obj_coeff_linear, obj_offset
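
To make the parser's contract concrete, here is a small worked example (a sketch; it assumes Pyomo flattens the expression below into a single SumExpression, and `model` is an assumed PyomoModel instance):

    import pyomo.environ as pe

    m = pe.ConcreteModel()
    m.x = pe.Var()
    m.y = pe.Var()
    m.obj = pe.Objective(expr=2 * m.x + 3 * m.x * m.y + 5)

    quad, linear, offset = model._parse_obj_expr(m.obj.expr)
    # quad   == {("x", "y"): 3.0}   keyed by variable-name pairs
    # linear == {"x": 2.0}          keyed by variable name
    # offset == 5.0                 constant term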
    def _gap(self, zp: float, zd: float, tol: float = 1e-6) -> float:
        # Reference: https://www.gurobi.com/documentation/9.5/refman/mipgap2.html
        if abs(zp) < tol:
            if abs(zd) < tol:
@@ -352,7 +441,7 @@ class PyomoModel(AbstractModel):
        else:
            return abs(zp - zd) / abs(zp)
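
For concreteness: following Gurobi's MIPGap definition referenced above, an incumbent zp = 102.0 with best bound zd = 100.0 gives |zp - zd| / |zp| = 2/102 ≈ 0.0196, i.e. a gap of roughly 1.96%.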
    def _get_sense(self) -> str:
        for obj in self.inner.component_objects(Objective):
            sense = obj.sense
            if sense == pyomo.core.kernel.objective.minimize:
@@ -361,6 +450,7 @@ class PyomoModel(AbstractModel):
                return "max"
            else:
                raise Exception(f"Unknown sense: {sense}")
        raise Exception("No objective")
    def write(self, filename: str) -> None:
        self.inner.write(filename, io_options={"symbolic_solver_labels": True})


@@ -6,7 +6,7 @@ from setuptools import setup, find_namespace_packages
setup(
    name="miplearn",
    version="0.4.3",
    author="Alinson S. Xavier",
    author_email="axavier@anl.gov",
    description="Extensible Framework for Learning-Enhanced Mixed-Integer Optimization",
@@ -14,12 +14,11 @@ setup(
    packages=find_namespace_packages(),
    python_requires=">=3.9",
    install_requires=[
        "gurobipy>=12,<13",
        "h5py>=3,<4",
        "networkx>=2,<3",
        "numpy>=1,<2",
        "pandas>=2,<3",
        "pathos>=0.2,<0.3",
        "pyomo>=6,<7",
        "scikit-learn>=1,<2",
@@ -28,17 +27,17 @@ setup(
    ],
    extras_require={
        "dev": [
            "Sphinx>=8,<9",
            "black==22.6.0",
            "mypy==1.8",
            "myst-parser>=4,<5",
            "nbsphinx>=0.9,<0.10",
            "pyflakes==2.5.0",
            "pytest>=7,<8",
            "sphinx-book-theme>=1,<2",
            "sphinx-multitoc-numbering==0.1.3",
            "twine>=6,<7",
            "ipython>=9,<10",
        ]
    },
)
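
With these pins in place, a development environment is the usual editable install with the dev extra, e.g. pip install -e ".[dev]" run from the repository root (command illustrative).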



@@ -0,0 +1,75 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2023, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from typing import Any, Dict, List
from unittest.mock import Mock

from sklearn.dummy import DummyClassifier
from sklearn.neighbors import KNeighborsClassifier

from miplearn.components.cuts.mem import MemorizingCutsComponent
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.problems.stab import build_stab_model_gurobipy, build_stab_model_pyomo
from miplearn.solvers.learning import LearningSolver


def test_mem_component_gp(
    stab_gp_h5: List[str],
    stab_pyo_h5: List[str],
    default_extractor: FeaturesExtractor,
) -> None:
    for h5 in [stab_pyo_h5, stab_gp_h5]:
        clf = Mock(wraps=DummyClassifier())
        comp = MemorizingCutsComponent(clf=clf, extractor=default_extractor)
        comp.fit(h5)

        # Should call fit method with correct arguments
        clf.fit.assert_called()
        x, y = clf.fit.call_args.args
        assert x.shape == (3, 50)
        assert y.shape == (3, 382)
        y = y.tolist()
        assert y[0][40:50] == [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        assert y[1][40:50] == [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]
        assert y[2][40:50] == [1, 1, 1, 1, 0, 0, 1, 1, 1, 1]

        # Should store violations
        assert comp.constrs_ is not None
        assert comp.n_features_ == 50
        assert comp.n_targets_ == 382
        assert len(comp.constrs_) == 382

        # Call before-mip
        stats: Dict[str, Any] = {}
        model = Mock()
        comp.before_mip(h5[0], model, stats)

        # Should call predict with correct args
        clf.predict.assert_called()
        (x_test,) = clf.predict.call_args.args
        assert x_test.shape == (1, 50)

        # Should call set_cuts
        model.set_cuts.assert_called()
        (cuts_aot_,) = model.set_cuts.call_args.args
        assert cuts_aot_ is not None
        assert len(cuts_aot_) == 247


def test_usage_stab(
    stab_gp_h5: List[str],
    stab_pyo_h5: List[str],
    default_extractor: FeaturesExtractor,
) -> None:
    for h5, build_model in [
        (stab_pyo_h5, build_stab_model_pyomo),
        (stab_gp_h5, build_stab_model_gurobipy),
    ]:
        data_filenames = [f.replace(".h5", ".pkl.gz") for f in h5]
        clf = KNeighborsClassifier(n_neighbors=1)
        comp = MemorizingCutsComponent(clf=clf, extractor=default_extractor)
        solver = LearningSolver(components=[comp])
        solver.fit(data_filenames)
        model, stats = solver.optimize(data_filenames[0], build_model)  # type: ignore
        assert stats["Cuts: AOT"] > 0



@@ -0,0 +1,69 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from typing import List, Dict, Any
from unittest.mock import Mock

from sklearn.dummy import DummyClassifier
from sklearn.neighbors import KNeighborsClassifier

from miplearn.components.lazy.mem import MemorizingLazyComponent
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.problems.tsp import build_tsp_model_gurobipy, build_tsp_model_pyomo
from miplearn.solvers.learning import LearningSolver


def test_mem_component(
    tsp_gp_h5: List[str],
    tsp_pyo_h5: List[str],
    default_extractor: FeaturesExtractor,
) -> None:
    for h5 in [tsp_gp_h5, tsp_pyo_h5]:
        clf = Mock(wraps=DummyClassifier())
        comp = MemorizingLazyComponent(clf=clf, extractor=default_extractor)
        comp.fit(h5)

        # Should call fit method with correct arguments
        clf.fit.assert_called()
        x, y = clf.fit.call_args.args
        assert x.shape == (3, 190)
        assert y.tolist() == [
            [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
            [1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0],
            [1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1],
        ]

        # Should store violations
        assert comp.constrs_ is not None
        assert comp.n_features_ == 190
        assert comp.n_targets_ == 20
        assert len(comp.constrs_) == 20

        # Call before-mip
        stats: Dict[str, Any] = {}
        model = Mock()
        comp.before_mip(h5[0], model, stats)

        # Should call predict with correct args
        clf.predict.assert_called()
        (x_test,) = clf.predict.call_args.args
        assert x_test.shape == (1, 190)


def test_usage_tsp(
    tsp_gp_h5: List[str],
    tsp_pyo_h5: List[str],
    default_extractor: FeaturesExtractor,
) -> None:
    for h5, build_model in [
        (tsp_pyo_h5, build_tsp_model_pyomo),
        (tsp_gp_h5, build_tsp_model_gurobipy),
    ]:
        data_filenames = [f.replace(".h5", ".pkl.gz") for f in h5]
        clf = KNeighborsClassifier(n_neighbors=1)
        comp = MemorizingLazyComponent(clf=clf, extractor=default_extractor)
        solver = LearningSolver(components=[comp])
        solver.fit(data_filenames)
        model, stats = solver.optimize(data_filenames[0], build_model)  # type: ignore
        assert stats["Lazy Constraints: AOT"] > 0


@@ -20,7 +20,8 @@ logger = logging.getLogger(__name__)

def test_mem_component(
    multiknapsack_h5: List[str],
    default_extractor: FeaturesExtractor,
) -> None:
    # Create mock classifier
    clf = Mock(wraps=DummyClassifier())
View File

@@ -1,20 +1,68 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import os
import shutil
import tempfile
from glob import glob, escape
from os.path import dirname, basename, isfile
from tempfile import NamedTemporaryFile
from typing import List, Any

import pytest

from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.extractors.fields import H5FieldsExtractor


def _h5_fixture(pattern: str, request: Any) -> List[str]:
    """
    Create temporary copies of the provided .h5 files, along with their
    companion .pkl.gz files, and return the paths to the copies. Also register
    a finalizer, so that the temporary folder is removed after the tests.
    """
    fixtures_dir = escape(os.path.join(dirname(__file__), "fixtures"))
    filenames = glob(os.path.join(fixtures_dir, pattern))
    tmpdir = tempfile.mkdtemp()

    def cleanup() -> None:
        shutil.rmtree(tmpdir)

    request.addfinalizer(cleanup)
    for f in filenames:
        fbase, _ = os.path.splitext(f)
        for ext in [".h5", ".pkl.gz"]:
            dest = os.path.join(tmpdir, f"{basename(fbase)}{ext}")
            shutil.copy(f"{fbase}{ext}", dest)
            assert isfile(dest)
    return sorted(glob(f"{tmpdir}/*.h5"))


@pytest.fixture()
def multiknapsack_h5(request: Any) -> List[str]:
    return _h5_fixture("multiknapsack*.h5", request)


@pytest.fixture()
def tsp_gp_h5(request: Any) -> List[str]:
    return _h5_fixture("tsp-gp*.h5", request)


@pytest.fixture()
def tsp_pyo_h5(request: Any) -> List[str]:
    return _h5_fixture("tsp-pyo*.h5", request)


@pytest.fixture()
def stab_gp_h5(request: Any) -> List[str]:
    return _h5_fixture("stab-gp*.h5", request)


@pytest.fixture()
def stab_pyo_h5(request: Any) -> List[str]:
    return _h5_fixture("stab-pyo*.h5", request)


@pytest.fixture()

tests/fixtures/gen_stab.py (new file)

@@ -0,0 +1,48 @@
from os.path import dirname

import numpy as np
from scipy.stats import uniform, randint

from miplearn.collectors.basic import BasicCollector
from miplearn.io import write_pkl_gz
from miplearn.problems.stab import (
    MaxWeightStableSetGenerator,
    MaxWeightStableSetPerturber,
    build_stab_model_gurobipy,
    build_stab_model_pyomo,
)

np.random.seed(42)

gen = MaxWeightStableSetGenerator(
    w=uniform(10.0, scale=1.0),
    n=randint(low=50, high=51),
    p=uniform(loc=0.5, scale=0.0),
)
pr = MaxWeightStableSetPerturber(
    w_jitter=uniform(0.9, scale=0.2),
)
base_instance = gen.generate(1)[0]
data = pr.perturb(base_instance, 3)
params = {"seed": 42, "threads": 1}

# Gurobipy
data_filenames = write_pkl_gz(data, dirname(__file__), prefix="stab-gp-n50-")
collector = BasicCollector()
collector.collect(
    data_filenames,
    lambda data: build_stab_model_gurobipy(data, params=params),
    progress=True,
    verbose=True,
)

# Pyomo
data_filenames = write_pkl_gz(data, dirname(__file__), prefix="stab-pyo-n50-")
collector = BasicCollector()
collector.collect(
    data_filenames,
    lambda model: build_stab_model_pyomo(model, params=params),
    progress=True,
    verbose=True,
)

tests/fixtures/gen_tsp.py (new file)

@@ -0,0 +1,54 @@
from os.path import dirname

import numpy as np
from scipy.stats import uniform, randint

from miplearn.collectors.basic import BasicCollector
from miplearn.io import write_pkl_gz
from miplearn.problems.tsp import (
    TravelingSalesmanGenerator,
    TravelingSalesmanPerturber,
    build_tsp_model_gurobipy,
    build_tsp_model_pyomo,
)

np.random.seed(42)

gen = TravelingSalesmanGenerator(
    x=uniform(loc=0.0, scale=1000.0),
    y=uniform(loc=0.0, scale=1000.0),
    n=randint(low=20, high=21),
    gamma=uniform(loc=1.0, scale=0.0),
    round=True,
)

# Generate a reference instance with fixed cities
reference_instance = gen.generate(1)[0]

# Generate perturbed instances with the same cities but different distance scaling
perturber = TravelingSalesmanPerturber(
    gamma=uniform(loc=1.0, scale=0.25),
    round=True,
)
data = perturber.perturb(reference_instance, 3)
params = {"seed": 42, "threads": 1}

# Gurobipy
data_filenames = write_pkl_gz(data, dirname(__file__), prefix="tsp-gp-n20-")
collector = BasicCollector()
collector.collect(
    data_filenames,
    lambda d: build_tsp_model_gurobipy(d, params=params),
    progress=True,
    verbose=True,
)

# Pyomo
data_filenames = write_pkl_gz(data, dirname(__file__), prefix="tsp-pyo-n20-")
collector = BasicCollector()
collector.collect(
    data_filenames,
    lambda d: build_tsp_model_pyomo(d, params=params),
    progress=True,
    verbose=True,
)

New binary fixture files (contents not shown):

tests/fixtures/stab-gp-n50-{00000,00001,00002}.{h5,mps.gz,pkl.gz}
tests/fixtures/stab-pyo-n50-{00000,00001,00002}.{h5,mps.gz,pkl.gz}
tests/fixtures/tsp-gp-n20-{00000,00001,00002}.{h5,mps.gz,pkl.gz}
tests/fixtures/tsp-pyo-n20-00000.{h5,mps.gz,pkl.gz}

Some files were not shown because too many files have changed in this diff.