mirror of
https://github.com/ANL-CEEESA/MIPLearn.git
synced 2025-12-06 09:28:51 -06:00
Compare commits (43 commits; author and date columns empty in this view):

332b2b9fca, af65069202, dadd2216f1, 5fefb49566, 3775c3f780, e66e6d7660,
8e05a69351, 7ccb7875b9, f085ab538b, 7f273ebb70, 26cfab0ebd, 52ed34784d,
0534d50af3, 8a02e22a35, 702824a3b5, 752885660d, b55554d410, fb3f219ea8,
714904ea35, cec56cbd7b, e75850fab8, 687c271d4d, 60d9a68485, 33f2cb3d9e,
5b28595b0b, 60c7222fbe, 281508f44c, 2774edae8c, 25bbe20748, c9eef36c4e,
d2faa15079, 8c2c45417b, 8805a83c1c, b81815d35b, a42cd5ae35, 7079a36203,
c1adc0b79e, 2d07a44f7d, e555dffc0c, cd32b0e70d, 40c7f2ffb5, 25728f5512,
8dd5bb416b
CHANGELOG.md (63 changed lines)

```diff
@@ -3,32 +3,69 @@
 All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
-and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+and this project adheres to
+[Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
-## [0.3.0] - 2023-06-08
+## [0.4.3] - 2025-05-10
 
-This is a complete rewrite of the original prototype package, with an entirely new API, focused on performance, scalability and flexibility.
+### Changed
+
+- Update dependency: Gurobi 12
+
+## [0.4.2] - 2024-12-10
+
+### Changed
+
+- H5File: Use float64 precision instead of float32
+- LearningSolver: optimize now returns (model, stats) instead of just stats
+- Update dependency: Gurobi 11
+
+## [0.4.0] - 2024-02-06
 
 ### Added
 
-- Add support for Python/Gurobipy and Julia/JuMP, in addition to the existing Python/Pyomo interface.
-- Add six new random instance generators (bin packing, capacitated p-median, set cover, set packing, unit commitment, vertex cover), in addition to the three existing generators (multiknapsack, stable set, tsp).
-- Collect some additional raw training data (e.g. basis status, reduced costs, etc)
-- Add new primal solution ML strategies (memorizing, independent vars and joint vars)
-- Add new primal solution actions (set warm start, fix variables, enforce proximity)
+- Add ML strategies for user cuts
+- Add ML strategies for lazy constraints
 
 ### Changed
 
+- LearningSolver.solve no longer generates HDF5 files; use a collector instead.
+- Add `_gurobipy` suffix to all `build_model` functions; implement some `_pyomo`
+  and `_jump` functions.
+
+## [0.3.0] - 2023-06-08
+
+This is a complete rewrite of the original prototype package, with an entirely
+new API, focused on performance, scalability and flexibility.
+
+### Added
+
+- Add support for Python/Gurobipy and Julia/JuMP, in addition to the existing
+  Python/Pyomo interface.
+- Add six new random instance generators (bin packing, capacitated p-median, set
+  cover, set packing, unit commitment, vertex cover), in addition to the three
+  existing generators (multiknapsack, stable set, tsp).
+- Collect some additional raw training data (e.g. basis status, reduced costs,
+  etc)
+- Add new primal solution ML strategies (memorizing, independent vars and joint
+  vars)
+- Add new primal solution actions (set warm start, fix variables, enforce
+  proximity)
+- Add runnable tutorials and user guides to the documentation.
+
+### Changed
+
-- To support large-scale problems and datasets, switch from an in-memory architecture to a file-based architecture, using HDF5 files.
-- To accelerate development cycle, split training data collection from feature extraction.
+- To support large-scale problems and datasets, switch from an in-memory
+  architecture to a file-based architecture, using HDF5 files.
+- To accelerate development cycle, split training data collection from feature
+  extraction.
 
 ### Removed
 
 - Temporarily remove ML strategies for lazy constraints
-- Remove benchmarks from documentation. These will be published in a separate paper.
+- Remove benchmarks from documentation. These will be published in a separate
+  paper.
 
 ## [0.1.0] - 2020-11-23
 
 - Initial public release
```
Makefile (6 changed lines)

```diff
@@ -3,7 +3,7 @@ PYTEST := pytest
 PIP := $(PYTHON) -m pip
 MYPY := $(PYTHON) -m mypy
 PYTEST_ARGS := -W ignore::DeprecationWarning -vv --log-level=DEBUG
-VERSION := 0.3
+VERSION := 0.4
 
 all: docs test
 
@@ -21,8 +21,8 @@ dist-upload:
 
 docs:
 	rm -rf ../docs/$(VERSION)
-	cd docs; make clean; make dirhtml
-	rsync -avP --delete-after docs/_build/dirhtml/ ../docs/$(VERSION)
+	cd docs; make dirhtml
+	rsync -avP --delete-after docs/_build/dirhtml/ ../docs/$(VERSION)/
 
 install-deps:
 	$(PIP) install --upgrade pip
```
README.md (29 changed lines)

```diff
@@ -22,21 +22,22 @@ Documentation
 -------------
 
 - Tutorials:
-  1. [Getting started (Pyomo)](https://anl-ceeesa.github.io/MIPLearn/0.3/tutorials/getting-started-pyomo/)
-  2. [Getting started (Gurobipy)](https://anl-ceeesa.github.io/MIPLearn/0.3/tutorials/getting-started-gurobipy/)
-  3. [Getting started (JuMP)](https://anl-ceeesa.github.io/MIPLearn/0.3/tutorials/getting-started-jump/)
+  1. [Getting started (Pyomo)](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/getting-started-pyomo/)
+  2. [Getting started (Gurobipy)](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/getting-started-gurobipy/)
+  3. [Getting started (JuMP)](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/getting-started-jump/)
+  4. [User cuts and lazy constraints](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/cuts-gurobipy/)
 - User Guide
-  1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/problems/)
-  2. [Training data collectors](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/collectors/)
-  3. [Feature extractors](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/features/)
-  4. [Primal components](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/primal/)
-  5. [Learning solver](https://anl-ceeesa.github.io/MIPLearn/0.3/guide/solvers/)
+  1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/problems/)
+  2. [Training data collectors](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/collectors/)
+  3. [Feature extractors](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/features/)
+  4. [Primal components](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/primal/)
+  5. [Learning solver](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/solvers/)
 - Python API Reference
-  1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.3/api/problems/)
-  2. [Collectors & extractors](https://anl-ceeesa.github.io/MIPLearn/0.3/api/collectors/)
-  3. [Components](https://anl-ceeesa.github.io/MIPLearn/0.3/api/components/)
-  4. [Solvers](https://anl-ceeesa.github.io/MIPLearn/0.3/api/solvers/)
-  5. [Helpers](https://anl-ceeesa.github.io/MIPLearn/0.3/api/helpers/)
+  1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.4/api/problems/)
+  2. [Collectors & extractors](https://anl-ceeesa.github.io/MIPLearn/0.4/api/collectors/)
+  3. [Components](https://anl-ceeesa.github.io/MIPLearn/0.4/api/components/)
+  4. [Solvers](https://anl-ceeesa.github.io/MIPLearn/0.4/api/solvers/)
+  5. [Helpers](https://anl-ceeesa.github.io/MIPLearn/0.4/api/helpers/)
 
 Authors
 -------
@@ -58,7 +59,7 @@ Citing MIPLearn
 
 If you use MIPLearn in your research (either the solver or the included problem generators), we kindly request that you cite the package as follows:
 
-* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.3)*. Zenodo (2023). DOI: [10.5281/zenodo.4287567](https://doi.org/10.5281/zenodo.4287567)
+* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.4)*. Zenodo (2024). DOI: [10.5281/zenodo.4287567](https://doi.org/10.5281/zenodo.4287567)
 
 If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed:
```
```diff
@@ -1,7 +1,7 @@
 project = "MIPLearn"
 copyright = "2020-2023, UChicago Argonne, LLC"
 author = ""
-release = "0.3"
+release = "0.4"
 extensions = [
     "myst_parser",
     "nbsphinx",
```
```diff
@@ -38,9 +38,13 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 3,
+    "execution_count": 1,
     "id": "f906fe9c",
     "metadata": {
+     "ExecuteTime": {
+      "end_time": "2024-01-30T22:19:30.826123021Z",
+      "start_time": "2024-01-30T22:19:30.766066926Z"
+     },
      "collapsed": false,
      "jupyter": {
       "outputs_hidden": false
@@ -57,18 +61,18 @@
     "x4 = [[0.37454012 0.9507143  0.7319939 ]\n",
     " [0.5986585  0.15601864 0.15599452]\n",
     " [0.05808361 0.8661761  0.601115  ]]\n",
-    "x5 = (2, 3)\t0.68030757\n",
-    "  (3, 2)\t0.45049927\n",
-    "  (4, 0)\t0.013264962\n",
-    "  (0, 2)\t0.94220173\n",
-    "  (4, 2)\t0.5632882\n",
-    "  (2, 1)\t0.3854165\n",
-    "  (1, 1)\t0.015966251\n",
-    "  (3, 0)\t0.23089382\n",
-    "  (4, 4)\t0.24102546\n",
-    "  (1, 3)\t0.68326354\n",
-    "  (3, 1)\t0.6099967\n",
-    "  (0, 3)\t0.8331949\n"
+    "x5 = (3, 2)\t0.6803075671195984\n",
+    "  (2, 3)\t0.4504992663860321\n",
+    "  (0, 4)\t0.013264961540699005\n",
+    "  (2, 0)\t0.9422017335891724\n",
+    "  (2, 4)\t0.5632882118225098\n",
+    "  (1, 2)\t0.38541650772094727\n",
+    "  (1, 1)\t0.015966251492500305\n",
+    "  (0, 3)\t0.2308938205242157\n",
+    "  (4, 4)\t0.24102546274662018\n",
+    "  (3, 1)\t0.6832635402679443\n",
+    "  (1, 3)\t0.6099966764450073\n",
+    "  (3, 0)\t0.83319491147995\n"
    ]
   }
  ],
@@ -179,9 +183,13 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 4,
+    "execution_count": 2,
     "id": "ac6f8c6f",
     "metadata": {
+     "ExecuteTime": {
+      "end_time": "2024-01-30T22:19:30.826707866Z",
+      "start_time": "2024-01-30T22:19:30.825940503Z"
+     },
      "collapsed": false,
      "jupyter": {
       "outputs_hidden": false
@@ -205,7 +213,7 @@
     "\n",
     "from miplearn.problems.tsp import (\n",
     "    TravelingSalesmanGenerator,\n",
-    "    build_tsp_model,\n",
+    "    build_tsp_model_gurobipy,\n",
     ")\n",
     "from miplearn.io import write_pkl_gz\n",
     "from miplearn.h5 import H5File\n",
@@ -231,7 +239,7 @@
     "# Solve all instances and collect basic solution information.\n",
     "# Process at most four instances in parallel.\n",
     "bc = BasicCollector()\n",
-    "bc.collect(glob(\"data/tsp/*.pkl.gz\"), build_tsp_model, n_jobs=4)\n",
+    "bc.collect(glob(\"data/tsp/*.pkl.gz\"), build_tsp_model_gurobipy, n_jobs=4)\n",
     "\n",
     "# Read and print some training data for the first instance.\n",
     "with H5File(\"data/tsp/00000.h5\", \"r\") as h5:\n",
@@ -244,6 +252,9 @@
     "execution_count": null,
     "id": "78f0b07a",
     "metadata": {
+     "ExecuteTime": {
+      "start_time": "2024-01-30T22:19:30.826179789Z"
+     },
      "collapsed": false,
      "jupyter": {
       "outputs_hidden": false
@@ -269,7 +280,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.9.16"
+    "version": "3.11.7"
    }
   },
  "nbformat": 4,
```
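The workflow in the notebook above (generate many pickled instances, then run a collector over `glob(...)` with `n_jobs` parallel workers, writing one training-data file per instance) reflects MIPLearn's file-based architecture. The pattern can be sketched with plain-Python stand-ins; `solve`, `collect`, and the `.out.pkl` naming below are illustrative, not the MIPLearn API:

```python
import glob
import os
import pickle
import tempfile
from concurrent.futures import ThreadPoolExecutor

def solve(instance):
    # Stand-in for the actual MIP solve: pretend the optimal value
    # is just the sum of the instance weights.
    return {"obj": sum(instance["weights"])}

def collect(pattern, n_jobs=4):
    # Solve every instance file matching `pattern`, writing one output
    # file per instance next to it (file-based training data storage).
    def process(path):
        with open(path, "rb") as f:
            instance = pickle.load(f)
        out = path.replace(".pkl", ".out.pkl")
        with open(out, "wb") as f:
            pickle.dump(solve(instance), f)
        return out
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        return list(pool.map(process, sorted(glob.glob(pattern))))

# Generate three toy instances, then collect them in parallel.
tmp = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(tmp, f"{i:05d}.pkl"), "wb") as f:
        pickle.dump({"weights": [i + 1, i + 2]}, f)

outputs = collect(os.path.join(tmp, "*.pkl"))
print(len(outputs))  # 3
```

Keeping each instance's data in its own file, as MIPLearn does with HDF5, means collection can be interrupted, resumed, and parallelized without holding the whole dataset in memory.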
```diff
@@ -51,7 +51,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 5,
+    "execution_count": 1,
     "id": "ed9a18c8",
     "metadata": {
      "collapsed": false,
@@ -101,7 +101,7 @@
     "from miplearn.io import write_pkl_gz\n",
     "from miplearn.problems.multiknapsack import (\n",
     "    MultiKnapsackGenerator,\n",
-    "    build_multiknapsack_model,\n",
+    "    build_multiknapsack_model_gurobipy,\n",
     ")\n",
     "\n",
     "# Set random seed to make example reproducible\n",
@@ -127,7 +127,7 @@
     "# Run the basic collector\n",
     "BasicCollector().collect(\n",
     "    glob(\"data/multiknapsack/*\"),\n",
-    "    build_multiknapsack_model,\n",
+    "    build_multiknapsack_model_gurobipy,\n",
     "    n_jobs=4,\n",
     ")\n",
     "\n",
@@ -166,7 +166,7 @@
     "\n",
     "    # Extract and print constraint features\n",
     "    x3 = ext.get_constr_features(h5)\n",
-    "    print(\"constraint features\", x3.shape, \"\\n\", x3)\n"
+    "    print(\"constraint features\", x3.shape, \"\\n\", x3)"
    ]
   },
   {
@@ -204,7 +204,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 6,
+    "execution_count": 2,
     "id": "a1bc38fe",
     "metadata": {
      "collapsed": false,
@@ -326,7 +326,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.9.16"
+    "version": "3.11.7"
    }
   },
  "nbformat": 4,
```
```diff
@@ -120,7 +120,7 @@
     "    extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n",
     "    constructor=MergeTopSolutions(k=3, thresholds=[0.25, 0.75]),\n",
     "    action=EnforceProximity(3),\n",
-    ")\n"
+    ")"
    ]
   },
   {
@@ -175,7 +175,7 @@
     "    ),\n",
     "    extractor=AlvLouWeh2017Extractor(),\n",
     "    action=SetWarmStart(),\n",
-    ")\n"
+    ")"
    ]
   },
   {
@@ -230,7 +230,7 @@
     "        instance_fields=[\"static_var_obj_coeffs\"],\n",
     "    ),\n",
     "    action=SetWarmStart(),\n",
-    ")\n"
+    ")"
    ]
   },
   {
@@ -263,7 +263,7 @@
     "# Configures an expert primal component, which reads a pre-computed\n",
     "# optimal solution from the HDF5 file and provides it to the solver\n",
     "# as warm start.\n",
-    "comp = ExpertPrimalComponent(action=SetWarmStart())\n"
+    "comp = ExpertPrimalComponent(action=SetWarmStart())"
    ]
   }
  ],
@@ -283,7 +283,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.9.16"
+    "version": "3.11.7"
    }
   },
  "nbformat": 4,
```
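The `MergeTopSolutions(k=3, thresholds=[0.25, 0.75])` constructor in the diff above merges the k best known solutions into a single partial assignment. A plausible reading of the thresholds, which is an assumption about the semantics rather than the library's exact rule, is: fix a binary variable to 0 if its average value across the merged solutions is at most 0.25, to 1 if at least 0.75, and leave it free otherwise. A minimal sketch under that assumption (`merge_top_solutions` is a hypothetical stand-in, not the MIPLearn class):

```python
def merge_top_solutions(solutions, k=3, thresholds=(0.25, 0.75)):
    """Merge the k best 0/1 solutions into one partial assignment.

    Returns a list with 0, 1, or None (left free) for each variable,
    assuming `solutions` is sorted best-first.
    """
    lo, hi = thresholds
    top = solutions[:k]
    merged = []
    for j in range(len(top[0])):
        avg = sum(sol[j] for sol in top) / len(top)
        if avg <= lo:
            merged.append(0)       # solutions agree: variable off
        elif avg >= hi:
            merged.append(1)       # solutions agree: variable on
        else:
            merged.append(None)    # disagreement: leave free
    return merged

# Three solutions agree on the first and last variables but not the middle one.
print(merge_top_solutions([[1, 0, 0], [1, 1, 0], [1, 0, 0]]))
# [1, None, 0]
```

The resulting partial solution is then handed to an action such as `SetWarmStart` or `EnforceProximity`, as configured in the component above.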
```diff
@@ -11,7 +11,7 @@
     "\n",
     "Benchmark sets such as [MIPLIB](https://miplib.zib.de/) or [TSPLIB](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/) are usually employed to evaluate the performance of conventional MIP solvers. Two shortcomings, however, make existing benchmark sets less suitable for evaluating the performance of learning-enhanced MIP solvers: (i) while existing benchmark sets typically contain hundreds or thousands of instances, machine learning (ML) methods typically benefit from having orders of magnitude more instances available for training; (ii) current machine learning methods typically provide best performance on sets of homogeneous instances, but general-purpose benchmark sets contain relatively few examples of each problem type.\n",
     "\n",
-    "To tackle this challenge, MIPLearn provides random instance generators for a wide variety of classical optimization problems, covering applications from different fields, that can be used to evaluate new learning-enhanced MIP techniques in a measurable and reproducible way. As of MIPLearn 0.3, nine problem generators are available, each customizable with user-provided probability distributions and flexible parameters. The generators can be configured, for example, to produce large sets of very similar instances of the same size, where only the objective function changes, or more diverse sets of instances, with various sizes and characteristics, belonging to a particular problem class.\n",
+    "To tackle this challenge, MIPLearn provides random instance generators for a wide variety of classical optimization problems, covering applications from different fields, that can be used to evaluate new learning-enhanced MIP techniques in a measurable and reproducible way. Nine problem generators are available, each customizable with user-provided probability distributions and flexible parameters. The generators can be configured, for example, to produce large sets of very similar instances of the same size, where only the objective function changes, or more diverse sets of instances, with various sizes and characteristics, belonging to a particular problem class.\n",
     "\n",
     "In the following, we describe the problems included in the library, their MIP formulation and the generation algorithm."
    ]
```
```diff
@@ -39,7 +39,6 @@
    "cell_type": "markdown",
    "id": "830f3784-a3fc-4e2f-a484-e7808841ffe8",
    "metadata": {
-    "jp-MarkdownHeadingCollapsed": true,
     "tags": []
    },
    "source": [
@@ -108,6 +107,10 @@
    "execution_count": 1,
    "id": "f14e560c-ef9f-4c48-8467-72d6acce5f9f",
    "metadata": {
+    "ExecuteTime": {
+     "end_time": "2023-11-07T16:29:48.409419720Z",
+     "start_time": "2023-11-07T16:29:47.824353556Z"
+    },
     "tags": []
    },
    "outputs": [
```
```diff
@@ -126,10 +129,11 @@
    "8 [ 8.47 21.9  16.58 15.37  3.76  3.91  1.57 20.57 14.76 18.61] 94.58\n",
    "9 [ 8.57 22.77 17.06 16.25  4.14  4.   1.56 22.97 14.09 19.09] 100.79\n",
    "\n",
-    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
-    "Restricted license - for non-production use only - expires 2024-10-28\n",
+    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
-    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
-    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
+    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
+    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
    "\n",
    "Optimize a model with 20 rows, 110 columns and 210 nonzeros\n",
    "Model fingerprint: 0x1ff9913f\n",
@@ -154,28 +158,22 @@
    "H    0     0                       2.0000000    1.27484  36.3%     -    0s\n",
    "     0     0    1.27484    0    4    2.00000    1.27484  36.3%     -    0s\n",
    "\n",
-    "Explored 1 nodes (38 simplex iterations) in 0.01 seconds (0.00 work units)\n",
-    "Thread count was 32 (of 32 available processors)\n",
+    "Explored 1 nodes (38 simplex iterations) in 0.03 seconds (0.00 work units)\n",
+    "Thread count was 20 (of 20 available processors)\n",
    "\n",
    "Solution count 3: 2 4 5 \n",
    "\n",
    "Optimal solution found (tolerance 1.00e-04)\n",
-    "Best objective 2.000000000000e+00, best bound 2.000000000000e+00, gap 0.0000%\n"
-   ]
-  },
-  {
-   "name": "stderr",
-   "output_type": "stream",
-   "text": [
-    "/home/axavier/.conda/envs/miplearn2/lib/python3.9/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
-    " from .autonotebook import tqdm as notebook_tqdm\n"
+    "Best objective 2.000000000000e+00, best bound 2.000000000000e+00, gap 0.0000%\n",
+    "\n",
+    "User-callback calls 143, time in user-callback 0.00 sec\n"
    ]
   }
  ],
  "source": [
   "import numpy as np\n",
   "from scipy.stats import uniform, randint\n",
-  "from miplearn.problems.binpack import BinPackGenerator, build_binpack_model\n",
+  "from miplearn.problems.binpack import BinPackGenerator, build_binpack_model_gurobipy\n",
   "\n",
   "# Set random seed, to make example reproducible\n",
   "np.random.seed(42)\n",
@@ -196,8 +194,8 @@
   "print()\n",
   "\n",
   "# Optimize first instance\n",
-  "model = build_binpack_model(data[0])\n",
-  "model.optimize()\n"
+  "model = build_binpack_model_gurobipy(data[0])\n",
+  "model.optimize()"
   ]
  },
  {
```
```diff
@@ -304,7 +302,12 @@
    "cell_type": "code",
    "execution_count": 2,
    "id": "1ce5f8fb-2769-4fbd-a40c-fd62b897690a",
-   "metadata": {},
+   "metadata": {
+    "ExecuteTime": {
+     "end_time": "2023-11-07T16:29:48.485068449Z",
+     "start_time": "2023-11-07T16:29:48.406139946Z"
+    }
+   },
    "outputs": [
    {
     "name": "stdout",
@@ -321,10 +324,10 @@
    "capacities\n",
    " [1310.  988. 1004. 1269. 1007.]\n",
    "\n",
-    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
+    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
-    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
-    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
+    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
+    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
    "\n",
    "Optimize a model with 5 rows, 10 columns and 50 nonzeros\n",
    "Model fingerprint: 0xaf3ac15e\n",
@@ -352,13 +355,15 @@
    "  Cover: 1\n",
    "\n",
    "Explored 1 nodes (4 simplex iterations) in 0.01 seconds (0.00 work units)\n",
-    "Thread count was 32 (of 32 available processors)\n",
+    "Thread count was 20 (of 20 available processors)\n",
    "\n",
    "Solution count 2: -1279 -804 \n",
    "No other solutions better than -1279\n",
    "\n",
    "Optimal solution found (tolerance 1.00e-04)\n",
-    "Best objective -1.279000000000e+03, best bound -1.279000000000e+03, gap 0.0000%\n"
+    "Best objective -1.279000000000e+03, best bound -1.279000000000e+03, gap 0.0000%\n",
+    "\n",
+    "User-callback calls 490, time in user-callback 0.00 sec\n"
    ]
   }
  ],
@@ -367,7 +372,7 @@
   "from scipy.stats import uniform, randint\n",
   "from miplearn.problems.multiknapsack import (\n",
   "    MultiKnapsackGenerator,\n",
-  "    build_multiknapsack_model,\n",
+  "    build_multiknapsack_model_gurobipy,\n",
   ")\n",
   "\n",
   "# Set random seed, to make example reproducible\n",
@@ -394,8 +399,8 @@
   "print()\n",
   "\n",
   "# Build model and optimize\n",
-  "model = build_multiknapsack_model(data[0])\n",
-  "model.optimize()\n"
+  "model = build_multiknapsack_model_gurobipy(data[0])\n",
+  "model.optimize()"
   ]
  },
  {
```
```diff
@@ -470,7 +475,12 @@
    "cell_type": "code",
    "execution_count": 3,
    "id": "4e0e4223-b4e0-4962-a157-82a23a86e37d",
-   "metadata": {},
+   "metadata": {
+    "ExecuteTime": {
+     "end_time": "2023-11-07T16:29:48.575025403Z",
+     "start_time": "2023-11-07T16:29:48.453962705Z"
+    }
+   },
    "outputs": [
    {
     "name": "stdout",
@@ -491,10 +501,10 @@
    "demands = [6.12 1.39 2.92 3.66 4.56 7.85 2.   5.14 5.92 0.46]\n",
    "capacities = [151.89  42.63  16.26 237.22 241.41 202.1   76.15  24.42 171.06 110.04]\n",
    "\n",
-    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
+    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
-    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
-    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
+    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
+    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
    "\n",
    "Optimize a model with 21 rows, 110 columns and 220 nonzeros\n",
    "Model fingerprint: 0x8d8d9346\n",
@@ -528,20 +538,22 @@
    "     0     0   86.06884    0   15   93.92000   86.06884  8.36%     -    0s\n",
    "*    0     0               0      91.2300000   91.23000  0.00%     -    0s\n",
    "\n",
-    "Explored 1 nodes (70 simplex iterations) in 0.02 seconds (0.00 work units)\n",
-    "Thread count was 32 (of 32 available processors)\n",
+    "Explored 1 nodes (70 simplex iterations) in 0.08 seconds (0.00 work units)\n",
+    "Thread count was 20 (of 20 available processors)\n",
    "\n",
    "Solution count 10: 91.23 93.92 93.98 ... 368.79\n",
    "\n",
    "Optimal solution found (tolerance 1.00e-04)\n",
-    "Best objective 9.123000000000e+01, best bound 9.123000000000e+01, gap 0.0000%\n"
+    "Best objective 9.123000000000e+01, best bound 9.123000000000e+01, gap 0.0000%\n",
+    "\n",
+    "User-callback calls 190, time in user-callback 0.00 sec\n"
    ]
   }
  ],
  "source": [
   "import numpy as np\n",
   "from scipy.stats import uniform, randint\n",
-  "from miplearn.problems.pmedian import PMedianGenerator, build_pmedian_model\n",
+  "from miplearn.problems.pmedian import PMedianGenerator, build_pmedian_model_gurobipy\n",
   "\n",
   "# Set random seed, to make example reproducible\n",
   "np.random.seed(42)\n",
@@ -569,8 +581,8 @@
   "print()\n",
   "\n",
   "# Build and optimize model\n",
-  "model = build_pmedian_model(data[0])\n",
-  "model.optimize()\n"
+  "model = build_pmedian_model_gurobipy(data[0])\n",
+  "model.optimize()"
   ]
  },
  {
```
```diff
@@ -643,7 +655,12 @@
    "cell_type": "code",
    "execution_count": 4,
    "id": "3224845b-9afd-463e-abf4-e0e93d304859",
-   "metadata": {},
+   "metadata": {
+    "ExecuteTime": {
+     "end_time": "2023-11-07T16:29:48.804292323Z",
+     "start_time": "2023-11-07T16:29:48.492933268Z"
+    }
+   },
    "outputs": [
    {
     "name": "stdout",
@@ -658,10 +675,10 @@
    "costs [1044.58  850.13 1014.5   944.83  697.9   971.87  213.49  220.98   70.23\n",
    "  425.33]\n",
    "\n",
-    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
+    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
-    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
-    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
+    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
+    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
    "\n",
    "Optimize a model with 5 rows, 10 columns and 28 nonzeros\n",
    "Model fingerprint: 0xe5c2d4fa\n",
@@ -677,12 +694,14 @@
    "Presolve: All rows and columns removed\n",
    "\n",
    "Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)\n",
-    "Thread count was 1 (of 32 available processors)\n",
+    "Thread count was 1 (of 20 available processors)\n",
    "\n",
    "Solution count 1: 213.49 \n",
    "\n",
    "Optimal solution found (tolerance 1.00e-04)\n",
-    "Best objective 2.134900000000e+02, best bound 2.134900000000e+02, gap 0.0000%\n"
+    "Best objective 2.134900000000e+02, best bound 2.134900000000e+02, gap 0.0000%\n",
+    "\n",
+    "User-callback calls 178, time in user-callback 0.00 sec\n"
    ]
   }
  ],
@@ -714,7 +733,7 @@
   "\n",
   "# Build and optimize model\n",
   "model = build_setcover_model_gurobipy(data[0])\n",
-  "model.optimize()\n"
+  "model.optimize()"
   ]
  },
  {
```
```diff
@@ -774,6 +793,10 @@
    "execution_count": 5,
    "id": "cc797da7",
    "metadata": {
+    "ExecuteTime": {
+     "end_time": "2023-11-07T16:29:48.806917868Z",
+     "start_time": "2023-11-07T16:29:48.781619530Z"
+    },
     "collapsed": false,
     "jupyter": {
      "outputs_hidden": false
@@ -793,10 +816,10 @@
    "costs [1044.58  850.13 1014.5   944.83  697.9   971.87  213.49  220.98   70.23\n",
    "  425.33]\n",
    "\n",
-    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
+    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
-    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
-    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
+    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
+    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
    "\n",
    "Optimize a model with 5 rows, 10 columns and 28 nonzeros\n",
    "Model fingerprint: 0x4ee91388\n",
@@ -812,20 +835,22 @@
    "Presolve: All rows and columns removed\n",
    "\n",
    "Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)\n",
-    "Thread count was 1 (of 32 available processors)\n",
+    "Thread count was 1 (of 20 available processors)\n",
    "\n",
    "Solution count 2: -1986.37 -1265.56 \n",
    "No other solutions better than -1986.37\n",
    "\n",
    "Optimal solution found (tolerance 1.00e-04)\n",
-    "Best objective -1.986370000000e+03, best bound -1.986370000000e+03, gap 0.0000%\n"
+    "Best objective -1.986370000000e+03, best bound -1.986370000000e+03, gap 0.0000%\n",
+    "\n",
+    "User-callback calls 238, time in user-callback 0.00 sec\n"
    ]
   }
  ],
  "source": [
   "import numpy as np\n",
   "from scipy.stats import uniform, randint\n",
-  "from miplearn.problems.setpack import SetPackGenerator, build_setpack_model\n",
+  "from miplearn.problems.setpack import SetPackGenerator, build_setpack_model_gurobipy\n",
   "\n",
   "# Set random seed, to make example reproducible\n",
   "np.random.seed(42)\n",
@@ -849,8 +874,8 @@
   "print()\n",
   "\n",
   "# Build and optimize model\n",
-  "model = build_setpack_model(data[0])\n",
-  "model.optimize()\n"
+  "model = build_setpack_model_gurobipy(data[0])\n",
+  "model.optimize()"
   ]
  },
  {
```
```diff
@@ -875,11 +900,10 @@
    "$$\n",
    "\\begin{align*}\n",
    "\\text{minimize} \\;\\;\\; & -\\sum_{v \\in V} w_v x_v \\\\\n",
-    "\\text{such that} \\;\\;\\; & \\sum_{v \\in C} x_v \\leq 1 & \\forall C \\in \\mathcal{C} \\\\\n",
+    "\\text{such that} \\;\\;\\; & x_v + x_u \\leq 1 & \\forall (v,u) \\in E \\\\\n",
    "& x_v \\in \\{0, 1\\} & \\forall v \\in V\n",
    "\\end{align*}\n",
-    "$$\n",
-    "where $\\mathcal{C}$ is the set of cliques in $G$. We recall that a clique is a subset of vertices in which every pair of vertices is adjacent."
+    "$$"
   ]
  },
  {
```
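The diff above replaces the clique inequalities $\sum_{v \in C} x_v \leq 1$ with the simpler edge inequalities $x_v + x_u \leq 1$ for every edge $(v,u) \in E$; both formulations describe the maximum-weight independent set problem. The edge formulation can be checked on a tiny graph by brute force. This is a plain-Python illustration of the model, not MIPLearn code, and it only scales to very small graphs:

```python
from itertools import product

def max_weight_independent_set(weights, edges):
    """Brute-force the MIP: maximize sum(w[v] * x[v]) subject to
    x[u] + x[v] <= 1 for every edge (u, v), with x binary."""
    n = len(weights)
    best_val, best_x = 0.0, [0] * n
    for x in product([0, 1], repeat=n):
        # Feasibility: no two adjacent vertices selected.
        if all(x[u] + x[v] <= 1 for u, v in edges):
            val = sum(w * xi for w, xi in zip(weights, x))
            if val > best_val:
                best_val, best_x = val, list(x)
    return best_val, best_x

# Path graph 0-1-2: the two endpoints (3 + 3) beat the heavier middle vertex (5).
val, x = max_weight_independent_set([3.0, 5.0, 3.0], [(0, 1), (1, 2)])
print(val, x)  # 6.0 [1, 0, 1]
```

The sign convention in the formulation (minimizing the negated weight sum) matches the negative objective values visible in the solver logs that follow.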
@@ -903,7 +927,12 @@
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "0f996e99-0ec9-472b-be8a-30c9b8556931",
|
||||
"metadata": {},
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2023-11-07T16:29:48.954896857Z",
|
||||
"start_time": "2023-11-07T16:29:48.825579097Z"
|
||||
}
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
@@ -913,13 +942,14 @@
    "weights[0] [37.45 95.07 73.2  59.87 15.6  15.6   5.81 86.62 60.11 70.81]\n",
    "weights[1] [ 2.06 96.99 83.24 21.23 18.18 18.34 30.42 52.48 43.19 29.12]\n",
    "\n",
    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
    "Set parameter PreCrush to value 1\n",
    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
    "\n",
    "Optimize a model with 10 rows, 10 columns and 24 nonzeros\n",
    "Model fingerprint: 0xf4c21689\n",
    "Optimize a model with 15 rows, 10 columns and 30 nonzeros\n",
    "Model fingerprint: 0x3240ea4a\n",
    "Variable types: 0 continuous, 10 integer (10 binary)\n",
    "Coefficient statistics:\n",
    "  Matrix range     [1e+00, 1e+00]\n",
@@ -927,26 +957,28 @@
    "  Bounds range     [1e+00, 1e+00]\n",
    "  RHS range        [1e+00, 1e+00]\n",
    "Found heuristic solution: objective -219.1400000\n",
    "Presolve removed 2 rows and 2 columns\n",
    "Presolve removed 7 rows and 2 columns\n",
    "Presolve time: 0.00s\n",
    "Presolved: 8 rows, 8 columns, 19 nonzeros\n",
    "Variable types: 0 continuous, 8 integer (8 binary)\n",
    "\n",
    "Root relaxation: objective -2.205650e+02, 4 iterations, 0.00 seconds (0.00 work units)\n",
    "Root relaxation: objective -2.205650e+02, 5 iterations, 0.00 seconds (0.00 work units)\n",
    "\n",
    "    Nodes    |    Current Node    |     Objective Bounds      |     Work\n",
    " Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time\n",
    "\n",
    "     0     0     infeasible    0      -219.14000 -219.14000  0.00%     -    0s\n",
    "\n",
    "Explored 1 nodes (4 simplex iterations) in 0.01 seconds (0.00 work units)\n",
    "Thread count was 32 (of 32 available processors)\n",
    "Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n",
    "Thread count was 20 (of 20 available processors)\n",
    "\n",
    "Solution count 1: -219.14 \n",
    "No other solutions better than -219.14\n",
    "\n",
    "Optimal solution found (tolerance 1.00e-04)\n",
    "Best objective -2.191400000000e+02, best bound -2.191400000000e+02, gap 0.0000%\n"
    "Best objective -2.191400000000e+02, best bound -2.191400000000e+02, gap 0.0000%\n",
    "\n",
    "User-callback calls 299, time in user-callback 0.00 sec\n"
   ]
  }
 ],
@@ -980,7 +1012,7 @@
  "\n",
  "# Load and optimize the first instance\n",
  "model = build_stab_model_gurobipy(data[0])\n",
  "model.optimize()\n"
  "model.optimize()"
 ]
},
{
@@ -1052,6 +1084,10 @@
 "execution_count": 7,
 "id": "9d0c56c6",
 "metadata": {
  "ExecuteTime": {
   "end_time": "2023-11-07T16:29:48.958833448Z",
   "start_time": "2023-11-07T16:29:48.898121017Z"
  },
  "collapsed": false,
  "jupyter": {
   "outputs_hidden": false
@@ -1085,11 +1121,12 @@
    " [ 444.  398.  371.  454.  356.  476.  565.  374.    0.  274.]\n",
    " [ 668.  446.  317.  648.  469.  752.  394.  286.  274.    0.]]\n",
    "\n",
    "Set parameter PreCrush to value 1\n",
    "Set parameter LazyConstraints to value 1\n",
    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
    "\n",
    "Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
    "Model fingerprint: 0x719675e5\n",
@@ -1114,7 +1151,7 @@
    "  Lazy constraints: 3\n",
    "\n",
    "Explored 1 nodes (17 simplex iterations) in 0.01 seconds (0.00 work units)\n",
    "Thread count was 32 (of 32 available processors)\n",
    "Thread count was 20 (of 20 available processors)\n",
    "\n",
    "Solution count 1: 2921 \n",
    "\n",
@@ -1129,7 +1166,10 @@
  "import random\n",
  "import numpy as np\n",
  "from scipy.stats import uniform, randint\n",
  "from miplearn.problems.tsp import TravelingSalesmanGenerator, build_tsp_model\n",
  "from miplearn.problems.tsp import (\n",
  "    TravelingSalesmanGenerator,\n",
  "    build_tsp_model_gurobipy,\n",
  ")\n",
  "\n",
  "# Set random seed to make example reproducible\n",
  "random.seed(42)\n",
@@ -1152,8 +1192,8 @@
  "print()\n",
  "\n",
  "# Load and optimize the first instance\n",
  "model = build_tsp_model(data[0])\n",
  "model.optimize()\n"
  "model = build_tsp_model_gurobipy(data[0])\n",
  "model.optimize()"
 ]
},
{
@@ -1262,6 +1302,10 @@
 "execution_count": 8,
 "id": "6217da7c",
 "metadata": {
  "ExecuteTime": {
   "end_time": "2023-11-07T16:29:49.061613905Z",
   "start_time": "2023-11-07T16:29:48.941857719Z"
  },
  "collapsed": false,
  "jupyter": {
   "outputs_hidden": false
@@ -1298,10 +1342,10 @@
    "  828.28  775.18  834.99  959.76  865.72 1193.52 1058.92  985.19  893.92\n",
    "  962.16  781.88  723.15  639.04  602.4   787.02]\n",
    "\n",
    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
    "\n",
    "Optimize a model with 578 rows, 360 columns and 2128 nonzeros\n",
    "Model fingerprint: 0x4dc1c661\n",
@@ -1312,7 +1356,7 @@
    "  Bounds range     [1e+00, 1e+00]\n",
    "  RHS range        [1e+00, 1e+03]\n",
    "Presolve removed 244 rows and 131 columns\n",
    "Presolve time: 0.02s\n",
    "Presolve time: 0.01s\n",
    "Presolved: 334 rows, 229 columns, 842 nonzeros\n",
    "Variable types: 116 continuous, 113 integer (113 binary)\n",
    "Found heuristic solution: objective 440662.46430\n",
@@ -1339,13 +1383,15 @@
    "  RLT: 1\n",
    "  Relax-and-lift: 7\n",
    "\n",
    "Explored 1 nodes (234 simplex iterations) in 0.04 seconds (0.02 work units)\n",
    "Thread count was 32 (of 32 available processors)\n",
    "Explored 1 nodes (234 simplex iterations) in 0.02 seconds (0.02 work units)\n",
    "Thread count was 20 (of 20 available processors)\n",
    "\n",
    "Solution count 5: 364722 368600 374044 ... 440662\n",
    "\n",
    "Optimal solution found (tolerance 1.00e-04)\n",
    "Best objective 3.647217661000e+05, best bound 3.647217661000e+05, gap 0.0000%\n"
    "Best objective 3.647217661000e+05, best bound 3.647217661000e+05, gap 0.0000%\n",
    "\n",
    "User-callback calls 677, time in user-callback 0.00 sec\n"
   ]
  }
 ],
@@ -1353,7 +1399,7 @@
  "import random\n",
  "import numpy as np\n",
  "from scipy.stats import uniform, randint\n",
  "from miplearn.problems.uc import UnitCommitmentGenerator, build_uc_model\n",
  "from miplearn.problems.uc import UnitCommitmentGenerator, build_uc_model_gurobipy\n",
  "\n",
  "# Set random seed to make example reproducible\n",
  "random.seed(42)\n",
@@ -1389,8 +1435,8 @@
  "    print()\n",
  "\n",
  "# Load and optimize the first instance\n",
  "model = build_uc_model(data[0])\n",
  "model.optimize()\n"
  "model = build_uc_model_gurobipy(data[0])\n",
  "model.optimize()"
 ]
},
{
@@ -1450,7 +1496,12 @@
 "cell_type": "code",
 "execution_count": 9,
 "id": "5fff7afe-5b7a-4889-a502-66751ec979bf",
 "metadata": {},
 "metadata": {
  "ExecuteTime": {
   "end_time": "2023-11-07T16:29:49.075657363Z",
   "start_time": "2023-11-07T16:29:49.049561363Z"
  }
 },
 "outputs": [
  {
   "name": "stdout",
@@ -1460,10 +1511,10 @@
    "weights[0] [37.45 95.07 73.2  59.87 15.6  15.6   5.81 86.62 60.11 70.81]\n",
    "weights[1] [ 2.06 96.99 83.24 21.23 18.18 18.34 30.42 52.48 43.19 29.12]\n",
    "\n",
    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
    "\n",
    "Optimize a model with 15 rows, 10 columns and 30 nonzeros\n",
    "Model fingerprint: 0x2d2d1390\n",
@@ -1487,12 +1538,14 @@
    "     0     0     infeasible    0       301.00000  301.00000  0.00%     -    0s\n",
    "\n",
    "Explored 1 nodes (8 simplex iterations) in 0.01 seconds (0.00 work units)\n",
    "Thread count was 32 (of 32 available processors)\n",
    "Thread count was 20 (of 20 available processors)\n",
    "\n",
    "Solution count 1: 301 \n",
    "\n",
    "Optimal solution found (tolerance 1.00e-04)\n",
    "Best objective 3.010000000000e+02, best bound 3.010000000000e+02, gap 0.0000%\n"
    "Best objective 3.010000000000e+02, best bound 3.010000000000e+02, gap 0.0000%\n",
    "\n",
    "User-callback calls 326, time in user-callback 0.00 sec\n"
   ]
  }
 ],
@@ -1502,7 +1555,7 @@
  "from scipy.stats import uniform, randint\n",
  "from miplearn.problems.vertexcover import (\n",
  "    MinWeightVertexCoverGenerator,\n",
  "    build_vertexcover_model,\n",
  "    build_vertexcover_model_gurobipy,\n",
  ")\n",
  "\n",
  "# Set random seed to make example reproducible\n",
@@ -1525,22 +1578,9 @@
  "print()\n",
  "\n",
  "# Load and optimize the first instance\n",
  "model = build_vertexcover_model(data[0])\n",
  "model.optimize()\n"
  "model = build_vertexcover_model_gurobipy(data[0])\n",
  "model.optimize()"
 ]
},
{
 "cell_type": "code",
 "execution_count": null,
 "id": "9f12e91f",
 "metadata": {
  "collapsed": false,
  "jupyter": {
   "outputs_hidden": false
  }
 },
 "outputs": [],
 "source": []
}
],
"metadata": {
@@ -1559,7 +1599,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
 "version": "3.9.16"
 "version": "3.11.7"
}
},
"nbformat": 4,

@@ -57,7 +57,7 @@
},
{
 "cell_type": "code",
 "execution_count": 3,
 "execution_count": 1,
 "id": "92b09b98",
 "metadata": {
  "collapsed": false,
@@ -70,10 +70,11 @@
   "name": "stdout",
   "output_type": "stream",
   "text": [
    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
    "Restricted license - for non-production use only - expires 2024-10-28\n",
    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
    "\n",
    "Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
    "Model fingerprint: 0x6ddcd141\n",
@@ -91,11 +92,14 @@
    "\n",
    "Solved in 15 iterations and 0.00 seconds (0.00 work units)\n",
    "Optimal objective  2.761000000e+03\n",
    "Set parameter LazyConstraints to value 1\n",
    "Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
    "\n",
    "CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
    "Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
    "User-callback calls 56, time in user-callback 0.00 sec\n",
    "Set parameter PreCrush to value 1\n",
    "Set parameter LazyConstraints to value 1\n",
    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
    "Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
    "\n",
    "Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
    "Model fingerprint: 0x74ca3d0a\n",
@@ -125,7 +129,7 @@
    "  Lazy constraints: 3\n",
    "\n",
    "Explored 1 nodes (16 simplex iterations) in 0.01 seconds (0.00 work units)\n",
    "Thread count was 32 (of 32 available processors)\n",
    "Thread count was 20 (of 20 available processors)\n",
    "\n",
"Solution count 1: 2796 \n",
    "\n",
@@ -141,7 +145,7 @@
    "{'WS: Count': 1, 'WS: Number of variables set': 41.0}"
   ]
  },
  "execution_count": 3,
  "execution_count": 1,
  "metadata": {},
  "output_type": "execute_result"
 }
@@ -162,7 +166,7 @@
 "from miplearn.io import write_pkl_gz\n",
 "from miplearn.problems.tsp import (\n",
 "    TravelingSalesmanGenerator,\n",
 "    build_tsp_model,\n",
 "    build_tsp_model_gurobipy,\n",
 ")\n",
 "from miplearn.solvers.learning import LearningSolver\n",
 "\n",
@@ -189,7 +193,7 @@
 "\n",
 "# Collect training data\n",
 "bc = BasicCollector()\n",
 "bc.collect(train_data, build_tsp_model, n_jobs=4)\n",
 "bc.collect(train_data, build_tsp_model_gurobipy, n_jobs=4)\n",
 "\n",
 "# Build learning solver\n",
 "solver = LearningSolver(\n",
@@ -211,7 +215,7 @@
 "solver.fit(train_data)\n",
 "\n",
 "# Solve a test instance\n",
 "solver.optimize(test_data[0], build_tsp_model)"
 "solver.optimize(test_data[0], build_tsp_model_gurobipy)"
 ]
},
{
@@ -239,7 +243,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
 "version": "3.9.12"
 "version": "3.11.7"
}
},
"nbformat": 4,

@@ -16,6 +16,7 @@ Contents
   tutorials/getting-started-pyomo
   tutorials/getting-started-gurobipy
   tutorials/getting-started-jump
   tutorials/cuts-gurobipy

.. toctree::
   :maxdepth: 2
@@ -60,7 +61,7 @@ Citing MIPLearn

If you use MIPLearn in your research (either the solver or the included problem generators), we kindly request that you cite the package as follows:

* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.3)*. Zenodo (2023). DOI: https://doi.org/10.5281/zenodo.4287567
* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.4)*. Zenodo (2024). DOI: https://doi.org/10.5281/zenodo.4287567

If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed:

541
docs/tutorials/cuts-gurobipy.ipynb
Normal file
@@ -0,0 +1,541 @@
{
"cells": [
{
 "cell_type": "markdown",
 "id": "b4bd8bd6-3ce9-4932-852f-f98a44120a3e",
 "metadata": {},
 "source": [
  "# User cuts and lazy constraints\n",
  "\n",
  "User cuts and lazy constraints are two advanced mixed-integer programming techniques that can accelerate solver performance. User cuts are additional constraints, derived from the constraints already in the model, that can tighten the feasible region and eliminate fractional solutions, thus reducing the size of the branch-and-bound tree. Lazy constraints, on the other hand, are constraints that are potentially part of the problem formulation but are omitted from the initial model to reduce its size; these constraints are added to the formulation only once the solver finds a solution that violates them. While both techniques have been successful, significant computational effort may still be required to generate strong user cuts and to identify violated lazy constraints, which can reduce their effectiveness.\n",
  "\n",
  "MIPLearn is able to predict which user cuts and which lazy constraints to enforce at the beginning of the optimization process, using machine learning. In this tutorial, we will use the framework to predict subtour elimination constraints for the **traveling salesman problem** using Gurobipy. We assume that MIPLearn has already been correctly installed.\n",
  "\n",
  "<div class=\"alert alert-info\">\n",
  "\n",
  "Solver Compatibility\n",
  "\n",
  "User cuts and lazy constraints are also supported in the Python/Pyomo and Julia/JuMP versions of the package. See the source code of <code>build_tsp_model_pyomo</code> and <code>build_tsp_model_jump</code> for more details. Note, however, the following limitations:\n",
  "\n",
  "- Python/Pyomo: Only `gurobi_persistent` is currently supported. PRs implementing callbacks for other persistent solvers are welcome.\n",
  "- Julia/JuMP: Only solvers supporting solver-independent callbacks are supported. As of JuMP 1.19, this includes Gurobi, CPLEX, XPRESS, SCIP and GLPK. Note that HiGHS and Cbc are not supported. As newer versions of JuMP implement further callback support, MIPLearn should become automatically compatible with these solvers.\n",
  "\n",
  "</div>"
 ]
},
{
 "cell_type": "markdown",
 "id": "72229e1f-cbd8-43f0-82ee-17d6ec9c3b7d",
 "metadata": {},
 "source": [
  "## Modeling the traveling salesman problem\n",
  "\n",
  "Given a list of cities and the distances between them, the **traveling salesman problem (TSP)** asks for the shortest route starting at the first city, visiting each other city exactly once, then returning to the first city. This problem is a generalization of the Hamiltonian path problem, one of Karp's 21 NP-complete problems, and has many practical applications, including routing delivery trucks and scheduling airline routes.\n",
  "\n",
  "To describe an instance of TSP, we need to specify the number of cities $n$, and an $n \\times n$ matrix of distances. The class `TravelingSalesmanData`, in the `miplearn.problems.tsp` package, can hold this data:"
 ]
},
{
 "cell_type": "markdown",
 "id": "4598a1bc-55b6-48cc-a050-2262786c203a",
 "metadata": {},
 "source": [
  "```python\n",
  "@dataclass\n",
  "class TravelingSalesmanData:\n",
  "    n_cities: int\n",
  "    distances: np.ndarray\n",
  "```"
 ]
},
{
 "cell_type": "markdown",
 "id": "3a43cc12-1207-4247-bdb2-69a6a2910738",
 "metadata": {},
 "source": [
  "MIPLearn also provides `TravelingSalesmanGenerator`, a random generator for TSP instances, and `build_tsp_model_gurobipy`, a function which converts `TravelingSalesmanData` into an actual gurobipy optimization model, and which uses lazy constraints to enforce subtour elimination.\n",
  "\n",
  "The example below is a simplified and annotated version of `build_tsp_model_gurobipy`, illustrating the usage of callbacks with MIPLearn. Compared to the previous tutorial examples, note that, in addition to defining the variables, objective function and constraints of our problem, we also define two callback functions, `lazy_separate` and `lazy_enforce`."
 ]
},
{
 "cell_type": "code",
 "execution_count": 1,
 "id": "e4712a85-0327-439c-8889-933e1ff714e7",
 "metadata": {},
 "outputs": [],
 "source": [
  "import gurobipy as gp\n",
  "from gurobipy import quicksum, GRB, tuplelist\n",
  "from miplearn.solvers.gurobi import GurobiModel\n",
  "import networkx as nx\n",
  "import numpy as np\n",
  "from miplearn.problems.tsp import (\n",
  "    TravelingSalesmanData,\n",
  "    TravelingSalesmanGenerator,\n",
  ")\n",
  "from scipy.stats import uniform, randint\n",
  "from miplearn.io import write_pkl_gz, read_pkl_gz\n",
  "from miplearn.collectors.basic import BasicCollector\n",
  "from miplearn.solvers.learning import LearningSolver\n",
  "from miplearn.components.lazy.mem import MemorizingLazyComponent\n",
  "from miplearn.extractors.fields import H5FieldsExtractor\n",
  "from sklearn.neighbors import KNeighborsClassifier\n",
  "\n",
  "# Set random seed to make example more reproducible\n",
  "np.random.seed(42)\n",
  "\n",
  "# Set up Python logging\n",
  "import logging\n",
  "\n",
  "logging.basicConfig(level=logging.WARNING)\n",
  "\n",
  "\n",
  "def build_tsp_model_gurobipy_simplified(data):\n",
  "    # Read data from file if a filename is provided\n",
  "    if isinstance(data, str):\n",
  "        data = read_pkl_gz(data)\n",
  "\n",
  "    # Create empty gurobipy model\n",
  "    model = gp.Model()\n",
  "\n",
  "    # Create set of edges between every pair of cities, for convenience\n",
  "    edges = tuplelist(\n",
  "        (i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)\n",
  "    )\n",
  "\n",
  "    # Add binary variable x[e] for each edge e\n",
  "    x = model.addVars(edges, vtype=GRB.BINARY, name=\"x\")\n",
  "\n",
  "    # Add objective function\n",
  "    model.setObjective(quicksum(x[(i, j)] * data.distances[i, j] for (i, j) in edges))\n",
  "\n",
  "    # Add constraint: must choose two edges adjacent to each city\n",
  "    model.addConstrs(\n",
  "        (\n",
  "            quicksum(x[min(i, j), max(i, j)] for j in range(data.n_cities) if i != j)\n",
  "            == 2\n",
  "            for i in range(data.n_cities)\n",
  "        ),\n",
  "        name=\"eq_degree\",\n",
  "    )\n",
  "\n",
  "    def lazy_separate(m: GurobiModel):\n",
  "        \"\"\"\n",
  "        Callback function that finds subtours in the current solution.\n",
  "        \"\"\"\n",
  "        # Query current value of the x variables\n",
  "        x_val = m.inner.cbGetSolution(x)\n",
  "\n",
  "        # Initialize empty set of violations\n",
  "        violations = []\n",
  "\n",
  "        # Build set of edges we have currently selected\n",
  "        selected_edges = [e for e in edges if x_val[e] > 0.5]\n",
  "\n",
  "        # Build a graph containing the selected edges, using networkx\n",
  "        graph = nx.Graph()\n",
  "        graph.add_edges_from(selected_edges)\n",
  "\n",
  "        # For each component of the graph\n",
  "        for component in list(nx.connected_components(graph)):\n",
  "\n",
  "            # If the component is not the entire graph, we found a\n",
  "            # subtour. Add the edge cut to the list of violations.\n",
  "            if len(component) < data.n_cities:\n",
  "                cut_edges = [\n",
  "                    [e[0], e[1]]\n",
  "                    for e in edges\n",
  "                    if (e[0] in component and e[1] not in component)\n",
  "                    or (e[0] not in component and e[1] in component)\n",
  "                ]\n",
  "                violations.append(cut_edges)\n",
  "\n",
  "        # Return the list of violations\n",
  "        return violations\n",
  "\n",
  "    def lazy_enforce(m: GurobiModel, violations) -> None:\n",
  "        \"\"\"\n",
  "        Callback function that, given a list of subtours, adds lazy\n",
  "        constraints to remove them from the feasible region.\n",
  "        \"\"\"\n",
  "        print(f\"Enforcing {len(violations)} subtour elimination constraints\")\n",
  "        for violation in violations:\n",
  "            m.add_constr(quicksum(x[e[0], e[1]] for e in violation) >= 2)\n",
  "\n",
  "    return GurobiModel(\n",
  "        model,\n",
  "        lazy_separate=lazy_separate,\n",
  "        lazy_enforce=lazy_enforce,\n",
  "    )"
 ]
},
{
 "cell_type": "markdown",
 "id": "58875042-d6ac-4f93-b3cc-9a5822b11dad",
 "metadata": {},
 "source": [
  "The `lazy_separate` function starts by querying the values of the current candidate solution through `m.inner.cbGetSolution` (recall that `m.inner` is a regular gurobipy model), then finds the set of violated lazy constraints. Unlike a regular lazy constraint solver callback, note that `lazy_separate` does not add the violated constraints to the model; it simply returns a list of objects that uniquely identifies the set of lazy constraints that should be generated. Enforcing the constraints is the responsibility of the second callback function, `lazy_enforce`. This function takes as input the model and the list of violations found by `lazy_separate`, converts them into actual constraints, and adds them to the model through `m.add_constr`.\n",
  "\n",
  "During training data generation, MIPLearn calls `lazy_separate` and `lazy_enforce` in sequence, inside a regular solver callback. However, once the machine learning models are trained, MIPLearn calls `lazy_enforce` directly, before the optimization process starts, with a list of **predicted** violations, as we will see in the example below."
 ]
},
{
 "cell_type": "markdown",
 "id": "5839728e-406c-4be2-ba81-83f2b873d4b2",
 "metadata": {},
 "source": [
  "<div class=\"alert alert-info\">\n",
  "\n",
  "Constraint Representation\n",
  "\n",
  "How user cuts and lazy constraints are represented is up to the user; MIPLearn is representation-agnostic. The objects returned by `lazy_separate`, however, are serialized as JSON and stored in the HDF5 training data files. Therefore, it is recommended to use only simple objects, such as lists, tuples and dictionaries.\n",
  "\n",
  "</div>"
 ]
},
{
 "cell_type": "markdown",
 "id": "847ae32e-fad7-406a-8797-0d79065a07fd",
 "metadata": {},
 "source": [
  "## Generating training data\n",
  "\n",
  "To test the callback defined above, we generate a small set of TSP instances, using the provided random instance generator. As in the previous tutorial, we generate some test instances and some training instances, then solve them using `BasicCollector`. Input problem data is stored in `tsp/train/00000.pkl.gz, ...`, whereas solver training data (including list of required lazy constraints) is stored in `tsp/train/00000.h5, ...`."
 ]
},
{
 "cell_type": "code",
 "execution_count": 2,
 "id": "eb63154a-1fa6-4eac-aa46-6838b9c201f6",
 "metadata": {},
 "outputs": [],
 "source": [
  "# Configure generator to produce instances with 50 cities located\n",
  "# in the 1000 x 1000 square, and with slightly perturbed distances.\n",
  "gen = TravelingSalesmanGenerator(\n",
  "    x=uniform(loc=0.0, scale=1000.0),\n",
  "    y=uniform(loc=0.0, scale=1000.0),\n",
  "    n=randint(low=50, high=51),\n",
  "    gamma=uniform(loc=1.0, scale=0.25),\n",
  "    fix_cities=True,\n",
  "    round=True,\n",
  ")\n",
  "\n",
  "# Generate 500 instances and store input data file to .pkl.gz files\n",
  "data = gen.generate(500)\n",
  "train_data = write_pkl_gz(data[0:450], \"tsp/train\")\n",
  "test_data = write_pkl_gz(data[450:500], \"tsp/test\")\n",
  "\n",
  "# Solve the training instances in parallel, collecting the required lazy\n",
  "# constraints, in addition to other information, such as optimal solution.\n",
  "bc = BasicCollector()\n",
  "bc.collect(train_data, build_tsp_model_gurobipy_simplified, n_jobs=10)"
 ]
},
{
 "cell_type": "markdown",
 "id": "6903c26c-dbe0-4a2e-bced-fdbf93513dde",
 "metadata": {},
 "source": [
  "## Training and solving new instances"
 ]
},
{
 "cell_type": "markdown",
 "id": "57cd724a-2d27-4698-a1e6-9ab8345ef31f",
 "metadata": {},
 "source": [
  "After producing the training dataset, we can train the machine learning models to predict which lazy constraints are necessary. In this tutorial, we use the following ML strategy: given a new instance, find the 50 most similar ones in the training dataset and verify how often each lazy constraint was required. If a lazy constraint was required for the majority of the 50 most-similar instances, enforce it ahead-of-time for the current instance. To measure instance similarity, use the objective function only. This ML strategy can be implemented using `MemorizingLazyComponent` with `H5FieldsExtractor` and `KNeighborsClassifier`, as shown below."
 ]
},
{
 "cell_type": "code",
 "execution_count": 3,
 "id": "43779e3d-4174-4189-bc75-9f564910e212",
 "metadata": {},
 "outputs": [],
 "source": [
  "solver = LearningSolver(\n",
  "    components=[\n",
  "        MemorizingLazyComponent(\n",
  "            extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n",
  "            clf=KNeighborsClassifier(n_neighbors=100),\n",
  "        ),\n",
  "    ],\n",
  ")\n",
  "solver.fit(train_data)"
 ]
},
{
 "cell_type": "markdown",
 "id": "12480712-9d3d-4cbc-a6d7-d6c1e2f950f4",
 "metadata": {},
 "source": [
  "Next, we solve one of the test instances using the trained solver. In the run below, we can see that MIPLearn adds many lazy constraints ahead-of-time, before the optimization starts. During the optimization process itself, some additional lazy constraints are required, but very few."
 ]
},
{
 "cell_type": "code",
 "execution_count": 4,
 "id": "23f904ad-f1a8-4b5a-81ae-c0b9e813a4b2",
 "metadata": {},
 "outputs": [
  {
   "name": "stdout",
   "output_type": "stream",
   "text": [
    "Set parameter Threads to value 1\n",
    "Restricted license - for non-production use only - expires 2024-10-28\n",
    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
    "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
    "\n",
    "Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n",
    "Model fingerprint: 0x04d7bec1\n",
    "Coefficient statistics:\n",
    "  Matrix range     [1e+00, 1e+00]\n",
    "  Objective range  [1e+01, 1e+03]\n",
    "  Bounds range     [1e+00, 1e+00]\n",
    "  RHS range        [2e+00, 2e+00]\n",
    "Presolve time: 0.00s\n",
    "Presolved: 50 rows, 1225 columns, 2450 nonzeros\n",
    "\n",
    "Iteration    Objective       Primal Inf.    Dual Inf.      Time\n",
    "       0    4.0600000e+02   9.700000e+01   0.000000e+00      0s\n",
    "      66    5.5880000e+03   0.000000e+00   0.000000e+00      0s\n",
    "\n",
    "Solved in 66 iterations and 0.01 seconds (0.00 work units)\n",
    "Optimal objective  5.588000000e+03\n",
    "\n",
    "User-callback calls 107, time in user-callback 0.00 sec\n"
   ]
  },
  {
   "name": "stderr",
   "output_type": "stream",
   "text": [
    "INFO:miplearn.components.cuts.mem:Predicting violated lazy constraints...\n",
    "INFO:miplearn.components.lazy.mem:Enforcing 19 constraints ahead-of-time...\n"
   ]
  },
  {
   "name": "stdout",
   "output_type": "stream",
   "text": [
    "Enforcing 19 subtour elimination constraints\n",
    "Set parameter PreCrush to value 1\n",
    "Set parameter LazyConstraints to value 1\n",
    "Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
    "\n",
    "CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
    "Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
    "\n",
    "Optimize a model with 69 rows, 1225 columns and 6091 nonzeros\n",
    "Model fingerprint: 0x09bd34d6\n",
    "Variable types: 0 continuous, 1225 integer (1225 binary)\n",
    "Coefficient statistics:\n",
    "  Matrix range     [1e+00, 1e+00]\n",
    "  Objective range  [1e+01, 1e+03]\n",
|
||||
" Bounds range [1e+00, 1e+00]\n",
|
||||
" RHS range [2e+00, 2e+00]\n",
|
||||
"Found heuristic solution: objective 29853.000000\n",
|
||||
"Presolve time: 0.00s\n",
|
||||
"Presolved: 69 rows, 1225 columns, 6091 nonzeros\n",
|
||||
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
|
||||
"\n",
|
||||
"Root relaxation: objective 6.139000e+03, 93 iterations, 0.00 seconds (0.00 work units)\n",
|
||||
"\n",
|
||||
" Nodes | Current Node | Objective Bounds | Work\n",
|
||||
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
|
||||
"\n",
|
||||
" 0 0 6139.00000 0 6 29853.0000 6139.00000 79.4% - 0s\n",
|
||||
"H 0 0 6390.0000000 6139.00000 3.93% - 0s\n",
|
||||
" 0 0 6165.50000 0 10 6390.00000 6165.50000 3.51% - 0s\n",
|
||||
"Enforcing 3 subtour elimination constraints\n",
|
||||
" 0 0 6165.50000 0 6 6390.00000 6165.50000 3.51% - 0s\n",
|
||||
" 0 0 6198.50000 0 16 6390.00000 6198.50000 3.00% - 0s\n",
|
||||
"* 0 0 0 6219.0000000 6219.00000 0.00% - 0s\n",
|
||||
"\n",
|
||||
"Cutting planes:\n",
|
||||
" Gomory: 11\n",
|
||||
" MIR: 1\n",
|
||||
" Zero half: 4\n",
|
||||
" Lazy constraints: 3\n",
|
||||
"\n",
|
||||
"Explored 1 nodes (222 simplex iterations) in 0.03 seconds (0.02 work units)\n",
|
||||
"Thread count was 1 (of 20 available processors)\n",
|
||||
"\n",
|
||||
"Solution count 3: 6219 6390 29853 \n",
|
||||
"\n",
|
||||
"Optimal solution found (tolerance 1.00e-04)\n",
|
||||
"Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%\n",
|
||||
"\n",
|
||||
"User-callback calls 141, time in user-callback 0.00 sec\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Increase log verbosity, so that we can see what is MIPLearn doing\n",
|
||||
"logging.getLogger(\"miplearn\").setLevel(logging.INFO)\n",
|
||||
"\n",
|
||||
"# Solve a new test instance\n",
|
||||
"solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);"
|
||||
]
|
||||
},
|
||||
{
"cell_type": "markdown",
"id": "79cc3e61-ee2b-4f18-82cb-373d55d67de6",
"metadata": {},
"source": [
"Finally, we solve the same instance with a regular solver, without ML prediction. We can see that many more lazy constraints are added during the optimization process itself, and that the solver requires more iterations to find the optimal solution. Running times are not significantly different because these instances are small."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a015c51c-091a-43b6-b761-9f3577fc083e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n",
"Model fingerprint: 0x04d7bec1\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Presolve time: 0.00s\n",
"Presolved: 50 rows, 1225 columns, 2450 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 4.0600000e+02 9.700000e+01 0.000000e+00 0s\n",
" 66 5.5880000e+03 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 66 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 5.588000000e+03\n",
"\n",
"User-callback calls 107, time in user-callback 0.00 sec\n",
"Set parameter PreCrush to value 1\n",
"Set parameter LazyConstraints to value 1\n",
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n",
"Model fingerprint: 0x77a94572\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Found heuristic solution: objective 29695.000000\n",
"Presolve time: 0.00s\n",
"Presolved: 50 rows, 1225 columns, 2500 nonzeros\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"\n",
"Root relaxation: objective 5.588000e+03, 68 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 5588.00000 0 12 29695.0000 5588.00000 81.2% - 0s\n",
"Enforcing 9 subtour elimination constraints\n",
"Enforcing 11 subtour elimination constraints\n",
"H 0 0 27241.000000 5588.00000 79.5% - 0s\n",
" 0 0 5898.00000 0 8 27241.0000 5898.00000 78.3% - 0s\n",
"Enforcing 4 subtour elimination constraints\n",
"Enforcing 3 subtour elimination constraints\n",
" 0 0 6066.00000 0 - 27241.0000 6066.00000 77.7% - 0s\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6128.00000 0 - 27241.0000 6128.00000 77.5% - 0s\n",
" 0 0 6139.00000 0 6 27241.0000 6139.00000 77.5% - 0s\n",
"H 0 0 6368.0000000 6139.00000 3.60% - 0s\n",
" 0 0 6154.75000 0 15 6368.00000 6154.75000 3.35% - 0s\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6154.75000 0 6 6368.00000 6154.75000 3.35% - 0s\n",
" 0 0 6165.75000 0 11 6368.00000 6165.75000 3.18% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
" 0 0 6204.00000 0 6 6368.00000 6204.00000 2.58% - 0s\n",
"* 0 0 0 6219.0000000 6219.00000 0.00% - 0s\n",
"\n",
"Cutting planes:\n",
" Gomory: 5\n",
" MIR: 1\n",
" Zero half: 4\n",
" Lazy constraints: 4\n",
"\n",
"Explored 1 nodes (224 simplex iterations) in 0.10 seconds (0.03 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 4: 6219 6368 27241 29695 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 170, time in user-callback 0.01 sec\n"
]
}
],
"source": [
"solver = LearningSolver(components=[]) # empty set of ML components\n",
"solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);"
]
},
{
"cell_type": "markdown",
"id": "432c99b2-67fe-409b-8224-ccef91de96d1",
"metadata": {},
"source": [
"## Learning user cuts\n",
"\n",
"The example above focused on lazy constraints. To enforce user cuts instead, the procedure is very similar, with the following changes:\n",
"\n",
"- Instead of `lazy_separate` and `lazy_enforce`, use `cuts_separate` and `cuts_enforce`\n",
"- Instead of `m.inner.cbGetSolution`, use `m.inner.cbGetNodeRel`\n",
"\n",
"For a complete example, see `build_stab_model_gurobipy`, `build_stab_model_pyomo` and `build_stab_model_jump`, which solve the maximum-weight stable set problem using user cut callbacks."
]
},
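The contrast between the two callback styles can be sketched with a small, self-contained example. The function names mirror the ones listed above, but the plain dictionaries standing in for `cbGetSolution` (an integer candidate) and `cbGetNodeRel` (a fractional node relaxation) are hypothetical simplifications, not MIPLearn's actual API:

```python
# Illustrative sketch: lazy constraints are separated from *integer*
# candidate solutions (as returned by cbGetSolution), while user cuts are
# separated from *fractional* node relaxations (as returned by cbGetNodeRel).
# Both solver queries are simulated with plain dictionaries here.

def lazy_separate(candidate: dict) -> list:
    """Return names of variables set to one in an integer candidate solution,
    i.e. the quantities a lazy-constraint callback would inspect."""
    return sorted(name for name, val in candidate.items() if round(val) == 1)

def cuts_separate(node_rel: dict) -> list:
    """Return names of fractional variables in the node LP relaxation,
    i.e. the quantities a user-cut callback would try to cut off."""
    return sorted(name for name, val in node_rel.items() if 1e-6 < val < 1 - 1e-6)

print(lazy_separate({"x1": 1.0, "x2": 0.0, "x3": 1.0}))
print(cuts_separate({"x1": 0.5, "x2": 1.0, "x3": 0.3}))
```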
{
"cell_type": "code",
"execution_count": null,
"id": "e6cb694d-8c43-410f-9a13-01bf9e0763b7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -33,6 +33,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "02f0a927",
"metadata": {},
@@ -44,53 +45,11 @@
"- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n",
"- Julia version, compatible with the JuMP modeling language.\n",
"\n",
"In this tutorial, we will demonstrate how to use and install the Python/Gurobipy version of the package. The first step is to install Python 3.8+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cd8a69c1",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:18:02.381829278Z",
"start_time": "2023-06-06T20:18:02.381532300Z"
}
},
"outputs": [],
"source": [
"# !pip install MIPLearn==0.3.0"
]
},
{
"cell_type": "markdown",
"id": "e8274543",
"metadata": {},
"source": [
"In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also install a demo license for Gurobi, which should able to solve the small optimization problems in this tutorial. A license is required for solving larger-scale problems."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dcc8756c",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:18:15.537811992Z",
"start_time": "2023-06-06T20:18:13.449177860Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: gurobipy<10.1,>=10 in /home/axavier/Software/anaconda3/envs/miplearn/lib/python3.8/site-packages (10.0.1)\n"
]
}
],
"source": [
"!pip install 'gurobipy>=10,<10.1'"
"In this tutorial, we will demonstrate how to use and install the Python/Gurobipy version of the package. The first step is to install Python 3.9+ in your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n",
"\n",
"```\n",
"$ pip install MIPLearn~=0.4\n",
"```"
]
},
{
@@ -162,7 +121,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"id": "22a67170-10b4-43d3-8708-014d91141e73",
"metadata": {
"ExecuteTime": {
@@ -198,7 +157,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "2f67032f-0d74-4317-b45c-19da0ec859e9",
"metadata": {
"ExecuteTime": {
@@ -214,6 +173,7 @@
"from miplearn.io import read_pkl_gz\n",
"from miplearn.solvers.gurobi import GurobiModel\n",
"\n",
"\n",
"def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:\n",
" if isinstance(data, str):\n",
" data = read_pkl_gz(data)\n",
@@ -223,9 +183,7 @@
" x = model._x = model.addVars(n, vtype=GRB.BINARY, name=\"x\")\n",
" y = model._y = model.addVars(n, name=\"y\")\n",
" model.setObjective(\n",
" quicksum(\n",
" data.cfix[i] * x[i] + data.cvar[i] * y[i] for i in range(n)\n",
" )\n",
" quicksum(data.cfix[i] * x[i] + data.cvar[i] * y[i] for i in range(n))\n",
" )\n",
" model.addConstrs(y[i] <= data.pmax[i] * x[i] for i in range(n))\n",
" model.addConstrs(y[i] >= data.pmin[i] * x[i] for i in range(n))\n",
@@ -243,7 +201,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"execution_count": 3,
|
||||
"id": "2a896f47",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
@@ -256,11 +214,12 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Set parameter Threads to value 1\n",
|
||||
"Restricted license - for non-production use only - expires 2024-10-28\n",
|
||||
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
|
||||
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
|
||||
"\n",
|
||||
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
|
||||
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
|
||||
"\n",
|
||||
"Optimize a model with 7 rows, 6 columns and 15 nonzeros\n",
|
||||
"Model fingerprint: 0x58dfdd53\n",
|
||||
@@ -286,12 +245,14 @@
|
||||
"* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n",
|
||||
"\n",
|
||||
"Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n",
|
||||
"Thread count was 12 (of 12 available processors)\n",
|
||||
"Thread count was 1 (of 20 available processors)\n",
|
||||
"\n",
|
||||
"Solution count 2: 1320 1400 \n",
|
||||
"\n",
|
||||
"Optimal solution found (tolerance 1.00e-04)\n",
|
||||
"Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n",
|
||||
"\n",
|
||||
"User-callback calls 371, time in user-callback 0.00 sec\n",
|
||||
"obj = 1320.0\n",
|
||||
"x = [-0.0, 1.0, 1.0]\n",
|
||||
"y = [0.0, 60.0, 40.0]\n"
|
||||
@@ -351,7 +312,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"execution_count": 4,
|
||||
"id": "5eb09fab",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
@@ -397,7 +358,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"execution_count": 5,
|
||||
"id": "6156752c",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
@@ -424,7 +385,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"execution_count": 6,
|
||||
"id": "7623f002",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
@@ -437,7 +398,7 @@
|
||||
"from miplearn.collectors.basic import BasicCollector\n",
|
||||
"\n",
|
||||
"bc = BasicCollector()\n",
|
||||
"bc.collect(train_data, build_uc_model, n_jobs=4)"
|
||||
"bc.collect(train_data, build_uc_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -465,7 +426,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"execution_count": 7,
|
||||
"id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
@@ -503,7 +464,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": 8,
|
||||
"id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
@@ -516,10 +477,10 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
|
||||
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
|
||||
"\n",
|
||||
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
|
||||
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
|
||||
"\n",
|
||||
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
|
||||
"Model fingerprint: 0xa8b70287\n",
|
||||
@@ -536,15 +497,17 @@
|
||||
" 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n",
|
||||
" 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n",
|
||||
"\n",
|
||||
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
|
||||
"Solved in 1 iterations and 0.00 seconds (0.00 work units)\n",
|
||||
"Optimal objective 8.290621916e+09\n",
|
||||
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
|
||||
"\n",
|
||||
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
|
||||
"User-callback calls 56, time in user-callback 0.00 sec\n",
|
||||
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
|
||||
"\n",
|
||||
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
|
||||
"\n",
|
||||
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
|
||||
"Model fingerprint: 0x4ccd7ae3\n",
|
||||
"Model fingerprint: 0x892e56b2\n",
|
||||
"Variable types: 500 continuous, 500 integer (500 binary)\n",
|
||||
"Coefficient statistics:\n",
|
||||
" Matrix range [1e+00, 2e+06]\n",
|
||||
@@ -552,11 +515,9 @@
|
||||
" Bounds range [1e+00, 1e+00]\n",
|
||||
" RHS range [3e+08, 3e+08]\n",
|
||||
"\n",
|
||||
"User MIP start produced solution with objective 8.30129e+09 (0.01s)\n",
|
||||
"User MIP start produced solution with objective 8.29184e+09 (0.01s)\n",
|
||||
"User MIP start produced solution with objective 8.29146e+09 (0.01s)\n",
|
||||
"User MIP start produced solution with objective 8.29146e+09 (0.01s)\n",
|
||||
"Loaded user MIP start with objective 8.29146e+09\n",
|
||||
"User MIP start produced solution with objective 8.29153e+09 (0.01s)\n",
|
||||
"User MIP start produced solution with objective 8.29153e+09 (0.01s)\n",
|
||||
"Loaded user MIP start with objective 8.29153e+09\n",
|
||||
"\n",
|
||||
"Presolve time: 0.00s\n",
|
||||
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
|
||||
@@ -568,19 +529,34 @@
|
||||
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
|
||||
"\n",
|
||||
" 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n",
|
||||
" 0 0 8.2907e+09 0 3 8.2915e+09 8.2907e+09 0.01% - 0s\n",
|
||||
" 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n",
|
||||
" 0 0 8.2907e+09 0 2 8.2915e+09 8.2907e+09 0.01% - 0s\n",
|
||||
"\n",
|
||||
"Cutting planes:\n",
|
||||
" Cover: 1\n",
|
||||
" Gomory: 1\n",
|
||||
" Flow cover: 2\n",
|
||||
"\n",
|
||||
"Explored 1 nodes (512 simplex iterations) in 0.07 seconds (0.01 work units)\n",
|
||||
"Thread count was 12 (of 12 available processors)\n",
|
||||
"Explored 1 nodes (565 simplex iterations) in 0.02 seconds (0.01 work units)\n",
|
||||
"Thread count was 1 (of 20 available processors)\n",
|
||||
"\n",
|
||||
"Solution count 3: 8.29146e+09 8.29184e+09 8.30129e+09 \n",
|
||||
"Solution count 1: 8.29153e+09 \n",
|
||||
"\n",
|
||||
"Optimal solution found (tolerance 1.00e-04)\n",
|
||||
"Best objective 8.291459497797e+09, best bound 8.290645029670e+09, gap 0.0098%\n"
|
||||
"Best objective 8.291528276179e+09, best bound 8.290733258025e+09, gap 0.0096%\n",
|
||||
"\n",
|
||||
"User-callback calls 193, time in user-callback 0.00 sec\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'WS: Count': 1, 'WS: Number of variables set': 477.0}"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
@@ -588,7 +564,7 @@
|
||||
"\n",
|
||||
"solver_ml = LearningSolver(components=[comp])\n",
|
||||
"solver_ml.fit(train_data)\n",
|
||||
"solver_ml.optimize(test_data[0], build_uc_model);"
|
||||
"solver_ml.optimize(test_data[0], build_uc_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -601,7 +577,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"execution_count": 9,
|
||||
"id": "2ff391ed-e855-4228-aa09-a7641d8c2893",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
@@ -614,10 +590,10 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
|
||||
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
|
||||
"\n",
|
||||
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
|
||||
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
|
||||
"\n",
|
||||
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
|
||||
"Model fingerprint: 0xa8b70287\n",
|
||||
@@ -636,10 +612,12 @@
|
||||
"\n",
|
||||
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
|
||||
"Optimal objective 8.290621916e+09\n",
|
||||
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
|
||||
"\n",
|
||||
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
|
||||
"User-callback calls 56, time in user-callback 0.00 sec\n",
|
||||
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
|
||||
"\n",
|
||||
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
|
||||
"\n",
|
||||
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
|
||||
"Model fingerprint: 0x4cbbf7c7\n",
|
||||
@@ -668,22 +646,29 @@
|
||||
" 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n",
|
||||
" 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n",
|
||||
" 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n",
|
||||
" 0 0 8.2908e+09 0 1 8.2940e+09 8.2908e+09 0.04% - 0s\n",
|
||||
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n",
|
||||
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n",
|
||||
"H 0 0 8.291465e+09 8.2908e+09 0.01% - 0s\n",
|
||||
" 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n",
|
||||
" 0 2 8.2908e+09 0 2 8.2940e+09 8.2908e+09 0.04% - 0s\n",
|
||||
"H 9 9 8.292131e+09 8.2908e+09 0.02% 1.0 0s\n",
|
||||
"H 132 88 8.292121e+09 8.2908e+09 0.02% 2.0 0s\n",
|
||||
"* 133 88 28 8.292121e+09 8.2908e+09 0.02% 2.2 0s\n",
|
||||
"H 216 136 8.291918e+09 8.2909e+09 0.01% 2.4 0s\n",
|
||||
"* 232 136 28 8.291664e+09 8.2909e+09 0.01% 2.4 0s\n",
|
||||
"\n",
|
||||
"Cutting planes:\n",
|
||||
" Gomory: 2\n",
|
||||
" Cover: 1\n",
|
||||
" MIR: 1\n",
|
||||
" Inf proof: 3\n",
|
||||
"\n",
|
||||
"Explored 1 nodes (1031 simplex iterations) in 0.07 seconds (0.03 work units)\n",
|
||||
"Thread count was 12 (of 12 available processors)\n",
|
||||
"Explored 233 nodes (1577 simplex iterations) in 0.09 seconds (0.06 work units)\n",
|
||||
"Thread count was 1 (of 20 available processors)\n",
|
||||
"\n",
|
||||
"Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09 \n",
|
||||
"Solution count 7: 8.29166e+09 8.29192e+09 8.29212e+09 ... 9.75713e+09\n",
|
||||
"\n",
|
||||
"Optimal solution found (tolerance 1.00e-04)\n",
|
||||
"Best objective 8.291465302389e+09, best bound 8.290781665333e+09, gap 0.0082%\n"
|
||||
"Best objective 8.291663722826e+09, best bound 8.290885027548e+09, gap 0.0094%\n",
|
||||
"\n",
|
||||
"User-callback calls 708, time in user-callback 0.00 sec\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@@ -710,12 +695,12 @@
"source": [
"## Accessing the solution\n",
"\n",
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. The optimal solutions were saved to HDF5 files in the train/test folders, and could be retrieved by reading theses files, but that is not very convenient. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver."
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver."
]
},
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"execution_count": 10,
|
||||
"id": "67a6cd18",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
@@ -728,10 +713,10 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
|
||||
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
|
||||
"\n",
|
||||
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
|
||||
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
|
||||
"\n",
|
||||
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
|
||||
"Model fingerprint: 0x19042f12\n",
|
||||
@@ -750,13 +735,15 @@
|
||||
"\n",
|
||||
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
|
||||
"Optimal objective 8.253596777e+09\n",
|
||||
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
|
||||
"\n",
|
||||
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
|
||||
"User-callback calls 56, time in user-callback 0.00 sec\n",
|
||||
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
|
||||
"\n",
|
||||
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
|
||||
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
|
||||
"\n",
|
||||
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
|
||||
"Model fingerprint: 0x8ee64638\n",
|
||||
"Model fingerprint: 0x6926c32f\n",
|
||||
"Variable types: 500 continuous, 500 integer (500 binary)\n",
|
||||
"Coefficient statistics:\n",
|
||||
" Matrix range [1e+00, 2e+06]\n",
|
||||
@@ -766,11 +753,15 @@
|
||||
"\n",
|
||||
"User MIP start produced solution with objective 8.25814e+09 (0.01s)\n",
|
||||
"User MIP start produced solution with objective 8.25512e+09 (0.01s)\n",
|
||||
"User MIP start produced solution with objective 8.25459e+09 (0.04s)\n",
|
||||
"User MIP start produced solution with objective 8.25459e+09 (0.04s)\n",
|
||||
"Loaded user MIP start with objective 8.25459e+09\n",
|
||||
"User MIP start produced solution with objective 8.2551e+09 (0.01s)\n",
|
||||
"User MIP start produced solution with objective 8.25508e+09 (0.01s)\n",
|
||||
"User MIP start produced solution with objective 8.25508e+09 (0.01s)\n",
|
||||
"User MIP start produced solution with objective 8.25499e+09 (0.01s)\n",
|
||||
"User MIP start produced solution with objective 8.25448e+09 (0.02s)\n",
|
||||
"User MIP start produced solution with objective 8.25448e+09 (0.02s)\n",
|
||||
"Loaded user MIP start with objective 8.25448e+09\n",
|
||||
"\n",
|
||||
"Presolve time: 0.01s\n",
|
||||
"Presolve time: 0.00s\n",
|
||||
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
|
||||
"Variable types: 500 continuous, 500 integer (500 binary)\n",
|
||||
"\n",
|
||||
@@ -779,31 +770,25 @@
"     Nodes    |    Current Node    |     Objective Bounds      |     Work\n",
"  Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time\n",
"\n",
"     0     0 8.2536e+09    0    1 8.2546e+09 8.2536e+09  0.01%     -    0s\n",
"     0     0 8.2537e+09    0    3 8.2546e+09 8.2537e+09  0.01%     -    0s\n",
"     0     0 8.2537e+09    0    1 8.2546e+09 8.2537e+09  0.01%     -    0s\n",
"     0     0 8.2537e+09    0    4 8.2546e+09 8.2537e+09  0.01%     -    0s\n",
"     0     0 8.2537e+09    0    4 8.2546e+09 8.2537e+09  0.01%     -    0s\n",
"     0     0 8.2538e+09    0    4 8.2546e+09 8.2538e+09  0.01%     -    0s\n",
"     0     0 8.2538e+09    0    5 8.2546e+09 8.2538e+09  0.01%     -    0s\n",
"     0     0 8.2538e+09    0    6 8.2546e+09 8.2538e+09  0.01%     -    0s\n",
"     0     0 8.2536e+09    0    1 8.2545e+09 8.2536e+09  0.01%     -    0s\n",
"     0     0 8.2537e+09    0    3 8.2545e+09 8.2537e+09  0.01%     -    0s\n",
"\n",
"Cutting planes:\n",
"  Cover: 1\n",
"  MIR: 2\n",
"  StrongCG: 1\n",
"  Flow cover: 1\n",
"  Flow cover: 2\n",
"\n",
"Explored 1 nodes (575 simplex iterations) in 0.12 seconds (0.01 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 1 nodes (515 simplex iterations) in 0.03 seconds (0.02 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 3: 8.25459e+09 8.25512e+09 8.25814e+09 \n",
"Solution count 6: 8.25448e+09 8.25499e+09 8.25508e+09 ... 8.25814e+09\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%\n",
"obj = 8254590409.969726\n",
"Best objective 8.254479145594e+09, best bound 8.253689731796e+09, gap 0.0096%\n",
"\n",
"User-callback calls 203, time in user-callback 0.00 sec\n",
"obj = 8254479145.594168\n",
"x = [1.0, 1.0, 0.0]\n",
"y = [935662.0949263407, 1604270.0218116897, 0.0]\n"
"y = [935662.0949262811, 1604270.0218116897, 0.0]\n"
]
}
],
@@ -841,7 +826,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.11.7"
}
},
"nbformat": 4,

@@ -41,7 +41,7 @@
"In this tutorial, we will demonstrate how to use and install the Julia/JuMP version of the package. The first step is to install Julia on your machine. See the [official Julia website for more instructions](https://julialang.org/downloads/). After Julia is installed, launch the Julia REPL, type `]` to enter package mode, then install MIPLearn:\n",
"\n",
"```\n",
"pkg> add MIPLearn@0.3\n",
"pkg> add MIPLearn@0.4\n",
"```"
]
},
@@ -592,7 +592,7 @@
"source": [
"## Accessing the solution\n",
"\n",
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. The optimal solutions were saved to HDF5 files in the train/test folders, and could be retrieved by reading these files, but that is not very convenient. In the following example, we show how to build and solve a JuMP model entirely in-memory, using our trained solver."
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a JuMP model entirely in-memory, using our trained solver."
]
},
{

@@ -33,6 +33,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "02f0a927",
"metadata": {},
@@ -44,53 +45,11 @@
"- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n",
"- Julia version, compatible with the JuMP modeling language.\n",
"\n",
"In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Python 3.8+ on your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cd8a69c1",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T19:57:33.202580815Z",
"start_time": "2023-06-06T19:57:33.198341886Z"
}
},
"outputs": [],
"source": [
"# !pip install MIPLearn==0.3.0"
]
},
{
"cell_type": "markdown",
"id": "e8274543",
"metadata": {},
"source": [
"In addition to MIPLearn itself, we will also install Gurobi 10.0, a state-of-the-art commercial MILP solver. This step also installs a demo license for Gurobi, which should be able to solve the small optimization problems in this tutorial. A license is required for solving larger-scale problems."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dcc8756c",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T19:57:35.756831801Z",
"start_time": "2023-06-06T19:57:33.201767088Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: gurobipy<10.1,>=10 in /home/axavier/Software/anaconda3/envs/miplearn/lib/python3.8/site-packages (10.0.1)\n"
]
}
],
"source": [
"!pip install 'gurobipy>=10,<10.1'"
"In this tutorial, we will demonstrate how to use and install the Python/Pyomo version of the package. The first step is to install Python 3.9+ on your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n",
"\n",
"```\n",
"$ pip install MIPLearn~=0.4\n",
"```"
]
},
{

@@ -162,7 +121,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"id": "22a67170-10b4-43d3-8708-014d91141e73",
"metadata": {
"ExecuteTime": {
@@ -198,7 +157,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "2f67032f-0d74-4317-b45c-19da0ec859e9",
"metadata": {
"ExecuteTime": {
@@ -248,7 +207,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"id": "2a896f47",
"metadata": {
"ExecuteTime": {
@@ -263,10 +222,10 @@
"text": [
"Restricted license - for non-production use only - expires 2024-10-28\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
"\n",
"Optimize a model with 7 rows, 6 columns and 15 nonzeros\n",
"Model fingerprint: 0x15c7a953\n",
@@ -292,7 +251,7 @@
"*    0     0               0    1320.0000000 1320.00000  0.00%     -    0s\n",
"\n",
"Explored 1 nodes (5 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Thread count was 20 (of 20 available processors)\n",
"\n",
"Solution count 2: 1320 1400 \n",
"\n",
@@ -359,7 +318,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "5eb09fab",
"metadata": {
"ExecuteTime": {
@@ -405,7 +364,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"id": "6156752c",
"metadata": {
"ExecuteTime": {
@@ -432,7 +391,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"id": "7623f002",
"metadata": {
"ExecuteTime": {
@@ -473,7 +432,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 7,
"id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2",
"metadata": {
"ExecuteTime": {
@@ -511,7 +470,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 8,
"id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf",
"metadata": {
"ExecuteTime": {
@@ -525,10 +484,10 @@
"output_type": "stream",
"text": [
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x5e67c6ee\n",
@@ -548,13 +507,13 @@
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xa4a7961e\n",
"Model fingerprint: 0x4a7cfe2b\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
"  Matrix range     [1e+00, 2e+06]\n",
@@ -562,37 +521,48 @@
"  Bounds range     [1e+00, 1e+00]\n",
"  RHS range        [3e+08, 3e+08]\n",
"\n",
"User MIP start produced solution with objective 8.30129e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29184e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29146e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29146e+09 (0.02s)\n",
"Loaded user MIP start with objective 8.29146e+09\n",
"User MIP start produced solution with objective 8.29153e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29153e+09 (0.01s)\n",
"Loaded user MIP start with objective 8.29153e+09\n",
"\n",
"Presolve time: 0.01s\n",
"Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n",
"Root relaxation: objective 8.290622e+09, 512 iterations, 0.01 seconds (0.00 work units)\n",
"Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
"     Nodes    |    Current Node    |     Objective Bounds      |     Work\n",
"  Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time\n",
"\n",
"     0     0 8.2906e+09    0    1 8.2915e+09 8.2906e+09  0.01%     -    0s\n",
"     0     0 8.2907e+09    0    3 8.2915e+09 8.2907e+09  0.01%     -    0s\n",
"     0     0 8.2907e+09    0    1 8.2915e+09 8.2907e+09  0.01%     -    0s\n",
"     0     0 8.2907e+09    0    2 8.2915e+09 8.2907e+09  0.01%     -    0s\n",
"\n",
"Cutting planes:\n",
"  Cover: 1\n",
"  Gomory: 1\n",
"  Flow cover: 2\n",
"\n",
"Explored 1 nodes (512 simplex iterations) in 0.09 seconds (0.01 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 1 nodes (565 simplex iterations) in 0.04 seconds (0.01 work units)\n",
"Thread count was 20 (of 20 available processors)\n",
"\n",
"Solution count 3: 8.29146e+09 8.29184e+09 8.30129e+09 \n",
"Solution count 1: 8.29153e+09 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291459497797e+09, best bound 8.290645029670e+09, gap 0.0098%\n",
"Best objective 8.291528276179e+09, best bound 8.290733258025e+09, gap 0.0096%\n",
"WARNING: Cannot get reduced costs for MIP.\n",
"WARNING: Cannot get duals for MIP.\n"
]
},
{
"data": {
"text/plain": [
"{}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
@@ -600,7 +570,7 @@
"\n",
"solver_ml = LearningSolver(components=[comp])\n",
"solver_ml.fit(train_data)\n",
"solver_ml.optimize(test_data[0], build_uc_model);"
"solver_ml.optimize(test_data[0], build_uc_model)"
]
},
{
@@ -613,7 +583,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 9,
"id": "2ff391ed-e855-4228-aa09-a7641d8c2893",
"metadata": {
"ExecuteTime": {
@@ -627,10 +597,10 @@
"output_type": "stream",
"text": [
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x5e67c6ee\n",
@@ -640,7 +610,7 @@
"  Bounds range     [1e+00, 1e+00]\n",
"  RHS range        [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.01s\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n",
"Iteration    Objective       Primal Inf.    Dual Inf.      Time\n",
@@ -650,10 +620,10 @@
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x8a0f9587\n",
@@ -691,8 +661,8 @@
"  Gomory: 2\n",
"  MIR: 1\n",
"\n",
"Explored 1 nodes (1025 simplex iterations) in 0.08 seconds (0.03 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 1 nodes (1025 simplex iterations) in 0.12 seconds (0.03 work units)\n",
"Thread count was 20 (of 20 available processors)\n",
"\n",
"Solution count 4: 8.29147e+09 8.29398e+09 8.29827e+09 9.75713e+09 \n",
"\n",
@@ -701,12 +671,22 @@
"WARNING: Cannot get reduced costs for MIP.\n",
"WARNING: Cannot get duals for MIP.\n"
]
},
{
"data": {
"text/plain": [
"{}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"solver_baseline = LearningSolver(components=[])\n",
"solver_baseline.fit(train_data)\n",
"solver_baseline.optimize(test_data[0], build_uc_model);"
"solver_baseline.optimize(test_data[0], build_uc_model)"
]
},
{
@@ -726,12 +706,12 @@
"source": [
"## Accessing the solution\n",
"\n",
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. The optimal solutions were saved to HDF5 files in the train/test folders, and could be retrieved by reading these files, but that is not very convenient. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver."
"In the example above, we used `LearningSolver.solve` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver."
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 10,
"id": "67a6cd18",
"metadata": {
"ExecuteTime": {
@@ -745,10 +725,10 @@
"output_type": "stream",
"text": [
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x2dfe4e1c\n",
@@ -758,7 +738,7 @@
"  Bounds range     [1e+00, 1e+00]\n",
"  RHS range        [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.01s\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n",
"Iteration    Objective       Primal Inf.    Dual Inf.      Time\n",
@@ -768,13 +748,13 @@
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.253596777e+09\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (linux64)\n",
"\n",
"CPU model: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x20637200\n",
"Model fingerprint: 0x0f0924a1\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
"  Matrix range     [1e+00, 2e+06]\n",
@@ -782,13 +762,16 @@
"  Bounds range     [1e+00, 1e+00]\n",
"  RHS range        [3e+08, 3e+08]\n",
"\n",
"User MIP start produced solution with objective 8.25814e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25814e+09 (0.00s)\n",
"User MIP start produced solution with objective 8.25512e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.04s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.04s)\n",
"User MIP start produced solution with objective 8.25483e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25483e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25483e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25459e+09 (0.01s)\n",
"Loaded user MIP start with objective 8.25459e+09\n",
"\n",
"Presolve time: 0.01s\n",
"Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n",
@@ -812,10 +795,10 @@
"  StrongCG: 1\n",
"  Flow cover: 1\n",
"\n",
"Explored 1 nodes (575 simplex iterations) in 0.11 seconds (0.01 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 1 nodes (575 simplex iterations) in 0.09 seconds (0.01 work units)\n",
"Thread count was 20 (of 20 available processors)\n",
"\n",
"Solution count 3: 8.25459e+09 8.25512e+09 8.25814e+09 \n",
"Solution count 4: 8.25459e+09 8.25483e+09 8.25512e+09 8.25814e+09 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.254590409970e+09, best bound 8.253768093811e+09, gap 0.0100%\n",
@@ -823,7 +806,7 @@
"WARNING: Cannot get duals for MIP.\n",
"obj = 8254590409.96973\n",
" x = [1.0, 1.0, 0.0, 1.0, 1.0]\n",
" y = [935662.0949263407, 1604270.0218116897, 0.0, 1369560.835229226, 602828.5321028307]\n"
" y = [935662.0949262811, 1604270.0218116897, 0.0, 1369560.835229226, 602828.5321028307]\n"
]
}
],
@@ -861,7 +844,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
"version": "3.11.7"
}
},
"nbformat": 4,

1
docs/tutorials/gurobi.env
Normal file
@@ -0,0 +1 @@
Threads 1
BIN
miplearn/.io.py.swp
Normal file
Binary file not shown.
@@ -54,7 +54,7 @@ class MinProbabilityClassifier(BaseEstimator):
        y_pred = []
        for sample_idx in range(n_samples):
            yi = float("nan")
            for (class_idx, class_val) in enumerate(self.classes_):
            for class_idx, class_val in enumerate(self.classes_):
                if y_proba[sample_idx, class_idx] >= self.thresholds[class_idx]:
                    yi = class_val
            y_pred.append(yi)

@@ -4,9 +4,12 @@

import json
import os
import sys

from io import StringIO
from os.path import exists
from typing import Callable, List
from typing import Callable, List, Any
import traceback

from ..h5 import H5File
from ..io import _RedirectOutput, gzip, _to_h5_filename
@@ -14,63 +17,69 @@ from ..parallel import p_umap


class BasicCollector:
    def __init__(self, skip_lp: bool = False, write_mps: bool = True) -> None:
        self.skip_lp = skip_lp
        self.write_mps = write_mps

    def collect(
        self,
        filenames: List[str],
        build_model: Callable,
        n_jobs: int = 1,
        progress: bool = False,
        verbose: bool = False,
    ) -> None:
        def _collect(data_filename):
            h5_filename = _to_h5_filename(data_filename)
            mps_filename = h5_filename.replace(".h5", ".mps")
        def _collect(data_filename: str) -> None:
            try:
                h5_filename = _to_h5_filename(data_filename)
                mps_filename = h5_filename.replace(".h5", ".mps")

            if exists(h5_filename):
                # Try to read optimal solution
                mip_var_values = None
                try:
                    with H5File(h5_filename, "r") as h5:
                        mip_var_values = h5.get_array("mip_var_values")
                except:
                    pass
                if exists(h5_filename):
                    # Try to read optimal solution
                    mip_var_values = None
                    try:
                        with H5File(h5_filename, "r") as h5:
                            mip_var_values = h5.get_array("mip_var_values")
                    except:
                        pass

                if mip_var_values is None:
                    print(f"Removing empty/corrupted h5 file: {h5_filename}")
                    os.remove(h5_filename)
                else:
                    return
                    if mip_var_values is None:
                        print(f"Removing empty/corrupted h5 file: {h5_filename}")
                        os.remove(h5_filename)
                    else:
                        return

            with H5File(h5_filename, "w") as h5:
                streams = [StringIO()]
                with _RedirectOutput(streams):
                    # Load and extract static features
                    model = build_model(data_filename)
                    model.extract_after_load(h5)
                with H5File(h5_filename, "w") as h5:
                    streams: List[Any] = [StringIO()]
                    if verbose:
                        streams += [sys.stdout]
                    with _RedirectOutput(streams):
                        # Load and extract static features
                        model = build_model(data_filename)
                        model.extract_after_load(h5)

                    # Solve LP relaxation
                    relaxed = model.relax()
                    relaxed.optimize()
                    relaxed.extract_after_lp(h5)
                        if not self.skip_lp:
                            # Solve LP relaxation
                            relaxed = model.relax()
                            relaxed.optimize()
                            relaxed.extract_after_lp(h5)

                    # Solve MIP
                    model.optimize()
                    model.extract_after_mip(h5)
                        # Solve MIP
                        model.optimize()
                        model.extract_after_mip(h5)

                    # Add lazy constraints to model
                    if (
                        hasattr(model, "fix_violations")
                        and model.fix_violations is not None
                    ):
                        model.fix_violations(model, model.violations_, "aot")
                        h5.put_scalar(
                            "mip_constr_violations", json.dumps(model.violations_)
                        )
                        if self.write_mps:
                            # Add lazy constraints to model
                            model._lazy_enforce_collected()

                    # Save MPS file
                    model.write(mps_filename)
                    gzip(mps_filename)
                            # Save MPS file
                            model.write(mps_filename)
                            gzip(mps_filename)

                    h5.put_scalar("mip_log", streams[0].getvalue())
                        h5.put_scalar("mip_log", streams[0].getvalue())
            except:
                print(f"Error processing: {data_filename}")
                traceback.print_exc()

        if n_jobs > 1:
            p_umap(
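The `BasicCollector` above is idempotent: it skips any instance whose HDF5 file already holds a valid solution, and deletes empty or corrupted result files before re-solving. A minimal file-based sketch of that collection pattern, using plain JSON files instead of HDF5 (the names `collect` and `solve` are illustrative, not MIPLearn's API):

```python
import json
import os
import tempfile

def collect(data_filename, solve):
    """Write solve(data) results next to the input file; skip files that
    already have a complete result, delete corrupted ones first."""
    out_filename = data_filename + ".result.json"
    if os.path.exists(out_filename):
        try:
            with open(out_filename) as f:
                if json.load(f).get("obj") is not None:
                    return "skipped"  # valid result already collected
        except (json.JSONDecodeError, OSError):
            pass
        os.remove(out_filename)  # empty/corrupted result file
    with open(out_filename, "w") as f:
        json.dump({"obj": solve(data_filename)}, f)
    return "collected"

with tempfile.TemporaryDirectory() as d:
    data = os.path.join(d, "instance.json")
    open(data, "w").close()
    print(collect(data, lambda _: 42.0))  # first run computes -> collected
    print(collect(data, lambda _: 42.0))  # second run -> skipped
```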
@@ -1,117 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.

from io import StringIO
from typing import Callable

import gurobipy as gp
import numpy as np
from gurobipy import GRB, LinExpr

from ..h5 import H5File
from ..io import _RedirectOutput


class LazyCollector:
    def __init__(
        self,
        min_constrs: int = 100_000,
        time_limit: float = 900,
    ) -> None:
        self.min_constrs = min_constrs
        self.time_limit = time_limit

    def collect(
        self, data_filename: str, build_model: Callable, tol: float = 1e-6
    ) -> None:
        h5_filename = f"{data_filename}.h5"
        with H5File(h5_filename, "r+") as h5:
            streams = [StringIO()]
            lazy = None
            with _RedirectOutput(streams):
                slacks = h5.get_array("mip_constr_slacks")
                assert slacks is not None

                # Check minimum problem size
                if len(slacks) < self.min_constrs:
                    print("Problem is too small. Skipping.")
                    h5.put_array("mip_constr_lazy", np.zeros(len(slacks)))
                    return

                # Load model
                print("Loading model...")
                model = build_model(data_filename)
                model.params.LazyConstraints = True
                model.params.timeLimit = self.time_limit
                gp_constrs = np.array(model.getConstrs())
                gp_vars = np.array(model.getVars())

                # Load constraints
                lhs = h5.get_sparse("static_constr_lhs")
                rhs = h5.get_array("static_constr_rhs")
                sense = h5.get_array("static_constr_sense")
                assert lhs is not None
                assert rhs is not None
                assert sense is not None
                lhs_csr = lhs.tocsr()
                lhs_csc = lhs.tocsc()
                constr_idx = np.array(range(len(rhs)))
                lazy = np.zeros(len(rhs))

                # Drop loose constraints
                selected = (slacks > 0) & ((sense == b"<") | (sense == b">"))
                loose_constrs = gp_constrs[selected]
                print(
                    f"Removing {len(loose_constrs):,d} constraints (out of {len(rhs):,d})..."
                )
                model.remove(list(loose_constrs))

                # Filter to constraints that were dropped
                lhs_csr = lhs_csr[selected, :]
                lhs_csc = lhs_csc[selected, :]
                rhs = rhs[selected]
                sense = sense[selected]
                constr_idx = constr_idx[selected]
                lazy[selected] = 1

                # Load warm start
                var_names = h5.get_array("static_var_names")
                var_values = h5.get_array("mip_var_values")
                assert var_values is not None
                assert var_names is not None
                for (var_idx, var_name) in enumerate(var_names):
                    var = model.getVarByName(var_name.decode())
                    var.start = var_values[var_idx]

                print("Solving MIP with lazy constraints callback...")

                def callback(model: gp.Model, where: int) -> None:
                    assert rhs is not None
                    assert lazy is not None
                    assert sense is not None

                    if where == GRB.Callback.MIPSOL:
                        x_val = np.array(model.cbGetSolution(model.getVars()))
                        slack = lhs_csc * x_val - rhs
                        slack[sense == b">"] *= -1
                        is_violated = slack > tol

                        for (j, rhs_j) in enumerate(rhs):
                            if is_violated[j]:
                                lazy[constr_idx[j]] = 0
                                expr = LinExpr(
                                    lhs_csr[j, :].data, gp_vars[lhs_csr[j, :].indices]
                                )
                                if sense[j] == b"<":
                                    model.cbLazy(expr <= rhs_j)
                                elif sense[j] == b">":
                                    model.cbLazy(expr >= rhs_j)
                                else:
                                    raise RuntimeError(f"Unknown sense: {sense[j]}")

                model.optimize(callback)
                print(f"Marking {lazy.sum():,.0f} constraints as lazy...")

            h5.put_array("mip_constr_lazy", lazy)
            h5.put_scalar("mip_constr_lazy_log", streams[0].getvalue())
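The deleted `LazyCollector`'s callback checks violated constraints vectorially: it computes the slack `lhs @ x - rhs`, flips the sign for `>=` rows so that positive slack always means violation, and compares against a tolerance. A standalone sketch of that check (the function name `violated_rows` is illustrative):

```python
import numpy as np

def violated_rows(lhs, rhs, sense, x, tol=1e-6):
    """Return indices of constraints violated by point x.
    sense[i] is b"<" for lhs@x <= rhs, or b">" for lhs@x >= rhs."""
    slack = lhs @ x - rhs
    slack[sense == b">"] *= -1  # flip so positive slack = violation
    return np.where(slack > tol)[0]

lhs = np.array([[1.0, 1.0], [1.0, 0.0]])
rhs = np.array([1.0, 2.0])
sense = np.array([b"<", b">"])
x = np.array([2.0, 0.0])
# Row 0: 2 <= 1 is violated; row 1: 2 >= 2 holds.
print(violated_rows(lhs, rhs, sense, x))  # -> [0]
```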
0
miplearn/components/cuts/__init__.py
Normal file
35
miplearn/components/cuts/expert.py
Normal file
@@ -0,0 +1,35 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.

import json
import logging
from typing import Dict, Any, List

from miplearn.components.cuts.mem import convert_lists_to_tuples
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel

logger = logging.getLogger(__name__)


class ExpertCutsComponent:
    def fit(
        self,
        _: List[str],
    ) -> None:
        pass

    def before_mip(
        self,
        test_h5: str,
        model: AbstractModel,
        stats: Dict[str, Any],
    ) -> None:
        with H5File(test_h5, "r") as h5:
            cuts_str = h5.get_scalar("mip_cuts")
            assert cuts_str is not None
            assert isinstance(cuts_str, str)
            cuts = list(set(convert_lists_to_tuples(json.loads(cuts_str))))
            model.set_cuts(cuts)
            stats["Cuts: AOT"] = len(cuts)
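`ExpertCutsComponent.before_mip` deduplicates stored cuts with `set(...)`, which only works because `convert_lists_to_tuples` turns the JSON-decoded lists into hashable tuples first. A minimal round-trip showing why the conversion is needed:

```python
import json

def convert_lists_to_tuples(obj):
    """Recursively replace JSON lists with tuples so cuts become hashable."""
    if isinstance(obj, list):
        return tuple(convert_lists_to_tuples(item) for item in obj)
    elif isinstance(obj, dict):
        return {key: convert_lists_to_tuples(value) for key, value in obj.items()}
    return obj

# Cuts are stored as a JSON string; lists are unhashable, tuples are not.
cuts = convert_lists_to_tuples(json.loads('[[1, 2], [1, 2], [3, 4]]'))
print(sorted(set(cuts)))  # -> [(1, 2), (3, 4)]
```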
113
miplearn/components/cuts/mem.py
Normal file
@@ -0,0 +1,113 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.

import json
import logging
from typing import List, Dict, Any, Hashable

import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer

from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel

logger = logging.getLogger(__name__)


def convert_lists_to_tuples(obj: Any) -> Any:
    if isinstance(obj, list):
        return tuple(convert_lists_to_tuples(item) for item in obj)
    elif isinstance(obj, dict):
        return {key: convert_lists_to_tuples(value) for key, value in obj.items()}
    else:
        return obj


class _BaseMemorizingConstrComponent:
    def __init__(self, clf: Any, extractor: FeaturesExtractor, field: str) -> None:
        self.clf = clf
        self.extractor = extractor
        self.constrs_: List[Hashable] = []
        self.n_features_: int = 0
        self.n_targets_: int = 0
        self.field = field

    def fit(
        self,
        train_h5: List[str],
    ) -> None:
        logger.info("Reading training data...")
        n_samples = len(train_h5)
        x, y, constrs, n_features = [], [], [], None
        constr_to_idx: Dict[Hashable, int] = {}
        for h5_filename in train_h5:
            with H5File(h5_filename, "r") as h5:
                # Store constraints
                sample_constrs_str = h5.get_scalar(self.field)
                assert sample_constrs_str is not None
                assert isinstance(sample_constrs_str, str)
                sample_constrs = convert_lists_to_tuples(json.loads(sample_constrs_str))
                y_sample = []
                for c in sample_constrs:
                    if c not in constr_to_idx:
                        constr_to_idx[c] = len(constr_to_idx)
                        constrs.append(c)
                    y_sample.append(constr_to_idx[c])
                y.append(y_sample)

                # Extract features
                x_sample = self.extractor.get_instance_features(h5)
                assert len(x_sample.shape) == 1
                if n_features is None:
                    n_features = len(x_sample)
                else:
                    assert len(x_sample) == n_features
                x.append(x_sample)
        logger.info("Constructing matrices...")
        assert n_features is not None
        self.n_features_ = n_features
        self.constrs_ = constrs
        self.n_targets_ = len(constr_to_idx)
        x_np = np.vstack(x)
        assert x_np.shape == (n_samples, n_features)
        y_np = MultiLabelBinarizer().fit_transform(y)
        assert y_np.shape == (n_samples, self.n_targets_)
        logger.info(
            f"Dataset has {n_samples:,d} samples, "
            f"{n_features:,d} features and {self.n_targets_:,d} targets"
        )
        logger.info("Training classifier...")
        self.clf.fit(x_np, y_np)

    def predict(
        self,
        msg: str,
        test_h5: str,
    ) -> List[Hashable]:
        with H5File(test_h5, "r") as h5:
            x_sample = self.extractor.get_instance_features(h5)
            assert x_sample.shape == (self.n_features_,)
            x_sample = x_sample.reshape(1, -1)
            logger.info(msg)
            y = self.clf.predict(x_sample)
            assert y.shape == (1, self.n_targets_)
            y = y.reshape(-1)
            return [self.constrs_[i] for (i, yi) in enumerate(y) if yi > 0.5]


class MemorizingCutsComponent(_BaseMemorizingConstrComponent):
    def __init__(self, clf: Any, extractor: FeaturesExtractor) -> None:
        super().__init__(clf, extractor, "mip_cuts")

    def before_mip(
        self,
        test_h5: str,
        model: AbstractModel,
        stats: Dict[str, Any],
    ) -> None:
        assert self.constrs_ is not None
        cuts = self.predict("Predicting cutting planes...", test_h5)
        model.set_cuts(cuts)
        stats["Cuts: AOT"] = len(cuts)
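The fit/predict pair above is, at heart, multi-label classification over the pool of constraints seen during training: each constraint gets one binary target column, and each sample is labeled with the constraints it required. A toy sketch of that encoding (equivalent to what `MultiLabelBinarizer` produces, but written with plain NumPy; the sample data is invented for illustration):

```python
import numpy as np

# Each training sample lists the indices of the constraints it required.
y = [[0, 1], [1, 2], [0]]
n_targets = 3

# One binary column per distinct constraint, one row per sample.
y_np = np.zeros((len(y), n_targets), dtype=int)
for row, labels in enumerate(y):
    y_np[row, labels] = 1

assert y_np.tolist() == [[1, 1, 0], [0, 1, 1], [1, 0, 0]]
```

At prediction time, the component simply maps the positive columns back to the memorized constraint objects, which is why it can only ever propose constraints it has already seen.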
@@ -1,43 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.

import json
from typing import Any, Dict, List

import gurobipy as gp

from ..h5 import H5File


class ExpertLazyComponent:
    def __init__(self) -> None:
        pass

    def fit(self, train_h5: List[str]) -> None:
        pass

    def before_mip(self, test_h5: str, model: gp.Model, stats: Dict[str, Any]) -> None:
        with H5File(test_h5, "r") as h5:
            constr_names = h5.get_array("static_constr_names")
            constr_lazy = h5.get_array("mip_constr_lazy")
            constr_violations = h5.get_scalar("mip_constr_violations")

            assert constr_names is not None
            assert constr_violations is not None

            # Static lazy constraints
            n_static_lazy = 0
            if constr_lazy is not None:
                for (constr_idx, constr_name) in enumerate(constr_names):
                    if constr_lazy[constr_idx]:
                        constr = model.getConstrByName(constr_name.decode())
                        constr.lazy = 3
                        n_static_lazy += 1
                stats.update({"Static lazy constraints": n_static_lazy})

            # Dynamic lazy constraints
            if hasattr(model, "_fix_violations"):
                violations = json.loads(constr_violations)
                model._fix_violations(model, violations, "aot")
                stats.update({"Dynamic lazy constraints": len(violations)})
miplearn/components/lazy/__init__.py (new file, 0 lines)

miplearn/components/lazy/expert.py (new file, 36 lines)
@@ -0,0 +1,36 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.

import json
import logging
from typing import Dict, Any, List

from miplearn.components.cuts.mem import convert_lists_to_tuples
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel

logger = logging.getLogger(__name__)


class ExpertLazyComponent:
    def fit(
        self,
        _: List[str],
    ) -> None:
        pass

    def before_mip(
        self,
        test_h5: str,
        model: AbstractModel,
        stats: Dict[str, Any],
    ) -> None:
        with H5File(test_h5, "r") as h5:
            violations_str = h5.get_scalar("mip_lazy")
            assert violations_str is not None
            assert isinstance(violations_str, str)
            violations = list(set(convert_lists_to_tuples(json.loads(violations_str))))
            logger.info(f"Enforcing {len(violations)} constraints ahead-of-time...")
            model.lazy_enforce(violations)
            stats["Lazy Constraints: AOT"] = len(violations)
miplearn/components/lazy/mem.py (new file, 31 lines)
@@ -0,0 +1,31 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.

import logging
from typing import List, Dict, Any, Hashable

from miplearn.components.cuts.mem import (
    _BaseMemorizingConstrComponent,
)
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.solvers.abstract import AbstractModel

logger = logging.getLogger(__name__)


class MemorizingLazyComponent(_BaseMemorizingConstrComponent):
    def __init__(self, clf: Any, extractor: FeaturesExtractor) -> None:
        super().__init__(clf, extractor, "mip_lazy")

    def before_mip(
        self,
        test_h5: str,
        model: AbstractModel,
        stats: Dict[str, Any],
    ) -> None:
        assert self.constrs_ is not None
        violations = self.predict("Predicting violated lazy constraints...", test_h5)
        logger.info(f"Enforcing {len(violations)} constraints ahead-of-time...")
        model.lazy_enforce(violations)
        stats["Lazy Constraints: AOT"] = len(violations)
@@ -1,29 +1,53 @@
 # MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
 # Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
 # Released under the modified BSD license. See COPYING.md for more details.
-from typing import Tuple
+from typing import Tuple, List

 import numpy as np

 from miplearn.h5 import H5File


-def _extract_bin_var_names_values(
+def _extract_var_names_values(
     h5: H5File,
+    selected_var_types: List[bytes],
 ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
-    bin_var_names, bin_var_indices = _extract_bin_var_names(h5)
+    bin_var_names, bin_var_indices = _extract_var_names(h5, selected_var_types)
     var_values = h5.get_array("mip_var_values")
     assert var_values is not None
     bin_var_values = var_values[bin_var_indices].astype(int)
     return bin_var_names, bin_var_values, bin_var_indices


-def _extract_bin_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]:
+def _extract_var_names(
+    h5: H5File,
+    selected_var_types: List[bytes],
+) -> Tuple[np.ndarray, np.ndarray]:
     var_types = h5.get_array("static_var_types")
     var_names = h5.get_array("static_var_names")
     assert var_types is not None
     assert var_names is not None
-    bin_var_indices = np.where(var_types == b"B")[0]
+    bin_var_indices = np.where(np.isin(var_types, selected_var_types))[0]
     bin_var_names = var_names[bin_var_indices]
     assert len(bin_var_names.shape) == 1
     return bin_var_names, bin_var_indices
+
+
+def _extract_bin_var_names_values(
+    h5: H5File,
+) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
+    return _extract_var_names_values(h5, [b"B"])
+
+
+def _extract_bin_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]:
+    return _extract_var_names(h5, [b"B"])
+
+
+def _extract_int_var_names_values(
+    h5: H5File,
+) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
+    return _extract_var_names_values(h5, [b"B", b"I"])
+
+
+def _extract_int_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]:
+    return _extract_var_names(h5, [b"B", b"I"])
@@ -71,7 +71,7 @@ class EnforceProximity(PrimalComponentAction):
         constr_lhs = []
         constr_vars = []
         constr_rhs = 0.0
-        for (i, var_name) in enumerate(var_names):
+        for i, var_name in enumerate(var_names):
             if np.isnan(var_values[i]):
                 continue
             constr_lhs.append(1.0 if var_values[i] < 0.5 else -1.0)
@@ -5,7 +5,7 @@
 import logging
 from typing import Any, Dict, List

-from . import _extract_bin_var_names_values
+from . import _extract_int_var_names_values
 from .actions import PrimalComponentAction
 from ...solvers.abstract import AbstractModel
 from ...h5 import H5File
@@ -28,5 +28,5 @@ class ExpertPrimalComponent:
         self, test_h5: str, model: AbstractModel, stats: Dict[str, Any]
     ) -> None:
         with H5File(test_h5, "r") as h5:
-            names, values, _ = _extract_bin_var_names_values(h5)
+            names, values, _ = _extract_int_var_names_values(h5)
         self.action.perform(model, names, values.reshape(1, -1), stats)
@@ -91,7 +91,7 @@ class IndependentVarsPrimalComponent:

         logger.info(f"Training {n_bin_vars} classifiers...")
         self.clf_ = {}
-        for (var_idx, var_name) in enumerate(self.bin_var_names_):
+        for var_idx, var_name in enumerate(self.bin_var_names_):
             self.clf_[var_name] = self.clone_fn(self.base_clf)
             self.clf_[var_name].fit(
                 x_np[var_idx::n_bin_vars, :], y_np[var_idx::n_bin_vars]
@@ -117,7 +117,7 @@ class IndependentVarsPrimalComponent:
         # Predict optimal solution
         logger.info("Predicting warm starts...")
         y_pred = []
-        for (var_idx, var_name) in enumerate(self.bin_var_names_):
+        for var_idx, var_name in enumerate(self.bin_var_names_):
             x_var = x_sample[var_idx, :].reshape(1, -1)
             y_var = self.clf_[var_name].predict(x_var)
             assert y_var.shape == (1,)
@@ -25,7 +25,8 @@ class ExpertBranchPriorityComponent:
         assert var_priority is not None
         assert var_names is not None

-        for (var_idx, var_name) in enumerate(var_names):
+        for var_idx, var_name in enumerate(var_names):
             if np.isfinite(var_priority[var_idx]):
                 var = model.getVarByName(var_name.decode())
-                var.branchPriority = int(log(1 + var_priority[var_idx]))
+                assert var is not None, f"unknown var: {var_name}"
+                var.BranchPriority = int(log(1 + var_priority[var_idx]))
@@ -22,7 +22,7 @@ class AlvLouWeh2017Extractor(FeaturesExtractor):
         self.with_m3 = with_m3

     def get_instance_features(self, h5: H5File) -> np.ndarray:
-        raise NotImplemented()
+        raise NotImplementedError()

     def get_var_features(self, h5: H5File) -> np.ndarray:
         """
@@ -197,7 +197,7 @@ class AlvLouWeh2017Extractor(FeaturesExtractor):
         return features

     def get_constr_features(self, h5: H5File) -> np.ndarray:
-        raise NotImplemented()
+        raise NotImplementedError()


 def _fix_infinity(m: Optional[np.ndarray]) -> None:
@@ -31,9 +31,9 @@ class H5FieldsExtractor(FeaturesExtractor):
             data = h5.get_scalar(field)
             assert data is not None
             x.append(data)
-        x = np.hstack(x)
-        assert len(x.shape) == 1
-        return x
+        x_np = np.hstack(x)
+        assert len(x_np.shape) == 1
+        return x_np

     def get_var_features(self, h5: H5File) -> np.ndarray:
         var_types = h5.get_array("static_var_types")
@@ -51,13 +51,14 @@ class H5FieldsExtractor(FeaturesExtractor):
             raise Exception("No constr fields provided")
         return self._extract(h5, self.constr_fields, n_constr)

-    def _extract(self, h5, fields, n_expected):
+    def _extract(self, h5: H5File, fields: List[str], n_expected: int) -> np.ndarray:
         x = []
         for field in fields:
             try:
                 data = h5.get_array(field)
             except ValueError:
                 v = h5.get_scalar(field)
                 assert v is not None
                 data = np.repeat(v, n_expected)
             assert data is not None
             assert len(data.shape) == 1
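The `except ValueError` branch above broadcasts a scalar field across all rows with `np.repeat`, so scalar and per-row fields end up with the same 1-D shape before they are stacked into a feature matrix. For instance:

```python
import numpy as np

n_expected = 4
v = 3.5  # a scalar field, e.g. a bound stored once per instance
data = np.repeat(v, n_expected)

# The scalar now looks like any other per-row field.
assert data.shape == (n_expected,)
assert data.tolist() == [3.5, 3.5, 3.5, 3.5]
```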
@@ -68,7 +68,7 @@ class H5File:
             return
         self._assert_is_array(value)
         if value.dtype.kind == "f":
-            value = value.astype("float32")
+            value = value.astype("float64")
         if key in self.file:
             del self.file[key]
         return self.file.create_dataset(key, data=value, compression="gzip")
@@ -111,7 +111,7 @@ class H5File:
         ), f"bytes expected; found: {value.__class__}"  # type: ignore
         self.put_array(key, np.frombuffer(value, dtype="uint8"))

-    def close(self):
+    def close(self) -> None:
         self.file.close()

     def __enter__(self) -> "H5File":
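The switch from float32 to float64 storage (the change noted in the 0.4.2 release) matters whenever stored quantities differ only past the seventh significant digit, since float32 keeps roughly 7 decimal digits of precision:

```python
import numpy as np

x = np.array([1.0 + 1e-9])

# float32 rounds the 1e-9 perturbation away; float64 preserves it.
assert float(x.astype("float32")[0]) == 1.0
assert float(x.astype("float64")[0]) != 1.0
```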
@@ -86,7 +86,11 @@ def read_pkl_gz(filename: str) -> Any:


 def _to_h5_filename(data_filename: str) -> str:
     output = f"{data_filename}.h5"
-    output = output.replace(".pkl.gz.h5", ".h5")
-    output = output.replace(".pkl.h5", ".h5")
+    output = output.replace(".gz.h5", ".h5")
+    output = output.replace(".csv.h5", ".h5")
+    output = output.replace(".jld2.h5", ".h5")
+    output = output.replace(".json.h5", ".h5")
+    output = output.replace(".lp.h5", ".h5")
+    output = output.replace(".mps.h5", ".h5")
+    output = output.replace(".pkl.h5", ".h5")
     return output
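The replacement chain above maps a data file name to its companion HDF5 file by appending `.h5` and then stripping known data extensions; because `.gz` is stripped before `.pkl`, compressed pickles also collapse to a single `.h5` name. A standalone sketch of the same logic (reimplemented here for illustration, not imported from MIPLearn):

```python
def to_h5_filename(data_filename: str) -> str:
    # Append .h5, then strip known data-file extensions sitting before it.
    output = f"{data_filename}.h5"
    for ext in (".gz", ".csv", ".jld2", ".json", ".lp", ".mps", ".pkl"):
        output = output.replace(ext + ".h5", ".h5")
    return output


assert to_h5_filename("instance.pkl.gz") == "instance.h5"
assert to_h5_filename("instance.mps") == "instance.h5"
assert to_h5_filename("instance") == "instance.h5"
```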
@@ -1,3 +1,28 @@
 # MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
 # Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
 # Released under the modified BSD license. See COPYING.md for more details.
+
+from typing import Any, Optional
+
+import gurobipy as gp
+from pyomo import environ as pe
+
+
+def _gurobipy_set_params(model: gp.Model, params: Optional[dict[str, Any]]) -> None:
+    assert isinstance(model, gp.Model)
+    if params is not None:
+        for param_name, param_value in params.items():
+            setattr(model.params, param_name, param_value)
+
+
+def _pyomo_set_params(
+    model: pe.ConcreteModel,
+    params: Optional[dict[str, Any]],
+    solver: str,
+) -> None:
+    assert (
+        solver == "gurobi_persistent"
+    ), "setting parameters is only supported with gurobi_persistent"
+    if solver == "gurobi_persistent" and params is not None:
+        for param_name, param_value in params.items():
+            model.solver.set_gurobi_param(param_name, param_value)
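`_gurobipy_set_params` relies on gurobipy resolving parameter names as attributes of `model.params`, so a plain `setattr` loop is enough. The pattern itself is solver-agnostic; a minimal sketch with stand-in classes (the class names here are illustrative, not part of MIPLearn or gurobipy):

```python
from typing import Any, Dict, Optional


class _Params:
    """Stand-in for gurobipy's model.params attribute container."""


class _Model:
    def __init__(self) -> None:
        self.params = _Params()


def set_params(model: _Model, params: Optional[Dict[str, Any]]) -> None:
    # Copy each (name, value) pair onto the params object as an attribute.
    if params is not None:
        for name, value in params.items():
            setattr(model.params, name, value)


m = _Model()
set_params(m, {"TimeLimit": 60, "Threads": 4})
assert m.params.TimeLimit == 60
assert m.params.Threads == 4
```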
@@ -109,7 +109,7 @@ class BinPackGenerator:
         return [_sample() for n in range(n_samples)]


-def build_binpack_model(data: Union[str, BinPackData]) -> GurobiModel:
+def build_binpack_model_gurobipy(data: Union[str, BinPackData]) -> GurobiModel:
     """Converts bin packing problem data into a concrete Gurobipy model."""
     if isinstance(data, str):
         data = read_pkl_gz(data)
@@ -174,7 +174,9 @@ class MultiKnapsackGenerator:
         return [_sample() for _ in range(n_samples)]


-def build_multiknapsack_model(data: Union[str, MultiKnapsackData]) -> GurobiModel:
+def build_multiknapsack_model_gurobipy(
+    data: Union[str, MultiKnapsackData]
+) -> GurobiModel:
     """Converts multi-knapsack problem data into a concrete Gurobipy model."""
     if isinstance(data, str):
         data = read_pkl_gz(data)
@@ -141,7 +141,7 @@ class PMedianGenerator:
         return [_sample() for _ in range(n_samples)]


-def build_pmedian_model(data: Union[str, PMedianData]) -> GurobiModel:
+def build_pmedian_model_gurobipy(data: Union[str, PMedianData]) -> GurobiModel:
     """Converts capacitated p-median data into a concrete Gurobipy model."""
     if isinstance(data, str):
         data = read_pkl_gz(data)
@@ -8,7 +8,7 @@ from typing import List, Union
 import gurobipy as gp
 import numpy as np
 import pyomo.environ as pe
-from gurobipy.gurobipy import GRB
+from gurobipy import GRB
 from scipy.stats import uniform, randint
 from scipy.stats.distributions import rv_frozen

@@ -95,7 +95,7 @@ def build_setcover_model_gurobipy(data: Union[str, SetCoverData]) -> GurobiModel

 def build_setcover_model_pyomo(
     data: Union[str, SetCoverData],
-    solver="gurobi_persistent",
+    solver: str = "gurobi_persistent",
 ) -> PyomoModel:
     data = _read_setcover_data(data)
     (n_elements, n_sets) = data.incidence_matrix.shape
@@ -7,7 +7,7 @@ from typing import List, Union

 import gurobipy as gp
 import numpy as np
-from gurobipy.gurobipy import GRB
+from gurobipy import GRB
 from scipy.stats import uniform, randint
 from scipy.stats.distributions import rv_frozen

@@ -53,7 +53,7 @@ class SetPackGenerator:
     ]


-def build_setpack_model(data: Union[str, SetPackData]) -> GurobiModel:
+def build_setpack_model_gurobipy(data: Union[str, SetPackData]) -> GurobiModel:
     if isinstance(data, str):
         data = read_pkl_gz(data)
     assert isinstance(data, SetPackData)
@@ -2,21 +2,25 @@
 # Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
 # Released under the modified BSD license. See COPYING.md for more details.

+import logging
 from dataclasses import dataclass
-from typing import List, Union
+from typing import List, Union, Any, Hashable, Optional

 import gurobipy as gp
 import networkx as nx
 import numpy as np
 import pyomo.environ as pe
 from gurobipy import GRB, quicksum
-from miplearn.io import read_pkl_gz
-from miplearn.solvers.gurobi import GurobiModel
-from miplearn.solvers.pyomo import PyomoModel
 from networkx import Graph
 from scipy.stats import uniform, randint
 from scipy.stats.distributions import rv_frozen

+from miplearn.io import read_pkl_gz
+from miplearn.solvers.gurobi import GurobiModel
+from miplearn.solvers.pyomo import PyomoModel
+from . import _gurobipy_set_params, _pyomo_set_params
+
+logger = logging.getLogger(__name__)
+

 @dataclass
@@ -82,35 +86,96 @@ class MaxWeightStableSetGenerator:
         return nx.generators.random_graphs.binomial_graph(self.n.rvs(), self.p.rvs())


-def build_stab_model_gurobipy(data: MaxWeightStableSetData) -> GurobiModel:
-    data = _read_stab_data(data)
+def build_stab_model_gurobipy(
+    data: Union[str, MaxWeightStableSetData],
+    params: Optional[dict[str, Any]] = None,
+) -> GurobiModel:
     model = gp.Model()
+    _gurobipy_set_params(model, params)
+
+    data = _stab_read(data)
     nodes = list(data.graph.nodes)
+
+    # Variables and objective function
     x = model.addVars(nodes, vtype=GRB.BINARY, name="x")
     model.setObjective(quicksum(-data.weights[i] * x[i] for i in nodes))
-    for clique in nx.find_cliques(data.graph):
-        model.addConstr(quicksum(x[i] for i in clique) <= 1)
+
+    # Edge inequalities
+    for i1, i2 in data.graph.edges:
+        model.addConstr(x[i1] + x[i2] <= 1)
+
+    def cuts_separate(m: GurobiModel) -> List[Hashable]:
+        x_val_dict = m.inner.cbGetNodeRel(x)
+        x_val = [x_val_dict[i] for i in nodes]
+        return _stab_separate(data, x_val)
+
+    def cuts_enforce(m: GurobiModel, violations: List[Any]) -> None:
+        logger.info(f"Adding {len(violations)} clique cuts...")
+        for clique in violations:
+            m.add_constr(quicksum(x[i] for i in clique) <= 1)
+
     model.update()
-    return GurobiModel(model)
+
+    return GurobiModel(
+        model,
+        cuts_separate=cuts_separate,
+        cuts_enforce=cuts_enforce,
+    )


 def build_stab_model_pyomo(
     data: MaxWeightStableSetData,
-    solver="gurobi_persistent",
+    solver: str = "gurobi_persistent",
+    params: Optional[dict[str, Any]] = None,
 ) -> PyomoModel:
-    data = _read_stab_data(data)
+    data = _stab_read(data)
     model = pe.ConcreteModel()
     nodes = pe.Set(initialize=list(data.graph.nodes))

     # Variables and objective function
     model.x = pe.Var(nodes, domain=pe.Boolean, name="x")
     model.obj = pe.Objective(expr=sum([-data.weights[i] * model.x[i] for i in nodes]))

     # Edge inequalities
     model.edge_eqs = pe.ConstraintList()
     for i1, i2 in data.graph.edges:
         model.edge_eqs.add(model.x[i1] + model.x[i2] <= 1)

     # Clique inequalities
     model.clique_eqs = pe.ConstraintList()
-    for clique in nx.find_cliques(data.graph):
-        model.clique_eqs.add(expr=sum(model.x[i] for i in clique) <= 1)
-    return PyomoModel(model, solver)
+
+    def cuts_separate(m: PyomoModel) -> List[Hashable]:
+        m.solver.cbGetNodeRel([model.x[i] for i in nodes])
+        x_val = [model.x[i].value for i in nodes]
+        return _stab_separate(data, x_val)
+
+    def cuts_enforce(m: PyomoModel, violations: List[Any]) -> None:
+        logger.info(f"Adding {len(violations)} clique cuts...")
+        for clique in violations:
+            m.add_constr(model.clique_eqs.add(sum(model.x[i] for i in clique) <= 1))
+
+    pm = PyomoModel(
+        model,
+        solver,
+        cuts_separate=cuts_separate,
+        cuts_enforce=cuts_enforce,
+    )
+    _pyomo_set_params(pm, params, solver)
+    return pm


-def _read_stab_data(data: Union[str, MaxWeightStableSetData]) -> MaxWeightStableSetData:
+def _stab_read(data: Union[str, MaxWeightStableSetData]) -> MaxWeightStableSetData:
     if isinstance(data, str):
         data = read_pkl_gz(data)
     assert isinstance(data, MaxWeightStableSetData)
     return data
+
+
+def _stab_separate(data: MaxWeightStableSetData, x_val: List[float]) -> List:
+    # Check that we selected at most one vertex for each
+    # clique in the graph (sum <= 1)
+    violations: List[Any] = []
+    for clique in nx.find_cliques(data.graph):
+        if sum(x_val[i] for i in clique) > 1.0001:
+            violations.append(sorted(clique))
+    return violations
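`_stab_separate` enumerates the maximal cliques of the conflict graph and reports those whose fractional values sum to more than one, i.e. violated clique cuts. A self-contained check on a small graph (networkx is the same dependency the module already uses):

```python
import networkx as nx


def separate_cliques(graph, x_val, tol=1e-4):
    # A clique cut sum(x[i] for i in clique) <= 1 is violated when the
    # fractional values over the clique exceed one (up to a tolerance).
    violations = []
    for clique in nx.find_cliques(graph):
        if sum(x_val[i] for i in clique) > 1 + tol:
            violations.append(sorted(clique))
    return violations


g = nx.complete_graph(3)            # the triangle {0, 1, 2} is a maximal clique
x_val = {0: 0.5, 1: 0.5, 2: 0.5}    # fractional point: clique sum is 1.5
assert separate_cliques(g, x_val) == [[0, 1, 2]]
```

Note that enumerating all maximal cliques is exponential in the worst case; it is practical here because the generated conflict graphs are sparse.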
@@ -2,20 +2,23 @@
 # Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
 # Released under the modified BSD license. See COPYING.md for more details.

+import logging
 from dataclasses import dataclass
 from typing import List, Tuple, Optional, Any, Union

 import gurobipy as gp
 import networkx as nx
 import numpy as np
 import pyomo.environ as pe
 from gurobipy import quicksum, GRB, tuplelist
-from miplearn.io import read_pkl_gz
-from miplearn.solvers.gurobi import GurobiModel
 from scipy.spatial.distance import pdist, squareform
 from scipy.stats import uniform, randint
 from scipy.stats.distributions import rv_frozen
-import logging

+from miplearn.io import read_pkl_gz
+from miplearn.problems import _gurobipy_set_params, _pyomo_set_params
+from miplearn.solvers.gurobi import GurobiModel
+from miplearn.solvers.pyomo import PyomoModel

 logger = logging.getLogger(__name__)
@@ -112,15 +115,17 @@ class TravelingSalesmanGenerator:
         return n, cities


-def build_tsp_model(data: Union[str, TravelingSalesmanData]) -> GurobiModel:
-    if isinstance(data, str):
-        data = read_pkl_gz(data)
-    assert isinstance(data, TravelingSalesmanData)
+def build_tsp_model_gurobipy(
+    data: Union[str, TravelingSalesmanData],
+    params: Optional[dict[str, Any]] = None,
+) -> GurobiModel:
+    model = gp.Model()
+    _gurobipy_set_params(model, params)
+
+    data = _tsp_read(data)
     edges = tuplelist(
         (i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)
     )
-    model = gp.Model()

     # Decision variables
     x = model.addVars(edges, vtype=GRB.BINARY, name="x")
@@ -142,36 +147,100 @@ def build_tsp_model(data: Union[str, TravelingSalesmanData]) -> GurobiModel:
         name="eq_degree",
     )

-    def find_violations(model: GurobiModel) -> List[Any]:
-        violations = []
-        x = model.inner.cbGetSolution(model.inner._x)
-        selected_edges = [e for e in model.inner._edges if x[e] > 0.5]
-        graph = nx.Graph()
-        graph.add_edges_from(selected_edges)
-        for component in list(nx.connected_components(graph)):
-            if len(component) < model.inner._n_cities:
-                cut_edges = [
-                    e
-                    for e in model.inner._edges
-                    if (e[0] in component and e[1] not in component)
-                    or (e[0] not in component and e[1] in component)
-                ]
-                violations.append(cut_edges)
-        return violations
+    def lazy_separate(model: GurobiModel) -> List[Any]:
+        x_val = model.inner.cbGetSolution(model.inner._x)
+        return _tsp_separate(x_val, edges, data.n_cities)

-    def fix_violations(model: GurobiModel, violations: List[Any], where: str) -> None:
+    def lazy_enforce(model: GurobiModel, violations: List[Any]) -> None:
         for violation in violations:
-            constr = quicksum(model.inner._x[e[0], e[1]] for e in violation) >= 2
-            if where == "cb":
-                model.inner.cbLazy(constr)
-            else:
-                model.inner.addConstr(constr)
+            model.add_constr(
+                quicksum(model.inner._x[e[0], e[1]] for e in violation) >= 2
+            )
+        logger.info(f"tsp: added {len(violations)} subtour elimination constraints")

     model.update()

     return GurobiModel(
         model,
-        find_violations=find_violations,
-        fix_violations=fix_violations,
+        lazy_separate=lazy_separate,
+        lazy_enforce=lazy_enforce,
     )


+def build_tsp_model_pyomo(
+    data: Union[str, TravelingSalesmanData],
+    solver: str = "gurobi_persistent",
+    params: Optional[dict[str, Any]] = None,
+) -> PyomoModel:
+    model = pe.ConcreteModel()
+    data = _tsp_read(data)
+
+    edges = tuplelist(
+        (i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)
+    )
+
+    # Decision variables
+    model.x = pe.Var(edges, domain=pe.Boolean, name="x")
+    model.obj = pe.Objective(
+        expr=sum(model.x[i, j] * data.distances[i, j] for (i, j) in edges)
+    )
+
+    # Eq: Must choose two edges adjacent to each node
+    model.degree_eqs = pe.ConstraintList()
+    for i in range(data.n_cities):
+        model.degree_eqs.add(
+            sum(model.x[min(i, j), max(i, j)] for j in range(data.n_cities) if i != j)
+            == 2
+        )
+
+    # Eq: Subtour elimination
+    model.subtour_eqs = pe.ConstraintList()
+
+    def lazy_separate(m: PyomoModel) -> List[Any]:
+        m.solver.cbGetSolution([model.x[e] for e in edges])
+        x_val = {e: model.x[e].value for e in edges}
+        return _tsp_separate(x_val, edges, data.n_cities)
+
+    def lazy_enforce(m: PyomoModel, violations: List[Any]) -> None:
+        logger.warning(f"Adding {len(violations)} subtour elimination constraints...")
+        for violation in violations:
+            m.add_constr(
+                model.subtour_eqs.add(sum(model.x[e[0], e[1]] for e in violation) >= 2)
+            )
+
+    pm = PyomoModel(
+        model,
+        solver,
+        lazy_separate=lazy_separate,
+        lazy_enforce=lazy_enforce,
+    )
+    _pyomo_set_params(pm, params, solver)
+    return pm
+
+
+def _tsp_read(data: Union[str, TravelingSalesmanData]) -> TravelingSalesmanData:
+    if isinstance(data, str):
+        data = read_pkl_gz(data)
+    assert isinstance(data, TravelingSalesmanData)
+    return data
+
+
+def _tsp_separate(
+    x_val: dict[Tuple[int, int], float],
+    edges: List[Tuple[int, int]],
+    n_cities: int,
+) -> List:
+    violations = []
+    selected_edges = [e for e in edges if x_val[e] > 0.5]
+    graph = nx.Graph()
+    graph.add_edges_from(selected_edges)
+    for component in list(nx.connected_components(graph)):
+        if len(component) < n_cities:
+            cut_edges = [
+                [e[0], e[1]]
+                for e in edges
+                if (e[0] in component and e[1] not in component)
+                or (e[0] not in component and e[1] in component)
+            ]
+            violations.append(cut_edges)
+    return violations
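`_tsp_separate` builds the support graph of the selected edges and, for every connected component that does not span all cities, emits the set of edges crossing the component boundary as a violated subtour-elimination cut. A self-contained check of that logic on two disjoint triangles:

```python
import networkx as nx


def separate_subtours(x_val, edges, n_cities):
    # Edges chosen in the (integer) candidate solution.
    selected = [e for e in edges if x_val[e] > 0.5]
    graph = nx.Graph()
    graph.add_edges_from(selected)
    violations = []
    for component in nx.connected_components(graph):
        if len(component) < n_cities:
            # Any tour must cross each proper subset boundary at least twice.
            cut = [e for e in edges if (e[0] in component) != (e[1] in component)]
            violations.append(cut)
    return violations


edges = [(i, j) for i in range(6) for j in range(i + 1, 6)]
tour = {(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)}  # two 3-city subtours
x_val = {e: (1.0 if e in tour else 0.0) for e in edges}
violations = separate_subtours(x_val, edges, 6)

assert len(violations) == 2                       # one cut per subtour
assert all(len(cut) == 9 for cut in violations)   # 3x3 crossing edges each
```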
@@ -112,7 +112,7 @@ class UnitCommitmentGenerator:
         return [_sample() for _ in range(n_samples)]


-def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:
+def build_uc_model_gurobipy(data: Union[str, UnitCommitmentData]) -> GurobiModel:
     """
     Models the unit commitment problem according to equations (1)-(5) of:
@@ -40,7 +40,9 @@ class MinWeightVertexCoverGenerator:
     ]


-def build_vertexcover_model(data: Union[str, MinWeightVertexCoverData]) -> GurobiModel:
+def build_vertexcover_model_gurobipy(
+    data: Union[str, MinWeightVertexCoverData]
+) -> GurobiModel:
     if isinstance(data, str):
         data = read_pkl_gz(data)
     assert isinstance(data, MinWeightVertexCoverData)
@@ -48,7 +50,7 @@ def build_vertexcover_model(data: Union[str, MinWeightVertexCoverData]) -> Gurob
     nodes = list(data.graph.nodes)
     x = model.addVars(nodes, vtype=GRB.BINARY, name="x")
     model.setObjective(quicksum(data.weights[i] * x[i] for i in nodes))
-    for (v1, v2) in data.graph.edges:
+    for v1, v2 in data.graph.edges:
         model.addConstr(x[v1] + x[v2] >= 1)
     model.update()
     return GurobiModel(model)
@@ -3,7 +3,7 @@
 # Released under the modified BSD license. See COPYING.md for more details.

 from abc import ABC, abstractmethod
-from typing import Optional, Dict
+from typing import Optional, Dict, Callable, Hashable, List, Any

 import numpy as np

@@ -16,6 +16,20 @@ class AbstractModel(ABC):
     _supports_node_count = False
     _supports_solution_pool = False

+    WHERE_DEFAULT = "default"
+    WHERE_CUTS = "cuts"
+    WHERE_LAZY = "lazy"
+
+    def __init__(self) -> None:
+        self._lazy_enforce: Optional[Callable] = None
+        self._lazy_separate: Optional[Callable] = None
+        self._lazy: Optional[List[Any]] = None
+        self._cuts_enforce: Optional[Callable] = None
+        self._cuts_separate: Optional[Callable] = None
+        self._cuts: Optional[List[Any]] = None
+        self._cuts_aot: Optional[List[Any]] = None
+        self._where = self.WHERE_DEFAULT
+
     @abstractmethod
     def add_constrs(
         self,

@@ -68,3 +82,16 @@
     @abstractmethod
     def write(self, filename: str) -> None:
         pass
+
+    def set_cuts(self, cuts: List) -> None:
+        self._cuts_aot = cuts
+
+    def lazy_enforce(self, violations: List[Any]) -> None:
+        if self._lazy_enforce is not None:
+            self._lazy_enforce(self, violations)
+
+    def _lazy_enforce_collected(self) -> None:
+        """Adds all lazy constraints identified in the callback as actual model
+        constraints. Useful for generating a final MPS file with the constraints
+        that were required in this run."""
+        if self._lazy_enforce is not None:
+            self._lazy_enforce(self, self._lazy)
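The new `_lazy_separate`/`_lazy_enforce` fields define a separate-then-enforce protocol that the solver callbacks drive at every candidate solution. A solver-free sketch of that protocol — `ToyModel` and `on_candidate_solution` are illustrative stand-ins for `AbstractModel` and the solver callback, not library classes:

```python
class ToyModel:
    def __init__(self, lazy_separate, lazy_enforce):
        self._lazy_separate = lazy_separate  # returns violated constraints
        self._lazy_enforce = lazy_enforce    # adds them to the model
        self._lazy = []                      # record of everything separated


def on_candidate_solution(model):
    # Mirrors the MIPSOL branch of the callback: separate, record, enforce.
    violations = model._lazy_separate(model)
    if len(violations) > 0:
        model._lazy.extend(violations)
        model._lazy_enforce(model, violations)
```

Recording violations in `_lazy` is what later allows `_lazy_enforce_collected` to replay them as ordinary constraints.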
@@ -1,17 +1,78 @@
 # MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
 # Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
 # Released under the modified BSD license. See COPYING.md for more details.
-from typing import Dict, Optional, Callable, Any, List
+
+import logging
+import json
+from typing import Dict, Optional, Callable, Any, List, Sequence

 import gurobipy as gp
-from gurobipy import GRB, GurobiError
+from gurobipy import GRB, GurobiError, Var
 import numpy as np
 from scipy.sparse import lil_matrix

 from miplearn.h5 import H5File
 from miplearn.solvers.abstract import AbstractModel

+logger = logging.getLogger(__name__)
+
+
-class GurobiModel:
+def _gurobi_callback(model: AbstractModel, gp_model: gp.Model, where: int) -> None:
+    assert isinstance(gp_model, gp.Model)
+
+    # Lazy constraints
+    if model._lazy_separate is not None:
+        assert model._lazy_enforce is not None
+        assert model._lazy is not None
+        if where == GRB.Callback.MIPSOL:
+            model._where = model.WHERE_LAZY
+            violations = model._lazy_separate(model)
+            if len(violations) > 0:
+                model._lazy.extend(violations)
+                model._lazy_enforce(model, violations)
+
+    # User cuts
+    if model._cuts_separate is not None:
+        assert model._cuts_enforce is not None
+        assert model._cuts is not None
+        if where == GRB.Callback.MIPNODE:
+            status = gp_model.cbGet(GRB.Callback.MIPNODE_STATUS)
+            if status == GRB.OPTIMAL:
+                model._where = model.WHERE_CUTS
+                if model._cuts_aot is not None:
+                    violations = model._cuts_aot
+                    model._cuts_aot = None
+                    logger.info(f"Enforcing {len(violations)} cuts ahead-of-time...")
+                else:
+                    violations = model._cuts_separate(model)
+                if len(violations) > 0:
+                    model._cuts.extend(violations)
+                    model._cuts_enforce(model, violations)
+
+    # Cleanup
+    model._where = model.WHERE_DEFAULT
+
+
+def _gurobi_add_constr(gp_model: gp.Model, where: str, constr: Any) -> None:
+    if where == AbstractModel.WHERE_LAZY:
+        gp_model.cbLazy(constr)
+    elif where == AbstractModel.WHERE_CUTS:
+        gp_model.cbCut(constr)
+    else:
+        gp_model.addConstr(constr)
+
+
+def _gurobi_set_required_params(model: AbstractModel, gp_model: gp.Model) -> None:
+    # Required parameters for lazy constraints
+    if model._lazy_enforce is not None:
+        gp_model.setParam("PreCrush", 1)
+        gp_model.setParam("LazyConstraints", 1)
+    # Required parameters for user cuts
+    if model._cuts_enforce is not None:
+        gp_model.setParam("PreCrush", 1)
+
+
+class GurobiModel(AbstractModel):
     _supports_basis_status = True
     _supports_sensitivity_analysis = True
     _supports_node_count = True
@@ -20,13 +81,17 @@ class GurobiModel:
     def __init__(
         self,
         inner: gp.Model,
-        find_violations: Optional[Callable] = None,
-        fix_violations: Optional[Callable] = None,
+        lazy_separate: Optional[Callable] = None,
+        lazy_enforce: Optional[Callable] = None,
+        cuts_separate: Optional[Callable] = None,
+        cuts_enforce: Optional[Callable] = None,
     ) -> None:
-        self.fix_violations = fix_violations
-        self.find_violations = find_violations
+        super().__init__()
+        self._lazy_separate = lazy_separate
+        self._lazy_enforce = lazy_enforce
+        self._cuts_separate = cuts_separate
+        self._cuts_enforce = cuts_enforce
         self.inner = inner
-        self.violations_: Optional[List[Any]] = None

     def add_constrs(
         self,

@@ -44,7 +109,11 @@ class GurobiModel:
         assert constrs_sense.shape == (nconstrs,)
         assert constrs_rhs.shape == (nconstrs,)

-        gp_vars = [self.inner.getVarByName(var_name.decode()) for var_name in var_names]
+        gp_vars: list[Var] = []
+        for var_name in var_names:
+            v = self.inner.getVarByName(var_name.decode())
+            assert v is not None, f"unknown var: {var_name}"
+            gp_vars.append(v)
         self.inner.addMConstr(constrs_lhs, gp_vars, constrs_sense, constrs_rhs)

         if stats is not None:

@@ -52,6 +121,9 @@ class GurobiModel:
             stats["Added constraints"] = 0
         stats["Added constraints"] += nconstrs

+    def add_constr(self, constr: Any) -> None:
+        _gurobi_add_constr(self.inner, self._where, constr)
+
     def extract_after_load(self, h5: H5File) -> None:
         """
         Given a model that has just been loaded, extracts static problem

@@ -100,6 +172,10 @@ class GurobiModel:
         except AttributeError:
             pass
         self._extract_after_mip_solution_pool(h5)
+        if self._lazy is not None:
+            h5.put_scalar("mip_lazy", json.dumps(self._lazy))
+        if self._cuts is not None:
+            h5.put_scalar("mip_cuts", json.dumps(self._cuts))

     def fix_variables(
         self,
@@ -112,31 +188,28 @@ class GurobiModel:
         assert var_names.shape == var_values.shape

         n_fixed = 0
-        for (var_idx, var_name) in enumerate(var_names):
+        for var_idx, var_name in enumerate(var_names):
             var_val = var_values[var_idx]
             if np.isfinite(var_val):
                 var = self.inner.getVarByName(var_name.decode())
-                var.vtype = "C"
-                var.lb = var_val
-                var.ub = var_val
+                assert var is not None, f"unknown var: {var_name}"
+                var.VType = "c"
+                var.LB = var_val
+                var.UB = var_val
                 n_fixed += 1
         if stats is not None:
             stats["Fixed variables"] = n_fixed

     def optimize(self) -> None:
-        self.violations_ = []
+        self._lazy = []
+        self._cuts = []

-        def callback(m: gp.Model, where: int) -> None:
-            assert self.find_violations is not None
-            assert self.violations_ is not None
-            assert self.fix_violations is not None
-            if where == GRB.Callback.MIPSOL:
-                violations = self.find_violations(self)
-                self.violations_.extend(violations)
-                self.fix_violations(self, violations, "cb")
+        def callback(_: gp.Model, where: int) -> None:
+            _gurobi_callback(self, self.inner, where)

-        if self.fix_violations is not None:
-            self.inner.Params.lazyConstraints = 1
+        _gurobi_set_required_params(self, self.inner)
+
+        if self.lazy_enforce is not None or self.cuts_enforce is not None:
             self.inner.optimize(callback)
         else:
             self.inner.optimize()

@@ -145,7 +218,7 @@ class GurobiModel:
         return GurobiModel(self.inner.relax())

     def set_time_limit(self, time_limit_sec: float) -> None:
-        self.inner.params.timeLimit = time_limit_sec
+        self.inner.params.TimeLimit = time_limit_sec

     def set_warm_starts(
         self,

@@ -160,12 +233,13 @@ class GurobiModel:

         self.inner.numStart = n_starts
         for start_idx in range(n_starts):
-            self.inner.params.startNumber = start_idx
-            for (var_idx, var_name) in enumerate(var_names):
+            self.inner.params.StartNumber = start_idx
+            for var_idx, var_name in enumerate(var_names):
                 var_val = var_values[start_idx, var_idx]
                 if np.isfinite(var_val):
                     var = self.inner.getVarByName(var_name.decode())
-                    var.start = var_val
+                    assert var is not None, f"unknown var: {var_name}"
+                    var.Start = var_val

         if stats is not None:
             stats["WS: Count"] = n_starts
@@ -175,14 +249,14 @@ class GurobiModel:

     def _extract_after_load_vars(self, h5: H5File) -> None:
         gp_vars = self.inner.getVars()
-        for (h5_field, gp_field) in {
+        for h5_field, gp_field in {
             "static_var_names": "varName",
             "static_var_types": "vtype",
         }.items():
             h5.put_array(
                 h5_field, np.array(self.inner.getAttr(gp_field, gp_vars), dtype="S")
             )
-        for (h5_field, gp_field) in {
+        for h5_field, gp_field in {
             "static_var_upper_bounds": "ub",
             "static_var_lower_bounds": "lb",
             "static_var_obj_coeffs": "obj",

@@ -199,7 +273,7 @@ class GurobiModel:
         names = np.array(self.inner.getAttr("constrName", gp_constrs), dtype="S")
         nrows, ncols = len(gp_constrs), len(gp_vars)
         tmp = lil_matrix((nrows, ncols), dtype=float)
-        for (i, gp_constr) in enumerate(gp_constrs):
+        for i, gp_constr in enumerate(gp_constrs):
             expr = self.inner.getRow(gp_constr)
             for j in range(expr.size()):
                 tmp[i, expr.getVar(j).index] = expr.getCoeff(j)

@@ -234,7 +308,7 @@ class GurobiModel:
                 dtype="S",
             ),
         )
-        for (h5_field, gp_field) in {
+        for h5_field, gp_field in {
             "lp_var_reduced_costs": "rc",
             "lp_var_sa_obj_up": "saobjUp",
             "lp_var_sa_obj_down": "saobjLow",

@@ -268,7 +342,7 @@ class GurobiModel:
                 dtype="S",
             ),
         )
-        for (h5_field, gp_field) in {
+        for h5_field, gp_field in {
             "lp_constr_dual_values": "pi",
             "lp_constr_sa_rhs_up": "saRhsUp",
             "lp_constr_sa_rhs_down": "saRhsLow",
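The constraint extraction above scatters each row into a `lil_matrix`, which is cheap to fill entry-by-entry and then converts to a compact sparse format for storage. The same pattern in isolation (toy data, not the library's extraction code):

```python
from scipy.sparse import lil_matrix

# Two constraints over two variables, given as (column, coefficient) pairs.
rows = [[(0, 2.0), (1, 1.0)], [(1, 3.0)]]
tmp = lil_matrix((len(rows), 2), dtype=float)
for i, row in enumerate(rows):
    for j, coeff in row:
        tmp[i, j] = coeff      # LIL supports efficient incremental writes
lhs = tmp.tocoo()              # COO/CSR are better suited for storage and math
```

LIL is chosen for assembly precisely because element-wise insertion into COO or CSR would be quadratic.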
@@ -3,32 +3,43 @@
 # Released under the modified BSD license. See COPYING.md for more details.
 from os.path import exists
 from tempfile import NamedTemporaryFile
-from typing import List, Any, Union
+from typing import List, Any, Union, Dict, Callable, Optional, Tuple

 from miplearn.h5 import H5File
 from miplearn.io import _to_h5_filename
 from miplearn.solvers.abstract import AbstractModel
+import shutil


 class LearningSolver:
-    def __init__(self, components: List[Any], skip_lp=False):
+    def __init__(self, components: List[Any], skip_lp: bool = False) -> None:
         self.components = components
         self.skip_lp = skip_lp

-    def fit(self, data_filenames):
+    def fit(self, data_filenames: List[str]) -> None:
         h5_filenames = [_to_h5_filename(f) for f in data_filenames]
         for comp in self.components:
             comp.fit(h5_filenames)

-    def optimize(self, model: Union[str, AbstractModel], build_model=None):
-        h5_filename, mode = NamedTemporaryFile().name, "w"
+    def optimize(
+        self,
+        model: Union[str, AbstractModel],
+        build_model: Optional[Callable] = None,
+    ) -> Tuple[AbstractModel, Dict[str, Any]]:
         if isinstance(model, str):
-            h5_filename = _to_h5_filename(model)
             assert build_model is not None
+            old_h5_filename = _to_h5_filename(model)
             model = build_model(model)
+        else:
+            h5_filename = NamedTemporaryFile().name
-        stats = {}
-        mode = "r+" if exists(h5_filename) else "w"
         assert isinstance(model, AbstractModel)
+
+        # If the instance has an associate H5 file, we make a temporary copy of it,
+        # then work on that copy. We keep the original file unmodified
+        if exists(old_h5_filename):
+            shutil.copy(old_h5_filename, h5_filename)
+            mode = "r+"
+
+        stats: Dict[str, Any] = {}
         with H5File(h5_filename, mode) as h5:
             model.extract_after_load(h5)
             if not self.skip_lp:

@@ -36,8 +47,10 @@ class LearningSolver:
             relaxed.optimize()
             relaxed.extract_after_lp(h5)
         for comp in self.components:
-            comp.before_mip(h5_filename, model, stats)
+            comp_stats = comp.before_mip(h5_filename, model, stats)
+            if comp_stats is not None:
+                stats.update(comp_stats)
         model.optimize()
         model.extract_after_mip(h5)

-        return stats
+        return model, stats
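The `before_mip` change above lets a component return its own statistics dictionary, which the solver merges into the solver-level stats; returning `None` leaves the stats untouched. The merge rule in isolation (`merge_component_stats` is a hypothetical helper for illustration, not a library function):

```python
def merge_component_stats(stats, comp_stats):
    # Components may return a dict of their own statistics from before_mip;
    # None means the component has nothing to report.
    if comp_stats is not None:
        stats.update(comp_stats)
    return stats
```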
@@ -2,10 +2,11 @@
 # Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
 # Released under the modified BSD license. See COPYING.md for more details.
 from numbers import Number
-from typing import Optional, Dict, List, Any
+from typing import Optional, Dict, List, Any, Tuple, Callable

 import numpy as np
 import pyomo
 import pyomo.environ as pe
 from pyomo.core import Objective, Var, Suffix
 from pyomo.core.base import _GeneralVarData
 from pyomo.core.expr.numeric_expr import SumExpression, MonomialTermExpression

@@ -13,24 +14,52 @@ from scipy.sparse import coo_matrix

 from miplearn.h5 import H5File
 from miplearn.solvers.abstract import AbstractModel
-import pyomo.environ as pe
+from miplearn.solvers.gurobi import (
+    _gurobi_callback,
+    _gurobi_add_constr,
+    _gurobi_set_required_params,
+)


 class PyomoModel(AbstractModel):
-    def __init__(self, model: pe.ConcreteModel, solver_name: str = "gurobi_persistent"):
+    def __init__(
+        self,
+        model: pe.ConcreteModel,
+        solver_name: str = "gurobi_persistent",
+        lazy_separate: Optional[Callable] = None,
+        lazy_enforce: Optional[Callable] = None,
+        cuts_separate: Optional[Callable] = None,
+        cuts_enforce: Optional[Callable] = None,
+    ):
+        super().__init__()
         self.inner = model
         self.solver_name = solver_name
+        self._lazy_separate = lazy_separate
+        self._lazy_enforce = lazy_enforce
+        self._cuts_separate = cuts_separate
+        self._cuts_enforce = cuts_enforce
         self.solver = pe.SolverFactory(solver_name)
         self.is_persistent = hasattr(self.solver, "set_instance")
         if self.is_persistent:
             self.solver.set_instance(model)
-        self.results = None
+        self.results: Optional[Dict] = None
         self._is_warm_start_available = False
         if not hasattr(self.inner, "dual"):
             self.inner.dual = Suffix(direction=Suffix.IMPORT)
             self.inner.rc = Suffix(direction=Suffix.IMPORT)
             self.inner.slack = Suffix(direction=Suffix.IMPORT)

+    def add_constr(self, constr: Any) -> None:
+        assert (
+            self.solver_name == "gurobi_persistent"
+        ), "Callbacks are currently only supported on gurobi_persistent"
+        if self._where in [AbstractModel.WHERE_CUTS, AbstractModel.WHERE_LAZY]:
+            _gurobi_add_constr(self.solver, self._where, constr)
+        else:
+            # outside callbacks, add_constr shouldn't do anything, as the constraint
+            # has already been added to the ConstraintList object
+            pass
+
     def add_constrs(
         self,
         var_names: np.ndarray,
@@ -56,7 +85,7 @@ class PyomoModel(AbstractModel):
             raise Exception(f"Unknown sense: {sense}")
         self.solver.add_constraint(eq)

-    def _var_names_to_vars(self, var_names):
+    def _var_names_to_vars(self, var_names: np.ndarray) -> List[Any]:
         varname_to_var = {}
         for var in self.inner.component_objects(Var):
             for idx in var:

@@ -70,12 +99,14 @@ class PyomoModel(AbstractModel):
         h5.put_scalar("static_sense", self._get_sense())

     def extract_after_lp(self, h5: H5File) -> None:
+        assert self.results is not None
         self._extract_after_lp_vars(h5)
         self._extract_after_lp_constrs(h5)
         h5.put_scalar("lp_obj_value", self.results["Problem"][0]["Lower bound"])
         h5.put_scalar("lp_wallclock_time", self._get_runtime())

-    def _get_runtime(self):
+    def _get_runtime(self) -> float:
+        assert self.results is not None
         solver_dict = self.results["Solver"][0]
         for key in ["Wallclock time", "User time"]:
             if isinstance(solver_dict[key], Number):

@@ -83,6 +114,7 @@ class PyomoModel(AbstractModel):
             raise Exception("Time unavailable")

     def extract_after_mip(self, h5: H5File) -> None:
+        assert self.results is not None
         h5.put_scalar("mip_wallclock_time", self._get_runtime())
         if self.results["Solver"][0]["Termination condition"] == "infeasible":
             return

@@ -97,6 +129,10 @@ class PyomoModel(AbstractModel):
         h5.put_scalar("mip_obj_value", obj_value)
         h5.put_scalar("mip_obj_bound", obj_bound)
         h5.put_scalar("mip_gap", self._gap(obj_value, obj_bound))
+        if self._lazy is not None:
+            h5.put_scalar("mip_lazy", repr(self._lazy))
+        if self._cuts is not None:
+            h5.put_scalar("mip_cuts", repr(self._cuts))

     def fix_variables(
         self,

@@ -105,12 +141,26 @@ class PyomoModel(AbstractModel):
         stats: Optional[Dict] = None,
     ) -> None:
         variables = self._var_names_to_vars(var_names)
-        for (var, val) in zip(variables, var_values):
+        for var, val in zip(variables, var_values):
             if np.isfinite(val):
                 var.fix(val)
                 self.solver.update_var(var)

     def optimize(self) -> None:
+        self._lazy = []
+        self._cuts = []
+
+        if self._lazy_enforce is not None or self._cuts_enforce is not None:
+            assert (
+                self.solver_name == "gurobi_persistent"
+            ), "Callbacks are currently only supported on gurobi_persistent"
+            _gurobi_set_required_params(self, self.solver._solver_model)
+
+            def callback(_: Any, __: Any, where: int) -> None:
+                _gurobi_callback(self, self.solver._solver_model, where)
+
+            self.solver.set_callback(callback)
+
         if self.is_persistent:
             self.results = self.solver.solve(
                 tee=True,
@@ -145,12 +195,12 @@ class PyomoModel(AbstractModel):
         assert var_names.shape[0] == n_vars
         assert n_starts == 1, "Pyomo does not support multiple warm starts"
         variables = self._var_names_to_vars(var_names)
-        for (var, val) in zip(variables, var_values[0, :]):
+        for var, val in zip(variables, var_values[0, :]):
             if np.isfinite(val):
                 var.value = val
         self._is_warm_start_available = True

-    def _extract_after_load_vars(self, h5):
+    def _extract_after_load_vars(self, h5: H5File) -> None:
         names: List[str] = []
         types: List[str] = []
         upper_bounds: List[float] = []

@@ -165,7 +215,7 @@ class PyomoModel(AbstractModel):
             obj_count += 1
         assert obj_count == 1, f"One objective function expected; found {obj_count}"

-        for (i, var) in enumerate(self.inner.component_objects(pyomo.core.Var)):
+        for i, var in enumerate(self.inner.component_objects(pyomo.core.Var)):
             for idx in var:
                 v = var[idx]

@@ -211,7 +261,7 @@ class PyomoModel(AbstractModel):
         h5.put_array("static_var_obj_coeffs", np.array(obj_coeffs))
         h5.put_scalar("static_obj_offset", obj_offset)

-    def _extract_after_load_constrs(self, h5):
+    def _extract_after_load_constrs(self, h5: H5File) -> None:
         names: List[str] = []
         rhs: List[float] = []
         senses: List[str] = []

@@ -219,7 +269,7 @@ class PyomoModel(AbstractModel):
         lhs_col: List[int] = []
         lhs_data: List[float] = []

-        varname_to_idx = {}
+        varname_to_idx: Dict[str, int] = {}
         for var in self.inner.component_objects(Var):
             for idx in var:
                 varname = var.name

@@ -266,15 +316,13 @@ class PyomoModel(AbstractModel):
             raise Exception(f"Unknown expression type: {expr.__class__.__name__}")

         curr_row = 0
-        for (i, constr) in enumerate(
-            self.inner.component_objects(pyomo.core.Constraint)
-        ):
-            if len(constr) > 0:
+        for i, constr in enumerate(self.inner.component_objects(pyomo.core.Constraint)):
+            if len(constr) > 1:
                 for idx in constr:
                     names.append(constr[idx].name)
                     _parse_constraint(constr[idx], curr_row)
                     curr_row += 1
-            else:
+            elif len(constr) == 1:
                 names.append(constr.name)
                 _parse_constraint(constr, curr_row)
                 curr_row += 1

@@ -285,7 +333,7 @@ class PyomoModel(AbstractModel):
         h5.put_array("static_constr_rhs", np.array(rhs))
         h5.put_array("static_constr_sense", np.array(senses, dtype="S"))

-    def _extract_after_lp_vars(self, h5):
+    def _extract_after_lp_vars(self, h5: H5File) -> None:
         rc = []
         values = []
         for var in self.inner.component_objects(Var):

@@ -296,7 +344,7 @@ class PyomoModel(AbstractModel):
         h5.put_array("lp_var_reduced_costs", np.array(rc))
         h5.put_array("lp_var_values", np.array(values))

-    def _extract_after_lp_constrs(self, h5):
+    def _extract_after_lp_constrs(self, h5: H5File) -> None:
         dual = []
         slacks = []
         for constr in self.inner.component_objects(pyomo.core.Constraint):

@@ -307,7 +355,7 @@ class PyomoModel(AbstractModel):
         h5.put_array("lp_constr_dual_values", np.array(dual))
         h5.put_array("lp_constr_slacks", np.array(slacks))

-    def _extract_after_mip_vars(self, h5):
+    def _extract_after_mip_vars(self, h5: H5File) -> None:
         values = []
         for var in self.inner.component_objects(Var):
             for idx in var:

@@ -315,15 +363,16 @@ class PyomoModel(AbstractModel):
                 values.append(v.value)
         h5.put_array("mip_var_values", np.array(values))

-    def _extract_after_mip_constrs(self, h5):
+    def _extract_after_mip_constrs(self, h5: H5File) -> None:
         slacks = []
         for constr in self.inner.component_objects(pyomo.core.Constraint):
             for idx in constr:
                 c = constr[idx]
-                slacks.append(abs(self.inner.slack[c]))
+                if c in self.inner.slack:
+                    slacks.append(abs(self.inner.slack[c]))
         h5.put_array("mip_constr_slacks", np.array(slacks))

-    def _parse_pyomo_expr(self, expr: Any):
+    def _parse_pyomo_expr(self, expr: Any) -> Tuple[Dict[str, float], float]:
         lhs = {}
         offset = 0.0
         if isinstance(expr, SumExpression):

@@ -332,7 +381,7 @@ class PyomoModel(AbstractModel):
                 lhs[term._args_[1].name] = float(term._args_[0])
             elif isinstance(term, _GeneralVarData):
                 lhs[term.name] = 1.0
-            elif isinstance(term, Number):
+            elif isinstance(term, float):
                 offset += term
             else:
                 raise Exception(f"Unknown term type: {term.__class__.__name__}")

@@ -342,7 +391,7 @@ class PyomoModel(AbstractModel):
             raise Exception(f"Unknown expression type: {expr.__class__.__name__}")
         return lhs, offset

-    def _gap(self, zp, zd, tol=1e-6):
+    def _gap(self, zp: float, zd: float, tol: float = 1e-6) -> float:
         # Reference: https://www.gurobi.com/documentation/9.5/refman/mipgap2.html
         if abs(zp) < tol:
             if abs(zd) < tol:

@@ -352,7 +401,7 @@ class PyomoModel(AbstractModel):
         else:
             return abs(zp - zd) / abs(zp)

-    def _get_sense(self):
+    def _get_sense(self) -> str:
         for obj in self.inner.component_objects(Objective):
             sense = obj.sense
             if sense == pyomo.core.kernel.objective.minimize:

@@ -361,6 +410,7 @@ class PyomoModel(AbstractModel):
                 return "max"
             else:
                 raise Exception(f"Unknown sense: ${sense}")
+        raise Exception(f"No objective")

     def write(self, filename: str) -> None:
         self.inner.write(filename, io_options={"symbolic_solver_labels": True})
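The `_gap` method above follows the referenced Gurobi MIPGap definition, `|zp - zd| / |zp|` for a primal bound `zp` and dual bound `zd`. A standalone restatement — note that the zero-primal branches fall between the hunks above, so the return values for that case here are an assumption based on the referenced definition (gap 0 when both bounds are zero, infinite otherwise):

```python
def mip_gap(zp: float, zd: float, tol: float = 1e-6) -> float:
    # Relative MIP gap between primal bound zp and dual bound zd.
    if abs(zp) < tol:
        if abs(zd) < tol:
            return 0.0       # both bounds at zero: gap closed (assumed branch)
        return float("inf")  # zero primal, nonzero dual (assumed branch)
    return abs(zp - zd) / abs(zp)
```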
14 setup.py

@@ -6,7 +6,7 @@ from setuptools import setup, find_namespace_packages

 setup(
     name="miplearn",
-    version="0.3.0.dev1",
+    version="0.4.3",
     author="Alinson S. Xavier",
     author_email="axavier@anl.gov",
     description="Extensible Framework for Learning-Enhanced Mixed-Integer Optimization",

@@ -15,7 +15,7 @@ setup(
     python_requires=">=3.9",
     install_requires=[
         "Jinja2<3.1",
-        "gurobipy>=10,<11",
+        "gurobipy>=12,<13",
         "h5py>=3,<4",
         "networkx>=2,<3",
         "numpy>=1,<2",

@@ -30,15 +30,19 @@ setup(
         "dev": [
             "Sphinx>=3,<4",
             "black==22.6.0",
-            "mypy==0.971",
+            "mypy==1.8",
             "myst-parser==0.14.0",
+            "nbsphinx>=0.9,<0.10",
             "pyflakes==2.5.0",
             "pytest>=7,<8",
             "sphinx-book-theme==0.1.0",
+            "sphinxcontrib-applehelp==1.0.4",
+            "sphinxcontrib-devhelp==1.0.2",
+            "sphinxcontrib-htmlhelp==2.0.1",
+            "sphinxcontrib-serializinghtml==1.1.5",
+            "sphinxcontrib-qthelp==1.0.3",
             "sphinx-multitoc-numbering>=0.1,<0.2",
-            "twine>=4,<5"
+            "twine>=6,<7",
         ]
     },

 )
0 tests/components/cuts/__init__.py (new file)
75 tests/components/cuts/test_mem.py (new file)

@@ -0,0 +1,75 @@
+# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
+# Copyright (C) 2020-2023, UChicago Argonne, LLC. All rights reserved.
+# Released under the modified BSD license. See COPYING.md for more details.
+
+from typing import Any, List, Dict
+from unittest.mock import Mock
+
+from miplearn.components.cuts.mem import MemorizingCutsComponent
+from miplearn.extractors.abstract import FeaturesExtractor
+from miplearn.problems.stab import build_stab_model_gurobipy, build_stab_model_pyomo
+from miplearn.solvers.learning import LearningSolver
+from sklearn.dummy import DummyClassifier
+from sklearn.neighbors import KNeighborsClassifier
+from typing import Callable
+
+
+def test_mem_component_gp(
+    stab_gp_h5: List[str],
+    stab_pyo_h5: List[str],
+    default_extractor: FeaturesExtractor,
+) -> None:
+    for h5 in [stab_pyo_h5, stab_gp_h5]:
+        clf = Mock(wraps=DummyClassifier())
+        comp = MemorizingCutsComponent(clf=clf, extractor=default_extractor)
+        comp.fit(h5)
+
+        # Should call fit method with correct arguments
+        clf.fit.assert_called()
+        x, y = clf.fit.call_args.args
+        assert x.shape == (3, 50)
+        assert y.shape == (3, 412)
+        y = y.tolist()
+        assert y[0][40:50] == [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
+        assert y[1][40:50] == [1, 1, 0, 1, 1, 1, 1, 1, 1, 1]
+        assert y[2][40:50] == [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
+
+        # Should store violations
+        assert comp.constrs_ is not None
+        assert comp.n_features_ == 50
+        assert comp.n_targets_ == 412
+        assert len(comp.constrs_) == 412
+
+        # Call before-mip
+        stats: Dict[str, Any] = {}
+        model = Mock()
+        comp.before_mip(h5[0], model, stats)
+
+        # Should call predict with correct args
+        clf.predict.assert_called()
+        (x_test,) = clf.predict.call_args.args
+        assert x_test.shape == (1, 50)
+
+        # Should call set_cuts
+        model.set_cuts.assert_called()
+        (cuts_aot_,) = model.set_cuts.call_args.args
+        assert cuts_aot_ is not None
+        assert len(cuts_aot_) == 256
+
+
+def test_usage_stab(
+    stab_gp_h5: List[str],
+    stab_pyo_h5: List[str],
+    default_extractor: FeaturesExtractor,
+) -> None:
+    for h5, build_model in [
+        (stab_pyo_h5, build_stab_model_pyomo),
+        (stab_gp_h5, build_stab_model_gurobipy),
+    ]:
+        data_filenames = [f.replace(".h5", ".pkl.gz") for f in h5]
+        clf = KNeighborsClassifier(n_neighbors=1)
+        comp = MemorizingCutsComponent(clf=clf, extractor=default_extractor)
+        solver = LearningSolver(components=[comp])
+        solver.fit(data_filenames)
+        model, stats = solver.optimize(data_filenames[0], build_model)  # type: ignore
+        assert stats["Cuts: AOT"] > 0
0 tests/components/lazy/__init__.py (new file)
69 tests/components/lazy/test_mem.py (new file)

@@ -0,0 +1,69 @@
+# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
+# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
+# Released under the modified BSD license. See COPYING.md for more details.
+
+from typing import List, Dict, Any
+from unittest.mock import Mock
+
+from sklearn.dummy import DummyClassifier
+from sklearn.neighbors import KNeighborsClassifier
+
+from miplearn.components.lazy.mem import MemorizingLazyComponent
+from miplearn.extractors.abstract import FeaturesExtractor
+from miplearn.problems.tsp import build_tsp_model_gurobipy, build_tsp_model_pyomo
+from miplearn.solvers.learning import LearningSolver
+
+
+def test_mem_component(
+    tsp_gp_h5: List[str],
+    tsp_pyo_h5: List[str],
+    default_extractor: FeaturesExtractor,
+) -> None:
+    for h5 in [tsp_gp_h5, tsp_pyo_h5]:
+        clf = Mock(wraps=DummyClassifier())
+        comp = MemorizingLazyComponent(clf=clf, extractor=default_extractor)
+        comp.fit(tsp_gp_h5)
+
+        # Should call fit method with correct arguments
+        clf.fit.assert_called()
+        x, y = clf.fit.call_args.args
+        assert x.shape == (3, 190)
+        assert y.tolist() == [
+            [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
+            [1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0],
+            [1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1],
+        ]
+
+        # Should store violations
+        assert comp.constrs_ is not None
+        assert comp.n_features_ == 190
+        assert comp.n_targets_ == 20
+        assert len(comp.constrs_) == 20
+
+        # Call before-mip
+        stats: Dict[str, Any] = {}
+        model = Mock()
+        comp.before_mip(tsp_gp_h5[0], model, stats)
+
+        # Should call predict with correct args
+        clf.predict.assert_called()
+        (x_test,) = clf.predict.call_args.args
+        assert x_test.shape == (1, 190)
+
+
+def test_usage_tsp(
+    tsp_gp_h5: List[str],
+    tsp_pyo_h5: List[str],
+    default_extractor: FeaturesExtractor,
+) -> None:
+    for h5, build_model in [
+        (tsp_pyo_h5, build_tsp_model_pyomo),
+        (tsp_gp_h5, build_tsp_model_gurobipy),
+    ]:
+        data_filenames = [f.replace(".h5", ".pkl.gz") for f in h5]
+        clf = KNeighborsClassifier(n_neighbors=1)
+        comp = MemorizingLazyComponent(clf=clf, extractor=default_extractor)
+        solver = LearningSolver(components=[comp])
+        solver.fit(data_filenames)
+        model, stats = solver.optimize(data_filenames[0], build_model)  # type: ignore
+        assert stats["Lazy Constraints: AOT"] > 0
@@ -20,7 +20,8 @@ logger = logging.getLogger(__name__)


 def test_mem_component(
-    multiknapsack_h5: List[str], default_extractor: FeaturesExtractor
+    multiknapsack_h5: List[str],
+    default_extractor: FeaturesExtractor,
 ) -> None:
     # Create mock classifier
     clf = Mock(wraps=DummyClassifier())
```diff
@@ -1,20 +1,69 @@
 # MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
 # Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
 # Released under the modified BSD license. See COPYING.md for more details.
 
+import os
+import shutil
+import tempfile
 from glob import glob
-from os.path import dirname
-from typing import List
+from os.path import dirname, basename, isfile
+from tempfile import NamedTemporaryFile
+from typing import List, Any
 
 import pytest
 
+from miplearn.extractors.abstract import FeaturesExtractor
 from miplearn.extractors.fields import H5FieldsExtractor
 
 
+def _h5_fixture(pattern: str, request: Any) -> List[str]:
+    """
+    Create a temporary copy of the provided .h5 files, along with the companion
+    .pkl.gz files, and return the paths to the copies. Also register a finalizer,
+    so that the temporary folder is removed after the tests.
+    """
+    filenames = glob(f"{dirname(__file__)}/fixtures/{pattern}")
+    tmpdir = tempfile.mkdtemp()
+
+    def cleanup() -> None:
+        shutil.rmtree(tmpdir)
+
+    request.addfinalizer(cleanup)
+
+    for f in filenames:
+        fbase, _ = os.path.splitext(f)
+        for ext in [".h5", ".pkl.gz"]:
+            dest = os.path.join(tmpdir, f"{basename(fbase)}{ext}")
+            shutil.copy(f"{fbase}{ext}", dest)
+            assert isfile(dest)
+    return sorted(glob(f"{tmpdir}/*.h5"))
+
+
 @pytest.fixture()
-def multiknapsack_h5() -> List[str]:
-    return sorted(glob(f"{dirname(__file__)}/fixtures/multiknapsack*.h5"))
+def multiknapsack_h5(request: Any) -> List[str]:
+    return _h5_fixture("multiknapsack*.h5", request)
+
+
+@pytest.fixture()
+def tsp_gp_h5(request: Any) -> List[str]:
+    return _h5_fixture("tsp-gp*.h5", request)
+
+
+@pytest.fixture()
+def tsp_pyo_h5(request: Any) -> List[str]:
+    return _h5_fixture("tsp-pyo*.h5", request)
+
+
+@pytest.fixture()
+def stab_gp_h5(request: Any) -> List[str]:
+    return _h5_fixture("stab-gp*.h5", request)
+
+
+@pytest.fixture()
+def stab_pyo_h5(request: Any) -> List[str]:
+    return _h5_fixture("stab-pyo*.h5", request)
+
+
+@pytest.fixture()
```
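The `_h5_fixture` helper above follows a common pytest pattern: copy fixture files into a throwaway directory, hand the copies to the test, and remove the directory in a `request.addfinalizer` callback, so tests can mutate the copies without touching the committed fixtures. Stripped of pytest, the core copy step looks roughly like this (`copy_to_tmpdir` is illustrative, not part of MIPLearn):

```python
import shutil
import tempfile
from glob import glob
from os.path import basename, join


def copy_to_tmpdir(filenames):
    """Copy files into a fresh temporary directory; caller removes it when done."""
    tmpdir = tempfile.mkdtemp()
    for f in filenames:
        shutil.copy(f, join(tmpdir, basename(f)))
    return tmpdir, sorted(glob(f"{tmpdir}/*"))


# Usage: the copies can be modified freely; the originals stay pristine.
src = tempfile.mkdtemp()
open(join(src, "a.h5"), "w").close()
tmpdir, copies = copy_to_tmpdir([join(src, "a.h5")])
assert [basename(c) for c in copies] == ["a.h5"]
shutil.rmtree(tmpdir)
shutil.rmtree(src)
```

In the real fixture, cleanup happens automatically because `request.addfinalizer` runs after the test that requested the fixture finishes.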
tests/fixtures/gen_stab.py (vendored, new file, 44 lines):
```python
from os.path import dirname

import numpy as np
from scipy.stats import uniform, randint

from miplearn.collectors.basic import BasicCollector
from miplearn.io import write_pkl_gz
from miplearn.problems.stab import (
    MaxWeightStableSetGenerator,
    build_stab_model_gurobipy,
    build_stab_model_pyomo,
)

np.random.seed(42)
gen = MaxWeightStableSetGenerator(
    w=uniform(10.0, scale=1.0),
    n=randint(low=50, high=51),
    p=uniform(loc=0.5, scale=0.0),
    fix_graph=True,
)
data = gen.generate(3)

params = {"seed": 42, "threads": 1}

# Gurobipy
data_filenames = write_pkl_gz(data, dirname(__file__), prefix="stab-gp-n50-")
collector = BasicCollector()
collector.collect(
    data_filenames,
    lambda data: build_stab_model_gurobipy(data, params=params),
    progress=True,
    verbose=True,
)

# Pyomo
data_filenames = write_pkl_gz(data, dirname(__file__), prefix="stab-pyo-n50-")
collector = BasicCollector()
collector.collect(
    data_filenames,
    lambda model: build_stab_model_pyomo(model, params=params),
    progress=True,
    verbose=True,
)
```
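Note how the generator pins the instance size: with `scipy.stats`, `randint(low=50, high=51)` is a degenerate distribution (the upper bound is exclusive, so every draw is 50), and `uniform(loc, scale)` samples from the interval `[loc, loc + scale]`. A quick check of both conventions:

```python
import numpy as np
from scipy.stats import randint, uniform

np.random.seed(42)
n = randint(low=50, high=51)      # high is exclusive: every draw is 50
w = uniform(loc=10.0, scale=1.0)  # weights sampled from [10.0, 11.0]

assert n.rvs() == 50
samples = w.rvs(size=5)
assert ((samples >= 10.0) & (samples <= 11.0)).all()
```

The same trick appears below in gen_tsp.py, where `randint(low=20, high=21)` fixes the number of cities at 20.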
tests/fixtures/gen_tsp.py (vendored, new file, 46 lines):
```python
from os.path import dirname

import numpy as np
from scipy.stats import uniform, randint

from miplearn.collectors.basic import BasicCollector
from miplearn.io import write_pkl_gz
from miplearn.problems.tsp import (
    TravelingSalesmanGenerator,
    build_tsp_model_gurobipy,
    build_tsp_model_pyomo,
)

np.random.seed(42)
gen = TravelingSalesmanGenerator(
    x=uniform(loc=0.0, scale=1000.0),
    y=uniform(loc=0.0, scale=1000.0),
    n=randint(low=20, high=21),
    gamma=uniform(loc=1.0, scale=0.25),
    fix_cities=True,
    round=True,
)

data = gen.generate(3)

params = {"seed": 42, "threads": 1}

# Gurobipy
data_filenames = write_pkl_gz(data, dirname(__file__), prefix="tsp-gp-n20-")
collector = BasicCollector()
collector.collect(
    data_filenames,
    lambda d: build_tsp_model_gurobipy(d, params=params),
    progress=True,
    verbose=True,
)

# Pyomo
data_filenames = write_pkl_gz(data, dirname(__file__), prefix="tsp-pyo-n20-")
collector = BasicCollector()
collector.collect(
    data_filenames,
    lambda d: build_tsp_model_pyomo(d, params=params),
    progress=True,
    verbose=True,
)
```
New binary fixture files (vendored; contents not shown in the diff):

- tests/fixtures/multiknapsack-n100-m4-0000{0,1,2}.{h5,mps.gz,pkl.gz}
- tests/fixtures/stab-gp-n50-0000{0,1,2}.{h5,mps.gz,pkl.gz}
- tests/fixtures/stab-pyo-n50-0000{0,1,2}.{h5,mps.gz,pkl.gz}
- tests/fixtures/tsp-gp-n20-0000{0,1,2}.{h5,mps.gz,pkl.gz}
- tests/fixtures/tsp-pyo-n20-0000{0,1}.{h5,mps.gz,pkl.gz}

Some files were not shown because too many files have changed in this diff.