parent
9bd64c885a
commit
22c1e0d269
@ -1,58 +0,0 @@
|
||||
```{sectnum}
|
||||
---
|
||||
start: 4
|
||||
depth: 2
|
||||
suffix: .
|
||||
---
|
||||
```
|
||||
|
||||
# About
|
||||
|
||||
## Authors
|
||||
|
||||
* **Alinson S. Xavier,** Argonne National Laboratory <<axavier@anl.gov>>
|
||||
* **Feng Qiu,** Argonne National Laboratory <<fqiu@anl.gov>>
|
||||
|
||||
## Acknowledgments
|
||||
|
||||
* Based upon work supported by **Laboratory Directed Research and Development** (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-06CH11357, and the **U.S. Department of Energy Advanced Grid Modeling Program** under Grant DE-OE0000875.
|
||||
|
||||
## References
|
||||
|
||||
|
||||
If you use MIPLearn in your research, or the included problem generators, we kindly request that you cite the package as follows:
|
||||
|
||||
* **Alinson S. Xavier, Feng Qiu.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization*. Zenodo (2020). DOI: [10.5281/zenodo.4287567](https://doi.org/10.5281/zenodo.4287567)
|
||||
|
||||
If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed:
|
||||
|
||||
* **Alinson S. Xavier, Feng Qiu, Shabbir Ahmed.** *Learning to Solve Large-Scale Unit Commitment Problems.* INFORMS Journal on Computing (2020). DOI: [10.1287/ijoc.2020.0976](https://doi.org/10.1287/ijoc.2020.0976)
|
||||
|
||||
## License
|
||||
|
||||
```text
|
||||
MIPLearn, an extensible framework for Learning-Enhanced Mixed-Integer Optimization
|
||||
Copyright © 2020, UChicago Argonne, LLC. All Rights Reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without modification, are permitted
|
||||
provided that the following conditions are met:
|
||||
|
||||
1. Redistributions of source code must retain the above copyright notice, this list of
|
||||
conditions and the following disclaimer.
|
||||
2. Redistributions in binary form must reproduce the above copyright notice, this list of
|
||||
conditions and the following disclaimer in the documentation and/or other materials provided
|
||||
with the distribution.
|
||||
3. Neither the name of the copyright holder nor the names of its contributors may be used to
|
||||
endorse or promote products derived from this software without specific prior written
|
||||
permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR
|
||||
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
|
||||
AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
|
||||
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
|
||||
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
|
||||
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
|
||||
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
|
||||
POSSIBILITY OF SUCH DAMAGE.
|
||||
```
|
@ -0,0 +1,29 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "792bbfa2",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Facility Location\n",
|
||||
"\n",
|
||||
"TODO"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,43 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "423ee254",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Preliminaries\n",
|
||||
"\n",
|
||||
"## Benchmark challenges\n",
|
||||
"\n",
|
||||
"When evaluating the performance of a conventional MIP solver, *benchmark sets*, such as MIPLIB and TSPLIB, are typically used. The performance of newly proposed solvers or solution techniques are typically measured as the average (or total) running time the solver takes to solve the entire benchmark set. For Learning-Enhanced MIP solvers, it is also necessary to specify what instances should the solver be trained on (the *training instances*) before solving the actual set of instances we are interested in (the *test instances*). If the training instances are very similar to the test instances, we would expect a Learning-Enhanced Solver to present stronger perfomance benefits.\n",
|
||||
"\n",
|
||||
"In MIPLearn, each optimization problem comes with a set of **benchmark challenges**, which specify how should the training and test instances be generated. The first challenges are typically easier, in the sense that training and test instances are very similar. Later challenges gradually make the sets more distinct, and therefore harder to learn from.\n",
|
||||
"\n",
|
||||
"## Baseline results\n",
|
||||
"\n",
|
||||
"To illustrate the performance of `LearningSolver`, and to set a baseline for newly proposed techniques, we present in this page, for each benchmark challenge, a small set of computational results measuring the solution speed of the solver and the solution quality with default parameters. For more detailed computational studies, see [references](index.md#references). We compare three solvers:\n",
|
||||
"\n",
|
||||
"* **baseline:** Gurobi 9.1 with default settings (a conventional state-of-the-art MIP solver)\n",
|
||||
"* **ml-exact:** `LearningSolver` with default settings, using Gurobi 9.0 as internal MIP solver\n",
|
||||
"* **ml-heuristic:** Same as above, but with `mode=\"heuristic\"`\n",
|
||||
"\n",
|
||||
"All experiments presented here were performed on a Linux server (Ubuntu Linux 18.04 LTS) with Intel Xeon Gold 6230s (2 processors, 40 cores, 80 threads) and 256 GB RAM (DDR4, 2933 MHz). All solvers were restricted to use 4 threads, with no time limits, and 10 instances were solved simultaneously at a time."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,62 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "23083bd9",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Maximum Weight Stable Set\n",
|
||||
"\n",
|
||||
"## Problem definition\n",
|
||||
"\n",
|
||||
"Given a simple undirected graph $G=(V,E)$ and weights $w \\in \\mathbb{R}^V$, the problem is to find a stable set $S \\subseteq V$ that maximizes $ \\sum_{v \\in V} w_v$. We recall that a subset $S \\subseteq V$ is a *stable set* if no two vertices of $S$ are adjacent. This is one of Karp's 21 NP-complete problems.\n",
|
||||
"\n",
|
||||
"## Random instance generator\n",
|
||||
"\n",
|
||||
"The class `MaxWeightStableSetGenerator` can be used to generate random instances of this problem, with user-specified probability distributions. When the constructor parameter `fix_graph=True` is provided, one random Erdős-Rényi graph $G_{n,p}$ is generated during the constructor, where $n$ and $p$ are sampled from user-provided probability distributions `n` and `p`. To generate each instance, the generator independently samples each $w_v$ from the user-provided probability distribution `w`. When `fix_graph=False`, a new random graph is generated for each instance, while the remaining parameters are sampled in the same way.\n",
|
||||
"\n",
|
||||
"## Challenge A\n",
|
||||
"\n",
|
||||
"* Fixed random Erdős-Rényi graph $G_{n,p}$ with $n=200$ and $p=5\\%$\n",
|
||||
"* Random vertex weights $w_v \\sim U(100, 150)$\n",
|
||||
"* 500 training instances, 50 test instances"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "207c7846",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"MaxWeightStableSetGenerator(\n",
|
||||
" w=uniform(loc=100., scale=50.),\n",
|
||||
" n=randint(low=200, high=201),\n",
|
||||
" p=uniform(loc=0.05, scale=0.0),\n",
|
||||
" fix_graph=True,\n",
|
||||
")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "miplearn",
|
||||
"language": "python",
|
||||
"name": "miplearn"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.8.10"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,88 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2f39414b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Traveling Salesman\n",
|
||||
"\n",
|
||||
"### Problem definition\n",
|
||||
"\n",
|
||||
"Given a list of cities and the distance between each pair of cities, the problem asks for the\n",
|
||||
"shortest route starting at the first city, visiting each other city exactly once, then returning\n",
|
||||
"to the first city. This problem is a generalization of the Hamiltonian path problem, one of Karp's\n",
|
||||
"21 NP-complete problems.\n",
|
||||
"\n",
|
||||
"### Random problem generator\n",
|
||||
"\n",
|
||||
"The class `TravelingSalesmanGenerator` can be used to generate random instances of this\n",
|
||||
"problem. Initially, the generator creates $n$ cities $(x_1,y_1),\\ldots,(x_n,y_n) \\in \\mathbb{R}^2$,\n",
|
||||
"where $n, x_i$ and $y_i$ are sampled independently from the provided probability distributions `n`,\n",
|
||||
"`x` and `y`. For each pair of cities $(i,j)$, the distance $d_{i,j}$ between them is set to:\n",
|
||||
"$$\n",
|
||||
" d_{i,j} = \\gamma_{i,j} \\sqrt{(x_i-x_j)^2 + (y_i - y_j)^2}\n",
|
||||
"$$\n",
|
||||
"where $\\gamma_{i,j}$ is sampled from the distribution `gamma`.\n",
|
||||
"\n",
|
||||
"If `fix_cities=True` is provided, the list of cities is kept the same for all generated instances.\n",
|
||||
"The $gamma$ values, and therefore also the distances, are still different.\n",
|
||||
"\n",
|
||||
"By default, all distances $d_{i,j}$ are rounded to the nearest integer. If `round=False`\n",
|
||||
"is provided, this rounding will be disabled.\n",
|
||||
"\n",
|
||||
"### Challenge A\n",
|
||||
"\n",
|
||||
"* Fixed list of 350 cities in the $[0, 1000]^2$ square\n",
|
||||
"* $\\gamma_{i,j} \\sim U(0.95, 1.05)$\n",
|
||||
"* 500 training instances, 50 test instances"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "6b2c4ff9",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"TravelingSalesmanGenerator(\n",
|
||||
" x=uniform(loc=0.0, scale=1000.0),\n",
|
||||
" y=uniform(loc=0.0, scale=1000.0),\n",
|
||||
" n=randint(low=350, high=351),\n",
|
||||
" gamma=uniform(loc=0.95, scale=0.1),\n",
|
||||
" fix_cities=True,\n",
|
||||
" round=True,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "cc125860",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "miplearn",
|
||||
"language": "python",
|
||||
"name": "miplearn"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.8.10"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,29 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ed7d4bdc",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Unit Commitment\n",
|
||||
"\n",
|
||||
"TODO"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -1,182 +0,0 @@
|
||||
```{sectnum}
|
||||
---
|
||||
start: 3
|
||||
depth: 2
|
||||
suffix: .
|
||||
---
|
||||
```
|
||||
|
||||
# Customization
|
||||
|
||||
## Customizing solver parameters
|
||||
|
||||
### Selecting the internal MIP solver
|
||||
|
||||
By default, `LearningSolver` uses [Gurobi](https://www.gurobi.com/) as its internal MIP solver, and expects models to be provided using the Pyomo modeling language. Supported solvers and modeling languages include:
|
||||
|
||||
* `GurobiPyomoSolver`: Gurobi with Pyomo (default).
|
||||
* `CplexPyomoSolver`: [IBM ILOG CPLEX](https://www.ibm.com/products/ilog-cplex-optimization-studio) with Pyomo.
|
||||
* `XpressPyomoSolver`: [FICO XPRESS Solver](https://www.fico.com/en/products/fico-xpress-solver) with Pyomo.
|
||||
* `GurobiSolver`: Gurobi without any modeling language.
|
||||
|
||||
To switch between solvers, provide the desired class using the `solver` argument:
|
||||
|
||||
```python
|
||||
from miplearn import LearningSolver, CplexPyomoSolver
|
||||
solver = LearningSolver(solver=CplexPyomoSolver)
|
||||
```
|
||||
|
||||
To configure a particular solver, use the `params` constructor argument, as shown below.
|
||||
|
||||
```python
|
||||
from miplearn import LearningSolver, GurobiPyomoSolver
|
||||
solver = LearningSolver(
|
||||
solver=lambda: GurobiPyomoSolver(
|
||||
params={
|
||||
"TimeLimit": 900,
|
||||
"MIPGap": 1e-3,
|
||||
"NodeLimit": 1000,
|
||||
}
|
||||
),
|
||||
)
|
||||
```
|
||||
|
||||
|
||||
## Customizing solver components
|
||||
|
||||
`LearningSolver` is composed of a number of individual machine-learning components, each targeting a different part of the solution process. Each component can be individually enabled, disabled or customized. The following components are enabled by default:
|
||||
|
||||
* `LazyConstraintComponent`: Predicts which lazy constraints to enforce initially.
|
||||
* `ObjectiveValueComponent`: Predicts the optimal value of the optimization problem, given the optimal solution to the LP relaxation.
|
||||
* `PrimalSolutionComponent`: Predicts optimal values for binary decision variables. In heuristic mode, this component fixes the variables to their predicted values. In exact mode, the predicted values are provided to the solver as a (partial) MIP start.
|
||||
|
||||
The following components are also available, but not enabled by default:
|
||||
|
||||
* `BranchPriorityComponent`: Predicts good branch priorities for decision variables.
|
||||
|
||||
### Selecting components
|
||||
|
||||
To create a `LearningSolver` with a specific set of components, the `components` constructor argument may be used, as the next example shows:
|
||||
|
||||
```python
|
||||
# Create a solver without any components
|
||||
solver1 = LearningSolver(components=[])
|
||||
|
||||
# Create a solver with only two components
|
||||
solver2 = LearningSolver(components=[
|
||||
LazyConstraintComponent(...),
|
||||
PrimalSolutionComponent(...),
|
||||
])
|
||||
```
|
||||
|
||||
### Adjusting component aggressiveness
|
||||
|
||||
The aggressiveness of classification components, such as `PrimalSolutionComponent` and `LazyConstraintComponent`, can be adjusted through the `threshold` constructor argument. Internally, these components ask the machine learning models how confident they are in each prediction they make, then automatically discard all predictions that have low confidence. The `threshold` argument specifies how confident the ML models must be for a prediction to be considered trustworthy. Lowering a component's threshold increases its aggressiveness, while raising it makes the component more conservative.
|
||||
|
||||
For example, if the ML model predicts that a certain binary variable will assume value `1.0` in the optimal solution with 75% confidence, and if the `PrimalSolutionComponent` is configured to discard all predictions with less than 90% confidence, then this variable will not be included in the predicted MIP start.
|
||||
|
||||
MIPLearn currently provides two types of thresholds:
|
||||
|
||||
* `MinProbabilityThreshold(p: List[float])` A threshold which indicates that a prediction is trustworthy if its probability of being correct, as computed by the machine learning model, is above a fixed value.
|
||||
* `MinPrecisionThreshold(p: List[float])` A dynamic threshold which automatically adjusts itself during training to ensure that the component achieves at least a given precision on the training data set. Note that increasing a component's precision may reduce its recall.
|
||||
|
||||
The example below shows how to build a `PrimalSolutionComponent` which fixes variables to zero with at least 80% precision, and to one with at least 95% precision. Other components are configured similarly.
|
||||
|
||||
```python
|
||||
from miplearn import PrimalSolutionComponent, MinPrecisionThreshold
|
||||
|
||||
PrimalSolutionComponent(
|
||||
mode="heuristic",
|
||||
threshold=MinPrecisionThreshold([0.80, 0.95]),
|
||||
)
|
||||
```
|
||||
|
||||
### Evaluating component performance
|
||||
|
||||
MIPLearn allows solver components to be modified, trained and evaluated in isolation. In the following example, we build and
|
||||
fit `PrimalSolutionComponent` outside the solver, then evaluate its performance.
|
||||
|
||||
```python
|
||||
from miplearn import PrimalSolutionComponent
|
||||
|
||||
# User-provided set of previously-solved instances
|
||||
train_instances = [...]
|
||||
|
||||
# Construct and fit component on a subset of training instances
|
||||
comp = PrimalSolutionComponent()
|
||||
comp.fit(train_instances[:100])
|
||||
|
||||
# Evaluate performance on an additional set of training instances
|
||||
ev = comp.evaluate(train_instances[100:150])
|
||||
```
|
||||
|
||||
The method `evaluate` returns a dictionary with performance evaluation statistics for each training instance provided,
|
||||
and for each type of prediction the component makes. To obtain a summary across all instances, pandas may be used, as below:
|
||||
|
||||
```python
|
||||
import pandas as pd
|
||||
pd.DataFrame(ev["Fix one"]).mean(axis=1)
|
||||
```
|
||||
```text
|
||||
Predicted positive 3.120000
|
||||
Predicted negative 196.880000
|
||||
Condition positive 62.500000
|
||||
Condition negative 137.500000
|
||||
True positive 3.060000
|
||||
True negative 137.440000
|
||||
False positive 0.060000
|
||||
False negative 59.440000
|
||||
Accuracy 0.702500
|
||||
F1 score 0.093050
|
||||
Recall 0.048921
|
||||
Precision 0.981667
|
||||
Predicted positive (%) 1.560000
|
||||
Predicted negative (%) 98.440000
|
||||
Condition positive (%) 31.250000
|
||||
Condition negative (%) 68.750000
|
||||
True positive (%) 1.530000
|
||||
True negative (%) 68.720000
|
||||
False positive (%) 0.030000
|
||||
False negative (%) 29.720000
|
||||
dtype: float64
|
||||
```
|
||||
|
||||
Regression components (such as `ObjectiveValueComponent`) can also be trained and evaluated similarly,
|
||||
as the next example shows:
|
||||
|
||||
```python
|
||||
from miplearn import ObjectiveValueComponent
|
||||
comp = ObjectiveValueComponent()
|
||||
comp.fit(train_instances[:100])
|
||||
ev = comp.evaluate(train_instances[100:150])
|
||||
|
||||
import pandas as pd
|
||||
pd.DataFrame(ev).mean(axis=1)
|
||||
```
|
||||
```text
|
||||
Mean squared error 7001.977827
|
||||
Explained variance 0.519790
|
||||
Max error 242.375804
|
||||
Mean absolute error 65.843924
|
||||
R2 0.517612
|
||||
Median absolute error 65.843924
|
||||
dtype: float64
|
||||
```
|
||||
|
||||
### Using customized ML classifiers and regressors
|
||||
|
||||
By default, given a training set of instances, MIPLearn trains a fixed set of ML classifiers and regressors, then selects the best one based on cross-validation performance. Alternatively, the user may specify which ML model a component should use through the `classifier` or `regressor` constructor parameters. Scikit-learn classifiers and regressors are currently supported. A future version of the package will add compatibility with Keras models.
|
||||
|
||||
The example below shows how to construct a `PrimalSolutionComponent` which internally uses scikit-learn's `KNeighborsClassifier`. Any other scikit-learn classifier or pipeline can be used. It needs to be wrapped in `ScikitLearnClassifier` to ensure that all the proper data transformations are applied.
|
||||
|
||||
```python
|
||||
from miplearn import PrimalSolutionComponent, ScikitLearnClassifier
|
||||
from sklearn.neighbors import KNeighborsClassifier
|
||||
|
||||
comp = PrimalSolutionComponent(
|
||||
classifier=ScikitLearnClassifier(
|
||||
KNeighborsClassifier(n_neighbors=5),
|
||||
),
|
||||
)
|
||||
comp.fit(train_instances)
|
||||
```
|
@ -0,0 +1,29 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ad9274ff",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Abstract component\n",
|
||||
"\n",
|
||||
"TODO"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,29 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "780b4172",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Training data collection\n",
|
||||
"\n",
|
||||
"TODO"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,29 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "5e3dd4c0",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Dynamic lazy constraints & user cuts\n",
|
||||
"\n",
|
||||
"TODO"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,29 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c6d0d8dc",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Primal solutions\n",
|
||||
"\n",
|
||||
"TODO"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,29 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ac509ea5",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Solver interfaces\n",
|
||||
"\n",
|
||||
"TODO"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,29 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ae350662",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Static lazy constraints\n",
|
||||
"\n",
|
||||
"TODO"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,700 @@
|
||||
# This file is machine-generated - editing it directly is not advised
|
||||
|
||||
[[ASL_jll]]
|
||||
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
|
||||
git-tree-sha1 = "370cafc70604b2522f2c7cf9915ebcd17b4cd38b"
|
||||
uuid = "ae81ac8f-d209-56e5-92de-9978fef736f9"
|
||||
version = "0.1.2+0"
|
||||
|
||||
[[ArgTools]]
|
||||
uuid = "0dad84c5-d112-42e6-8d28-ef12dabb789f"
|
||||
|
||||
[[Artifacts]]
|
||||
uuid = "56f22d72-fd6d-98f1-02f0-08ddc0907c33"
|
||||
|
||||
[[Base64]]
|
||||
uuid = "2a0f44e3-6c83-55bd-87e4-b1978d98bd5f"
|
||||
|
||||
[[BenchmarkTools]]
|
||||
deps = ["JSON", "Logging", "Printf", "Statistics", "UUIDs"]
|
||||
git-tree-sha1 = "42ac5e523869a84eac9669eaceed9e4aa0e1587b"
|
||||
uuid = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
|
||||
version = "1.1.4"
|
||||
|
||||
[[BinaryProvider]]
|
||||
deps = ["Libdl", "Logging", "SHA"]
|
||||
git-tree-sha1 = "ecdec412a9abc8db54c0efc5548c64dfce072058"
|
||||
uuid = "b99e7846-7c00-51b0-8f62-c81ae34c0232"
|
||||
version = "0.5.10"
|
||||
|
||||
[[Bzip2_jll]]
|
||||
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
|
||||
git-tree-sha1 = "19a35467a82e236ff51bc17a3a44b69ef35185a2"
|
||||
uuid = "6e34b625-4abd-537c-b88f-471c36dfa7a0"
|
||||
version = "1.0.8+0"
|
||||
|
||||
[[CEnum]]
|
||||
git-tree-sha1 = "215a9aa4a1f23fbd05b92769fdd62559488d70e9"
|
||||
uuid = "fa961155-64e5-5f13-b03f-caf6b980ea82"
|
||||
version = "0.4.1"
|
||||
|
||||
[[CSV]]
|
||||
deps = ["Dates", "Mmap", "Parsers", "PooledArrays", "SentinelArrays", "Tables", "Unicode"]
|
||||
git-tree-sha1 = "b83aa3f513be680454437a0eee21001607e5d983"
|
||||
uuid = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
|
||||
version = "0.8.5"
|
||||
|
||||
[[Calculus]]
|
||||
deps = ["LinearAlgebra"]
|
||||
git-tree-sha1 = "f641eb0a4f00c343bbc32346e1217b86f3ce9dad"
|
||||
uuid = "49dc2e85-a5d0-5ad3-a950-438e2897f1b9"
|
||||
version = "0.5.1"
|
||||
|
||||
[[Cbc]]
|
||||
deps = ["BinaryProvider", "CEnum", "Cbc_jll", "Libdl", "MathOptInterface", "SparseArrays"]
|
||||
git-tree-sha1 = "98e3692f90b26a340f32e17475c396c3de4180de"
|
||||
uuid = "9961bab8-2fa3-5c5a-9d89-47fab24efd76"
|
||||
version = "0.8.1"
|
||||
|
||||
[[Cbc_jll]]
|
||||
deps = ["ASL_jll", "Artifacts", "Cgl_jll", "Clp_jll", "CoinUtils_jll", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "OpenBLAS32_jll", "Osi_jll", "Pkg"]
|
||||
git-tree-sha1 = "7693a7ca006d25e0d0097a5eee18ce86368e00cd"
|
||||
uuid = "38041ee0-ae04-5750-a4d2-bb4d0d83d27d"
|
||||
version = "200.1000.500+1"
|
||||
|
||||
[[Cgl_jll]]
|
||||
deps = ["Artifacts", "Clp_jll", "CoinUtils_jll", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Osi_jll", "Pkg"]
|
||||
git-tree-sha1 = "b5557f48e0e11819bdbda0200dbfa536dd12d9d9"
|
||||
uuid = "3830e938-1dd0-5f3e-8b8e-b3ee43226782"
|
||||
version = "0.6000.200+0"
|
||||
|
||||
[[ChainRulesCore]]
|
||||
deps = ["Compat", "LinearAlgebra", "SparseArrays"]
|
||||
git-tree-sha1 = "bdc0937269321858ab2a4f288486cb258b9a0af7"
|
||||
uuid = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
|
||||
version = "1.3.0"
|
||||
|
||||
[[Clp]]
|
||||
deps = ["BinaryProvider", "CEnum", "Clp_jll", "Libdl", "MathOptInterface", "SparseArrays"]
|
||||
git-tree-sha1 = "3df260c4a5764858f312ec2a17f5925624099f3a"
|
||||
uuid = "e2554f3b-3117-50c0-817c-e040a3ddf72d"
|
||||
version = "0.8.4"
|
||||
|
||||
[[Clp_jll]]
|
||||
deps = ["Artifacts", "CoinUtils_jll", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "METIS_jll", "MUMPS_seq_jll", "OpenBLAS32_jll", "Osi_jll", "Pkg"]
|
||||
git-tree-sha1 = "5e4f9a825408dc6356e6bf1015e75d2b16250ec8"
|
||||
uuid = "06985876-5285-5a41-9fcb-8948a742cc53"
|
||||
version = "100.1700.600+0"
|
||||
|
||||
[[CodeTracking]]
|
||||
deps = ["InteractiveUtils", "UUIDs"]
|
||||
git-tree-sha1 = "9aa8a5ebb6b5bf469a7e0e2b5202cf6f8c291104"
|
||||
uuid = "da1fd8a2-8d9e-5ec2-8556-3022fb5608a2"
|
||||
version = "1.0.6"
|
||||
|
||||
[[CodecBzip2]]
|
||||
deps = ["Bzip2_jll", "Libdl", "TranscodingStreams"]
|
||||
git-tree-sha1 = "2e62a725210ce3c3c2e1a3080190e7ca491f18d7"
|
||||
uuid = "523fee87-0ab8-5b00-afb7-3ecf72e48cfd"
|
||||
version = "0.7.2"
|
||||
|
||||
[[CodecZlib]]
|
||||
deps = ["TranscodingStreams", "Zlib_jll"]
|
||||
git-tree-sha1 = "ded953804d019afa9a3f98981d99b33e3db7b6da"
|
||||
uuid = "944b1d66-785c-5afd-91f1-9de20f533193"
|
||||
version = "0.7.0"
|
||||
|
||||
[[CoinUtils_jll]]
|
||||
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "OpenBLAS32_jll", "Pkg"]
|
||||
git-tree-sha1 = "9b4a8b1087376c56189d02c3c1a48a0bba098ec2"
|
||||
uuid = "be027038-0da8-5614-b30d-e42594cb92df"
|
||||
version = "2.11.4+2"
|
||||
|
||||
[[CommonSubexpressions]]
|
||||
deps = ["MacroTools", "Test"]
|
||||
git-tree-sha1 = "7b8a93dba8af7e3b42fecabf646260105ac373f7"
|
||||
uuid = "bbf7d656-a473-5ed7-a52c-81e309532950"
|
||||
version = "0.3.0"
|
||||
|
||||
[[Compat]]
|
||||
deps = ["Base64", "Dates", "DelimitedFiles", "Distributed", "InteractiveUtils", "LibGit2", "Libdl", "LinearAlgebra", "Markdown", "Mmap", "Pkg", "Printf", "REPL", "Random", "SHA", "Serialization", "SharedArrays", "Sockets", "SparseArrays", "Statistics", "Test", "UUIDs", "Unicode"]
|
||||
git-tree-sha1 = "727e463cfebd0c7b999bbf3e9e7e16f254b94193"
|
||||
uuid = "34da2185-b29b-5c13-b0c7-acf172513d20"
|
||||
version = "3.34.0"
|
||||
|
||||
[[CompilerSupportLibraries_jll]]
|
||||
deps = ["Artifacts", "Libdl"]
|
||||
uuid = "e66e0078-7015-5450-92f7-15fbd957f2ae"
|
||||
|
||||
[[Conda]]
|
||||
deps = ["JSON", "VersionParsing"]
|
||||
git-tree-sha1 = "299304989a5e6473d985212c28928899c74e9421"
|
||||
uuid = "8f4d0f93-b110-5947-807f-2305c1781a2d"
|
||||
version = "1.5.2"
|
||||
|
||||
[[Crayons]]
|
||||
git-tree-sha1 = "3f71217b538d7aaee0b69ab47d9b7724ca8afa0d"
|
||||
uuid = "a8cc5b0e-0ffa-5ad4-8c14-923d3ee1735f"
|
||||
version = "4.0.4"
|
||||
|
||||
[[DataAPI]]
|
||||
git-tree-sha1 = "ee400abb2298bd13bfc3df1c412ed228061a2385"
|
||||
uuid = "9a962f9c-6df0-11e9-0e5d-c546b8b5ee8a"
|
||||
version = "1.7.0"
|
||||
|
||||
[[DataFrames]]
|
||||
deps = ["Compat", "DataAPI", "Future", "InvertedIndices", "IteratorInterfaceExtensions", "LinearAlgebra", "Markdown", "Missings", "PooledArrays", "PrettyTables", "Printf", "REPL", "Reexport", "SortingAlgorithms", "Statistics", "TableTraits", "Tables", "Unicode"]
|
||||
git-tree-sha1 = "d785f42445b63fc86caa08bb9a9351008be9b765"
|
||||
uuid = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
|
||||
version = "1.2.2"
|
||||
|
||||
[[DataStructures]]
|
||||
deps = ["Compat", "InteractiveUtils", "OrderedCollections"]
|
||||
git-tree-sha1 = "7d9d316f04214f7efdbb6398d545446e246eff02"
|
||||
uuid = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
|
||||
version = "0.18.10"
|
||||
|
||||
[[DataValueInterfaces]]
|
||||
git-tree-sha1 = "bfc1187b79289637fa0ef6d4436ebdfe6905cbd6"
|
||||
uuid = "e2d170a0-9d28-54be-80f0-106bbe20a464"
|
||||
version = "1.0.0"
|
||||
|
||||
[[Dates]]
|
||||
deps = ["Printf"]
|
||||
uuid = "ade2ca70-3891-5945-98fb-dc099432e06a"
|
||||
|
||||
[[DelimitedFiles]]
|
||||
deps = ["Mmap"]
|
||||
uuid = "8bb1440f-4735-579b-a4ab-409b98df4dab"
|
||||
|
||||
[[DiffResults]]
|
||||
deps = ["StaticArrays"]
|
||||
git-tree-sha1 = "c18e98cba888c6c25d1c3b048e4b3380ca956805"
|
||||
uuid = "163ba53b-c6d8-5494-b064-1a9d43ac40c5"
|
||||
version = "1.0.3"
|
||||
|
||||
[[DiffRules]]
|
||||
deps = ["NaNMath", "Random", "SpecialFunctions"]
|
||||
git-tree-sha1 = "3ed8fa7178a10d1cd0f1ca524f249ba6937490c0"
|
||||
uuid = "b552c78f-8df3-52c6-915a-8e097449b14b"
|
||||
version = "1.3.0"
|
||||
|
||||
[[Distributed]]
|
||||
deps = ["Random", "Serialization", "Sockets"]
|
||||
uuid = "8ba89e20-285c-5b6f-9357-94700520ee1b"
|
||||
|
||||
[[Distributions]]
|
||||
deps = ["ChainRulesCore", "FillArrays", "LinearAlgebra", "PDMats", "Printf", "QuadGK", "Random", "SparseArrays", "SpecialFunctions", "Statistics", "StatsBase", "StatsFuns"]
|
||||
git-tree-sha1 = "c2dbc7e0495c3f956e4615b78d03c7aa10091d0c"
|
||||
uuid = "31c24e10-a181-5473-b8eb-7969acd0382f"
|
||||
version = "0.25.15"
|
||||
|
||||
[[DocStringExtensions]]
|
||||
deps = ["LibGit2"]
|
||||
git-tree-sha1 = "a32185f5428d3986f47c2ab78b1f216d5e6cc96f"
|
||||
uuid = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae"
|
||||
version = "0.8.5"
|
||||
|
||||
[[Downloads]]
|
||||
deps = ["ArgTools", "LibCURL", "NetworkOptions"]
|
||||
uuid = "f43a241f-c20a-4ad4-852c-f6b1247861c6"
|
||||
|
||||
[[ExprTools]]
|
||||
git-tree-sha1 = "b7e3d17636b348f005f11040025ae8c6f645fe92"
|
||||
uuid = "e2ba6199-217a-4e67-a87a-7c52f15ade04"
|
||||
version = "0.1.6"
|
||||
|
||||
[[FileIO]]
|
||||
deps = ["Pkg", "Requires", "UUIDs"]
|
||||
git-tree-sha1 = "937c29268e405b6808d958a9ac41bfe1a31b08e7"
|
||||
uuid = "5789e2e9-d7fb-5bc7-8068-2c6fae9b9549"
|
||||
version = "1.11.0"
|
||||
|
||||
[[FileWatching]]
|
||||
uuid = "7b1f6079-737a-58dc-b8bc-7a2ca5c1b5ee"
|
||||
|
||||
[[FillArrays]]
|
||||
deps = ["LinearAlgebra", "Random", "SparseArrays", "Statistics"]
|
||||
git-tree-sha1 = "a3b7b041753094f3b17ffa9d2e2e07d8cace09cd"
|
||||
uuid = "1a297f60-69ca-5386-bcde-b61e274b549b"
|
||||
version = "0.12.3"
|
||||
|
||||
[[Formatting]]
|
||||
deps = ["Printf"]
|
||||
git-tree-sha1 = "8339d61043228fdd3eb658d86c926cb282ae72a8"
|
||||
uuid = "59287772-0a20-5a39-b81b-1366585eb4c0"
|
||||
version = "0.4.2"
|
||||
|
||||
[[ForwardDiff]]
|
||||
deps = ["CommonSubexpressions", "DiffResults", "DiffRules", "LinearAlgebra", "NaNMath", "Printf", "Random", "SpecialFunctions", "StaticArrays"]
|
||||
git-tree-sha1 = "b5e930ac60b613ef3406da6d4f42c35d8dc51419"
|
||||
uuid = "f6369f11-7733-5829-9624-2563aa707210"
|
||||
version = "0.10.19"
|
||||
|
||||
[[Future]]
|
||||
deps = ["Random"]
|
||||
uuid = "9fa8497b-333b-5362-9e8d-4d0656e87820"
|
||||
|
||||
[[GMP_jll]]
|
||||
deps = ["Artifacts", "Libdl"]
|
||||
uuid = "781609d7-10c4-51f6-84f2-b8444358ff6d"
|
||||
|
||||
[[Glob]]
|
||||
git-tree-sha1 = "4df9f7e06108728ebf00a0a11edee4b29a482bb2"
|
||||
uuid = "c27321d9-0574-5035-807b-f59d2c89b15c"
|
||||
version = "1.3.0"
|
||||
|
||||
[[HTTP]]
|
||||
deps = ["Base64", "Dates", "IniFile", "Logging", "MbedTLS", "NetworkOptions", "Sockets", "URIs"]
|
||||
git-tree-sha1 = "60ed5f1643927479f845b0135bb369b031b541fa"
|
||||
uuid = "cd3eb016-35fb-5094-929b-558a96fad6f3"
|
||||
version = "0.9.14"
|
||||
|
||||
[[IniFile]]
|
||||
deps = ["Test"]
|
||||
git-tree-sha1 = "098e4d2c533924c921f9f9847274f2ad89e018b8"
|
||||
uuid = "83e8ac13-25f8-5344-8a64-a9f2b223428f"
|
||||
version = "0.5.0"
|
||||
|
||||
[[InteractiveUtils]]
|
||||
deps = ["Markdown"]
|
||||
uuid = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
|
||||
|
||||
[[InvertedIndices]]
|
||||
deps = ["Test"]
|
||||
git-tree-sha1 = "15732c475062348b0165684ffe28e85ea8396afc"
|
||||
uuid = "41ab1584-1d38-5bbf-9106-f11c6c58b48f"
|
||||
version = "1.0.0"
|
||||
|
||||
[[Ipopt_jll]]
|
||||
deps = ["ASL_jll", "Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "MUMPS_seq_jll", "OpenBLAS32_jll", "Pkg"]
|
||||
git-tree-sha1 = "82124f27743f2802c23fcb05febc517d0b15d86e"
|
||||
uuid = "9cc047cb-c261-5740-88fc-0cf96f7bdcc7"
|
||||
version = "3.13.4+2"
|
||||
|
||||
[[IrrationalConstants]]
|
||||
git-tree-sha1 = "f76424439413893a832026ca355fe273e93bce94"
|
||||
uuid = "92d709cd-6900-40b7-9082-c6be49f344b6"
|
||||
version = "0.1.0"
|
||||
|
||||
[[IteratorInterfaceExtensions]]
|
||||
git-tree-sha1 = "a3f24677c21f5bbe9d2a714f95dcd58337fb2856"
|
||||
uuid = "82899510-4779-5014-852e-03e436cf321d"
|
||||
version = "1.0.0"
|
||||
|
||||
[[JLD2]]
|
||||
deps = ["DataStructures", "FileIO", "MacroTools", "Mmap", "Pkg", "Printf", "Reexport", "TranscodingStreams", "UUIDs"]
|
||||
git-tree-sha1 = "59ee430ac5dc87bc3eec833cc2a37853425750b4"
|
||||
uuid = "033835bb-8acc-5ee8-8aae-3f567f8a3819"
|
||||
version = "0.4.13"
|
||||
|
||||
[[JLLWrappers]]
|
||||
deps = ["Preferences"]
|
||||
git-tree-sha1 = "642a199af8b68253517b80bd3bfd17eb4e84df6e"
|
||||
uuid = "692b3bcd-3c85-4b1f-b108-f13ce0eb3210"
|
||||
version = "1.3.0"
|
||||
|
||||
[[JSON]]
|
||||
deps = ["Dates", "Mmap", "Parsers", "Unicode"]
|
||||
git-tree-sha1 = "8076680b162ada2a031f707ac7b4953e30667a37"
|
||||
uuid = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
|
||||
version = "0.21.2"
|
||||
|
||||
[[JSONSchema]]
|
||||
deps = ["HTTP", "JSON", "URIs"]
|
||||
git-tree-sha1 = "2f49f7f86762a0fbbeef84912265a1ae61c4ef80"
|
||||
uuid = "7d188eb4-7ad8-530c-ae41-71a32a6d4692"
|
||||
version = "0.3.4"
|
||||
|
||||
[[JuMP]]
|
||||
deps = ["Calculus", "DataStructures", "ForwardDiff", "JSON", "LinearAlgebra", "MathOptInterface", "MutableArithmetics", "NaNMath", "Printf", "Random", "SparseArrays", "SpecialFunctions", "Statistics"]
|
||||
git-tree-sha1 = "4f0a771949bbe24bf70c89e8032c107ebe03f6ba"
|
||||
uuid = "4076af6c-e467-56ae-b986-b466b2749572"
|
||||
version = "0.21.9"
|
||||
|
||||
[[JuliaInterpreter]]
|
||||
deps = ["CodeTracking", "InteractiveUtils", "Random", "UUIDs"]
|
||||
git-tree-sha1 = "e273807f38074f033d94207a201e6e827d8417db"
|
||||
uuid = "aa1ae85d-cabe-5617-a682-6adf51b2e16a"
|
||||
version = "0.8.21"
|
||||
|
||||
[[LibCURL]]
|
||||
deps = ["LibCURL_jll", "MozillaCACerts_jll"]
|
||||
uuid = "b27032c2-a3e7-50c8-80cd-2d36dbcbfd21"
|
||||
|
||||
[[LibCURL_jll]]
|
||||
deps = ["Artifacts", "LibSSH2_jll", "Libdl", "MbedTLS_jll", "Zlib_jll", "nghttp2_jll"]
|
||||
uuid = "deac9b47-8bc7-5906-a0fe-35ac56dc84c0"
|
||||
|
||||
[[LibGit2]]
|
||||
deps = ["Base64", "NetworkOptions", "Printf", "SHA"]
|
||||
uuid = "76f85450-5226-5b5a-8eaa-529ad045b433"
|
||||
|
||||
[[LibSSH2_jll]]
|
||||
deps = ["Artifacts", "Libdl", "MbedTLS_jll"]
|
||||
uuid = "29816b5a-b9ab-546f-933c-edad1886dfa8"
|
||||
|
||||
[[Libdl]]
|
||||
uuid = "8f399da3-3557-5675-b5ff-fb832c97cbdb"
|
||||
|
||||
[[LinearAlgebra]]
|
||||
deps = ["Libdl"]
|
||||
uuid = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
|
||||
|
||||
[[LogExpFunctions]]
|
||||
deps = ["DocStringExtensions", "IrrationalConstants", "LinearAlgebra"]
|
||||
git-tree-sha1 = "3d682c07e6dd250ed082f883dc88aee7996bf2cc"
|
||||
uuid = "2ab3a3ac-af41-5b50-aa03-7779005ae688"
|
||||
version = "0.3.0"
|
||||
|
||||
[[Logging]]
|
||||
uuid = "56ddb016-857b-54e1-b83d-db4d58db5568"
|
||||
|
||||
[[LoweredCodeUtils]]
|
||||
deps = ["JuliaInterpreter"]
|
||||
git-tree-sha1 = "491a883c4fef1103077a7f648961adbf9c8dd933"
|
||||
uuid = "6f1432cf-f94c-5a45-995e-cdbf5db27b0b"
|
||||
version = "2.1.2"
|
||||
|
||||
[[METIS_jll]]
|
||||
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
|
||||
git-tree-sha1 = "2dc1a9fc87e57e32b1fc186db78811157b30c118"
|
||||
uuid = "d00139f3-1899-568f-a2f0-47f597d42d70"
|
||||
version = "5.1.0+5"
|
||||
|
||||
[[MIPLearn]]
|
||||
deps = ["CSV", "Cbc", "Clp", "Conda", "DataFrames", "Distributed", "JLD2", "JSON", "JuMP", "Logging", "MathOptInterface", "PackageCompiler", "Printf", "PyCall", "SparseArrays", "TimerOutputs"]
|
||||
path = "/home/axavier/Packages/MIPLearn.jl/dev"
|
||||
uuid = "2b1277c3-b477-4c49-a15e-7ba350325c68"
|
||||
version = "0.2.0"
|
||||
|
||||
[[MUMPS_seq_jll]]
|
||||
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "METIS_jll", "OpenBLAS32_jll", "Pkg"]
|
||||
git-tree-sha1 = "1a11a84b2af5feb5a62a820574804056cdc59c39"
|
||||
uuid = "d7ed1dd3-d0ae-5e8e-bfb4-87a502085b8d"
|
||||
version = "5.2.1+4"
|
||||
|
||||
[[MacroTools]]
|
||||
deps = ["Markdown", "Random"]
|
||||
git-tree-sha1 = "0fb723cd8c45858c22169b2e42269e53271a6df7"
|
||||
uuid = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
|
||||
version = "0.5.7"
|
||||
|
||||
[[Markdown]]
|
||||
deps = ["Base64"]
|
||||
uuid = "d6f4376e-aef5-505a-96c1-9c027394607a"
|
||||
|
||||
[[MathOptInterface]]
|
||||
deps = ["BenchmarkTools", "CodecBzip2", "CodecZlib", "JSON", "JSONSchema", "LinearAlgebra", "MutableArithmetics", "OrderedCollections", "SparseArrays", "Test", "Unicode"]
|
||||
git-tree-sha1 = "575644e3c05b258250bb599e57cf73bbf1062901"
|
||||
uuid = "b8f27783-ece8-5eb3-8dc8-9495eed66fee"
|
||||
version = "0.9.22"
|
||||
|
||||
[[MbedTLS]]
|
||||
deps = ["Dates", "MbedTLS_jll", "Random", "Sockets"]
|
||||
git-tree-sha1 = "1c38e51c3d08ef2278062ebceade0e46cefc96fe"
|
||||
uuid = "739be429-bea8-5141-9913-cc70e7f3736d"
|
||||
version = "1.0.3"
|
||||
|
||||
[[MbedTLS_jll]]
|
||||
deps = ["Artifacts", "Libdl"]
|
||||
uuid = "c8ffd9c3-330d-5841-b78e-0817d7145fa1"
|
||||
|
||||
[[Missings]]
|
||||
deps = ["DataAPI"]
|
||||
git-tree-sha1 = "2ca267b08821e86c5ef4376cffed98a46c2cb205"
|
||||
uuid = "e1d29d7a-bbdc-5cf2-9ac0-f12de2c33e28"
|
||||
version = "1.0.1"
|
||||
|
||||
[[Mmap]]
|
||||
uuid = "a63ad114-7e13-5084-954f-fe012c677804"
|
||||
|
||||
[[MozillaCACerts_jll]]
|
||||
uuid = "14a3606d-f60d-562e-9121-12d972cd8159"
|
||||
|
||||
[[MutableArithmetics]]
|
||||
deps = ["LinearAlgebra", "SparseArrays", "Test"]
|
||||
git-tree-sha1 = "3927848ccebcc165952dc0d9ac9aa274a87bfe01"
|
||||
uuid = "d8a4904e-b15c-11e9-3269-09a3773c0cb0"
|
||||
version = "0.2.20"
|
||||
|
||||
[[NaNMath]]
|
||||
git-tree-sha1 = "bfe47e760d60b82b66b61d2d44128b62e3a369fb"
|
||||
uuid = "77ba4419-2d1f-58cd-9bb1-8ffee604a2e3"
|
||||
version = "0.3.5"
|
||||
|
||||
[[NetworkOptions]]
|
||||
uuid = "ca575930-c2e3-43a9-ace4-1e988b2c1908"
|
||||
|
||||
[[OpenBLAS32_jll]]
|
||||
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Pkg"]
|
||||
git-tree-sha1 = "ba4a8f683303c9082e84afba96f25af3c7fb2436"
|
||||
uuid = "656ef2d0-ae68-5445-9ca0-591084a874a2"
|
||||
version = "0.3.12+1"
|
||||
|
||||
[[OpenSpecFun_jll]]
|
||||
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Pkg"]
|
||||
git-tree-sha1 = "13652491f6856acfd2db29360e1bbcd4565d04f1"
|
||||
uuid = "efe28fd5-8261-553b-a9e1-b2916fc3738e"
|
||||
version = "0.5.5+0"
|
||||
|
||||
[[OrderedCollections]]
|
||||
git-tree-sha1 = "85f8e6578bf1f9ee0d11e7bb1b1456435479d47c"
|
||||
uuid = "bac558e1-5e72-5ebc-8fee-abe8a469f55d"
|
||||
version = "1.4.1"
|
||||
|
||||
[[Osi_jll]]
|
||||
deps = ["Artifacts", "CoinUtils_jll", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "OpenBLAS32_jll", "Pkg"]
|
||||
git-tree-sha1 = "6a9967c4394858f38b7fc49787b983ba3847e73d"
|
||||
uuid = "7da25872-d9ce-5375-a4d3-7a845f58efdd"
|
||||
version = "0.108.6+2"
|
||||
|
||||
[[PDMats]]
|
||||
deps = ["LinearAlgebra", "SparseArrays", "SuiteSparse"]
|
||||
git-tree-sha1 = "4dd403333bcf0909341cfe57ec115152f937d7d8"
|
||||
uuid = "90014a1f-27ba-587c-ab20-58faa44d9150"
|
||||
version = "0.11.1"
|
||||
|
||||
[[PackageCompiler]]
|
||||
deps = ["Libdl", "Pkg", "UUIDs"]
|
||||
git-tree-sha1 = "bb40ed7cb3aac2b4cdf42f898c26a58ab797ac62"
|
||||
uuid = "9b87118b-4619-50d2-8e1e-99f35a4d4d9d"
|
||||
version = "1.3.0"
|
||||
|
||||
[[Parsers]]
|
||||
deps = ["Dates"]
|
||||
git-tree-sha1 = "bfd7d8c7fd87f04543810d9cbd3995972236ba1b"
|
||||
uuid = "69de0a69-1ddd-5017-9359-2bf0b02dc9f0"
|
||||
version = "1.1.2"
|
||||
|
||||
[[Pkg]]
|
||||
deps = ["Artifacts", "Dates", "Downloads", "LibGit2", "Libdl", "Logging", "Markdown", "Printf", "REPL", "Random", "SHA", "Serialization", "TOML", "Tar", "UUIDs", "p7zip_jll"]
|
||||
uuid = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
|
||||
|
||||
[[PooledArrays]]
|
||||
deps = ["DataAPI", "Future"]
|
||||
git-tree-sha1 = "a193d6ad9c45ada72c14b731a318bedd3c2f00cf"
|
||||
uuid = "2dfb63ee-cc39-5dd5-95bd-886bf059d720"
|
||||
version = "1.3.0"
|
||||
|
||||
[[Preferences]]
|
||||
deps = ["TOML"]
|
||||
git-tree-sha1 = "00cfd92944ca9c760982747e9a1d0d5d86ab1e5a"
|
||||
uuid = "21216c6a-2e73-6563-6e65-726566657250"
|
||||
version = "1.2.2"
|
||||
|
||||
[[PrettyTables]]
|
||||
deps = ["Crayons", "Formatting", "Markdown", "Reexport", "Tables"]
|
||||
git-tree-sha1 = "0d1245a357cc61c8cd61934c07447aa569ff22e6"
|
||||
uuid = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
|
||||
version = "1.1.0"
|
||||
|
||||
[[Printf]]
|
||||
deps = ["Unicode"]
|
||||
uuid = "de0858da-6303-5e67-8744-51eddeeeb8d7"
|
||||
|
||||
[[ProgressBars]]
|
||||
deps = ["Printf"]
|
||||
git-tree-sha1 = "938525cc66a4058f6ed75b84acd13a00fbecea11"
|
||||
uuid = "49802e3a-d2f1-5c88-81d8-b72133a6f568"
|
||||
version = "1.4.0"
|
||||
|
||||
[[PyCall]]
|
||||
deps = ["Conda", "Dates", "Libdl", "LinearAlgebra", "MacroTools", "Serialization", "VersionParsing"]
|
||||
git-tree-sha1 = "169bb8ea6b1b143c5cf57df6d34d022a7b60c6db"
|
||||
uuid = "438e738f-606a-5dbb-bf0a-cddfbfd45ab0"
|
||||
version = "1.92.3"
|
||||
|
||||
[[QuadGK]]
|
||||
deps = ["DataStructures", "LinearAlgebra"]
|
||||
git-tree-sha1 = "12fbe86da16df6679be7521dfb39fbc861e1dc7b"
|
||||
uuid = "1fd47b50-473d-5c70-9696-f719f8f3bcdc"
|
||||
version = "2.4.1"
|
||||
|
||||
[[REPL]]
|
||||
deps = ["InteractiveUtils", "Markdown", "Sockets", "Unicode"]
|
||||
uuid = "3fa0cd96-eef1-5676-8a61-b3b8758bbffb"
|
||||
|
||||
[[Random]]
|
||||
deps = ["Serialization"]
|
||||
uuid = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
|
||||
|
||||
[[Reexport]]
|
||||
git-tree-sha1 = "45e428421666073eab6f2da5c9d310d99bb12f9b"
|
||||
uuid = "189a3867-3050-52da-a836-e630ba90ab69"
|
||||
version = "1.2.2"
|
||||
|
||||
[[Requires]]
|
||||
deps = ["UUIDs"]
|
||||
git-tree-sha1 = "4036a3bd08ac7e968e27c203d45f5fff15020621"
|
||||
uuid = "ae029012-a4dd-5104-9daa-d747884805df"
|
||||
version = "1.1.3"
|
||||
|
||||
[[Revise]]
|
||||
deps = ["CodeTracking", "Distributed", "FileWatching", "JuliaInterpreter", "LibGit2", "LoweredCodeUtils", "OrderedCollections", "Pkg", "REPL", "Requires", "UUIDs", "Unicode"]
|
||||
git-tree-sha1 = "1947d2d75463bd86d87eaba7265b0721598dd803"
|
||||
uuid = "295af30f-e4ad-537b-8983-00126c2a3abe"
|
||||
version = "3.1.19"
|
||||
|
||||
[[Rmath]]
|
||||
deps = ["Random", "Rmath_jll"]
|
||||
git-tree-sha1 = "bf3188feca147ce108c76ad82c2792c57abe7b1f"
|
||||
uuid = "79098fc4-a85e-5d69-aa6a-4863f24498fa"
|
||||
version = "0.7.0"
|
||||
|
||||
[[Rmath_jll]]
|
||||
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
|
||||
git-tree-sha1 = "68db32dff12bb6127bac73c209881191bf0efbb7"
|
||||
uuid = "f50d1b31-88e8-58de-be2c-1cc44531875f"
|
||||
version = "0.3.0+0"
|
||||
|
||||
[[SCIP]]
|
||||
deps = ["Ipopt_jll", "Libdl", "MathOptInterface", "SCIP_jll"]
|
||||
git-tree-sha1 = "6b799e6a23746633f94f4f10a9ac234f8b86f680"
|
||||
repo-rev = "7aa79aaa"
|
||||
repo-url = "https://github.com/scipopt/SCIP.jl.git"
|
||||
uuid = "82193955-e24f-5292-bf16-6f2c5261a85f"
|
||||
version = "0.9.8"
|
||||
|
||||
[[SCIP_jll]]
|
||||
deps = ["Artifacts", "CompilerSupportLibraries_jll", "GMP_jll", "Ipopt_jll", "JLLWrappers", "Libdl", "Pkg", "Zlib_jll", "bliss_jll"]
|
||||
git-tree-sha1 = "83d35a061885aa73491aa2f8db28310214bbd521"
|
||||
uuid = "e5ac4fe4-a920-5659-9bf8-f9f73e9e79ce"
|
||||
version = "0.1.3+0"
|
||||
|
||||
[[SHA]]
|
||||
uuid = "ea8e919c-243c-51af-8825-aaa63cd721ce"
|
||||
|
||||
[[SentinelArrays]]
|
||||
deps = ["Dates", "Random"]
|
||||
git-tree-sha1 = "54f37736d8934a12a200edea2f9206b03bdf3159"
|
||||
uuid = "91c51154-3ec4-41a3-a24f-3f23e20d615c"
|
||||
version = "1.3.7"
|
||||
|
||||
[[Serialization]]
|
||||
uuid = "9e88b42a-f829-5b0c-bbe9-9e923198166b"
|
||||
|
||||
[[SharedArrays]]
|
||||
deps = ["Distributed", "Mmap", "Random", "Serialization"]
|
||||
uuid = "1a1011a3-84de-559e-8e89-a11a2f7dc383"
|
||||
|
||||
[[Sockets]]
|
||||
uuid = "6462fe0b-24de-5631-8697-dd941f90decc"
|
||||
|
||||
[[SortingAlgorithms]]
|
||||
deps = ["DataStructures"]
|
||||
git-tree-sha1 = "b3363d7460f7d098ca0912c69b082f75625d7508"
|
||||
uuid = "a2af1166-a08f-5f64-846c-94a0d3cef48c"
|
||||
version = "1.0.1"
|
||||
|
||||
[[SparseArrays]]
|
||||
deps = ["LinearAlgebra", "Random"]
|
||||
uuid = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
|
||||
|
||||
[[SpecialFunctions]]
|
||||
deps = ["ChainRulesCore", "LogExpFunctions", "OpenSpecFun_jll"]
|
||||
git-tree-sha1 = "a322a9493e49c5f3a10b50df3aedaf1cdb3244b7"
|
||||
uuid = "276daf66-3868-5448-9aa4-cd146d93841b"
|
||||
version = "1.6.1"
|
||||
|
||||
[[StaticArrays]]
|
||||
deps = ["LinearAlgebra", "Random", "Statistics"]
|
||||
git-tree-sha1 = "3240808c6d463ac46f1c1cd7638375cd22abbccb"
|
||||
uuid = "90137ffa-7385-5640-81b9-e52037218182"
|
||||
version = "1.2.12"
|
||||
|
||||
[[Statistics]]
|
||||
deps = ["LinearAlgebra", "SparseArrays"]
|
||||
uuid = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
|
||||
|
||||
[[StatsAPI]]
|
||||
git-tree-sha1 = "1958272568dc176a1d881acb797beb909c785510"
|
||||
uuid = "82ae8749-77ed-4fe6-ae5f-f523153014b0"
|
||||
version = "1.0.0"
|
||||
|
||||
[[StatsBase]]
|
||||
deps = ["DataAPI", "DataStructures", "LinearAlgebra", "Missings", "Printf", "Random", "SortingAlgorithms", "SparseArrays", "Statistics", "StatsAPI"]
|
||||
git-tree-sha1 = "8cbbc098554648c84f79a463c9ff0fd277144b6c"
|
||||
uuid = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
|
||||
version = "0.33.10"
|
||||
|
||||
[[StatsFuns]]
|
||||
deps = ["ChainRulesCore", "IrrationalConstants", "LogExpFunctions", "Reexport", "Rmath", "SpecialFunctions"]
|
||||
git-tree-sha1 = "46d7ccc7104860c38b11966dd1f72ff042f382e4"
|
||||
uuid = "4c63d2b9-4356-54db-8cca-17b64c39e42c"
|
||||
version = "0.9.10"
|
||||
|
||||
[[SuiteSparse]]
|
||||
deps = ["Libdl", "LinearAlgebra", "Serialization", "SparseArrays"]
|
||||
uuid = "4607b0f0-06f3-5cda-b6b1-a6196a1729e9"
|
||||
|
||||
[[TOML]]
|
||||
deps = ["Dates"]
|
||||
uuid = "fa267f1f-6049-4f14-aa54-33bafae1ed76"
|
||||
|
||||
[[TableTraits]]
|
||||
deps = ["IteratorInterfaceExtensions"]
|
||||
git-tree-sha1 = "c06b2f539df1c6efa794486abfb6ed2022561a39"
|
||||
uuid = "3783bdb8-4a98-5b6b-af9a-565f29a5fe9c"
|
||||
version = "1.0.1"
|
||||
|
||||
[[Tables]]
|
||||
deps = ["DataAPI", "DataValueInterfaces", "IteratorInterfaceExtensions", "LinearAlgebra", "TableTraits", "Test"]
|
||||
git-tree-sha1 = "d0c690d37c73aeb5ca063056283fde5585a41710"
|
||||
uuid = "bd369af6-aec1-5ad0-b16a-f7cc5008161c"
|
||||
version = "1.5.0"
|
||||
|
||||
[[Tar]]
|
||||
deps = ["ArgTools", "SHA"]
|
||||
uuid = "a4e569a6-e804-4fa4-b0f3-eef7a1d5b13e"
|
||||
|
||||
[[Test]]
|
||||
deps = ["InteractiveUtils", "Logging", "Random", "Serialization"]
|
||||
uuid = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
|
||||
|
||||
[[TimerOutputs]]
|
||||
deps = ["ExprTools", "Printf"]
|
||||
git-tree-sha1 = "209a8326c4f955e2442c07b56029e88bb48299c7"
|
||||
uuid = "a759f4b9-e2f1-59dc-863e-4aeb61b1ea8f"
|
||||
version = "0.5.12"
|
||||
|
||||
[[TranscodingStreams]]
|
||||
deps = ["Random", "Test"]
|
||||
git-tree-sha1 = "216b95ea110b5972db65aa90f88d8d89dcb8851c"
|
||||
uuid = "3bb67fe8-82b1-5028-8e26-92a6c54297fa"
|
||||
version = "0.9.6"
|
||||
|
||||
[[URIs]]
|
||||
git-tree-sha1 = "97bbe755a53fe859669cd907f2d96aee8d2c1355"
|
||||
uuid = "5c2747f8-b7ea-4ff2-ba2e-563bfd36b1d4"
|
||||
version = "1.3.0"
|
||||
|
||||
[[UUIDs]]
|
||||
deps = ["Random", "SHA"]
|
||||
uuid = "cf7118a7-6976-5b1a-9a39-7adc72f591a4"
|
||||
|
||||
[[Unicode]]
|
||||
uuid = "4ec0a83e-493e-50e2-b9ac-8f72acf5a8f5"
|
||||
|
||||
[[VersionParsing]]
|
||||
git-tree-sha1 = "80229be1f670524750d905f8fc8148e5a8c4537f"
|
||||
uuid = "81def892-9a0e-5fdd-b105-ffc91e053289"
|
||||
version = "1.2.0"
|
||||
|
||||
[[Zlib_jll]]
|
||||
deps = ["Libdl"]
|
||||
uuid = "83775a58-1f1d-513f-b197-d71354ab007a"
|
||||
|
||||
[[bliss_jll]]
|
||||
deps = ["Artifacts", "GMP_jll", "JLLWrappers", "Libdl", "Pkg"]
|
||||
git-tree-sha1 = "efa0ae50a40cdf404e18ce375dfb764001f38b92"
|
||||
uuid = "508c9074-7a14-5c94-9582-3d4bc1871065"
|
||||
version = "0.73.0+1"
|
||||
|
||||
[[nghttp2_jll]]
|
||||
deps = ["Artifacts", "Libdl"]
|
||||
uuid = "8e850ede-7688-5339-a07c-302acd2aaf8d"
|
||||
|
||||
[[p7zip_jll]]
|
||||
deps = ["Artifacts", "Libdl"]
|
||||
uuid = "3f19e933-33d8-53b3-aaab-bd5110c3b7a0"
|
@ -0,0 +1,9 @@
|
||||
[deps]
|
||||
Cbc = "9961bab8-2fa3-5c5a-9d89-47fab24efd76"
|
||||
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
|
||||
Glob = "c27321d9-0574-5035-807b-f59d2c89b15c"
|
||||
JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
|
||||
MIPLearn = "2b1277c3-b477-4c49-a15e-7ba350325c68"
|
||||
ProgressBars = "49802e3a-d2f1-5c88-81d8-b72133a6f568"
|
||||
Revise = "295af30f-e4ad-537b-8983-00126c2a3abe"
|
||||
SCIP = "82193955-e24f-5292-bf16-6f2c5261a85f"
|
@ -0,0 +1,29 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ea2dc06a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Customizing the ML models\n",
|
||||
"\n",
|
||||
"TODO"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,758 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c5a596fb",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Getting started with MIPLearn\n",
|
||||
"\n",
|
||||
"## Introduction\n",
|
||||
"\n",
|
||||
"**MIPLearn** is an open source framework that uses machine learning (ML) to accelerate the performance of both commercial and open source mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS, Cbc or SCIP). In this tutorial, we will:\n",
|
||||
"\n",
|
||||
"1. Install the Julia/JuMP version of MIPLearn\n",
|
||||
"2. Model a simple optimization problem using JuMP\n",
|
||||
"3. Generate training data and train the ML models\n",
|
||||
"4. Use the ML models together with SCIP to solve new instances\n",
|
||||
"\n",
|
||||
"<div class=\"alert alert-info\">\n",
|
||||
"Note\n",
|
||||
" \n",
|
||||
"We use SCIP in this tutorial because it is a fast and widely available noncommercial MIP solver. All the steps shown here also work for Gurobi, CPLEX and XPRESS, although the performance impact might be different.\n",
|
||||
" \n",
|
||||
"</div>\n",
|
||||
"\n",
|
||||
"<div class=\"alert alert-warning\">\n",
|
||||
"Warning\n",
|
||||
" \n",
|
||||
"MIPLearn is still in early development stage. If run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!\n",
|
||||
" \n",
|
||||
"</div>\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1f59417f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Installing MIPLearn\n",
|
||||
"\n",
|
||||
"MIPLearn is available in two versions:\n",
|
||||
"\n",
|
||||
"- Python version, compatible with the Pyomo modeling language,\n",
|
||||
"- Julia version, compatible with the JuMP modeling language.\n",
|
||||
"\n",
|
||||
"In this tutorial, we will demonstrate how to use and install the Julia/JuMP version of the package. The first step is to install the Julia programming language in your computer. [See the official instructions for more details](https://julialang.org/downloads/). Note that MIPLearn was developed and tested with Julia 1.6, and may not be compatible with newer versions of the language. After Julia is installed, launch its console and run the following commands to download and install the package:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "1ddeeb8e",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Path `/home/axavier/Packages/MIPLearn.jl/dev` exists and looks like the correct package. Using existing path.\n",
|
||||
"\u001b[32m\u001b[1m Resolving\u001b[22m\u001b[39m package versions...\n",
|
||||
"\u001b[32m\u001b[1m No Changes\u001b[22m\u001b[39m to `~/Packages/MIPLearn/dev/docs/jump-tutorials/Project.toml`\n",
|
||||
"\u001b[32m\u001b[1m No Changes\u001b[22m\u001b[39m to `~/Packages/MIPLearn/dev/docs/jump-tutorials/Manifest.toml`\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"using Pkg\n",
|
||||
"Pkg.develop(PackageSpec(path=\"/home/axavier/Packages/MIPLearn.jl/dev\"))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "de7ab489",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"In addition to MIPLearn itself, we will also install a few other packages that are required for this tutorial:\n",
|
||||
"\n",
|
||||
"- `SCIP`, a non-commercial mixed-integer programming solver\n",
|
||||
"- `JuMP`, an open-source modeling language for Julia\n",
|
||||
"- `Distributions`, a statistics package that we will use to generate random inputs\n",
|
||||
"- `Glob`, a package that retrieves all files in a directory matching a certain pattern"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "29d29925",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m registry at `~/.julia/registries/General`\n",
|
||||
"\u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m git-repo `https://github.com/JuliaRegistries/General.git`\n",
|
||||
"\u001b[32m\u001b[1m Resolving\u001b[22m\u001b[39m package versions...\n",
|
||||
"\u001b[32m\u001b[1m No Changes\u001b[22m\u001b[39m to `~/Packages/MIPLearn/dev/docs/jump-tutorials/Project.toml`\n",
|
||||
"\u001b[32m\u001b[1m No Changes\u001b[22m\u001b[39m to `~/Packages/MIPLearn/dev/docs/jump-tutorials/Manifest.toml`\n",
|
||||
"\u001b[32m\u001b[1mPrecompiling\u001b[22m\u001b[39m project...\n",
|
||||
"\u001b[32m ✓ \u001b[39mMIPLearn\n",
|
||||
"1 dependency successfully precompiled in 10 seconds (96 already precompiled)\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"using Pkg\n",
|
||||
"Pkg.add([\n",
|
||||
" PackageSpec(url=\"https://github.com/scipopt/SCIP.jl.git\", rev=\"7aa79aaa\"),\n",
|
||||
" PackageSpec(name=\"JuMP\", version=\"0.21\"),\n",
|
||||
" PackageSpec(name=\"Distributions\", version=\"0.25\"),\n",
|
||||
" PackageSpec(name=\"Glob\", version=\"1\"),\n",
|
||||
"])\n",
|
||||
"using Revise"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "88074d87",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<div class=\"alert alert-info\">\n",
|
||||
" \n",
|
||||
"Note\n",
|
||||
" \n",
|
||||
"In the code above, we install specific version of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Julia projects.\n",
|
||||
" \n",
|
||||
"</div>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "78482747",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Modeling a simple optimization problem\n",
|
||||
"\n",
|
||||
"To illustrate how can MIPLearn be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the **unit commitment problem,** a practical optimization problem solved daily by electric grid operators around the world. \n",
|
||||
"\n",
|
||||
"Suppose that you work at a utility company, and that it is your job to decide which electrical generators should be online at a certain hour of the day, and how much power should each generator produce. More specifically, assume that your company owns $n$ generators, denoted by $g_1, \\ldots, g_n$. Each generator can either be online or offline. An online generator $g_i$ can produce between $p^\\text{min}_i$ to $p^\\text{max}_i$ megawatts of power, and it costs your company $c^\\text{fixed}_i + c^\\text{var}_i y_i$, where $y_i$ is the amount of power produced. An offline generator produces nothing, and costs nothing. You also know that the total amount of power to be produced needs to be exactly equal to the total demand $d$ (in megawatts). To minimize the costs to your company, which generators should be online, and how much power should they produce?\n",
|
||||
"\n",
|
||||
"This simple problem be modeled as a *mixed-integer linear optimization* problem as follows. For each generator $g_i$, let $x_i \\in \\{0,1\\}$ be a decision variable indicating whether $g_i$ is online, and let $y_i \\geq 0$ be a decision variable indicating how much power does $g_i$ produce. The problem we need to solve is given by:\n",
|
||||
"\n",
|
||||
"$$\n",
|
||||
"\\begin{align}\n",
|
||||
"\\text{minimize } \\quad & \\sum_{i=1}^n \\left( c^\\text{fix}_i x_i + c^\\text{var}_i y_i \\right) \\\\\n",
|
||||
"\\text{subject to } \\quad & y_i \\leq p^\\text{max}_i x_i & i=1,\\ldots,n \\\\\n",
|
||||
"& y_i \\geq p^\\text{min}_i x_i & i=1,\\ldots,n \\\\\n",
|
||||
"& \\sum_{i=1}^n y_i = d \\\\\n",
|
||||
"& x_i \\in \\{0,1\\} & i=1,\\ldots,n \\\\\n",
|
||||
"& y_i \\geq 0 & i=1,\\ldots,n\n",
|
||||
"\\end{align}\n",
|
||||
"$$\n",
|
||||
"\n",
|
||||
"<div class=\"alert alert-info\">\n",
|
||||
" \n",
|
||||
"Note\n",
|
||||
" \n",
|
||||
"We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem. See the benchmark sections for more details.\n",
|
||||
" \n",
|
||||
"</div>\n",
|
||||
"\n",
|
||||
"Next, let us convert this abstract mathematical formulation into a concrete optimization model, using the Julia and the JuMP modeling language. We start by defining a data structure that holds all input data:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "ec7dbab4",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"Base.@kwdef struct UnitCommitmentData\n",
|
||||
" demand::Float64\n",
|
||||
" pmin::Vector{Float64}\n",
|
||||
" pmax::Vector{Float64}\n",
|
||||
" cfix::Vector{Float64}\n",
|
||||
" cvar::Vector{Float64}\n",
|
||||
"end;"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c8f6a5b8",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Next, we create a function that converts this data into a concrete JuMP model:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "14e84c92",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"using JuMP\n",
|
||||
"\n",
|
||||
"function build_uc_model(data::UnitCommitmentData)::Model\n",
|
||||
" model = Model()\n",
|
||||
" n = length(data.pmin)\n",
|
||||
" @variable(model, x[1:n], Bin)\n",
|
||||
" @variable(model, y[1:n] >= 0)\n",
|
||||
" @objective(\n",
|
||||
" model,\n",
|
||||
" Min,\n",
|
||||
" sum(\n",
|
||||
" data.cfix[i] * x[i] +\n",
|
||||
" data.cvar[i] * y[i]\n",
|
||||
" for i in 1:n\n",
|
||||
" )\n",
|
||||
" )\n",
|
||||
" @constraint(model, eq_max_power[i in 1:n], y[i] <= data.pmax[i] * x[i])\n",
|
||||
" @constraint(model, eq_min_power[i in 1:n], y[i] >= data.pmin[i] * x[i])\n",
|
||||
" @constraint(model, eq_demand, sum(y[i] for i in 1:n) == data.demand)\n",
|
||||
" return model\n",
|
||||
"end;"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f647734f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"At this point, we can already use JuMP and any mixed-integer linear programming solver to find optimal solutions to any instance of this problem. To illustrate this, let us solve a small instance with three generators, using SCIP:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "b2abe5e2",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"obj = 1320.0\n",
|
||||
" x = [0.0, 1.0, 1.0]\n",
|
||||
" y = [0.0, 60.0, 40.0]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"using SCIP\n",
|
||||
"using Printf\n",
|
||||
"\n",
|
||||
"model = build_uc_model(\n",
|
||||
" UnitCommitmentData(\n",
|
||||
" demand = 100.0,\n",
|
||||
" pmin = [10, 20, 30],\n",
|
||||
" pmax = [50, 60, 70],\n",
|
||||
" cfix = [700, 600, 500],\n",
|
||||
" cvar = [1.5, 2.0, 2.5],\n",
|
||||
" )\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"scip = optimizer_with_attributes(SCIP.Optimizer, \"limits/gap\" => 1e-4)\n",
|
||||
"set_optimizer(model, scip)\n",
|
||||
"set_silent(model)\n",
|
||||
"optimize!(model)\n",
|
||||
"\n",
|
||||
"println(\"obj = \", objective_value(model))\n",
|
||||
"println(\" x = \", round.(value.(model[:x])))\n",
|
||||
"println(\" y = \", round.(value.(model[:y]), digits=2));"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "5be976f5",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Running the code above, we found that the optimal solution for our small problem instance costs \\$1320. It is achieve by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "96a1f952",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Generating training data\n",
|
||||
"\n",
|
||||
"Although SCIP could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as it is often the case in practice, it could make sense to spend some time upfront generating a **trained** version of SCIP, which can solve new instances (similar to the ones it was trained on) faster.\n",
|
||||
"\n",
|
||||
"In the following, we will use MIPLearn to train machine learning models that can be used to accelerate SCIP's performance on a particular set of instances. More specifically, MIPLearn will train a model that is able to predict the optimal solution for instances that follow a given probability distribution, then it will provide this predicted solution to SCIP as a warm start.\n",
|
||||
"\n",
|
||||
"Before we can train the model, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "353e6199",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"using Distributions\n",
|
||||
"using Random\n",
|
||||
"\n",
|
||||
"function random_uc_data(; samples::Int, n::Int, seed=42)\n",
|
||||
" Random.seed!(seed)\n",
|
||||
" pmin = rand(Uniform(100, 500.0), n)\n",
|
||||
" pmax = pmin .* rand(Uniform(2.0, 2.5), n)\n",
|
||||
" cfix = pmin .* rand(Uniform(100.0, 125.0), n)\n",
|
||||
" cvar = rand(Uniform(1.25, 1.5), n)\n",
|
||||
" return [\n",
|
||||
" UnitCommitmentData(;\n",
|
||||
" pmin,\n",
|
||||
" pmax,\n",
|
||||
" cfix,\n",
|
||||
" cvar,\n",
|
||||
" demand = sum(pmax) * rand(Uniform(0.5, 0.75)),\n",
|
||||
" )\n",
|
||||
" for i in 1:samples\n",
|
||||
" ]\n",
|
||||
"end;"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2140968d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"In this example, for simplicity, only the demands change from one instance to the next. We could also have made the prices and the production limits random. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.\n",
|
||||
"\n",
|
||||
"Now we generate 100 instances of this problem, each one with 1,000 generators. We will use the first 90 instances for training, and the remaining 10 instances to evaluate SCIP's performance."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "1bb24909",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"data = random_uc_data(samples=100, n=1000);\n",
|
||||
"train_data = data[1:90]\n",
|
||||
"test_data = data[91:100];"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "96bc0e42",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Next, we will write these data structures to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold the entire training data, as well as the concrete JuMP models, in memory. Files also make it much easier to solve multiple instances simultaneously, potentially even on multiple machines. We will cover parallel and distributed computing in a future tutorial.\n",
|
||||
"\n",
|
||||
"The code below generates the files `uc/train/000001.jld2`, `uc/train/000002.jld2`, etc."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "8ec476b1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"using MIPLearn\n",
|
||||
"using Glob\n",
|
||||
"\n",
|
||||
"MIPLearn.save(data[1:90], \"uc/train/\")\n",
|
||||
"MIPLearn.save(data[91:100], \"uc/test/\")\n",
|
||||
"\n",
|
||||
"train_files = glob(\"uc/train/*.jld2\")\n",
|
||||
"test_files = glob(\"uc/test/*.jld2\");"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "5d53a783",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Finally, we use `MIPLearn.LearningSolver` and `MIPLearn.solve!` to solve all the training instances. `LearningSolver` is the main component provided by MIPLearn, which integrates MIP solvers and ML. The `solve!` function can be used to solve either one or multiple instances, and requires: (i) the list of files containing the training data; and (ii) the function that converts the data structure into a concrete JuMP model:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "514a3b3a",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"101.279699 seconds (93.52 M allocations: 3.599 GiB, 1.23% gc time, 0.52% compilation time)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"WARNING: Dual bound 1.98665e+07 is larger than the objective of the primal solution 1.98665e+07. The solution might not be optimal.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"using Glob\n",
|
||||
"solver = LearningSolver(scip)\n",
|
||||
"@time solve!(solver, train_files, build_uc_model);"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "72eb09f4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The macro `@time` shows us how long did the code take to run. We can see that SCIP was able to solve all training instances in about 2 minutes. The solutions, and other useful training data, is stored by MIPLearn in `.h5` files, stored side-by-side with the original `.jld2` files."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "90406b90",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Solving new instances\n",
|
||||
"\n",
|
||||
"Now that we have training data, we can fit the ML models using `MIPLearn.fit!`, then solve the test instances with `MIPLearn.solve!`, as shown below:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "e4de94db",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
" 5.693951 seconds (9.33 M allocations: 334.689 MiB, 1.62% gc time)\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"solver_ml = LearningSolver(scip)\n",
|
||||
"fit!(solver_ml, train_files, build_uc_model)\n",
|
||||
"@time solve!(solver_ml, test_files, build_uc_model);"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "247c1087",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The trained MIP solver was able to solve all test instances in about 5 seconds. To see that ML is being helpful here, let us repeat the code above, but remove the `fit!` line:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "62061b12",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
" 9.829350 seconds (8.17 M allocations: 278.008 MiB, 0.47% gc time)\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"solver_baseline = LearningSolver(scip)\n",
|
||||
"@time solve!(solver_baseline, test_files, build_uc_model);"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "8ea5c423",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Without the help of the ML models, SCIP took around 10 seconds to solve the same test instances, or about twice as long.\n",
|
||||
"\n",
|
||||
"<div class=\"alert alert-info\">\n",
|
||||
"Note\n",
|
||||
" \n",
|
||||
"Note that is is not necessary to specify what ML models to use. MIPLearn, by default, will try a number of classical ML models and will choose the one that performs the best, based on k-fold cross validation. MIPLearn is also able to automatically collect features based on the MIP formulation of the problem and the solution to the LP relaxation, among other things, so it does not require handcrafted features. If you do want to customize the models and features, however, that is also possible, as we will see in a later tutorial.\n",
|
||||
"</div>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "569f7c7a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Understanding the acceleration\n",
|
||||
"\n",
|
||||
"Let us know go a bit deeper and try to understand how exactly did MIPLearn accelerate SCIP's performance. First, we are going to solve one of the training instances again, using the trained solver, but this time using the `tee=true` parameter, so that we can see SCIP's log:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "46739739",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"presolving:\n",
|
||||
"(round 1, fast) 861 del vars, 861 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
|
||||
"(round 2, fast) 861 del vars, 1722 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
|
||||
"(round 3, fast) 862 del vars, 1722 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
|
||||
"presolving (4 rounds: 4 fast, 1 medium, 1 exhaustive):\n",
|
||||
" 862 deleted vars, 1722 deleted constraints, 0 added constraints, 2000 tightened bounds, 0 added holes, 0 changed sides, 0 changed coefficients\n",
|
||||
" 0 implications, 0 cliques\n",
|
||||
"presolved problem has 1138 variables (0 bin, 0 int, 0 impl, 1138 cont) and 279 constraints\n",
|
||||
" 279 constraints of type <linear>\n",
|
||||
"Presolving Time: 0.03\n",
|
||||
"\n",
|
||||
" time | node | left |LP iter|LP it/n|mem/heur|mdpt |vars |cons |rows |cuts |sepa|confs|strbr| dualbound | primalbound | gap | compl. \n",
|
||||
"* 0.0s| 1 | 0 | 203 | - | LP | 0 |1138 | 279 | 279 | 0 | 0 | 0 | 0 | 1.705035e+07 | 1.705035e+07 | 0.00%| unknown\n",
|
||||
" 0.0s| 1 | 0 | 203 | - | 8950k | 0 |1138 | 279 | 279 | 0 | 0 | 0 | 0 | 1.705035e+07 | 1.705035e+07 | 0.00%| unknown\n",
|
||||
"\n",
|
||||
"SCIP Status : problem is solved [optimal solution found]\n",
|
||||
"Solving Time (sec) : 0.04\n",
|
||||
"Solving Nodes : 1\n",
|
||||
"Primal Bound : +1.70503465600131e+07 (1 solutions)\n",
|
||||
"Dual Bound : +1.70503465600131e+07\n",
|
||||
"Gap : 0.00 %\n",
|
||||
"\n",
|
||||
"violation: integrality condition of variable <> = 0.338047247943162\n",
|
||||
"all 1 solutions given by solution candidate storage are infeasible\n",
|
||||
"\n",
|
||||
"feasible solution found by completesol heuristic after 0.1 seconds, objective value 1.705169e+07\n",
|
||||
"presolving:\n",
|
||||
"(round 1, fast) 0 del vars, 0 del conss, 0 add conss, 3000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
|
||||
"(round 2, exhaustive) 0 del vars, 0 del conss, 0 add conss, 3000 chg bounds, 0 chg sides, 0 chg coeffs, 1000 upgd conss, 0 impls, 0 clqs\n",
|
||||
"(round 3, exhaustive) 0 del vars, 0 del conss, 0 add conss, 3000 chg bounds, 0 chg sides, 0 chg coeffs, 2000 upgd conss, 1000 impls, 0 clqs\n",
|
||||
" (0.1s) probing: 51/1000 (5.1%) - 0 fixings, 0 aggregations, 0 implications, 0 bound changes\n",
|
||||
" (0.1s) probing aborted: 50/50 successive totally useless probings\n",
|
||||
" (0.1s) symmetry computation started: requiring (bin +, int -, cont +), (fixed: bin -, int +, cont -)\n",
|
||||
" (0.1s) no symmetry present\n",
|
||||
"presolving (4 rounds: 4 fast, 3 medium, 3 exhaustive):\n",
|
||||
" 0 deleted vars, 0 deleted constraints, 0 added constraints, 3000 tightened bounds, 0 added holes, 0 changed sides, 0 changed coefficients\n",
|
||||
" 2000 implications, 0 cliques\n",
|
||||
"presolved problem has 2000 variables (1000 bin, 0 int, 0 impl, 1000 cont) and 2001 constraints\n",
|
||||
" 2000 constraints of type <varbound>\n",
|
||||
" 1 constraints of type <linear>\n",
|
||||
"Presolving Time: 0.10\n",
|
||||
"transformed 1/1 original solutions to the transformed problem space\n",
|
||||
"\n",
|
||||
" time | node | left |LP iter|LP it/n|mem/heur|mdpt |vars |cons |rows |cuts |sepa|confs|strbr| dualbound | primalbound | gap | compl. \n",
|
||||
" 0.2s| 1 | 0 | 1201 | - | 20M | 0 |2000 |2001 |2001 | 0 | 0 | 0 | 0 | 1.705035e+07 | 1.705169e+07 | 0.01%| unknown\n",
|
||||
"\n",
|
||||
"SCIP Status : solving was interrupted [gap limit reached]\n",
|
||||
"Solving Time (sec) : 0.21\n",
|
||||
"Solving Nodes : 1\n",
|
||||
"Primal Bound : +1.70516871251443e+07 (1 solutions)\n",
|
||||
"Dual Bound : +1.70503465600130e+07\n",
|
||||
"Gap : 0.01 %\n",
|
||||
"\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"solve!(solver_ml, test_files[1], build_uc_model, tee=true);"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9cdc02d0",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The log above is quite complicated if you have never seen it before, but the important line in the one starting with `feasible solution found [...] objective value 1.705169e+07`. This line indicates that MIPLearn was able to construct a warm start with value `1.705169e+07`. Using this warm start, SCIP then proceeded with the branch-and-cut process to either prove its optimality or find an even better solution. Very quickly, however, SCIP proved that the solution produced by MIPLearn was indeed optimal and terminated. It was able to do this without generating a single cutting plane or running any other heuristics; it could tell the optimality by the root LP relaxation alone, which was very fast. \n",
|
||||
"\n",
|
||||
"Let us now do the same thing again, but using the untrained solver this time:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "555af477",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"presolving:\n",
|
||||
"(round 1, fast) 861 del vars, 861 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
|
||||
"(round 2, fast) 861 del vars, 1722 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
|
||||
"(round 3, fast) 862 del vars, 1722 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
|
||||
"presolving (4 rounds: 4 fast, 1 medium, 1 exhaustive):\n",
|
||||
" 862 deleted vars, 1722 deleted constraints, 0 added constraints, 2000 tightened bounds, 0 added holes, 0 changed sides, 0 changed coefficients\n",
|
||||
" 0 implications, 0 cliques\n",
|
||||
"presolved problem has 1138 variables (0 bin, 0 int, 0 impl, 1138 cont) and 279 constraints\n",
|
||||
" 279 constraints of type <linear>\n",
|
||||
"Presolving Time: 0.03\n",
|
||||
"\n",
|
||||
" time | node | left |LP iter|LP it/n|mem/heur|mdpt |vars |cons |rows |cuts |sepa|confs|strbr| dualbound | primalbound | gap | compl. \n",
|
||||
"* 0.0s| 1 | 0 | 203 | - | LP | 0 |1138 | 279 | 279 | 0 | 0 | 0 | 0 | 1.705035e+07 | 1.705035e+07 | 0.00%| unknown\n",
|
||||
" 0.0s| 1 | 0 | 203 | - | 8950k | 0 |1138 | 279 | 279 | 0 | 0 | 0 | 0 | 1.705035e+07 | 1.705035e+07 | 0.00%| unknown\n",
|
||||
"\n",
|
||||
"SCIP Status : problem is solved [optimal solution found]\n",
|
||||
"Solving Time (sec) : 0.04\n",
|
||||
"Solving Nodes : 1\n",
|
||||
"Primal Bound : +1.70503465600131e+07 (1 solutions)\n",
|
||||
"Dual Bound : +1.70503465600131e+07\n",
|
||||
"Gap : 0.00 %\n",
|
||||
"\n",
|
||||
"violation: integrality condition of variable <> = 0.338047247943162\n",
|
||||
"all 1 solutions given by solution candidate storage are infeasible\n",
|
||||
"\n",
|
||||
"presolving:\n",
|
||||
"(round 1, fast) 0 del vars, 0 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
|
||||
"(round 2, exhaustive) 0 del vars, 0 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 1000 upgd conss, 0 impls, 0 clqs\n",
|
||||
"(round 3, exhaustive) 0 del vars, 0 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 2000 upgd conss, 1000 impls, 0 clqs\n",
|
||||
" (0.0s) probing: 51/1000 (5.1%) - 0 fixings, 0 aggregations, 0 implications, 0 bound changes\n",
|
||||
" (0.0s) probing aborted: 50/50 successive totally useless probings\n",
|
||||
" (0.0s) symmetry computation started: requiring (bin +, int -, cont +), (fixed: bin -, int +, cont -)\n",
|
||||
" (0.0s) no symmetry present\n",
|
||||
"presolving (4 rounds: 4 fast, 3 medium, 3 exhaustive):\n",
|
||||
" 0 deleted vars, 0 deleted constraints, 0 added constraints, 2000 tightened bounds, 0 added holes, 0 changed sides, 0 changed coefficients\n",
|
||||
" 2000 implications, 0 cliques\n",
|
||||
"presolved problem has 2000 variables (1000 bin, 0 int, 0 impl, 1000 cont) and 2001 constraints\n",
|
||||
" 2000 constraints of type <varbound>\n",
|
||||
" 1 constraints of type <linear>\n",
|
||||
"Presolving Time: 0.03\n",
|
||||
"\n",
|
||||
" time | node | left |LP iter|LP it/n|mem/heur|mdpt |vars |cons |rows |cuts |sepa|confs|strbr| dualbound | primalbound | gap | compl. \n",
|
||||
"p 0.0s| 1 | 0 | 1 | - | locks| 0 |2000 |2001 |2001 | 0 | 0 | 0 | 0 | 0.000000e+00 | 2.335200e+07 | Inf | unknown\n",
|
||||
"p 0.0s| 1 | 0 | 2 | - | vbounds| 0 |2000 |2001 |2001 | 0 | 0 | 0 | 0 | 0.000000e+00 | 1.839873e+07 | Inf | unknown\n",
|
||||
" 0.1s| 1 | 0 | 1204 | - | 20M | 0 |2000 |2001 |2001 | 0 | 0 | 0 | 0 | 1.705035e+07 | 1.839873e+07 | 7.91%| unknown\n",
|
||||
" 0.1s| 1 | 0 | 1207 | - | 22M | 0 |2000 |2001 |2002 | 1 | 1 | 0 | 0 | 1.705036e+07 | 1.839873e+07 | 7.91%| unknown\n",
|
||||
"r 0.1s| 1 | 0 | 1207 | - |shifting| 0 |2000 |2001 |2002 | 1 | 1 | 0 | 0 | 1.705036e+07 | 1.711399e+07 | 0.37%| unknown\n",
|
||||
" 0.1s| 1 | 0 | 1209 | - | 22M | 0 |2000 |2001 |2003 | 2 | 2 | 0 | 0 | 1.705037e+07 | 1.711399e+07 | 0.37%| unknown\n",
|
||||
"r 0.1s| 1 | 0 | 1209 | - |shifting| 0 |2000 |2001 |2003 | 2 | 2 | 0 | 0 | 1.705037e+07 | 1.706492e+07 | 0.09%| unknown\n",
|
||||
" 0.1s| 1 | 0 | 1210 | - | 22M | 0 |2000 |2001 |2004 | 3 | 3 | 0 | 0 | 1.705037e+07 | 1.706492e+07 | 0.09%| unknown\n",
|
||||
" 0.1s| 1 | 0 | 1211 | - | 23M | 0 |2000 |2001 |2005 | 4 | 4 | 0 | 0 | 1.705037e+07 | 1.706492e+07 | 0.09%| unknown\n",
|
||||
" 0.1s| 1 | 0 | 1212 | - | 23M | 0 |2000 |2001 |2006 | 5 | 5 | 0 | 0 | 1.705037e+07 | 1.706492e+07 | 0.09%| unknown\n",
|
||||
"r 0.1s| 1 | 0 | 1212 | - |shifting| 0 |2000 |2001 |2006 | 5 | 5 | 0 | 0 | 1.705037e+07 | 1.706228e+07 | 0.07%| unknown\n",
|
||||
" 0.1s| 1 | 0 | 1214 | - | 24M | 0 |2000 |2001 |2007 | 6 | 7 | 0 | 0 | 1.705037e+07 | 1.706228e+07 | 0.07%| unknown\n",
|
||||
" 0.2s| 1 | 0 | 1216 | - | 24M | 0 |2000 |2001 |2009 | 8 | 8 | 0 | 0 | 1.705037e+07 | 1.706228e+07 | 0.07%| unknown\n",
|
||||
" 0.2s| 1 | 0 | 1220 | - | 25M | 0 |2000 |2001 |2011 | 10 | 9 | 0 | 0 | 1.705037e+07 | 1.706228e+07 | 0.07%| unknown\n",
|
||||
" 0.2s| 1 | 0 | 1223 | - | 25M | 0 |2000 |2001 |2014 | 13 | 10 | 0 | 0 | 1.705037e+07 | 1.706228e+07 | 0.07%| unknown\n",
|
||||
" time | node | left |LP iter|LP it/n|mem/heur|mdpt |vars |cons |rows |cuts |sepa|confs|strbr| dualbound | primalbound | gap | compl. \n",
|
||||
" 0.2s| 1 | 0 | 1229 | - | 26M | 0 |2000 |2001 |2015 | 14 | 11 | 0 | 0 | 1.705038e+07 | 1.706228e+07 | 0.07%| unknown\n",
|
||||
"r 0.2s| 1 | 0 | 1403 | - |intshift| 0 |2000 |2001 |2015 | 14 | 11 | 0 | 0 | 1.705038e+07 | 1.705687e+07 | 0.04%| unknown\n",
|
||||
"L 0.6s| 1 | 0 | 1707 | - | rens| 0 |2000 |2001 |2015 | 14 | 11 | 0 | 0 | 1.705038e+07 | 1.705332e+07 | 0.02%| unknown\n",
|
||||
"L 0.7s| 1 | 0 | 1707 | - | alns| 0 |2000 |2001 |2015 | 14 | 11 | 0 | 0 | 1.705038e+07 | 1.705178e+07 | 0.01%| unknown\n",
|
||||
"\n",
|
||||
"SCIP Status : solving was interrupted [gap limit reached]\n",
|
||||
"Solving Time (sec) : 0.67\n",
|
||||
"Solving Nodes : 1\n",
|
||||
"Primal Bound : +1.70517823853380e+07 (13 solutions)\n",
|
||||
"Dual Bound : +1.70503798271962e+07\n",
|
||||
"Gap : 0.01 %\n",
|
||||
"\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"solve!(solver_baseline, test_files[1], build_uc_model, tee=true);"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "72a52d26",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"In this log file, notice how the line we saw before is now missing; SCIP needs to find an initial solution using its own internal heuristics. The solution SCIP initially found has value `2.335200e+07`, which is significantly worse than the one MIPLearn constructed before. SCIP then proceeds to improve this solution by generating a number of cutting planes and repeatedly running primal heuristics. In the end, it is able to find the optimal solution, as expected, but it takes longer."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "36fb5f02",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Accessing the solution\n",
|
||||
"\n",
|
||||
"In the example above, we used `MIPLearn.solve!` together with data files to solve both the training and the test instances. The solutions were saved to a `.h5` files in the train/test folders, and could be retrieved by reading theses files, but that is not very convenient. In this section we will use an easier method.\n",
|
||||
"\n",
|
||||
"We can use the function `MIPLearn.load!` to obtain a regular JuMP model:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "f62f28b4",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"A JuMP Model\n",
|
||||
"Minimization problem with:\n",
|
||||
"Variables: 2000\n",
|
||||
"Objective function type: AffExpr\n",
|
||||
"`AffExpr`-in-`MathOptInterface.EqualTo{Float64}`: 1 constraint\n",
|
||||
"`AffExpr`-in-`MathOptInterface.GreaterThan{Float64}`: 1000 constraints\n",
|
||||
"`AffExpr`-in-`MathOptInterface.LessThan{Float64}`: 1000 constraints\n",
|
||||
"`VariableRef`-in-`MathOptInterface.GreaterThan{Float64}`: 1000 constraints\n",
|
||||
"`VariableRef`-in-`MathOptInterface.ZeroOne`: 1000 constraints\n",
|
||||
"Model mode: AUTOMATIC\n",
|
||||
"CachingOptimizer state: NO_OPTIMIZER\n",
|
||||
"Solver name: No optimizer attached.\n",
|
||||
"Names registered in the model: eq_demand, eq_max_power, eq_min_power, x, y"
|
||||
]
|
||||
},
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"model = MIPLearn.load(\"uc/test/000001.jld2\", build_uc_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d5722dcf",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can then solve this model as before, with `MIPLearn.solve!`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "e49f9e60",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"obj = 1.7051217395548128e7\n",
|
||||
" x = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]\n",
|
||||
" y = [767.11, 646.61, 230.28, 365.46, 1150.99, 1103.36, 0.0, 0.0, 0.0, 0.0]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"solve!(solver_ml, model)\n",
|
||||
"println(\"obj = \", objective_value(model))\n",
|
||||
"println(\" x = \", round.(value.(model[:x][1:10])))\n",
|
||||
"println(\" y = \", round.(value.(model[:y][1:10]), digits=2))"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,29 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "18dd2957",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Modeling lazy constraints\n",
|
||||
"\n",
|
||||
"TODO"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -0,0 +1,29 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "8e6b5f28",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Modeling user cuts\n",
|
||||
"\n",
|
||||
"TODO"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Julia 1.6.0",
|
||||
"language": "julia",
|
||||
"name": "julia-1.6"
|
||||
},
|
||||
"language_info": {
|
||||
"file_extension": ".jl",
|
||||
"mimetype": "application/julia",
|
||||
"name": "julia",
|
||||
"version": "1.6.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -1,246 +0,0 @@
|
||||
```{sectnum}
|
||||
---
|
||||
start: 1
|
||||
depth: 2
|
||||
suffix: .
|
||||
---
|
||||
```
|
||||
|
||||
# Using MIPLearn
|
||||
|
||||
## Installation
|
||||
|
||||
In these docs, we describe the Python/Pyomo version of the package, although a [Julia/JuMP version](https://github.com/ANL-CEEESA/MIPLearn.jl) is also available. A mixed-integer solver is also required and its Python bindings must be properly installed. Supported solvers are currently CPLEX, Gurobi and XPRESS.
|
||||
|
||||
To install MIPLearn, run:
|
||||
|
||||
```bash
|
||||
pip3 install --upgrade miplearn==0.2.*
|
||||
```
|
||||
|
||||
After installation, the package `miplearn` should become available to Python. It can be imported
|
||||
as follows:
|
||||
|
||||
```python
|
||||
import miplearn
|
||||
```
|
||||
|
||||
## Using `LearningSolver`
|
||||
|
||||
The main class provided by this package is `LearningSolver`, a learning-enhanced MIP solver which uses information from previously solved instances to accelerate the solution of new instances. The following example shows its basic usage:
|
||||
|
||||
```python
|
||||
from miplearn import LearningSolver
|
||||
|
||||
# List of user-provided instances
|
||||
training_instances = [...]
|
||||
test_instances = [...]
|
||||
|
||||
# Create solver
|
||||
solver = LearningSolver()
|
||||
|
||||
# Solve all training instances
|
||||
for instance in training_instances:
|
||||
solver.solve(instance)
|
||||
|
||||
# Learn from training instances
|
||||
solver.fit(training_instances)
|
||||
|
||||
# Solve all test instances
|
||||
for instance in test_instances:
|
||||
solver.solve(instance)
|
||||
```
|
||||
|
||||
In this example, we have two lists of user-provided instances: `training_instances` and `test_instances`. We start by solving all training instances. Since there is no historical information available at this point, the instances will be processed from scratch, with no ML acceleration. After solving each instance, the solver stores within each `instance` object the optimal solution, the optimal objective value, and other information that can be used to accelerate future solves. After all training instances are solved, we call `solver.fit(training_instances)`. This instructs the solver to train all its internal machine-learning models based on the solutions of the (solved) training instances. Subsequent calls to `solver.solve(instance)` will automatically use the trained machine-learning models to accelerate the solution process.
|
||||
|
||||
|
||||
## Describing problem instances
|
||||
|
||||
Instances to be solved by `LearningSolver` must derive from the abstract class `miplearn.Instance`. The following three abstract methods must be implemented:
|
||||
|
||||
* `instance.to_model()`, which returns a concrete Pyomo model corresponding to the instance;
|
||||
* `instance.get_instance_features()`, which returns a 1-dimensional Numpy array of (numerical) features describing the entire instance;
|
||||
* `instance.get_variable_features(var_name, index)`, which returns a 1-dimensional array of (numerical) features describing a particular decision variable.
|
||||
|
||||
The first method is used by `LearningSolver` to construct a concrete Pyomo model, which will be provided to the internal MIP solver. The second and third methods provide an encoding of the instance, which can be used by the ML models to make predictions. In the knapsack problem, for example, an implementation may decide to provide as instance features the average weights, average prices, number of items and the size of the knapsack. The weight and the price of each individual item could be provided as variable features. See `src/python/miplearn/problems/knapsack.py` for a concrete example.
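For illustration only, a minimal knapsack instance along the lines described above might look like the sketch below. The Pyomo modeling details, attribute names, and the `from miplearn import Instance` import path are assumptions made for this example; refer to the file mentioned above for the actual implementation.

```python
import numpy as np
import pyomo.environ as pe
from miplearn import Instance  # assumed import path


class KnapsackInstance(Instance):
    def __init__(self, weights, prices, capacity):
        self.weights = weights
        self.prices = prices
        self.capacity = capacity

    def to_model(self):
        # Concrete Pyomo model handed to the internal MIP solver
        model = pe.ConcreteModel()
        items = range(len(self.weights))
        model.x = pe.Var(items, domain=pe.Binary)
        model.obj = pe.Objective(
            expr=sum(self.prices[i] * model.x[i] for i in items),
            sense=pe.maximize,
        )
        model.eq_capacity = pe.Constraint(
            expr=sum(self.weights[i] * model.x[i] for i in items) <= self.capacity
        )
        return model

    def get_instance_features(self):
        # Fixed-length encoding of the entire instance
        return np.array([
            np.mean(self.weights),
            np.mean(self.prices),
            len(self.weights),
            self.capacity,
        ])

    def get_variable_features(self, var_name, index):
        # Fixed-length encoding of a single decision variable
        return np.array([self.weights[index], self.prices[index]])

    def get_variable_category(self, var_name, index):
        # Optional: a single category means one shared ML model for all variables
        return "default"
```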
|
||||
|
||||
An optional method which can be implemented is `instance.get_variable_category(var_name, index)`, which returns a category (a string, an integer or any hashable type) for each decision variable. If two variables have the same category, `LearningSolver` will use the same internal ML model to predict the values of both variables. By default, all variables belong to the `"default"` category, and therefore only one ML model is used for all variables. If the returned category is `None`, ML predictors will ignore the variable.
|
||||
|
||||
It is not necessary to have a one-to-one correspondence between features and problem instances. One important (and deliberate) limitation of MIPLearn, however, is that `get_instance_features()` must always return arrays of the same length for all relevant instances of the problem. Similarly, `get_variable_features(var_name, index)` must also always return arrays of the same length for all variables in each category. It is up to the user to decide how to encode variable-length characteristics of the problem into fixed-length vectors. In graph problems, for example, graph embeddings can be used to reduce the (variable-length) lists of nodes and edges into a fixed-length structure that still preserves some properties of the graph. Different instance encodings may have a significant impact on performance.
|
||||
|
||||
|
||||
## Describing lazy constraints
|
||||
|
||||
For many MIP formulations, it is not desirable to add all constraints up-front, either because the total number of constraints is very large, or because some of the constraints, even in relatively small numbers, can still cause a significant performance impact when added to the formulation. In these situations, it may be desirable to generate and add constraints incrementally, during the solution process itself. Conventional MIP solvers typically start by solving the problem without any lazy constraints. Whenever a candidate solution is found, the solver finds all violated lazy constraints and adds them to the formulation. MIPLearn significantly accelerates this process by using ML to predict which lazy constraints should be enforced from the very beginning of the optimization process, even before a candidate solution is available.
|
||||
|
||||
MIPLearn supports two ways of handling lazy constraints: through constraint annotations and through callbacks.
|
||||
|
||||
### Adding lazy constraints through annotations
|
||||
|
||||
The easiest way to create lazy constraints in MIPLearn is to add them to the model (just like any regular constraints), then annotate them as lazy, as described below. Just before the optimization starts, MIPLearn removes all lazy constraints from the model and places them in a lazy constraint pool. If any trained ML models are available, MIPLearn queries these models to decide which of these constraints should be moved back into the formulation. After this step, the optimization starts, and lazy constraints from the pool are added to the model in the conventional fashion.
|
||||
|
||||
To tag a constraint as lazy, the following methods must be implemented:
|
||||
|
||||
* `instance.has_static_lazy_constraints()`, which returns `True` if the model has any annotated lazy constraints. By default, this method returns `False`.
|
||||
* `instance.is_constraint_lazy(cid)`, which returns `True` if the constraint with name `cid` should be treated as a lazy constraint, and `False` otherwise.
|
||||
* `instance.get_constraint_features(cid)`, which returns a 1-dimensional Numpy array of (numerical) features describing the constraint.
|
||||
|
||||
For instances such that `has_static_lazy_constraints` returns `True`, MIPLearn calls `is_constraint_lazy` for each constraint in the formulation, providing the name of the constraint. For constraints such that `is_constraint_lazy` returns `True`, MIPLearn additionally calls `get_constraint_features` to gather an ML representation of each constraint. These features are used to predict which lazy constraints should be initially enforced.
|
||||
|
||||
An additional method that can be implemented is `get_lazy_constraint_category(cid)`, which returns a category (a string or any other hashable type) for each lazy constraint. Similarly to decision variable categories, if two lazy constraints have the same category, then MIPLearn will use the same internal ML model to decide whether to initially enforce them. By default, all lazy constraints belong to the `"default"` category, and therefore a single ML model is used.
|
||||
|
||||
```{warning}
If two lazy constraints belong to the same category, their feature vectors should have the same length.
```
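As a rough sketch of the annotation methods, the class below treats constraints with a hypothetical naming scheme `cut[i]` as lazy. The names, data, and features are invented for this example, and the remaining required `Instance` methods are omitted for brevity.

```python
import numpy as np
from miplearn import Instance  # assumed import path


class LazyAnnotatedInstance(Instance):
    """Sketch only: to_model, get_instance_features and get_variable_features
    would also need to be implemented, as described earlier."""

    def __init__(self, cut_features):
        # Hypothetical data: one feature vector per lazy constraint "cut[i]"
        self.cut_features = cut_features

    def has_static_lazy_constraints(self):
        return True

    def is_constraint_lazy(self, cid):
        # Hypothetical naming scheme: constraints named "cut[i]" are lazy
        return cid.startswith("cut[")

    def get_constraint_features(self, cid):
        # Fixed-length numerical description of the lazy constraint
        i = int(cid[len("cut["):-1])
        return np.array(self.cut_features[i], dtype=float)

    def get_lazy_constraint_category(self, cid):
        # A single category: one ML model decides for all lazy constraints
        return "default"
```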
|
||||
|
||||
### Adding lazy constraints through callbacks
|
||||
|
||||
Although convenient, the method described in the previous subsection still requires the generation of all lazy constraints ahead of time, which can be prohibitively expensive. An alternative method is through lazy constraint callbacks, described below. During the solution process, MIPLearn will repeatedly call a user-provided function to identify any violated lazy constraints. If violated constraints are identified, MIPLearn will additionally call another user-provided function to generate the constraint and add it to the formulation.
|
||||
|
||||
To describe lazy constraints through user callbacks, the following methods need to be implemented:
|
||||
|
||||
* `instance.has_dynamic_lazy_constraints()`, which returns `True` if the model has any lazy constraints generated by user callbacks. By default, this method returns `False`.
|
||||
* `instance.find_violated_lazy_constraints(model)`, which returns a list of identifiers corresponding to the lazy constraints found to be violated by the current solution. These identifiers should be strings, tuples or any other hashable type.
|
||||
* `instance.build_violated_lazy_constraints(model, cid)`, which returns either a list of Pyomo constraints, or a single Pyomo constraint, corresponding to the given lazy constraint identifier.
|
||||
* `instance.get_constraint_features(cid)`, which returns a 1-dimensional Numpy array of (numerical) features describing the constraint. If this constraint is not valid, returns `None`.
|
||||
* `instance.get_lazy_constraint_category(cid)`, which returns a category (a string or any other hashable type) for each lazy constraint, indicating which ML model to use. By default, returns `"default"`.
|
||||
|
||||
|
||||
Assuming that trained ML models are available, immediately after calling `solver.solve`, MIPLearn will call `get_constraint_features` for each lazy constraint identifier found in the training set. For constraints such that `get_constraint_features` returns a vector (instead of `None`), MIPLearn will call `get_lazy_constraint_category` to decide which trained ML model to use. It will then query the ML model to decide whether the constraint should be initially enforced. Assuming that the ML model predicts this constraint will be necessary, MIPLearn calls `build_violated_lazy_constraints` and adds the returned Pyomo constraints to the model. The optimization then starts. When no trained ML models are available, this entire initial process is skipped, and MIPLearn behaves like a conventional solver.
|
||||
|
||||
After the optimization process starts, MIPLearn will periodically call `find_violated_lazy_constraints` to verify if the current solution violates any lazy constraints. If any violated lazy constraints are found, MIPLearn will call the method `build_violated_lazy_constraints` and add the returned constraints to the formulation.
|
||||
|
||||
```{tip}
|
||||
When implementing `find_violated_lazy_constraints(self, model)`, the current solution may be accessed through `self.solution[var_name][index]`.
|
||||
```
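To make this more concrete, the sketch below shows the callback methods for a hypothetical model with binary variables `x[i]`, where certain pairs of conflicting items must not be selected together. The conflict data, tolerance, and features are invented for the example, the remaining `Instance` methods are omitted, and the exact Pyomo object expected from `build_violated_lazy_constraints` follows the description given above.

```python
import numpy as np
import pyomo.environ as pe
from miplearn import Instance  # assumed import path


class ConflictInstance(Instance):
    """Sketch: lazy constraints x[i] + x[j] <= 1 generated through callbacks."""

    def __init__(self, conflicts):
        self.conflicts = conflicts  # list of (i, j) tuples that cannot both be chosen

    def has_dynamic_lazy_constraints(self):
        return True

    def find_violated_lazy_constraints(self, model):
        # The current solution is available through self.solution (see tip above)
        violated = []
        for (i, j) in self.conflicts:
            if self.solution["x"][i] + self.solution["x"][j] > 1 + 1e-6:
                violated.append((i, j))
        return violated

    def build_violated_lazy_constraints(self, model, cid):
        # Return a single Pyomo constraint for the given identifier
        i, j = cid
        return pe.Constraint(expr=model.x[i] + model.x[j] <= 1)

    def get_constraint_features(self, cid):
        # Fixed-length description of the lazy constraint (invented features)
        i, j = cid
        return np.array([float(i), float(j)])

    def get_lazy_constraint_category(self, cid):
        return "default"
```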
|
||||
|
||||
## Obtaining heuristic solutions
|
||||
|
||||
By default, `LearningSolver` uses Machine Learning to accelerate the MIP solution process, while maintaining all optimality guarantees provided by the MIP solver. In the default mode of operation, for example, predicted optimal solutions are used only as MIP starts.
|
||||
|
||||
For more significant performance benefits, `LearningSolver` can also be configured to place additional trust in the Machine Learning predictors, by using the `mode="heuristic"` constructor argument. When operating in this mode, if a ML model is statistically shown (through *stratified k-fold cross validation*) to have exceptionally high accuracy, the solver may decide to restrict the search space based on its predictions. The parts of the solution which the ML models cannot predict accurately will still be explored using traditional (branch-and-bound) methods. For particular applications, this mode has been shown to quickly produce optimal or near-optimal solutions (see [references](about.md#references) and [benchmark results](benchmark.md)).
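For example, assuming the solver has already been trained on a representative set of instances, heuristic mode is enabled through the constructor (see the caveat below):

```python
from miplearn import LearningSolver

# Trades optimality guarantees for speed; use only after extensive training
solver = LearningSolver(mode="heuristic")
```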
|
||||
|
||||
|
||||
```{danger}
|
||||
The `heuristic` mode provides no optimality guarantees, and therefore should only be used if the solver is first trained on a large and representative set of training instances. Training on a small or non-representative set of instances may produce low-quality solutions, or make the solver incorrectly classify new instances as infeasible.
|
||||
```
|
||||
|
||||
## Scaling Up
|
||||
|
||||
### Saving and loading solver state
|
||||
|
||||
After solving a large number of training instances, it may be desirable to save the current state of `LearningSolver` to disk, so that the solver can still use the acquired knowledge after the application restarts. This can be accomplished by using the utility functions `write_pickle_gz` and `read_pickle_gz`, as the following example illustrates:
|
||||
|
||||
```python
|
||||
from miplearn import LearningSolver, write_pickle_gz, read_pickle_gz
|
||||
|
||||
# Solve training instances
|
||||
training_instances = [...]
|
||||
solver = LearningSolver()
|
||||
for instance in training_instances:
|
||||
solver.solve(instance)
|
||||
|
||||
# Train machine-learning models
|
||||
solver.fit(training_instances)
|
||||
|
||||
# Save trained solver to disk
|
||||
write_pickle_gz(solver, "solver.pkl.gz")
|
||||
|
||||
# Application restarts...
|
||||
|
||||
# Load trained solver from disk
|
||||
solver = read_pickle_gz("solver.pkl.gz")
|
||||
|
||||
# Solve additional instances
|
||||
test_instances = [...]
|
||||
for instance in test_instances:
|
||||
solver.solve(instance)
|
||||
```
|
||||
|
||||
|
||||
### Solving instances in parallel
|
||||
|
||||
In many situations, instances can be solved in parallel to accelerate the training process. `LearningSolver` provides the method `parallel_solve(instances)` to easily achieve this:
|
||||
|
||||
```python
|
||||
from miplearn import LearningSolver
|
||||
|
||||
training_instances = [...]
|
||||
solver = LearningSolver()
|
||||
solver.parallel_solve(training_instances, n_jobs=4)
|
||||
solver.fit(training_instances)
|
||||
|
||||
# Test phase...
|
||||
test_instances = [...]
|
||||
solver.parallel_solve(test_instances)
|
||||
```
|
||||
|
||||
|
||||
### Solving instances from the disk
|
||||
|
||||
In all examples above, we have assumed that instances are available as Python objects, stored in memory. When problem instances are very large, or when there is a large number of problem instances, this approach may require an excessive amount of memory. To reduce memory requirements, MIPLearn can also operate on instances that are stored on disk, through the `PickleGzInstance` class, as the next example illustrates.
|
||||
|
||||
```python
|
||||
import pickle
|
||||
from miplearn import (
|
||||
LearningSolver,
|
||||
PickleGzInstance,
|
||||
write_pickle_gz,
|
||||
)
|
||||
|
||||
# Construct and pickle 600 problem instances
|
||||
for i in range(600):
|
||||
instance = MyProblemInstance([...])
|
||||
write_pickle_gz(instance, "instance_%03d.pkl" % i)
|
||||
|
||||
# Split instances into training and test
|
||||
test_instances = [PickleGzInstance("instance_%03d.pkl" % i) for i in range(500)]
|
||||
train_instances = [PickleGzInstance("instance_%03d.pkl" % i) for i in range(500, 600)]
|
||||
|
||||
# Create solver
|
||||
solver = LearningSolver([...])
|
||||
|
||||
# Solve training instances
|
||||
solver.parallel_solve(train_instances, n_jobs=4)
|
||||
|
||||
# Train ML models
|
||||
solver.fit(train_instances)
|
||||
|
||||
# Solve test instances
|
||||
solver.parallel_solve(test_instances, n_jobs=4)
|
||||
```
|
||||
|
||||
|
||||
By default, `solve` and `parallel_solve` modify files in place. That is, after the instances are loaded from disk and solved, MIPLearn writes them back to the disk, overwriting the original files. To discard the modifications instead, use `LearningSolver(..., discard_outputs=True)`. This can be useful, for example, during benchmarks.
|
||||
|
||||
## Running benchmarks
|
||||
|
||||
MIPLearn provides the utility class `BenchmarkRunner`, which simplifies the task of comparing the performance of different solvers. The snippet below shows its basic usage:
|
||||
|
||||
```python
|
||||
from miplearn import BenchmarkRunner, LearningSolver
|
||||
|
||||
# Create train and test instances
|
||||
train_instances = [...]
|
||||
test_instances = [...]
|
||||
|
||||
# Training phase...
|
||||
training_solver = LearningSolver(...)
|
||||
training_solver.parallel_solve(train_instances, n_jobs=10)
|
||||
|
||||
# Test phase...
|
||||
benchmark = BenchmarkRunner({
|
||||
"Baseline": LearningSolver(...),
|
||||
"Strategy A": LearningSolver(...),
|
||||
"Strategy B": LearningSolver(...),
|
||||
"Strategy C": LearningSolver(...),
|
||||
})
|
||||
benchmark.fit(train_instances)
|
||||
benchmark.parallel_solve(test_instances, n_jobs=5)
|
||||
benchmark.write_csv("results.csv")
|
||||
```
|
||||
|
||||
The method `fit` trains the ML models for each individual solver. The method `parallel_solve` solves the test instances in parallel, and collects solver statistics such as running time and optimal value. Finally, `write_csv` produces a table of results. The columns in the CSV file depend on the components added to the solver.
|
||||
|
||||
## Current Limitations
|
||||
|
||||
* Only binary and continuous decision variables are currently supported; general integer variables are not yet handled by some solver components.
|