Update v0.3 docs

This commit is contained in:
2023-06-08 10:40:35 -05:00
parent d9d44ce4b2
commit 14a428e49e
25 changed files with 516 additions and 2757 deletions

View File

@@ -39,9 +39,12 @@
{
"cell_type": "code",
"execution_count": 3,
"id": "6d342a4e",
"id": "f906fe9c",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
@@ -103,18 +106,14 @@
},
{
"cell_type": "markdown",
"id": "8a46bb8a",
"metadata": {
"collapsed": false
},
"id": "50441907",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"id": "d8699092",
"metadata": {
"collapsed": false
},
"id": "d0000c8d",
"metadata": {},
"source": [
"## Basic collector\n",
"\n",
@@ -131,10 +130,8 @@
},
{
"cell_type": "markdown",
"id": "99a122dc",
"metadata": {
"collapsed": false
},
"id": "6529f667",
"metadata": {},
"source": [
"### Data fields\n",
"\n",
@@ -172,10 +169,8 @@
},
{
"cell_type": "markdown",
"id": "7fe76973",
"metadata": {
"collapsed": false
},
"id": "f2894594",
"metadata": {},
"source": [
"### Example\n",
"\n",
@@ -185,9 +180,12 @@
{
"cell_type": "code",
"execution_count": 4,
"id": "425717fe",
"id": "ac6f8c6f",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
@@ -244,9 +242,12 @@
{
"cell_type": "code",
"execution_count": null,
"id": "30023d2b",
"id": "78f0b07a",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": []
@@ -254,7 +255,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -268,7 +269,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
"version": "3.9.16"
}
},
"nbformat": 4,

View File

@@ -118,7 +118,7 @@
</li>
<li class="toctree-l1">
<a class="reference internal" href="../solvers/">
8. Solvers
8. LearningSolver
</a>
</li>
</ul>

View File

@@ -7,15 +7,13 @@
"source": [
"# Feature Extractors\n",
"\n",
"In the previous page, we introduced *training data collectors*, which solve the optimization problem and collect raw training data, such as the optimal solution. In this page, we introduce **feature extractors**, which take the raw training data, stored in HDF5 files, and extract relevant information in order to train a machine learning model. We describe the extractors readily available in MIPLearn."
"In the previous page, we introduced *training data collectors*, which solve the optimization problem and collect raw training data, such as the optimal solution. In this page, we introduce **feature extractors**, which take the raw training data, stored in HDF5 files, and extract relevant information in order to train a machine learning model."
]
},
{
"cell_type": "markdown",
"id": "94df359d",
"metadata": {
"collapsed": false
},
"id": "b4026de5",
"metadata": {},
"source": [
"\n",
"## Overview\n",
@@ -32,10 +30,8 @@
},
{
"cell_type": "markdown",
"id": "d450370d",
"metadata": {
"collapsed": false
},
"id": "b2d9736c",
"metadata": {},
"source": [
"\n",
"## H5FieldsExtractor\n",
@@ -45,10 +41,8 @@
},
{
"cell_type": "markdown",
"id": "b0e96d25",
"metadata": {
"collapsed": false
},
"id": "e8184dff",
"metadata": {},
"source": [
"### Example\n",
"\n",
@@ -58,9 +52,12 @@
{
"cell_type": "code",
"execution_count": 5,
"id": "82609250",
"id": "ed9a18c8",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
@@ -174,10 +171,8 @@
},
{
"cell_type": "markdown",
"id": "b6912b56",
"metadata": {
"collapsed": false
},
"id": "2da2e74e",
"metadata": {},
"source": [
"\n",
"[H5FieldsExtractor]: ../../api/collectors/#miplearn.features.fields.H5FieldsExtractor"
@@ -185,10 +180,8 @@
},
{
"cell_type": "markdown",
"id": "81fd1d27",
"metadata": {
"collapsed": false
},
"id": "d879c0d3",
"metadata": {},
"source": [
"<div class=\"alert alert-warning\">\n",
"Warning\n",
@@ -199,10 +192,8 @@
},
{
"cell_type": "markdown",
"id": "fdbf5674",
"metadata": {
"collapsed": false
},
"id": "cd0ba071",
"metadata": {},
"source": [
"## AlvLouWeh2017Extractor\n",
"\n",
@@ -214,9 +205,12 @@
{
"cell_type": "code",
"execution_count": 6,
"id": "85ef526d",
"id": "a1bc38fe",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
@@ -303,10 +297,8 @@
},
{
"cell_type": "markdown",
"id": "3e17c5f8",
"metadata": {
"collapsed": false
},
"id": "286c9927",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
"References\n",
@@ -320,7 +312,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -334,7 +326,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
"version": "3.9.16"
}
},
"nbformat": 4,

View File

@@ -119,7 +119,7 @@
</li>
<li class="toctree-l1">
<a class="reference internal" href="../solvers/">
8. Solvers
8. LearningSolver
</a>
</li>
</ul>
@@ -263,7 +263,7 @@
<div class="section" id="Feature-Extractors">
<h1><span class="section-number">6. </span>Feature Extractors<a class="headerlink" href="#Feature-Extractors" title="Permalink to this headline"></a></h1>
<p>In the previous page, we introduced <em>training data collectors</em>, which solve the optimization problem and collect raw training data, such as the optimal solution. In this page, we introduce <strong>feature extractors</strong>, which take the raw training data, stored in HDF5 files, and extract relevant information in order to train a machine learning model. We describe the extractors readily available in MIPLearn.</p>
<p>On the previous page, we introduced <em>training data collectors</em>, which solve the optimization problem and collect raw training data, such as the optimal solution. On this page, we introduce <strong>feature extractors</strong>, which take the raw training data, stored in HDF5 files, and extract relevant information in order to train a machine learning model.</p>
<div class="section" id="Overview">
<h2><span class="section-number">6.1. </span>Overview<a class="headerlink" href="#Overview" title="Permalink to this headline"></a></h2>
<p>Feature extraction is an important step of the process of building a machine learning model because it helps to reduce the complexity of the data and convert it into a format that is more easily processed. Previous research has proposed converting absolute variable coefficients, for example, into relative values which are invariant to various transformations, such as problem scaling, making them more amenable to learning. Various other transformations have also been described.</p>

View File

@@ -283,7 +283,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
"version": "3.9.16"
}
},
"nbformat": 4,

View File

@@ -41,7 +41,7 @@
<script type="text/x-mathjax-config">MathJax.Hub.Config({"tex2jax": {"inlineMath": [["\\(", "\\)"]], "displayMath": [["\\[", "\\]"]], "processRefs": false, "processEnvironments": false}})</script>
<link rel="index" title="Index" href="../../genindex/" />
<link rel="search" title="Search" href="../../search/" />
<link rel="next" title="8. Solvers" href="../solvers/" />
<link rel="next" title="8. LearningSolver" href="../solvers/" />
<link rel="prev" title="6. Feature Extractors" href="../features/" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="docsearch:language" content="en" />
@@ -120,7 +120,7 @@
</li>
<li class="toctree-l1">
<a class="reference internal" href="../solvers/">
8. Solvers
8. LearningSolver
</a>
</li>
</ul>
@@ -511,7 +511,7 @@ been computed, then directly provides it to the solver using one of the availabl
<div class='prev-next-bottom'>
<a class='left-prev' id="prev-link" href="../features/" title="previous page"><span class="section-number">6. </span>Feature Extractors</a>
<a class='right-next' id="next-link" href="../solvers/" title="next page"><span class="section-number">8. </span>Solvers</a>
<a class='right-next' id="next-link" href="../solvers/" title="next page"><span class="section-number">8. </span>LearningSolver</a>
</div>

View File

@@ -9,7 +9,7 @@
"\n",
"## Overview\n",
"\n",
"Benchmark sets such as [MIPLIB](https://miplib.zib.de/) or [TSPLIB](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/) are usually employed to evaluate the performance of conventional MIP solvers. Two shortcomings, unfortunately, make existing benchmark sets less than ideal for evaluating the performance of learning-enhanced MIP solvers: (i) while existing benchmark sets typically contain hundreds or thousands of instances, machine learning (ML) methods typically benefit from having orders of magnitude more instances available for training; (ii) current machine learning methods typically provide best performance on sets of homogeneous instances, buch general-purpose benchmark sets contain relatively few examples of each problem type.\n",
"Benchmark sets such as [MIPLIB](https://miplib.zib.de/) or [TSPLIB](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/) are usually employed to evaluate the performance of conventional MIP solvers. Two shortcomings, however, make existing benchmark sets less suitable for evaluating the performance of learning-enhanced MIP solvers: (i) while existing benchmark sets typically contain hundreds or thousands of instances, machine learning (ML) methods typically benefit from having orders of magnitude more instances available for training; (ii) current machine learning methods typically provide best performance on sets of homogeneous instances, buch general-purpose benchmark sets contain relatively few examples of each problem type.\n",
"\n",
"To tackle this challenge, MIPLearn provides random instance generators for a wide variety of classical optimization problems, covering applications from different fields, that can be used to evaluate new learning-enhanced MIP techniques in a measurable and reproducible way. As of MIPLearn 0.3, nine problem generators are available, each customizable with user-provided probability distribution and flexible parameters. The generators can be configured, for example, to produce large sets of very similar instances of same size, where only the objective function changes, or more diverse sets of instances, with various sizes and characteristics, belonging to a particular problem class.\n",
"\n",
@@ -18,10 +18,8 @@
},
{
"cell_type": "markdown",
"id": "1ba30e52",
"metadata": {
"collapsed": false
},
"id": "bd99c51f",
"metadata": {},
"source": [
"<div class=\"alert alert-warning\">\n",
"Warning\n",
@@ -64,10 +62,8 @@
},
{
"cell_type": "markdown",
"id": "218add9f",
"metadata": {
"collapsed": false
},
"id": "5e502345",
"metadata": {},
"source": [
"\n",
"$$\n",
@@ -85,10 +81,8 @@
},
{
"cell_type": "markdown",
"id": "3ffe5c46",
"metadata": {
"collapsed": false
},
"id": "9cba2077",
"metadata": {},
"source": [
"### Random instance generator\n",
"\n",
@@ -103,10 +97,8 @@
},
{
"cell_type": "markdown",
"id": "fd6cb059",
"metadata": {
"collapsed": false
},
"id": "2bc62803",
"metadata": {},
"source": [
"### Example"
]
@@ -234,10 +226,8 @@
},
{
"cell_type": "markdown",
"id": "307ab9bf",
"metadata": {
"collapsed": false
},
"id": "d0d3ea42",
"metadata": {},
"source": [
"\n",
"$$\n",
@@ -304,10 +294,8 @@
},
{
"cell_type": "markdown",
"id": "5caf77ba",
"metadata": {
"collapsed": false
},
"id": "f12a066f",
"metadata": {},
"source": [
"### Example"
]
@@ -472,10 +460,8 @@
},
{
"cell_type": "markdown",
"id": "838ef9d8",
"metadata": {
"collapsed": false
},
"id": "4e701397",
"metadata": {},
"source": [
"### Example"
]
@@ -599,10 +585,8 @@
},
{
"cell_type": "markdown",
"id": "96a26e2d",
"metadata": {
"collapsed": false
},
"id": "d5254e7a",
"metadata": {},
"source": [
"### Formulation\n",
"\n",
@@ -745,10 +729,8 @@
},
{
"cell_type": "markdown",
"id": "7c5d228d",
"metadata": {
"collapsed": false
},
"id": "19342eb1",
"metadata": {},
"source": [
"### Formulation\n",
"\n",
@@ -757,10 +739,8 @@
},
{
"cell_type": "markdown",
"id": "7361cea0",
"metadata": {
"collapsed": false
},
"id": "0391b35b",
"metadata": {},
"source": [
"$$\n",
"\\begin{align*}\n",
@@ -776,10 +756,8 @@
},
{
"cell_type": "markdown",
"id": "c32306f4",
"metadata": {
"collapsed": false
},
"id": "c2d7df7b",
"metadata": {},
"source": [
"### Random instance generator\n",
"\n",
@@ -794,9 +772,12 @@
{
"cell_type": "code",
"execution_count": 5,
"id": "4607dbda",
"id": "cc797da7",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
@@ -888,10 +869,8 @@
},
{
"cell_type": "markdown",
"id": "93235cdd",
"metadata": {
"collapsed": false
},
"id": "2f74dd10",
"metadata": {},
"source": [
"$$\n",
"\\begin{align*}\n",
@@ -905,10 +884,8 @@
},
{
"cell_type": "markdown",
"id": "90b9e623",
"metadata": {
"collapsed": false
},
"id": "ef030168",
"metadata": {},
"source": [
"\n",
"### Random instance generator\n",
@@ -1018,10 +995,8 @@
},
{
"cell_type": "markdown",
"id": "aa307ff0",
"metadata": {
"collapsed": false
},
"id": "da3ca69c",
"metadata": {},
"source": [
"### Formulation\n",
"\n",
@@ -1030,10 +1005,8 @@
},
{
"cell_type": "markdown",
"id": "a5436195",
"metadata": {
"collapsed": false
},
"id": "9cf296e9",
"metadata": {},
"source": [
"$$\n",
"\\begin{align*}\n",
@@ -1050,10 +1023,8 @@
},
{
"cell_type": "markdown",
"id": "df26c9f5",
"metadata": {
"collapsed": false
},
"id": "eba3dbe5",
"metadata": {},
"source": [
"### Random instance generator\n",
"\n",
@@ -1070,10 +1041,8 @@
},
{
"cell_type": "markdown",
"id": "0fd000fe",
"metadata": {
"collapsed": false
},
"id": "61f16c56",
"metadata": {},
"source": [
"### Example"
]
@@ -1081,9 +1050,12 @@
{
"cell_type": "code",
"execution_count": 7,
"id": "6ee78519",
"id": "9d0c56c6",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
@@ -1196,10 +1168,8 @@
},
{
"cell_type": "markdown",
"id": "fd30f83e",
"metadata": {
"collapsed": false
},
"id": "7048d771",
"metadata": {},
"source": [
"\n",
"<div class=\"alert alert-info\">\n",
@@ -1215,10 +1185,8 @@
},
{
"cell_type": "markdown",
"id": "1da000b8",
"metadata": {
"collapsed": false
},
"id": "bec5ee1c",
"metadata": {},
"source": [
"\n",
"$$\n",
@@ -1251,10 +1219,8 @@
},
{
"cell_type": "markdown",
"id": "721f7b0c",
"metadata": {
"collapsed": false
},
"id": "4a1ffb4c",
"metadata": {},
"source": [
"\n",
"The first set of inequalities enforces minimum up-time constraints: if unit $g$ is down at time $t$, then it cannot start up during the previous $L_g$ time steps. The second set of inequalities enforces minimum down-time constraints, and is symmetrical to the previous one. The third set ensures that if unit $g$ starts up at time $t$, then the start up variable must be one. The fourth set ensures that demand is satisfied at each time period. The fifth and sixth sets enforce bounds to the quantity of power generated by each unit.\n",
@@ -1268,10 +1234,8 @@
},
{
"cell_type": "markdown",
"id": "f49a5e24",
"metadata": {
"collapsed": false
},
"id": "01bed9fc",
"metadata": {},
"source": [
"\n",
"### Random instance generator\n",
@@ -1287,10 +1251,8 @@
},
{
"cell_type": "markdown",
"id": "cae4f51a",
"metadata": {
"collapsed": false
},
"id": "855b87b4",
"metadata": {},
"source": [
"### Example"
]
@@ -1298,9 +1260,12 @@
{
"cell_type": "code",
"execution_count": 8,
"id": "2d7295e0",
"id": "6217da7c",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
@@ -1440,10 +1405,8 @@
},
{
"cell_type": "markdown",
"id": "09ba5ccf",
"metadata": {
"collapsed": false
},
"id": "91f5781a",
"metadata": {},
"source": [
"\n",
"### Formulation\n",
@@ -1453,10 +1416,8 @@
},
{
"cell_type": "markdown",
"id": "c72baa43",
"metadata": {
"collapsed": false
},
"id": "544754cb",
"metadata": {},
"source": [
" $$\n",
"\\begin{align*}\n",
@@ -1472,10 +1433,8 @@
},
{
"cell_type": "markdown",
"id": "43bb19ae",
"metadata": {
"collapsed": false
},
"id": "35c99166",
"metadata": {},
"source": [
"### Random instance generator\n",
"\n",
@@ -1573,9 +1532,12 @@
{
"cell_type": "code",
"execution_count": null,
"id": "c0a76d28",
"id": "9f12e91f",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": []
@@ -1583,7 +1545,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -1597,7 +1559,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
"version": "3.9.16"
}
},
"nbformat": 4,

View File

@@ -121,7 +121,7 @@
</li>
<li class="toctree-l1">
<a class="reference internal" href="../solvers/">
8. Solvers
8. LearningSolver
</a>
</li>
</ul>
@@ -441,8 +441,8 @@
<h1><span class="section-number">4. </span>Benchmark Problems<a class="headerlink" href="#Benchmark-Problems" title="Permalink to this headline"></a></h1>
<div class="section" id="Overview">
<h2><span class="section-number">4.1. </span>Overview<a class="headerlink" href="#Overview" title="Permalink to this headline"></a></h2>
<p>Benchmark sets such as <a class="reference external" href="https://miplib.zib.de/">MIPLIB</a> or <a class="reference external" href="http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/">TSPLIB</a> are usually employed to evaluate the performance of conventional MIP solvers. Two shortcomings, unfortunately, make existing benchmark sets less than ideal for evaluating the performance of learning-enhanced MIP solvers: (i) while existing benchmark sets typically contain hundreds or thousands of instances, machine learning (ML) methods typically benefit from having
orders of magnitude more instances available for training; (ii) current machine learning methods typically provide best performance on sets of homogeneous instances, buch general-purpose benchmark sets contain relatively few examples of each problem type.</p>
<p>Benchmark sets such as <a class="reference external" href="https://miplib.zib.de/">MIPLIB</a> or <a class="reference external" href="http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/">TSPLIB</a> are usually employed to evaluate the performance of conventional MIP solvers. Two shortcomings, however, make existing benchmark sets less suitable for evaluating the performance of learning-enhanced MIP solvers: (i) while existing benchmark sets typically contain hundreds or thousands of instances, machine learning (ML) methods typically benefit from having orders of
magnitude more instances available for training; (ii) current machine learning methods typically provide the best performance on sets of homogeneous instances, but general-purpose benchmark sets contain relatively few examples of each problem type.</p>
<p>To tackle this challenge, MIPLearn provides random instance generators for a wide variety of classical optimization problems, covering applications from different fields, which can be used to evaluate new learning-enhanced MIP techniques in a measurable and reproducible way. As of MIPLearn 0.3, nine problem generators are available, each customizable with user-provided probability distributions and flexible parameters. The generators can be configured, for example, to produce large sets of very
similar instances of the same size, where only the objective function changes, or more diverse sets of instances, with various sizes and characteristics, belonging to a particular problem class.</p>
<p>In the following, we describe the problems included in the library, their MIP formulation and the generation algorithm.</p>

View File

@@ -1,20 +1,63 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "3371f072-be1e-4c47-b765-b5d30fdbfae6",
"id": "9ec1907b-db93-4840-9439-c9005902b968",
"metadata": {},
"source": [
"# Solvers\n",
"# Learning Solver\n",
"\n",
"## LearningSolver\n",
"On previous pages, we discussed various components of the MIPLearn framework, including training data collectors, feature extractors, and individual machine learning components. In this page, we introduce **LearningSolver**, the main class of the framework which integrates all the aforementioned components into a cohesive whole. Using **LearningSolver** involves three steps: (i) configuring the solver; (ii) training the ML components; and (iii) solving new MIP instances. In the following, we describe each of these steps, then conclude with a complete runnable example.\n",
"\n",
"### Example"
"### Configuring the solver\n",
"\n",
"**LearningSolver** is composed by multiple individual machine learning components, each targeting a different part of the solution process, or implementing a different machine learning strategy. This architecture allows strategies to be easily enabled, disabled or customized, making the framework flexible. By default, no components are provided and **LearningSolver** is equivalent to a traditional MIP solver. To specify additional components, the `components` constructor argument may be used:\n",
"\n",
"```python\n",
"solver = LearningSolver(\n",
" components=[\n",
" comp1,\n",
" comp2,\n",
" comp3,\n",
" ]\n",
")\n",
"```\n",
"\n",
"In this example, three components `comp1`, `comp2` and `comp3` are provided. The strategies implemented by these components are applied sequentially when solving the problem. For example, `comp1` and `comp2` could fix a subset of decision variables, while `comp3` constructs a warm start for the remaining problem.\n",
"\n",
"### Training and solving new instances\n",
"\n",
"Once a solver is configured, its ML components need to be trained. This can be achieved by the `solver.fit` method, as illustrated below. The method accepts a list of HDF5 files and trains each individual component sequentially. Once the solver is trained, new instances can be solved using `solver.optimize`. The method returns a dictionary of statistics collected by each component, such as the number of variables fixed.\n",
"\n",
"```python\n",
"# Build instances\n",
"train_data = ...\n",
"test_data = ...\n",
"\n",
"# Collect training data\n",
"bc = BasicCollector()\n",
"bc.collect(train_data, build_model)\n",
"\n",
"# Build solver\n",
"solver = LearningSolver(...)\n",
"\n",
"# Train components\n",
"solver.fit(train_data)\n",
"\n",
"# Solve a new test instance\n",
"stats = solver.optimize(test_data[0], build_model)\n",
"\n",
"```\n",
"\n",
"### Complete example\n",
"\n",
"In the example below, we illustrate the usage of **LearningSolver** by building instances of the Traveling Salesman Problem, collecting training data, training the ML components, then solving a new instance."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 3,
"id": "92b09b98",
"metadata": {
"collapsed": false,
@@ -23,21 +66,15 @@
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/axavier/Software/anaconda3/envs/miplearn/lib/python3.8/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Restricted license - for non-production use only - expires 2023-10-25\n",
"Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (linux64)\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
"Model fingerprint: 0x6ddcd141\n",
"Coefficient statistics:\n",
@@ -52,11 +89,14 @@
" 0 6.3600000e+02 1.700000e+01 0.000000e+00 0s\n",
" 15 2.7610000e+03 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 15 iterations and 0.01 seconds (0.00 work units)\n",
"Solved in 15 iterations and 0.00 seconds (0.00 work units)\n",
"Optimal objective 2.761000000e+03\n",
"Set parameter LazyConstraints to value 1\n",
"Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (linux64)\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
"Model fingerprint: 0x74ca3d0a\n",
"Variable types: 0 continuous, 45 integer (45 binary)\n",
@@ -66,7 +106,7 @@
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"\n",
"User MIP start produced solution with objective 2796 (0.01s)\n",
"User MIP start produced solution with objective 2796 (0.00s)\n",
"Loaded user MIP start with objective 2796\n",
"\n",
"Presolve time: 0.00s\n",
@@ -78,48 +118,32 @@
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 2761.00000 0 - 2796.00000 2761.00000 1.25% - 0s\n",
" 0 0 cutoff 0 2796.00000 2796.00000 0.00% - 0s\n",
"\n",
"Cutting planes:\n",
" Lazy constraints: 3\n",
"\n",
"Explored 1 nodes (15 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"Explored 1 nodes (16 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 32 (of 32 available processors)\n",
"\n",
"Solution count 1: 2796 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 2.796000000000e+03, best bound 2.796000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 103, time in user-callback 0.00 sec\n",
"Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (linux64)\n",
"Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n",
"Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
"Model fingerprint: 0x74ca3d0a\n",
"Variable types: 0 continuous, 45 integer (45 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [4e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Presolved: 10 rows, 45 columns, 90 nonzeros\n",
"\n",
"Continuing optimization...\n",
"\n",
"\n",
"Cutting planes:\n",
" Lazy constraints: 3\n",
"\n",
"Explored 1 nodes (15 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 12 (of 12 available processors)\n",
"\n",
"Solution count 1: 2796 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 2.796000000000e+03, best bound 2.796000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 27, time in user-callback 0.00 sec\n"
"User-callback calls 110, time in user-callback 0.00 sec\n"
]
},
{
"data": {
"text/plain": [
"{'WS: Count': 1, 'WS: Number of variables set': 41.0}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
@@ -135,7 +159,7 @@
"from miplearn.components.primal.actions import SetWarmStart\n",
"from miplearn.components.primal.indep import IndependentVarsPrimalComponent\n",
"from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n",
"from miplearn.io import save\n",
"from miplearn.io import write_pkl_gz\n",
"from miplearn.problems.tsp import (\n",
" TravelingSalesmanGenerator,\n",
" build_tsp_model,\n",
@@ -157,7 +181,7 @@
").generate(50)\n",
"\n",
"# Save instance data to data/tsp/00000.pkl.gz, data/tsp/00001.pkl.gz, ...\n",
"all_data = save(data, \"data/tsp\")\n",
"all_data = write_pkl_gz(data, \"data/tsp\")\n",
"\n",
"# Split train/test data\n",
"train_data = all_data[:40]\n",
@@ -187,12 +211,12 @@
"solver.fit(train_data)\n",
"\n",
"# Solve a test instance\n",
"solver.optimize(test_data[0], build_tsp_model);"
"solver.optimize(test_data[0], build_tsp_model)\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "e27d2cbd-5341-461d-bbc1-8131aee8d949",
"metadata": {},
"outputs": [],
@@ -215,7 +239,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
"version": "3.9.12"
}
},
"nbformat": 4,

View File

@@ -5,7 +5,7 @@
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>8. Solvers &#8212; MIPLearn 0.3</title>
<title>8. Learning Solver &#8212; MIPLearn 0.3</title>
<link href="../../_static/css/theme.css" rel="stylesheet" />
<link href="../../_static/css/index.c5995385ac14fb8791e8eb36b4908be2.css" rel="stylesheet" />
@@ -25,10 +25,6 @@
<link rel="stylesheet" href="../../_static/pygments.css" type="text/css" />
<link rel="stylesheet" href="../../_static/sphinx-book-theme.acff12b8f9c144ce68a297486a2fa670.css" type="text/css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/custom.css" />
<link rel="preload" as="script" href="../../_static/js/index.1c5a1a01449ed65a7b51.js">
@@ -122,7 +118,7 @@
</li>
<li class="toctree-l1 current active">
<a class="current reference internal" href="#">
8. Solvers
8. Learning Solver
</a>
</li>
</ul>
@@ -225,16 +221,19 @@
<nav id="bd-toc-nav">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#LearningSolver">
8.1. LearningSolver
<a class="reference internal nav-link" href="#Configuring-the-solver">
8.1. Configuring the solver
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Training-and-solving-new-instances">
8.2. Training and solving new instances
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Complete-example">
8.3. Complete example
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Example">
Example
</a>
</li>
</ul>
</li>
</ul>
@@ -247,14 +246,51 @@
<div>
<div class="section" id="Solvers">
<h1><span class="section-number">8. </span>Solvers<a class="headerlink" href="#Solvers" title="Permalink to this headline"></a></h1>
<div class="section" id="LearningSolver">
<h2><span class="section-number">8.1. </span>LearningSolver<a class="headerlink" href="#LearningSolver" title="Permalink to this headline"></a></h2>
<div class="section" id="Example">
<h3>Example<a class="headerlink" href="#Example" title="Permalink to this headline"></a></h3>
<div class="section" id="Learning-Solver">
<h1><span class="section-number">8. </span>Learning Solver<a class="headerlink" href="#Learning-Solver" title="Permalink to this headline"></a></h1>
<p>On previous pages, we discussed various components of the MIPLearn framework, including training data collectors, feature extractors, and individual machine learning components. On this page, we introduce <strong>LearningSolver</strong>, the main class of the framework, which integrates all the aforementioned components into a cohesive whole. Using <strong>LearningSolver</strong> involves three steps: (i) configuring the solver; (ii) training the ML components; and (iii) solving new MIP instances. In the following, we
describe each of these steps, then conclude with a complete runnable example.</p>
<div class="section" id="Configuring-the-solver">
<h2><span class="section-number">8.1. </span>Configuring the solver<a class="headerlink" href="#Configuring-the-solver" title="Permalink to this headline"></a></h2>
<p><strong>LearningSolver</strong> is composed of multiple individual machine learning components, each targeting a different part of the solution process or implementing a different machine learning strategy. This architecture allows strategies to be easily enabled, disabled, or customized, making the framework flexible. By default, no components are provided, and <strong>LearningSolver</strong> is equivalent to a traditional MIP solver. To specify additional components, the <code class="docutils literal notranslate"><span class="pre">components</span></code> constructor argument may be used:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">solver</span> <span class="o">=</span> <span class="n">LearningSolver</span><span class="p">(</span>
<span class="n">components</span><span class="o">=</span><span class="p">[</span>
<span class="n">comp1</span><span class="p">,</span>
<span class="n">comp2</span><span class="p">,</span>
<span class="n">comp3</span><span class="p">,</span>
<span class="p">]</span>
<span class="p">)</span>
</pre></div>
</div>
<p>In this example, three components <code class="docutils literal notranslate"><span class="pre">comp1</span></code>, <code class="docutils literal notranslate"><span class="pre">comp2</span></code> and <code class="docutils literal notranslate"><span class="pre">comp3</span></code> are provided. The strategies implemented by these components are applied sequentially when solving the problem. For example, <code class="docutils literal notranslate"><span class="pre">comp1</span></code> and <code class="docutils literal notranslate"><span class="pre">comp2</span></code> could fix a subset of decision variables, while <code class="docutils literal notranslate"><span class="pre">comp3</span></code> constructs a warm start for the remaining problem.</p>
</div>
<div class="section" id="Training-and-solving-new-instances">
<h2><span class="section-number">8.2. </span>Training and solving new instances<a class="headerlink" href="#Training-and-solving-new-instances" title="Permalink to this headline"></a></h2>
<p>Once a solver is configured, its ML components need to be trained. This can be done by calling the <code class="docutils literal notranslate"><span class="pre">solver.fit</span></code> method, as illustrated below. The method accepts a list of HDF5 files and trains each individual component sequentially. Once the solver is trained, new instances can be solved with <code class="docutils literal notranslate"><span class="pre">solver.optimize</span></code>, which returns a dictionary of statistics collected by each component, such as the number of variables fixed.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># Build instances</span>
<span class="n">train_data</span> <span class="o">=</span> <span class="o">...</span>
<span class="n">test_data</span> <span class="o">=</span> <span class="o">...</span>
<span class="c1"># Collect training data</span>
<span class="n">bc</span> <span class="o">=</span> <span class="n">BasicCollector</span><span class="p">()</span>
<span class="n">bc</span><span class="o">.</span><span class="n">collect</span><span class="p">(</span><span class="n">train_data</span><span class="p">,</span> <span class="n">build_model</span><span class="p">)</span>
<span class="c1"># Build solver</span>
<span class="n">solver</span> <span class="o">=</span> <span class="n">LearningSolver</span><span class="p">(</span><span class="o">...</span><span class="p">)</span>
<span class="c1"># Train components</span>
<span class="n">solver</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">train_data</span><span class="p">)</span>
<span class="c1"># Solve a new test instance</span>
<span class="n">stats</span> <span class="o">=</span> <span class="n">solver</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">test_data</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">build_model</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="section" id="Complete-example">
<h2><span class="section-number">8.3. </span>Complete example<a class="headerlink" href="#Complete-example" title="Permalink to this headline"></a></h2>
<p>In the example below, we illustrate the usage of <strong>LearningSolver</strong> by building instances of the Traveling Salesman Problem, collecting training data, training the ML components, then solving a new instance.</p>
<div class="nbinput docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[1]:
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[3]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">random</span>
@@ -269,7 +305,7 @@
<span class="kn">from</span> <span class="nn">miplearn.components.primal.actions</span> <span class="kn">import</span> <span class="n">SetWarmStart</span>
<span class="kn">from</span> <span class="nn">miplearn.components.primal.indep</span> <span class="kn">import</span> <span class="n">IndependentVarsPrimalComponent</span>
<span class="kn">from</span> <span class="nn">miplearn.extractors.AlvLouWeh2017</span> <span class="kn">import</span> <span class="n">AlvLouWeh2017Extractor</span>
<span class="kn">from</span> <span class="nn">miplearn.io</span> <span class="kn">import</span> <span class="n">save</span>
<span class="kn">from</span> <span class="nn">miplearn.io</span> <span class="kn">import</span> <span class="n">write_pkl_gz</span>
<span class="kn">from</span> <span class="nn">miplearn.problems.tsp</span> <span class="kn">import</span> <span class="p">(</span>
<span class="n">TravelingSalesmanGenerator</span><span class="p">,</span>
<span class="n">build_tsp_model</span><span class="p">,</span>
@@ -291,7 +327,7 @@
<span class="p">)</span><span class="o">.</span><span class="n">generate</span><span class="p">(</span><span class="mi">50</span><span class="p">)</span>
<span class="c1"># Save instance data to data/tsp/00000.pkl.gz, data/tsp/00001.pkl.gz, ...</span>
<span class="n">all_data</span> <span class="o">=</span> <span class="n">save</span><span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="s2">&quot;data/tsp&quot;</span><span class="p">)</span>
<span class="n">all_data</span> <span class="o">=</span> <span class="n">write_pkl_gz</span><span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="s2">&quot;data/tsp&quot;</span><span class="p">)</span>
<span class="c1"># Split train/test data</span>
<span class="n">train_data</span> <span class="o">=</span> <span class="n">all_data</span><span class="p">[:</span><span class="mi">40</span><span class="p">]</span>
@@ -321,27 +357,20 @@
<span class="n">solver</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">train_data</span><span class="p">)</span>
<span class="c1"># Solve a test instance</span>
<span class="n">solver</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">test_data</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">build_tsp_model</span><span class="p">);</span>
</pre></div>
<span class="n">solver</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">test_data</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">build_tsp_model</span><span class="p">)</span>
<br/></pre></div>
</div>
</div>
<div class="nboutput docutils container">
<div class="prompt empty docutils container">
</div>
<div class="output_area stderr docutils container">
<div class="highlight"><pre>
/home/axavier/Software/anaconda3/envs/miplearn/lib/python3.8/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
</pre></div></div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt empty docutils container">
</div>
<div class="output_area docutils container">
<div class="highlight"><pre>
Restricted license - for non-production use only - expires 2023-10-25
Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (linux64)
Thread count: 6 physical cores, 12 logical processors, using up to 12 threads
Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)
CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
Optimize a model with 10 rows, 45 columns and 90 nonzeros
Model fingerprint: 0x6ddcd141
Coefficient statistics:
@@ -356,11 +385,14 @@ Iteration Objective Primal Inf. Dual Inf. Time
0 6.3600000e+02 1.700000e+01 0.000000e+00 0s
15 2.7610000e+03 0.000000e+00 0.000000e+00 0s
Solved in 15 iterations and 0.01 seconds (0.00 work units)
Solved in 15 iterations and 0.00 seconds (0.00 work units)
Optimal objective 2.761000000e+03
Set parameter LazyConstraints to value 1
Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (linux64)
Thread count: 6 physical cores, 12 logical processors, using up to 12 threads
Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)
CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
Optimize a model with 10 rows, 45 columns and 90 nonzeros
Model fingerprint: 0x74ca3d0a
Variable types: 0 continuous, 45 integer (45 binary)
@@ -370,7 +402,7 @@ Coefficient statistics:
Bounds range [1e+00, 1e+00]
RHS range [2e+00, 2e+00]
User MIP start produced solution with objective 2796 (0.01s)
User MIP start produced solution with objective 2796 (0.00s)
Loaded user MIP start with objective 2796
Presolve time: 0.00s
@@ -382,51 +414,34 @@ Root relaxation: objective 2.761000e+03, 14 iterations, 0.00 seconds (0.00 work
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 2761.00000 0 - 2796.00000 2761.00000 1.25% - 0s
0 0 cutoff 0 2796.00000 2796.00000 0.00% - 0s
Cutting planes:
Lazy constraints: 3
Explored 1 nodes (15 simplex iterations) in 0.01 seconds (0.00 work units)
Thread count was 12 (of 12 available processors)
Explored 1 nodes (16 simplex iterations) in 0.01 seconds (0.00 work units)
Thread count was 32 (of 32 available processors)
Solution count 1: 2796
Optimal solution found (tolerance 1.00e-04)
Best objective 2.796000000000e+03, best bound 2.796000000000e+03, gap 0.0000%
User-callback calls 103, time in user-callback 0.00 sec
Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (linux64)
Thread count: 6 physical cores, 12 logical processors, using up to 12 threads
Optimize a model with 10 rows, 45 columns and 90 nonzeros
Model fingerprint: 0x74ca3d0a
Variable types: 0 continuous, 45 integer (45 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [4e+01, 1e+03]
Bounds range [1e+00, 1e+00]
RHS range [2e+00, 2e+00]
Presolved: 10 rows, 45 columns, 90 nonzeros
Continuing optimization...
Cutting planes:
Lazy constraints: 3
Explored 1 nodes (15 simplex iterations) in 0.01 seconds (0.00 work units)
Thread count was 12 (of 12 available processors)
Solution count 1: 2796
Optimal solution found (tolerance 1.00e-04)
Best objective 2.796000000000e+03, best bound 2.796000000000e+03, gap 0.0000%
User-callback calls 27, time in user-callback 0.00 sec
User-callback calls 110, time in user-callback 0.00 sec
</pre></div></div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[3]:
</pre></div>
</div>
<div class="output_area docutils container">
<div class="highlight"><pre>
{&#39;WS: Count&#39;: 1, &#39;WS: Number of variables set&#39;: 41.0}
</pre></div></div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[1]:
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span>
@@ -434,7 +449,6 @@ User-callback calls 27, time in user-callback 0.00 sec
</div>
</div>
</div>
</div>
</div>