<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>3. Getting started (JuMP) &#8212; MIPLearn 0.4</title>
<link href="../../_static/css/theme.css" rel="stylesheet" />
<link href="../../_static/css/index.c5995385ac14fb8791e8eb36b4908be2.css" rel="stylesheet" />
<link rel="stylesheet"
href="../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin
href="../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin
href="../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" href="../../_static/pygments.css" type="text/css" />
<link rel="stylesheet" href="../../_static/sphinx-book-theme.acff12b8f9c144ce68a297486a2fa670.css" type="text/css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/nbsphinx-code-cells.css" />
<link rel="stylesheet" type="text/css" href="../../_static/custom.css" />
<link rel="preload" as="script" href="../../_static/js/index.1c5a1a01449ed65a7b51.js">
<script id="documentation_options" data-url_root="../../" src="../../_static/documentation_options.js"></script>
<script src="../../_static/jquery.js"></script>
<script src="../../_static/underscore.js"></script>
<script src="../../_static/doctools.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script src="../../_static/sphinx-book-theme.12a9622fbb08dcb3a2a40b2c02b83a57.js"></script>
<script async="async" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/latest.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/x-mathjax-config">MathJax.Hub.Config({"tex2jax": {"inlineMath": [["\\(", "\\)"]], "displayMath": [["\\[", "\\]"]], "processRefs": false, "processEnvironments": false}})</script>
<link rel="index" title="Index" href="../../genindex/" />
<link rel="search" title="Search" href="../../search/" />
<link rel="next" title="4. User cuts and lazy constraints" href="../cuts-gurobipy/" />
<link rel="prev" title="2. Getting started (Gurobipy)" href="../getting-started-gurobipy/" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="docsearch:language" content="en" />
</head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="80">
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<div class="col-12 col-md-3 bd-sidebar site-navigation show" id="site-navigation">
<div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../">
<h1 class="site-logo" id="site-title">MIPLearn 0.4</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../search/" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off" >
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main navigation">
<div class="bd-toc-item active">
<p class="caption">
<span class="caption-text">
Tutorials
</span>
</p>
<ul class="current nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../getting-started-pyomo/">
1. Getting started (Pyomo)
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../getting-started-gurobipy/">
2. Getting started (Gurobipy)
</a>
</li>
<li class="toctree-l1 current active">
<a class="current reference internal" href="#">
3. Getting started (JuMP)
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../cuts-gurobipy/">
4. User cuts and lazy constraints
</a>
</li>
</ul>
<p class="caption">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../guide/problems/">
5. Benchmark Problems
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../guide/collectors/">
6. Training Data Collectors
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../guide/features/">
7. Feature Extractors
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../guide/primal/">
8. Primal Components
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../guide/solvers/">
9. Learning Solver
</a>
</li>
</ul>
<p class="caption">
<span class="caption-text">
Python API Reference
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../api/problems/">
10. Benchmark Problems
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../api/collectors/">
11. Collectors &amp; Extractors
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../api/components/">
12. Components
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../api/solvers/">
13. Solvers
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../api/helpers/">
14. Helpers
</a>
</li>
</ul>
</div>
</nav> <!-- To handle the deprecated key -->
</div>
<main class="col py-md-3 pl-md-4 bd-content overflow-auto" role="main">
<div class="topbar container-xl fixed-top">
<div class="topbar-contents row">
<div class="col-12 col-md-3 bd-topbar-whitespace site-navigation show"></div>
<div class="col pl-md-4 topbar-main">
<button id="navbar-toggler" class="navbar-toggler ml-0" type="button" data-toggle="collapse"
data-toggle="tooltip" data-placement="bottom" data-target=".site-navigation" aria-controls="navbar-menu"
aria-expanded="true" aria-label="Toggle navigation" aria-controls="site-navigation"
title="Toggle navigation" data-toggle="tooltip" data-placement="left">
<i class="fas fa-bars"></i>
<i class="fas fa-arrow-left"></i>
<i class="fas fa-arrow-up"></i>
</button>
<div class="dropdown-buttons-trigger">
<button id="dropdown-buttons-trigger" class="btn btn-secondary topbarbtn" aria-label="Download this page"><i
class="fas fa-download"></i></button>
<div class="dropdown-buttons">
<!-- ipynb file if we had a myst markdown file -->
<!-- Download raw file -->
<a class="dropdown-buttons" href="../../_sources/tutorials/getting-started-jump.ipynb.txt"><button type="button"
class="btn btn-secondary topbarbtn" title="Download source file" data-toggle="tooltip"
data-placement="left">.ipynb</button></a>
<!-- Download PDF via print -->
<button type="button" id="download-print" class="btn btn-secondary topbarbtn" title="Print to PDF"
onClick="window.print()" data-toggle="tooltip" data-placement="left">.pdf</button>
</div>
</div>
<!-- Source interaction buttons -->
<!-- Full screen (wrap in <a> to have style consistency -->
<a class="full-screen-button"><button type="button" class="btn btn-secondary topbarbtn" data-toggle="tooltip"
data-placement="bottom" onclick="toggleFullScreen()" aria-label="Fullscreen mode"
title="Fullscreen mode"><i
class="fas fa-expand"></i></button></a>
<!-- Launch buttons -->
</div>
<!-- Table of contents -->
<div class="d-none d-md-block col-md-2 bd-toc show">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Introduction">
3.1. Introduction
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Installation">
3.2. Installation
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Modeling-a-simple-optimization-problem">
3.3. Modeling a simple optimization problem
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Generating-training-data">
3.4. Generating training data
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Training-and-solving-test-instances">
3.5. Training and solving test instances
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Accessing-the-solution">
3.6. Accessing the solution
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<div id="main-content" class="row">
<div class="col-12 col-md-9 pl-md-3 pr-md-0">
<div>
<div class="section" id="Getting-started-(JuMP)">
<h1><span class="section-number">3. </span>Getting started (JuMP)<a class="headerlink" href="#Getting-started-(JuMP)" title="Permalink to this headline"></a></h1>
<div class="section" id="Introduction">
<h2><span class="section-number">3.1. </span>Introduction<a class="headerlink" href="#Introduction" title="Permalink to this headline"></a></h2>
<p><strong>MIPLearn</strong> is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:</p>
<ol class="arabic simple">
<li><p>Install the Julia/JuMP version of MIPLearn</p></li>
<li><p>Model a simple optimization problem using JuMP</p></li>
<li><p>Generate training data and train the ML models</p></li>
<li><p>Use the ML models, together with Gurobi, to solve new instances</p></li>
</ol>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>MIPLearn is still in an early stage of development. If you run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!</p>
</div>
</div>
<div class="section" id="Installation">
<h2><span class="section-number">3.2. </span>Installation<a class="headerlink" href="#Installation" title="Permalink to this headline"></a></h2>
<p>MIPLearn is available in two versions:</p>
<ul class="simple">
<li><p>Python version, compatible with the Pyomo and Gurobipy modeling languages,</p></li>
<li><p>Julia version, compatible with the JuMP modeling language.</p></li>
</ul>
<p>In this tutorial, we will demonstrate how to install and use the Julia/JuMP version of the package. The first step is to install Julia on your machine. See the <a class="reference external" href="https://julialang.org/downloads/">official Julia website</a> for installation instructions. After Julia is installed, launch the Julia REPL, type <code class="docutils literal notranslate"><span class="pre">]</span></code> to enter package mode, then install MIPLearn:</p>
<div class="highlight-none notranslate"><div class="highlight"><pre><span></span>pkg&gt; add MIPLearn@0.4
</pre></div>
</div>
<p>In addition to MIPLearn itself, we will also install:</p>
<ul class="simple">
<li><p>the JuMP modeling language</p></li>
<li><p>Gurobi, a state-of-the-art commercial MILP solver</p></li>
<li><p>Distributions, to generate random data</p></li>
<li><p>PyCall, to access ML models from Scikit-Learn</p></li>
<li><p>Suppressor, to make the output cleaner</p></li>
</ul>
<div class="highlight-none notranslate"><div class="highlight"><pre><span></span>pkg&gt; add JuMP@1, Gurobi@1, Distributions@0.25, PyCall@1, Suppressor@0.2
</pre></div>
</div>
<div class="admonition note">
<p class="admonition-title">Note</p>
<ul class="simple">
<li><p>If you do not have a Gurobi license available, you can also follow the tutorial by installing an open-source solver, such as <code class="docutils literal notranslate"><span class="pre">HiGHS</span></code>, and replacing <code class="docutils literal notranslate"><span class="pre">Gurobi.Optimizer</span></code> by <code class="docutils literal notranslate"><span class="pre">HiGHS.Optimizer</span></code> in all the code examples, as sketched below.</p></li>
<li><p>In the code above, we install specific versions of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Julia projects.</p></li>
</ul>
</div>
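<p>For reference, a minimal sketch of that substitution looks like this (HiGHS is used here only as an example of an open-source solver):</p>
<div class="highlight-none notranslate"><div class="highlight"><pre><span></span>pkg&gt; add HiGHS
</pre></div>
</div>
<div class="highlight-julia notranslate"><div class="highlight"><pre><span></span>using JuMP
using HiGHS
# In build_uc_model (defined in the next section), construct the model
# with HiGHS instead of Gurobi:
model = Model(HiGHS.Optimizer)
</pre></div>
</div>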
</div>
<div class="section" id="Modeling-a-simple-optimization-problem">
<h2><span class="section-number">3.3. </span>Modeling a simple optimization problem<a class="headerlink" href="#Modeling-a-simple-optimization-problem" title="Permalink to this headline"></a></h2>
<p>To illustrate how MIPLearn can be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the <strong>unit commitment problem</strong>, a practical optimization problem solved daily by electric grid operators around the world.</p>
<p>Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power each generator should produce. More specifically, assume that the company owns <span class="math notranslate nohighlight">\(n\)</span> generators, denoted by <span class="math notranslate nohighlight">\(g_1, \ldots, g_n\)</span>. Each generator can either be online or offline. An online generator <span class="math notranslate nohighlight">\(g_i\)</span> can produce between <span class="math notranslate nohighlight">\(p^\text{min}_i\)</span> and <span class="math notranslate nohighlight">\(p^\text{max}_i\)</span> megawatts of power, and it costs the company
<span class="math notranslate nohighlight">\(c^\text{fix}_i + c^\text{var}_i y_i\)</span>, where <span class="math notranslate nohighlight">\(y_i\)</span> is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power to be produced needs to be exactly equal to the total demand <span class="math notranslate nohighlight">\(d\)</span> (in megawatts).</p>
<p>This simple problem can be modeled as a <em>mixed-integer linear optimization</em> problem as follows. For each generator <span class="math notranslate nohighlight">\(g_i\)</span>, let <span class="math notranslate nohighlight">\(x_i \in \{0,1\}\)</span> be a decision variable indicating whether <span class="math notranslate nohighlight">\(g_i\)</span> is online, and let <span class="math notranslate nohighlight">\(y_i \geq 0\)</span> be a decision variable indicating how much power <span class="math notranslate nohighlight">\(g_i\)</span> produces. The problem is then given by:</p>
<div class="math notranslate nohighlight">
\[\begin{split}\begin{align}
\text{minimize } \quad &amp; \sum_{i=1}^n \left( c^\text{fix}_i x_i + c^\text{var}_i y_i \right) \\
\text{subject to } \quad &amp; y_i \leq p^\text{max}_i x_i &amp; i=1,\ldots,n \\
&amp; y_i \geq p^\text{min}_i x_i &amp; i=1,\ldots,n \\
&amp; \sum_{i=1}^n y_i = d \\
&amp; x_i \in \{0,1\} &amp; i=1,\ldots,n \\
&amp; y_i \geq 0 &amp; i=1,\ldots,n
\end{align}\end{split}\]</div>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.</p>
</div>
<p>Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Julia and JuMP. We start by defining a data structure <code class="docutils literal notranslate"><span class="pre">UnitCommitmentData</span></code>, which holds all the input data.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[1]:
</pre></div>
</div>
<div class="input_area highlight-julia notranslate"><div class="highlight"><pre><span></span><span class="k">struct</span> <span class="kt">UnitCommitmentData</span>
<span class="w"> </span><span class="n">demand</span><span class="o">::</span><span class="kt">Float64</span>
<span class="w"> </span><span class="n">pmin</span><span class="o">::</span><span class="kt">Vector</span><span class="p">{</span><span class="kt">Float64</span><span class="p">}</span>
<span class="w"> </span><span class="n">pmax</span><span class="o">::</span><span class="kt">Vector</span><span class="p">{</span><span class="kt">Float64</span><span class="p">}</span>
<span class="w"> </span><span class="n">cfix</span><span class="o">::</span><span class="kt">Vector</span><span class="p">{</span><span class="kt">Float64</span><span class="p">}</span>
<span class="w"> </span><span class="n">cvar</span><span class="o">::</span><span class="kt">Vector</span><span class="p">{</span><span class="kt">Float64</span><span class="p">}</span>
<span class="k">end</span><span class="p">;</span>
</pre></div>
</div>
</div>
<p>Next, we write a <code class="docutils literal notranslate"><span class="pre">build_uc_model</span></code> function, which converts the input data into a concrete JuMP model. The function accepts <code class="docutils literal notranslate"><span class="pre">UnitCommitmentData</span></code>, the data structure we previously defined, or the path to a JLD2 file containing this data.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[2]:
</pre></div>
</div>
<div class="input_area highlight-julia notranslate"><div class="highlight"><pre><span></span><span class="k">using</span><span class="w"> </span><span class="n">MIPLearn</span>
<span class="k">using</span><span class="w"> </span><span class="n">JuMP</span>
<span class="k">using</span><span class="w"> </span><span class="n">Gurobi</span>
<span class="k">function</span><span class="w"> </span><span class="n">build_uc_model</span><span class="p">(</span><span class="n">data</span><span class="p">)</span>
<span class="w"> </span><span class="k">if</span><span class="w"> </span><span class="n">data</span><span class="w"> </span><span class="k">isa</span><span class="w"> </span><span class="kt">String</span>
<span class="w"> </span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">read_jld2</span><span class="p">(</span><span class="n">data</span><span class="p">)</span>
<span class="w"> </span><span class="k">end</span>
<span class="w"> </span><span class="n">model</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">Model</span><span class="p">(</span><span class="n">Gurobi</span><span class="o">.</span><span class="n">Optimizer</span><span class="p">)</span>
<span class="w"> </span><span class="n">G</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">1</span><span class="o">:</span><span class="n">length</span><span class="p">(</span><span class="n">data</span><span class="o">.</span><span class="n">pmin</span><span class="p">)</span>
<span class="w"> </span><span class="nd">@variable</span><span class="p">(</span><span class="n">model</span><span class="p">,</span><span class="w"> </span><span class="n">x</span><span class="p">[</span><span class="n">G</span><span class="p">],</span><span class="w"> </span><span class="n">Bin</span><span class="p">)</span>
<span class="w"> </span><span class="nd">@variable</span><span class="p">(</span><span class="n">model</span><span class="p">,</span><span class="w"> </span><span class="n">y</span><span class="p">[</span><span class="n">G</span><span class="p">]</span><span class="w"> </span><span class="o">&gt;=</span><span class="w"> </span><span class="mi">0</span><span class="p">)</span>
<span class="w"> </span><span class="nd">@objective</span><span class="p">(</span><span class="n">model</span><span class="p">,</span><span class="w"> </span><span class="n">Min</span><span class="p">,</span><span class="w"> </span><span class="n">sum</span><span class="p">(</span><span class="n">data</span><span class="o">.</span><span class="n">cfix</span><span class="p">[</span><span class="n">g</span><span class="p">]</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">x</span><span class="p">[</span><span class="n">g</span><span class="p">]</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">data</span><span class="o">.</span><span class="n">cvar</span><span class="p">[</span><span class="n">g</span><span class="p">]</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">y</span><span class="p">[</span><span class="n">g</span><span class="p">]</span><span class="w"> </span><span class="k">for</span><span class="w"> </span><span class="n">g</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="n">G</span><span class="p">))</span>
<span class="w"> </span><span class="nd">@constraint</span><span class="p">(</span><span class="n">model</span><span class="p">,</span><span class="w"> </span><span class="n">eq_max_power</span><span class="p">[</span><span class="n">g</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="n">G</span><span class="p">],</span><span class="w"> </span><span class="n">y</span><span class="p">[</span><span class="n">g</span><span class="p">]</span><span class="w"> </span><span class="o">&lt;=</span><span class="w"> </span><span class="n">data</span><span class="o">.</span><span class="n">pmax</span><span class="p">[</span><span class="n">g</span><span class="p">]</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">x</span><span class="p">[</span><span class="n">g</span><span class="p">])</span>
<span class="w"> </span><span class="nd">@constraint</span><span class="p">(</span><span class="n">model</span><span class="p">,</span><span class="w"> </span><span class="n">eq_min_power</span><span class="p">[</span><span class="n">g</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="n">G</span><span class="p">],</span><span class="w"> </span><span class="n">y</span><span class="p">[</span><span class="n">g</span><span class="p">]</span><span class="w"> </span><span class="o">&gt;=</span><span class="w"> </span><span class="n">data</span><span class="o">.</span><span class="n">pmin</span><span class="p">[</span><span class="n">g</span><span class="p">]</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">x</span><span class="p">[</span><span class="n">g</span><span class="p">])</span>
<span class="w"> </span><span class="nd">@constraint</span><span class="p">(</span><span class="n">model</span><span class="p">,</span><span class="w"> </span><span class="n">eq_demand</span><span class="p">,</span><span class="w"> </span><span class="n">sum</span><span class="p">(</span><span class="n">y</span><span class="p">[</span><span class="n">g</span><span class="p">]</span><span class="w"> </span><span class="k">for</span><span class="w"> </span><span class="n">g</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="n">G</span><span class="p">)</span><span class="w"> </span><span class="o">==</span><span class="w"> </span><span class="n">data</span><span class="o">.</span><span class="n">demand</span><span class="p">)</span>
<span class="w"> </span><span class="k">return</span><span class="w"> </span><span class="n">JumpModel</span><span class="p">(</span><span class="n">model</span><span class="p">)</span>
<span class="k">end</span><span class="p">;</span>
</pre></div>
</div>
</div>
<p>At this point, we can already use Gurobi to find optimal solutions to any instance of this problem. To illustrate this, let us solve a small instance with three generators:</p>
<div class="nbinput docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[3]:
</pre></div>
</div>
<div class="input_area highlight-julia notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">build_uc_model</span><span class="p">(</span>
<span class="w"> </span><span class="n">UnitCommitmentData</span><span class="p">(</span>
<span class="w"> </span><span class="mf">100.0</span><span class="p">,</span><span class="w"> </span><span class="c"># demand</span>
<span class="w"> </span><span class="p">[</span><span class="mi">10</span><span class="p">,</span><span class="w"> </span><span class="mi">20</span><span class="p">,</span><span class="w"> </span><span class="mi">30</span><span class="p">],</span><span class="w"> </span><span class="c"># pmin</span>
<span class="w"> </span><span class="p">[</span><span class="mi">50</span><span class="p">,</span><span class="w"> </span><span class="mi">60</span><span class="p">,</span><span class="w"> </span><span class="mi">70</span><span class="p">],</span><span class="w"> </span><span class="c"># pmax</span>
<span class="w"> </span><span class="p">[</span><span class="mi">700</span><span class="p">,</span><span class="w"> </span><span class="mi">600</span><span class="p">,</span><span class="w"> </span><span class="mi">500</span><span class="p">],</span><span class="w"> </span><span class="c"># cfix</span>
<span class="w"> </span><span class="p">[</span><span class="mf">1.5</span><span class="p">,</span><span class="w"> </span><span class="mf">2.0</span><span class="p">,</span><span class="w"> </span><span class="mf">2.5</span><span class="p">],</span><span class="w"> </span><span class="c"># cvar</span>
<span class="w"> </span><span class="p">)</span>
<span class="p">)</span>
<span class="n">model</span><span class="o">.</span><span class="n">optimize</span><span class="p">()</span>
<span class="nd">@show</span><span class="w"> </span><span class="n">objective_value</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">inner</span><span class="p">)</span>
<span class="nd">@show</span><span class="w"> </span><span class="kt">Vector</span><span class="p">(</span><span class="n">value</span><span class="o">.</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">inner</span><span class="p">[</span><span class="ss">:x</span><span class="p">]))</span>
<span class="nd">@show</span><span class="w"> </span><span class="kt">Vector</span><span class="p">(</span><span class="n">value</span><span class="o">.</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">inner</span><span class="p">[</span><span class="ss">:y</span><span class="p">]));</span>
</pre></div>
</div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt empty docutils container">
</div>
<div class="output_area docutils container">
<div class="highlight"><pre>
Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)
CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
Optimize a model with 7 rows, 6 columns and 15 nonzeros
Model fingerprint: 0x55e33a07
Variable types: 3 continuous, 3 integer (3 binary)
Coefficient statistics:
Matrix range [1e+00, 7e+01]
Objective range [2e+00, 7e+02]
Bounds range [0e+00, 0e+00]
RHS range [1e+02, 1e+02]
Presolve removed 2 rows and 1 columns
Presolve time: 0.00s
Presolved: 5 rows, 5 columns, 13 nonzeros
Variable types: 0 continuous, 5 integer (3 binary)
Found heuristic solution: objective 1400.0000000
Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 1035.00000 0 1 1400.00000 1035.00000 26.1% - 0s
0 0 1105.71429 0 1 1400.00000 1105.71429 21.0% - 0s
* 0 0 0 1320.0000000 1320.00000 0.00% - 0s
Explored 1 nodes (5 simplex iterations) in 0.00 seconds (0.00 work units)
Thread count was 32 (of 32 available processors)
Solution count 2: 1320 1400
Optimal solution found (tolerance 1.00e-04)
Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%
User-callback calls 371, time in user-callback 0.00 sec
objective_value(model.inner) = 1320.0
Vector(value.(model.inner[:x])) = [-0.0, 1.0, 1.0]
Vector(value.(model.inner[:y])) = [0.0, 60.0, 40.0]
</pre></div></div>
</div>
<p>Running the code above, we found that the optimal solution for our small problem instance costs $1320. It is achieved by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power.</p>
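<p>Indeed, substituting this solution into the objective function confirms the reported cost:</p>
<div class="math notranslate nohighlight">
\[c^\text{fix}_2 + c^\text{var}_2 y_2 + c^\text{fix}_3 + c^\text{var}_3 y_3 = 600 + 2.0 \times 60 + 500 + 2.5 \times 40 = 1320.\]</div>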
<div class="admonition note">
<p class="admonition-title">Notes</p>
<ul class="simple">
<li><p>In the example above, <code class="docutils literal notranslate"><span class="pre">JumpModel</span></code> is just a thin wrapper around a standard JuMP model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as <code class="docutils literal notranslate"><span class="pre">optimize</span></code>. For more control, and to query the solution, the original JuMP model can be accessed through <code class="docutils literal notranslate"><span class="pre">model.inner</span></code>, as illustrated above.</p></li>
</ul>
</div>
</div>
<div class="section" id="Generating-training-data">
<h2><span class="section-number">3.4. </span>Generating training data<a class="headerlink" href="#Generating-training-data" title="Permalink to this headline"></a></h2>
<p>Although Gurobi could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as is often the case in practice, it could make sense to spend some time upfront generating a <strong>trained</strong> solver, which can optimize new instances (similar to the ones it was trained on) faster.</p>
<p>In the following, we will use MIPLearn to train machine learning models that are able to predict the optimal solution for instances that follow a given probability distribution, and then provide this predicted solution to Gurobi as a warm start. Before we can train the models, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[4]:
</pre></div>
</div>
<div class="input_area highlight-julia notranslate"><div class="highlight"><pre><span></span><span class="k">using</span><span class="w"> </span><span class="n">Distributions</span>
<span class="k">using</span><span class="w"> </span><span class="n">Random</span>
<span class="k">function</span><span class="w"> </span><span class="n">random_uc_data</span><span class="p">(;</span><span class="w"> </span><span class="n">samples</span><span class="o">::</span><span class="kt">Int</span><span class="p">,</span><span class="w"> </span><span class="n">n</span><span class="o">::</span><span class="kt">Int</span><span class="p">,</span><span class="w"> </span><span class="n">seed</span><span class="o">::</span><span class="kt">Int</span><span class="o">=</span><span class="mi">42</span><span class="p">)</span><span class="o">::</span><span class="kt">Vector</span>
<span class="w"> </span><span class="n">Random</span><span class="o">.</span><span class="n">seed!</span><span class="p">(</span><span class="n">seed</span><span class="p">)</span>
<span class="w"> </span><span class="n">pmin</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">rand</span><span class="p">(</span><span class="n">Uniform</span><span class="p">(</span><span class="mi">100_000</span><span class="p">,</span><span class="w"> </span><span class="mi">500_000</span><span class="p">),</span><span class="w"> </span><span class="n">n</span><span class="p">)</span>
<span class="w"> </span><span class="n">pmax</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">pmin</span><span class="w"> </span><span class="o">.*</span><span class="w"> </span><span class="n">rand</span><span class="p">(</span><span class="n">Uniform</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span><span class="w"> </span><span class="mf">2.5</span><span class="p">),</span><span class="w"> </span><span class="n">n</span><span class="p">)</span>
<span class="w"> </span><span class="n">cfix</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">pmin</span><span class="w"> </span><span class="o">.*</span><span class="w"> </span><span class="n">rand</span><span class="p">(</span><span class="n">Uniform</span><span class="p">(</span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="mi">125</span><span class="p">),</span><span class="w"> </span><span class="n">n</span><span class="p">)</span>
<span class="w"> </span><span class="n">cvar</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">rand</span><span class="p">(</span><span class="n">Uniform</span><span class="p">(</span><span class="mf">1.25</span><span class="p">,</span><span class="w"> </span><span class="mf">1.50</span><span class="p">),</span><span class="w"> </span><span class="n">n</span><span class="p">)</span>
<span class="w"> </span><span class="k">return</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="n">UnitCommitmentData</span><span class="p">(</span>
<span class="w"> </span><span class="n">sum</span><span class="p">(</span><span class="n">pmax</span><span class="p">)</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="n">rand</span><span class="p">(</span><span class="n">Uniform</span><span class="p">(</span><span class="mf">0.5</span><span class="p">,</span><span class="w"> </span><span class="mf">0.75</span><span class="p">)),</span>
<span class="w"> </span><span class="n">pmin</span><span class="p">,</span>
<span class="w"> </span><span class="n">pmax</span><span class="p">,</span>
<span class="w"> </span><span class="n">cfix</span><span class="p">,</span>
<span class="w"> </span><span class="n">cvar</span><span class="p">,</span>
<span class="w"> </span><span class="p">)</span>
<span class="w"> </span><span class="k">for</span><span class="w"> </span><span class="n">_</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="mi">1</span><span class="o">:</span><span class="n">samples</span>
<span class="w"> </span><span class="p">]</span>
<span class="k">end</span><span class="p">;</span>
</pre></div>
</div>
</div>
<p>In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.</p>
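<p>For example, a variant of the generator above could also perturb the fixed costs from one instance to the next. The sketch below is purely illustrative; the name <code class="docutils literal notranslate"><span class="pre">random_uc_data_v2</span></code> is hypothetical and is not used in the rest of this tutorial.</p>
<div class="highlight-julia notranslate"><div class="highlight"><pre><span></span># Illustrative variant: also apply a small per-instance perturbation to fixed costs.
function random_uc_data_v2(; samples::Int, n::Int, seed::Int=42)::Vector
    Random.seed!(seed)
    pmin = rand(Uniform(100_000, 500_000), n)
    pmax = pmin .* rand(Uniform(2, 2.5), n)
    cfix = pmin .* rand(Uniform(100, 125), n)
    cvar = rand(Uniform(1.25, 1.50), n)
    return [
        UnitCommitmentData(
            sum(pmax) * rand(Uniform(0.5, 0.75)),      # random demand, as before
            pmin,
            pmax,
            cfix .* rand(Uniform(0.95, 1.05), n),      # perturbed fixed costs
            cvar,
        )
        for _ in 1:samples
    ]
end;
</pre></div>
</div>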
<p>Now we generate 500 instances of this problem, each one with 500 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold in memory the entire training data, as well as the concrete JuMP models. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple
machines. The code below generates the files <code class="docutils literal notranslate"><span class="pre">uc/train/00001.jld2</span></code>, <code class="docutils literal notranslate"><span class="pre">uc/train/00002.jld2</span></code>, etc., which contain the input data in JLD2 format.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[5]:
</pre></div>
</div>
<div class="input_area highlight-julia notranslate"><div class="highlight"><pre><span></span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">random_uc_data</span><span class="p">(</span><span class="n">samples</span><span class="o">=</span><span class="mi">500</span><span class="p">,</span><span class="w"> </span><span class="n">n</span><span class="o">=</span><span class="mi">500</span><span class="p">)</span>
<span class="n">train_data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">write_jld2</span><span class="p">(</span><span class="n">data</span><span class="p">[</span><span class="mi">1</span><span class="o">:</span><span class="mi">450</span><span class="p">],</span><span class="w"> </span><span class="s">&quot;uc/train&quot;</span><span class="p">)</span>
<span class="n">test_data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">write_jld2</span><span class="p">(</span><span class="n">data</span><span class="p">[</span><span class="mi">451</span><span class="o">:</span><span class="mi">500</span><span class="p">],</span><span class="w"> </span><span class="s">&quot;uc/test&quot;</span><span class="p">);</span>
</pre></div>
</div>
</div>
<p>Finally, we use <code class="docutils literal notranslate"><span class="pre">BasicCollector</span></code> to collect the optimal solutions and other useful training data for all training instances. The data is stored in HDF5 files <code class="docutils literal notranslate"><span class="pre">uc/train/00001.h5</span></code>, <code class="docutils literal notranslate"><span class="pre">uc/train/00002.h5</span></code>, etc. The optimization models are also exported to compressed MPS files <code class="docutils literal notranslate"><span class="pre">uc/train/00001.mps.gz</span></code>, <code class="docutils literal notranslate"><span class="pre">uc/train/00002.mps.gz</span></code>, etc.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[6]:
</pre></div>
</div>
<div class="input_area highlight-julia notranslate"><div class="highlight"><pre><span></span><span class="k">using</span><span class="w"> </span><span class="n">Suppressor</span>
<span class="nd">@suppress_out</span><span class="w"> </span><span class="k">begin</span>
<span class="w"> </span><span class="n">bc</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">BasicCollector</span><span class="p">()</span>
<span class="w"> </span><span class="n">bc</span><span class="o">.</span><span class="n">collect</span><span class="p">(</span><span class="n">train_data</span><span class="p">,</span><span class="w"> </span><span class="n">build_uc_model</span><span class="p">)</span>
<span class="k">end</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="Training-and-solving-test-instances">
<h2><span class="section-number">3.5. </span>Training and solving test instances<a class="headerlink" href="#Training-and-solving-test-instances" title="Permalink to this headline"></a></h2>
<p>With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use ML to generate a good warm start using <span class="math notranslate nohighlight">\(k\)</span>-nearest neighbors. More specifically, the strategy is to:</p>
<ol class="arabic simple">
<li><p>Memorize the optimal solutions of all training instances;</p></li>
<li><p>Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;</p></li>
<li><p>Merge their optimal solutions into a single partial solution; specifically, assign values only to the binary variables on which all of these solutions agree;</p></li>
<li><p>Provide this partial solution to the solver as a warm start.</p></li>
</ol>
<p>This simple strategy can be implemented as shown below, using <code class="docutils literal notranslate"><span class="pre">MemorizingPrimalComponent</span></code>. For more advanced strategies, and for the usage of more advanced classifiers, see the user guide.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[7]:
</pre></div>
</div>
<div class="input_area highlight-julia notranslate"><div class="highlight"><pre><span></span><span class="c"># Load kNN classifier from Scikit-Learn</span>
<span class="k">using</span><span class="w"> </span><span class="n">PyCall</span>
<span class="n">KNeighborsClassifier</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">pyimport</span><span class="p">(</span><span class="s">&quot;sklearn.neighbors&quot;</span><span class="p">)</span><span class="o">.</span><span class="n">KNeighborsClassifier</span>
<span class="c"># Build the MIPLearn component</span>
<span class="n">comp</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">MemorizingPrimalComponent</span><span class="p">(</span>
<span class="w"> </span><span class="n">clf</span><span class="o">=</span><span class="n">KNeighborsClassifier</span><span class="p">(</span><span class="n">n_neighbors</span><span class="o">=</span><span class="mi">25</span><span class="p">),</span>
<span class="w"> </span><span class="n">extractor</span><span class="o">=</span><span class="n">H5FieldsExtractor</span><span class="p">(</span>
<span class="w"> </span><span class="n">instance_fields</span><span class="o">=</span><span class="p">[</span><span class="s">&quot;static_constr_rhs&quot;</span><span class="p">],</span>
<span class="w"> </span><span class="p">),</span>
<span class="w"> </span><span class="n">constructor</span><span class="o">=</span><span class="n">MergeTopSolutions</span><span class="p">(</span><span class="mi">25</span><span class="p">,</span><span class="w"> </span><span class="p">[</span><span class="mf">0.0</span><span class="p">,</span><span class="w"> </span><span class="mf">1.0</span><span class="p">]),</span>
<span class="w"> </span><span class="n">action</span><span class="o">=</span><span class="n">SetWarmStart</span><span class="p">(),</span>
<span class="p">);</span>
</pre></div>
</div>
</div>
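<p>Conceptually, <code class="docutils literal notranslate"><span class="pre">MergeTopSolutions</span></code> above builds the partial solution described in step 3. The following plain-Julia sketch illustrates the idea of merging by unanimous agreement; it is only an illustration, not MIPLearn's actual implementation.</p>
<div class="highlight-julia notranslate"><div class="highlight"><pre><span></span># Illustrative sketch only: merge 0/1 solutions by unanimous agreement.
# `solutions` holds one 0/1 vector per neighboring training instance.
function merge_by_agreement(solutions::Vector{Vector{Float64}})
    n = length(solutions[1])
    partial = Vector{Union{Missing,Float64}}(missing, n)
    for j in 1:n
        vals = [round(s[j]) for s in solutions]
        if all(==(vals[1]), vals)
            partial[j] = vals[1]   # fix the variable only if all neighbors agree
        end
    end
    return partial                 # `missing` entries are left free for the solver
end;
</pre></div>
</div>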
<p>Having defined the ML strategy, we next construct <code class="docutils literal notranslate"><span class="pre">LearningSolver</span></code>, train the ML component and optimize one of the test instances.</p>
<div class="nbinput docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[8]:
</pre></div>
</div>
<div class="input_area highlight-julia notranslate"><div class="highlight"><pre><span></span><span class="n">solver_ml</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">LearningSolver</span><span class="p">(</span><span class="n">components</span><span class="o">=</span><span class="p">[</span><span class="n">comp</span><span class="p">])</span>
<span class="n">solver_ml</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">train_data</span><span class="p">)</span>
<span class="n">solver_ml</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">test_data</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span><span class="w"> </span><span class="n">build_uc_model</span><span class="p">);</span>
</pre></div>
</div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt empty docutils container">
</div>
<div class="output_area docutils container">
<div class="highlight"><pre>
Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)
CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
Model fingerprint: 0xd2378195
Variable types: 500 continuous, 500 integer (500 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+06]
Objective range [1e+00, 6e+07]
Bounds range [0e+00, 0e+00]
RHS range [2e+08, 2e+08]
User MIP start produced solution with objective 1.02165e+10 (0.00s)
Loaded user MIP start with objective 1.02165e+10
Presolve time: 0.00s
Presolved: 1001 rows, 1000 columns, 2500 nonzeros
Variable types: 500 continuous, 500 integer (500 binary)
Root relaxation: objective 1.021568e+10, 510 iterations, 0.00 seconds (0.00 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 1.0216e+10 0 1 1.0217e+10 1.0216e+10 0.01% - 0s
Explored 1 nodes (510 simplex iterations) in 0.01 seconds (0.00 work units)
Thread count was 32 (of 32 available processors)
Solution count 1: 1.02165e+10
Optimal solution found (tolerance 1.00e-04)
Best objective 1.021651058978e+10, best bound 1.021567971257e+10, gap 0.0081%
User-callback calls 169, time in user-callback 0.00 sec
</pre></div></div>
</div>
<p>By examining the solve log above, specifically the line <code class="docutils literal notranslate"><span class="pre">Loaded</span> <span class="pre">user</span> <span class="pre">MIP</span> <span class="pre">start</span> <span class="pre">with</span> <span class="pre">objective...</span></code>, we can see that MIPLearn was able to construct an initial solution which turned out to be very close to the optimal solution to the problem. Now let us repeat the code above, but with a solver that does not apply any ML strategies. Note that our previously defined component is not provided.</p>
<div class="nbinput docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[9]:
</pre></div>
</div>
<div class="input_area highlight-julia notranslate"><div class="highlight"><pre><span></span><span class="n">solver_baseline</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">LearningSolver</span><span class="p">(</span><span class="n">components</span><span class="o">=</span><span class="p">[])</span>
<span class="n">solver_baseline</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">train_data</span><span class="p">)</span>
<span class="n">solver_baseline</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">test_data</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span><span class="w"> </span><span class="n">build_uc_model</span><span class="p">);</span>
</pre></div>
</div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt empty docutils container">
</div>
<div class="output_area docutils container">
<div class="highlight"><pre>
Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)
CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
Model fingerprint: 0xb45c0594
Variable types: 500 continuous, 500 integer (500 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+06]
Objective range [1e+00, 6e+07]
Bounds range [0e+00, 0e+00]
RHS range [2e+08, 2e+08]
Presolve time: 0.00s
Presolved: 1001 rows, 1000 columns, 2500 nonzeros
Variable types: 500 continuous, 500 integer (500 binary)
Found heuristic solution: objective 1.071463e+10
Root relaxation: objective 1.021568e+10, 510 iterations, 0.00 seconds (0.00 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 1.0216e+10 0 1 1.0715e+10 1.0216e+10 4.66% - 0s
H 0 0 1.025162e+10 1.0216e+10 0.35% - 0s
0 0 1.0216e+10 0 1 1.0252e+10 1.0216e+10 0.35% - 0s
H 0 0 1.023090e+10 1.0216e+10 0.15% - 0s
H 0 0 1.022335e+10 1.0216e+10 0.07% - 0s
H 0 0 1.022281e+10 1.0216e+10 0.07% - 0s
H 0 0 1.021753e+10 1.0216e+10 0.02% - 0s
H 0 0 1.021752e+10 1.0216e+10 0.02% - 0s
0 0 1.0216e+10 0 3 1.0218e+10 1.0216e+10 0.02% - 0s
0 0 1.0216e+10 0 1 1.0218e+10 1.0216e+10 0.02% - 0s
H 0 0 1.021651e+10 1.0216e+10 0.01% - 0s
Explored 1 nodes (764 simplex iterations) in 0.03 seconds (0.02 work units)
Thread count was 32 (of 32 available processors)
Solution count 7: 1.02165e+10 1.02175e+10 1.02228e+10 ... 1.07146e+10
Optimal solution found (tolerance 1.00e-04)
Best objective 1.021651058978e+10, best bound 1.021573363741e+10, gap 0.0076%
User-callback calls 204, time in user-callback 0.00 sec
</pre></div></div>
</div>
<p>In the log above, the <code class="docutils literal notranslate"><span class="pre">MIP</span> <span class="pre">start</span></code> line is missing, and Gurobi had to start with a significantly inferior initial solution. The solver was still able to find the optimal solution in the end, but it had to rely on its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in running time, but the difference can be significant for larger problems.</p>
</div>
<div class="section" id="Accessing-the-solution">
<h2><span class="section-number">3.6. </span>Accessing the solution<a class="headerlink" href="#Accessing-the-solution" title="Permalink to this headline"></a></h2>
<p>In the example above, we used <code class="docutils literal notranslate"><span class="pre">LearningSolver.optimize</span></code> together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a JuMP model entirely in memory, using our trained solver.</p>
<div class="nbinput docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[10]:
</pre></div>
</div>
<div class="input_area highlight-julia notranslate"><div class="highlight"><pre><span></span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">random_uc_data</span><span class="p">(</span><span class="n">samples</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span><span class="w"> </span><span class="n">n</span><span class="o">=</span><span class="mi">500</span><span class="p">)[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">model</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">build_uc_model</span><span class="p">(</span><span class="n">data</span><span class="p">)</span>
<span class="n">solver_ml</span><span class="o">.</span><span class="n">optimize</span><span class="p">(</span><span class="n">model</span><span class="p">)</span>
<span class="nd">@show</span><span class="w"> </span><span class="n">objective_value</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">inner</span><span class="p">);</span>
</pre></div>
</div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt empty docutils container">
</div>
<div class="output_area docutils container">
<div class="highlight"><pre>
Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)
CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros
Model fingerprint: 0x974a7fba
Variable types: 500 continuous, 500 integer (500 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+06]
Objective range [1e+00, 6e+07]
Bounds range [0e+00, 0e+00]
RHS range [2e+08, 2e+08]
User MIP start produced solution with objective 9.86729e+09 (0.00s)
User MIP start produced solution with objective 9.86675e+09 (0.00s)
User MIP start produced solution with objective 9.86654e+09 (0.01s)
User MIP start produced solution with objective 9.8661e+09 (0.01s)
Loaded user MIP start with objective 9.8661e+09
Presolve time: 0.00s
Presolved: 1001 rows, 1000 columns, 2500 nonzeros
Variable types: 500 continuous, 500 integer (500 binary)
Root relaxation: objective 9.865344e+09, 510 iterations, 0.00 seconds (0.00 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 9.8653e+09 0 1 9.8661e+09 9.8653e+09 0.01% - 0s
Explored 1 nodes (510 simplex iterations) in 0.02 seconds (0.01 work units)
Thread count was 32 (of 32 available processors)
Solution count 4: 9.8661e+09 9.86654e+09 9.86675e+09 9.86729e+09
Optimal solution found (tolerance 1.00e-04)
Best objective 9.866096485614e+09, best bound 9.865343669936e+09, gap 0.0076%
User-callback calls 182, time in user-callback 0.00 sec
objective_value(model.inner) = 9.866096485613789e9
</pre></div></div>
</div>
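<p>As in the earlier example, the optimal values of the decision variables can then be queried from the inner JuMP model:</p>
<div class="highlight-julia notranslate"><div class="highlight"><pre><span></span>x_val = Vector(value.(model.inner[:x]))                    # commitment decisions (0/1)
y_val = Vector(value.(model.inner[:y]))                    # power output (MW)
online = [g for g in 1:length(x_val) if x_val[g] &gt; 0.5]    # indices of online generators
</pre></div>
</div>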
</div>
</div>
</div>
<div class='prev-next-bottom'>
<a class='left-prev' id="prev-link" href="../getting-started-gurobipy/" title="previous page"><span class="section-number">2. </span>Getting started (Gurobipy)</a>
<a class='right-next' id="next-link" href="../cuts-gurobipy/" title="next page"><span class="section-number">4. </span>User cuts and lazy constraints</a>
</div>
</div>
</div>
<footer class="footer mt-5 mt-md-0">
<div class="container">
<p>
&copy; Copyright 2020-2023, UChicago Argonne, LLC.<br/>
</p>
</div>
</footer>
</main>
</div>
</div>
<script src="../../_static/js/index.1c5a1a01449ed65a7b51.js"></script>
</body>
</html>