{
"cells": [
{
"cell_type": "markdown",
"id": "9b0c4eed",
"metadata": {},
"source": [
"# Getting started\n",
"\n",
"## Introduction\n",
"\n",
"**MIPLearn** is an open source framework that uses machine learning (ML) to accelerate the performance of both commercial and open source mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS, Cbc or SCIP). In this tutorial, we will:\n",
"\n",
"1. Install the Julia/JuMP version of MIPLearn\n",
"2. Model a simple optimization problem using JuMP\n",
"3. Generate training data and train the ML models\n",
"4. Use the ML models together with Gurobi to solve new instances\n",
"\n",
"<div class=\"alert alert-warning\">\n",
"Warning\n",
" \n",
"MIPLearn is still at an early stage of development. If you run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!\n",
" \n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"id": "f0d159b8",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"MIPLearn is available in two versions:\n",
"\n",
"- Python version, compatible with the Pyomo modeling language,\n",
"- Julia version, compatible with the JuMP modeling language.\n",
"\n",
"In this tutorial, we will demonstrate how to install and use the Julia/JuMP version of the package. The first step is to install the Julia programming language on your computer. [See the official instructions for more details](https://julialang.org/downloads/). Note that MIPLearn was developed and tested with Julia 1.6, and may not be compatible with newer versions of the language. After Julia is installed, launch its console and run the following commands to download and install the package:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b16685be",
"metadata": {},
"outputs": [],
"source": [
"using Pkg\n",
"Pkg.add(PackageSpec(url=\"https://github.com/ANL-CEEESA/MIPLearn.jl.git\"))"
]
},
{
"cell_type": "markdown",
"id": "e5ed7716",
"metadata": {},
"source": [
"In addition to MIPLearn itself, we will also install a few other packages that are required for this tutorial:\n",
"\n",
"- [**Gurobi**](https://www.gurobi.com/), a state-of-the-art MIP solver\n",
"- [**JuMP**](https://jump.dev/), an open source modeling language for Julia\n",
"- [**Distributions.jl**](https://github.com/JuliaStats/Distributions.jl), a statistics package that we will use to generate random inputs"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f88155c5",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m registry at `~/.julia/registries/General`\n",
"\u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m git-repo `https://github.com/JuliaRegistries/General.git`\n",
"\u001b[32m\u001b[1m Resolving\u001b[22m\u001b[39m package versions...\n",
"\u001b[32m\u001b[1m No Changes\u001b[22m\u001b[39m to `~/Packages/MIPLearn/dev/docs/jump-tutorials/Project.toml`\n",
"\u001b[32m\u001b[1m No Changes\u001b[22m\u001b[39m to `~/Packages/MIPLearn/dev/docs/jump-tutorials/Manifest.toml`\n"
]
}
],
"source": [
"using Pkg\n",
"Pkg.add([\n",
" PackageSpec(name=\"Gurobi\", version=\"0.9.14\"),\n",
" PackageSpec(name=\"JuMP\", version=\"0.21\"),\n",
" PackageSpec(name=\"Distributions\", version=\"0.25\"),\n",
" PackageSpec(name=\"Glob\", version=\"1\"),\n",
"])"
]
},
{
"cell_type": "markdown",
"id": "a0e1dda5",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
" \n",
"Note\n",
" \n",
"In the code above, we install specific versions of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Julia projects.\n",
" \n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "378b6a97",
"metadata": {},
"source": [
"## Modeling a simple optimization problem\n",
"\n",
"To illustrate how MIPLearn can be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the **unit commitment problem,** a practical optimization problem solved daily by electric grid operators around the world. \n",
"\n",
"Suppose that you work at a utility company, and that it is your job to decide which electrical generators should be online at a certain hour of the day, as well as how much power each generator should produce. More specifically, assume that your company owns $n$ generators, denoted by $g_1, \\ldots, g_n$. Each generator can either be online or offline. An online generator $g_i$ can produce between $p^\\text{min}_i$ and $p^\\text{max}_i$ megawatts of power, and it costs your company $c^\\text{fix}_i + c^\\text{var}_i y_i$, where $y_i$ is the amount of power produced. An offline generator produces nothing and costs nothing. You also know that the total amount of power produced needs to be exactly equal to the total demand $d$ (in megawatts). To minimize the cost to your company, which generators should be online, and how much power should they produce?\n",
"\n",
"This simple problem can be modeled as a *mixed-integer linear optimization* problem as follows. For each generator $g_i$, let $x_i \\in \\{0,1\\}$ be a decision variable indicating whether $g_i$ is online, and let $y_i \\geq 0$ be a decision variable indicating how much power $g_i$ produces. The problem is then given by:\n",
"\n",
"$$\n",
"\\begin{align}\n",
"\\text{minimize } \\quad & \\sum_{i=1}^n \\left( c^\\text{fix}_i x_i + c^\\text{var}_i y_i \\right) \\\\\n",
"\\text{subject to } \\quad & y_i \\leq p^\\text{max}_i x_i & i=1,\\ldots,n \\\\\n",
"& y_i \\geq p^\\text{min}_i x_i & i=1,\\ldots,n \\\\\n",
"& \\sum_{i=1}^n y_i = d \\\\\n",
"& x_i \\in \\{0,1\\} & i=1,\\ldots,n \\\\\n",
"& y_i \\geq 0 & i=1,\\ldots,n\n",
"\\end{align}\n",
"$$\n",
"\n",
"<div class=\"alert alert-info\">\n",
" \n",
"Note\n",
" \n",
"We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem. See benchmarks for more details.\n",
" \n",
"</div>\n",
"\n",
"Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Julia and JuMP. We start by defining a data structure that holds all the input data."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "798b2f6c",
"metadata": {},
"outputs": [],
"source": [
"Base.@kwdef struct UnitCommitmentData\n",
" demand::Float64\n",
" pmin::Vector{Float64}\n",
" pmax::Vector{Float64}\n",
" cfix::Vector{Float64}\n",
" cvar::Vector{Float64}\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "104b709a",
"metadata": {},
"source": [
"Next, we create a function that converts this data structure into a concrete JuMP model. For more details on the JuMP syntax, see [the official JuMP documentation](https://jump.dev/JuMP.jl/stable/)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a7c048e4",
"metadata": {},
"outputs": [],
"source": [
"using JuMP\n",
"\n",
"function build_uc_model(data::UnitCommitmentData)::Model\n",
" model = Model()\n",
" n = length(data.pmin)\n",
" @variable(model, x[1:n], Bin)\n",
" @variable(model, y[1:n] >= 0)\n",
" @objective(\n",
" model,\n",
" Min,\n",
" sum(\n",
" data.cfix[i] * x[i] +\n",
" data.cvar[i] * y[i]\n",
" for i in 1:n\n",
" )\n",
" )\n",
" @constraint(model, eq_max_power[i in 1:n], y[i] <= data.pmax[i] * x[i])\n",
" @constraint(model, eq_min_power[i in 1:n], y[i] >= data.pmin[i] * x[i])\n",
" @constraint(model, eq_demand, sum(y[i] for i in 1:n) == data.demand)\n",
" return model\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "5f10142e",
"metadata": {},
"source": [
"At this point, we can already use JuMP and any mixed-integer linear programming solver to find optimal solutions to any instance of this problem. To illustrate this, let us solve a small instance with three generators, using Gurobi:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "bc2022a4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"obj = 1320.0\n",
" x = [0.0, 1.0, 1.0]\n",
" y = [0.0, 60.0, 40.0]\n"
]
}
],
"source": [
"using Gurobi\n",
"\n",
"model = build_uc_model(\n",
" UnitCommitmentData(\n",
" demand = 100.0,\n",
" pmin = [10, 20, 30],\n",
" pmax = [50, 60, 70],\n",
" cfix = [700, 600, 500],\n",
" cvar = [1.5, 2.0, 2.5],\n",
" )\n",
")\n",
"\n",
"gurobi = optimizer_with_attributes(Gurobi.Optimizer, \"Threads\" => 1, \"Seed\" => 42)\n",
"set_optimizer(model, gurobi)\n",
"set_silent(model)\n",
"optimize!(model)\n",
"\n",
"println(\"obj = \", objective_value(model))\n",
"println(\" x = \", round.(value.(model[:x])))\n",
"println(\" y = \", round.(value.(model[:y]), digits=2));"
]
},
{
"cell_type": "markdown",
"id": "9ee6958b",
"metadata": {},
"source": [
"Running the code above, we found that the optimal solution for our small problem instance costs \\$1320. It is achieved by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power."
]
},
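{
"cell_type": "markdown",
"id": "c0s7chk1",
"metadata": {},
"source": [
"As a sanity check, we can verify this cost by hand: keeping generators 2 and 3 online incurs fixed costs of $600 + 500 = 1100$, plus variable costs of $2.0 \\times 60 + 2.5 \\times 40 = 220$, for a total of \\$1320."
]
},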
{
"cell_type": "markdown",
"id": "f34e3d44",
"metadata": {},
"source": [
"## Generating training data\n",
"\n",
"Although Gurobi could solve the small example above in a fraction of a second, it gets slower on larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as is often the case in practice, it could make sense to spend some time upfront generating a **trained** version of the solver, which can solve new instances (similar to the ones it was trained on) faster.\n",
"\n",
"In the following, we will use MIPLearn to train machine learning models that can be used to accelerate Gurobi's performance on a particular set of instances. More specifically, MIPLearn will train a model that is able to predict the optimal solution for instances that follow a given probability distribution, then provide this predicted solution to Gurobi as a warm start.\n",
"\n",
"Before we can train the model, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a498e1e1",
"metadata": {},
"outputs": [],
"source": [
"using Distributions\n",
"using Random\n",
"\n",
"function random_uc_data(; samples::Int, n::Int, seed=42)\n",
" Random.seed!(seed)\n",
" pmin = rand(Uniform(100, 500.0), n)\n",
" pmax = pmin .* rand(Uniform(2.0, 2.5), n)\n",
" cfix = pmin .* rand(Uniform(100.0, 125.0), n)\n",
" cvar = rand(Uniform(1.25, 1.5), n)\n",
" return [\n",
" UnitCommitmentData(;\n",
" pmin,\n",
" pmax,\n",
" cfix,\n",
" cvar,\n",
" demand = sum(pmax) * rand(Uniform(0.5, 0.75)),\n",
" )\n",
" for i in 1:samples\n",
" ]\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "e33bb12c",
"metadata": {},
"source": [
"In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.\n",
"\n",
"Now we generate 500 instances of this problem, each with 50 generators, and use 450 of them for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold the entire training data, as well as the concrete JuMP models, in memory. Files also make it much easier to solve multiple instances simultaneously, potentially even on multiple machines. We will cover parallel and distributed computing in a future tutorial. The code below generates the files `uc/train/00001.jld2`, `uc/train/00002.jld2`, etc., which contain the input data in [JLD2 format](https://github.com/JuliaIO/JLD2.jl)."
]
},
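{
"cell_type": "markdown",
"id": "rndgen2v",
"metadata": {},
"source": [
"For illustration, a variant of the generator that also draws fresh costs and production limits for each instance could be sketched as follows. This is only a sketch, and the name `random_uc_data_v2` is ours, not part of MIPLearn:\n",
"\n",
"```julia\n",
"function random_uc_data_v2(; samples::Int, n::Int, seed=42)\n",
"    Random.seed!(seed)\n",
"    return [\n",
"        begin\n",
"            # Draw fresh limits and costs for every instance\n",
"            pmin = rand(Uniform(100.0, 500.0), n)\n",
"            pmax = pmin .* rand(Uniform(2.0, 2.5), n)\n",
"            UnitCommitmentData(;\n",
"                pmin,\n",
"                pmax,\n",
"                cfix = pmin .* rand(Uniform(100.0, 125.0), n),\n",
"                cvar = rand(Uniform(1.25, 1.5), n),\n",
"                demand = sum(pmax) * rand(Uniform(0.5, 0.75)),\n",
"            )\n",
"        end\n",
"        for i in 1:samples\n",
"    ]\n",
"end;\n",
"```"
]
},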
{
"cell_type": "code",
"execution_count": 7,
"id": "5358a046",
"metadata": {},
"outputs": [],
"source": [
"using MIPLearn\n",
"data = random_uc_data(samples=500, n=50);\n",
"train_files = MIPLearn.save(data[1:450], \"uc/train/\")\n",
"test_files = MIPLearn.save(data[451:500], \"uc/test/\");"
]
},
{
"cell_type": "markdown",
"id": "38a27d1c",
"metadata": {},
"source": [
"Finally, we use `LearningSolver` to solve all the training instances. `LearningSolver` is the main component provided by MIPLearn, integrating MIP solvers with machine learning. The optimal solutions, along with other useful training data, are stored in the HDF5 files `uc/train/00001.h5`, `uc/train/00002.h5`, etc."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c341b12d",
"metadata": {},
"outputs": [],
"source": [
"solver = LearningSolver(gurobi)\n",
"solve!(solver, train_files, build_uc_model);"
]
},
{
"cell_type": "markdown",
"id": "189b4f60",
"metadata": {},
"source": [
"## Solving new instances\n",
"\n",
"With training data in hand, we can now fit the ML models using `MIPLearn.fit!`, then solve the test instances with `MIPLearn.solve!`, as shown below. The `tee=true` parameter asks MIPLearn to print the solver log to the screen."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1cf11450",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (linux64)\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 1 threads\n",
"Optimize a model with 101 rows, 100 columns and 250 nonzeros\n",
"Model fingerprint: 0xfb382c05\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+03]\n",
" Objective range [1e+00, 6e+04]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+04, 2e+04]\n",
"Presolve removed 100 rows and 50 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 50 columns, 50 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 7.0629410e+05 6.782322e+02 0.000000e+00 0s\n",
" 1 8.0678161e+05 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 1 iterations and 0.00 seconds\n",
"Optimal objective 8.067816095e+05\n",
"\n",
"User-callback calls 33, time in user-callback 0.00 sec\n",
"\n",
"Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (linux64)\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 1 threads\n",
"Optimize a model with 101 rows, 100 columns and 250 nonzeros\n",
"Model fingerprint: 0x7bb6bbd6\n",
"Variable types: 50 continuous, 50 integer (50 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+03]\n",
" Objective range [1e+00, 6e+04]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+04, 2e+04]\n",
"\n",
"User MIP start produced solution with objective 822175 (0.00s)\n",
"User MIP start produced solution with objective 812767 (0.00s)\n",
"User MIP start produced solution with objective 811628 (0.00s)\n",
"User MIP start produced solution with objective 809648 (0.01s)\n",
"User MIP start produced solution with objective 808536 (0.01s)\n",
"Loaded user MIP start with objective 808536\n",
"\n",
"Presolve time: 0.00s\n",
"Presolved: 101 rows, 100 columns, 250 nonzeros\n",
"Variable types: 50 continuous, 50 integer (50 binary)\n",
"\n",
"Root relaxation: objective 8.067816e+05, 55 iterations, 0.00 seconds\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 806781.610 0 1 808536.496 806781.610 0.22% - 0s\n",
"H 0 0 808091.02482 806781.610 0.16% - 0s\n",
" 0 0 807198.955 0 2 808091.025 807198.955 0.11% - 0s\n",
" 0 0 807198.955 0 1 808091.025 807198.955 0.11% - 0s\n",
" 0 0 807198.955 0 2 808091.025 807198.955 0.11% - 0s\n",
" 0 0 807226.059 0 3 808091.025 807226.059 0.11% - 0s\n",
" 0 0 807240.578 0 5 808091.025 807240.578 0.11% - 0s\n",
" 0 0 807240.663 0 5 808091.025 807240.663 0.11% - 0s\n",
" 0 0 807259.825 0 4 808091.025 807259.825 0.10% - 0s\n",
" 0 0 807275.314 0 5 808091.025 807275.314 0.10% - 0s\n",
" 0 0 807279.037 0 6 808091.025 807279.037 0.10% - 0s\n",
" 0 0 807291.881 0 8 808091.025 807291.881 0.10% - 0s\n",
" 0 0 807325.323 0 6 808091.025 807325.323 0.09% - 0s\n",
" 0 0 807326.015 0 7 808091.025 807326.015 0.09% - 0s\n",
" 0 0 807326.798 0 7 808091.025 807326.798 0.09% - 0s\n",
" 0 0 807328.550 0 8 808091.025 807328.550 0.09% - 0s\n",
" 0 0 807331.193 0 9 808091.025 807331.193 0.09% - 0s\n",
" 0 0 807332.143 0 7 808091.025 807332.143 0.09% - 0s\n",
" 0 0 807335.410 0 8 808091.025 807335.410 0.09% - 0s\n",
" 0 0 807335.452 0 8 808091.025 807335.452 0.09% - 0s\n",
" 0 0 807337.253 0 9 808091.025 807337.253 0.09% - 0s\n",
" 0 0 807337.409 0 9 808091.025 807337.409 0.09% - 0s\n",
" 0 0 807347.720 0 8 808091.025 807347.720 0.09% - 0s\n",
" 0 0 807352.765 0 7 808091.025 807352.765 0.09% - 0s\n",
" 0 0 807366.618 0 9 808091.025 807366.618 0.09% - 0s\n",
" 0 0 807368.345 0 10 808091.025 807368.345 0.09% - 0s\n",
" 0 0 807369.195 0 10 808091.025 807369.195 0.09% - 0s\n",
" 0 0 807392.319 0 8 808091.025 807392.319 0.09% - 0s\n",
" 0 0 807401.436 0 9 808091.025 807401.436 0.09% - 0s\n",
" 0 0 807405.685 0 8 808091.025 807405.685 0.08% - 0s\n",
" 0 0 807411.994 0 8 808091.025 807411.994 0.08% - 0s\n",
" 0 0 807424.710 0 9 808091.025 807424.710 0.08% - 0s\n",
" 0 0 807424.867 0 11 808091.025 807424.867 0.08% - 0s\n",
" 0 0 807427.428 0 12 808091.025 807427.428 0.08% - 0s\n",
" 0 0 807433.211 0 10 808091.025 807433.211 0.08% - 0s\n",
" 0 0 807439.215 0 10 808091.025 807439.215 0.08% - 0s\n",
" 0 0 807439.303 0 11 808091.025 807439.303 0.08% - 0s\n",
" 0 0 807443.312 0 11 808091.025 807443.312 0.08% - 0s\n",
" 0 0 807444.488 0 12 808091.025 807444.488 0.08% - 0s\n",
" 0 0 807444.499 0 13 808091.025 807444.499 0.08% - 0s\n",
" 0 0 807444.499 0 13 808091.025 807444.499 0.08% - 0s\n",
" 0 2 807445.982 0 13 808091.025 807445.982 0.08% - 0s\n",
"\n",
"Cutting planes:\n",
" Cover: 3\n",
" MIR: 18\n",
" StrongCG: 1\n",
" Flow cover: 3\n",
"\n",
"Explored 39 nodes (333 simplex iterations) in 0.03 seconds\n",
"Thread count was 1 (of 32 available processors)\n",
"\n",
"Solution count 6: 808091 808536 809648 ... 822175\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.080910248225e+05, best bound 8.080640878016e+05, gap 0.0033%\n",
"\n",
"User-callback calls 341, time in user-callback 0.00 sec\n",
"\n"
]
}
],
"source": [
"solver_ml = LearningSolver(gurobi)\n",
"fit!(solver_ml, train_files, build_uc_model)\n",
"solve!(solver_ml, test_files[1], build_uc_model, tee=true);"
]
},
{
"cell_type": "markdown",
"id": "872211e7",
"metadata": {},
"source": [
"By examining the solve log above, specifically the line `Loaded user MIP start with objective...`, we can see that MIPLearn was able to construct an initial solution which turned out to be near-optimal for the problem. Now let us repeat the code above, but using an untrained solver. Note that the `fit!` line is omitted."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "fc1e3629",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (linux64)\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 1 threads\n",
"Optimize a model with 101 rows, 100 columns and 250 nonzeros\n",
"Model fingerprint: 0xfb382c05\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+03]\n",
" Objective range [1e+00, 6e+04]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+04, 2e+04]\n",
"Presolve removed 100 rows and 50 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 50 columns, 50 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 7.0629410e+05 6.782322e+02 0.000000e+00 0s\n",
" 1 8.0678161e+05 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 1 iterations and 0.00 seconds\n",
"Optimal objective 8.067816095e+05\n",
"\n",
"User-callback calls 33, time in user-callback 0.00 sec\n",
"\n",
"Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (linux64)\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 1 threads\n",
"Optimize a model with 101 rows, 100 columns and 250 nonzeros\n",
"Model fingerprint: 0x899aac3d\n",
"Variable types: 50 continuous, 50 integer (50 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+03]\n",
" Objective range [1e+00, 6e+04]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+04, 2e+04]\n",
"Found heuristic solution: objective 893073.33620\n",
"Presolve time: 0.00s\n",
"Presolved: 101 rows, 100 columns, 250 nonzeros\n",
"Variable types: 50 continuous, 50 integer (50 binary)\n",
"\n",
"Root relaxation: objective 8.067816e+05, 55 iterations, 0.00 seconds\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 806781.610 0 1 893073.336 806781.610 9.66% - 0s\n",
"H 0 0 842766.25007 806781.610 4.27% - 0s\n",
"H 0 0 818273.05208 806781.610 1.40% - 0s\n",
" 0 0 807198.955 0 2 818273.052 807198.955 1.35% - 0s\n",
"H 0 0 813499.43980 807198.955 0.77% - 0s\n",
" 0 0 807246.085 0 3 813499.440 807246.085 0.77% - 0s\n",
" 0 0 807272.377 0 4 813499.440 807272.377 0.77% - 0s\n",
" 0 0 807284.557 0 1 813499.440 807284.557 0.76% - 0s\n",
" 0 0 807298.666 0 2 813499.440 807298.666 0.76% - 0s\n",
" 0 0 807305.559 0 6 813499.440 807305.559 0.76% - 0s\n",
"H 0 0 812223.58825 807305.559 0.61% - 0s\n",
" 0 0 807309.503 0 4 812223.588 807309.503 0.61% - 0s\n",
" 0 0 807339.469 0 4 812223.588 807339.469 0.60% - 0s\n",
" 0 0 807344.135 0 6 812223.588 807344.135 0.60% - 0s\n",
" 0 0 807359.565 0 7 812223.588 807359.565 0.60% - 0s\n",
" 0 0 807371.997 0 8 812223.588 807371.997 0.60% - 0s\n",
" 0 0 807372.245 0 8 812223.588 807372.245 0.60% - 0s\n",
" 0 0 807378.545 0 9 812223.588 807378.545 0.60% - 0s\n",
" 0 0 807378.545 0 9 812223.588 807378.545 0.60% - 0s\n",
"H 0 0 811628.30751 807378.545 0.52% - 0s\n",
"H 0 0 810280.45754 807378.545 0.36% - 0s\n",
" 0 0 807378.545 0 1 810280.458 807378.545 0.36% - 0s\n",
"H 0 0 810123.10116 807378.545 0.34% - 0s\n",
" 0 0 807378.545 0 1 810123.101 807378.545 0.34% - 0s\n",
" 0 0 807378.545 0 3 810123.101 807378.545 0.34% - 0s\n",
" 0 0 807378.545 0 7 810123.101 807378.545 0.34% - 0s\n",
" 0 0 807379.672 0 8 810123.101 807379.672 0.34% - 0s\n",
" 0 0 807379.905 0 9 810123.101 807379.905 0.34% - 0s\n",
" 0 0 807380.615 0 10 810123.101 807380.615 0.34% - 0s\n",
" 0 0 807402.384 0 10 810123.101 807402.384 0.34% - 0s\n",
" 0 0 807407.299 0 12 810123.101 807407.299 0.34% - 0s\n",
" 0 0 807407.299 0 12 810123.101 807407.299 0.34% - 0s\n",
" 0 2 807408.320 0 12 810123.101 807408.320 0.34% - 0s\n",
"H 3 3 809647.65837 807476.463 0.27% 3.0 0s\n",
"H 84 35 808870.26352 807568.065 0.16% 2.7 0s\n",
"H 99 29 808536.49552 807588.561 0.12% 2.7 0s\n",
"* 310 1 5 808091.02482 808069.217 0.00% 3.3 0s\n",
"\n",
"Cutting planes:\n",
" Gomory: 3\n",
" Cover: 7\n",
" MIR: 9\n",
" Flow cover: 3\n",
"\n",
"Explored 311 nodes (1175 simplex iterations) in 0.06 seconds\n",
"Thread count was 1 (of 32 available processors)\n",
"\n",
"Solution count 10: 808091 808536 808870 ... 818273\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.080910248225e+05, best bound 8.080692169045e+05, gap 0.0027%\n",
"\n",
"User-callback calls 832, time in user-callback 0.00 sec\n",
"\n"
]
}
],
"source": [
"solver_baseline = LearningSolver(gurobi)\n",
"solve!(solver_baseline, test_files[1], build_uc_model, tee=true);"
]
},
{
"cell_type": "markdown",
"id": "7b5ce528",
"metadata": {},
"source": [
"In the log above, the `MIP start` line is missing, and Gurobi had to start with a significantly inferior initial solution. The solver was still able to find the optimal solution at the end, but it required using its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in terms of running time. For larger problems, however, the difference can be significant. See benchmarks for more details.\n",
"\n",
"<div class=\"alert alert-info\">\n",
"Note\n",
" \n",
"In addition to partial initial solutions, MIPLearn is also able to predict lazy constraints, cutting planes and branching priorities. See the next tutorials for more details.\n",
"</div>\n",
"\n",
"<div class=\"alert alert-info\">\n",
"Note\n",
" \n",
"It is not necessary to specify what ML models to use. MIPLearn, by default, will try a number of classical ML models and will choose the one that performs the best, based on k-fold cross validation. MIPLearn is also able to automatically collect features based on the MIP formulation of the problem and the solution to the LP relaxation, among other things, so it does not require handcrafted features. If you do want to customize the models and features, however, that is also possible, as we will see in a later tutorial.\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "46da094b",
"metadata": {},
"source": [
"## Accessing the solution\n",
"\n",
"In the example above, we used `solve!` together with data files to solve both the training and the test instances. The optimal solutions were saved to HDF5 files in the train/test folders, and could be retrieved by reading these files, but that is not very convenient. In the following example, we show how to build and solve a JuMP model entirely in memory, using our trained solver."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "986f0c18",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"obj = 809710.340270503\n",
" x = [1.0, -0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0]\n",
" y = [696.38, 0.0, 249.05, 0.0, 1183.75, 0.0, 504.91, 387.32, 1178.0, 765.25]\n"
]
}
],
"source": [
"# Construct model using previously defined functions\n",
"data = random_uc_data(samples=1, n=50)[1]\n",
"model = build_uc_model(data)\n",
"\n",
"# Solve model\n",
"solve!(solver_ml, model)\n",
"\n",
"# Print part of the optimal solution\n",
"println(\"obj = \", objective_value(model))\n",
"println(\" x = \", round.(value.(model[:x][1:10])))\n",
"println(\" y = \", round.(value.(model[:y][1:10]), digits=2))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f43ed281",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Julia 1.6.2",
"language": "julia",
"name": "julia-1.6"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.6.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}