Start rewrite; add CPX collector

master
Alinson S. Xavier 3 years ago
parent 52ddd076b6
commit 1eee63702d
Signed by: isoron
GPG Key ID: 0DA8E4B9E1109DCA

@@ -1,21 +0,0 @@
name: Test
on:
  push:
  pull_request:
  schedule:
    - cron: '45 10 * * *'
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        julia-version: ['1']
        julia-arch: [x64]
        os: [ubuntu-latest]
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@v1
        with:
          version: ${{ matrix.julia-version }}
      - uses: julia-actions/julia-buildpkg@v1
      - uses: julia-actions/julia-runtest@v1

@@ -1,13 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
VERSION := 0.2
test:
	./juliaw test/runtests.jl
format:
	cd deps/formatter; ../../juliaw format.jl
.PHONY: docs test format

@@ -1,753 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b54271ba",
"metadata": {},
"source": [
"# Getting started with MIPLearn\n",
"\n",
"## Introduction\n",
"\n",
"**MIPLearn** is an open source framework that uses machine learning (ML) to accelerate the performance of both commercial and open source mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS, Cbc or SCIP). In this tutorial, we will:\n",
"\n",
"1. Install the Julia/JuMP version of MIPLearn\n",
"2. Model a simple optimization problem using JuMP\n",
"3. Generate training data and train the ML models\n",
"4. Use the ML models together with SCIP to solve new instances\n",
"\n",
"<div class=\"alert alert-info\">\n",
"Note\n",
" \n",
"In this tutorial, we use SCIP because it is more widely available than commercial MIP solvers. However, all the steps below should work for Gurobi, CPLEX or XPRESS, as long as you have a license for these solvers. The performance impact of MIPLearn may also change for different solvers.\n",
"</div>\n",
"\n",
"<div class=\"alert alert-warning\">\n",
"Warning\n",
" \n",
"MIPLearn is still in early development stage. If run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!\n",
" \n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"id": "6135092d",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"MIPLearn is available in two versions:\n",
"\n",
"- Python version, compatible with the Pyomo modeling language,\n",
"- Julia version, compatible with the JuMP modeling language.\n",
"\n",
"In this tutorial, we will demonstrate how to use and install the Julia/JuMP version of the package. The first step is to install the Julia programming language in your computer. [See the official instructions for more details](https://julialang.org/downloads/). Note that MIPLearn was developed and tested with Julia 1.6, and may not be compatible with newer versions of the language. After Julia is installed, launch its console and run the following commands to download and install the package:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c549d632",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m git-repo `https://github.com/ANL-CEEESA/MIPLearn.jl.git`\n",
"\u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m registry at `~/.julia/registries/General`\n",
"\u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m git-repo `https://github.com/JuliaRegistries/General.git`\n",
"\u001b[32m\u001b[1m Resolving\u001b[22m\u001b[39m package versions...\n",
"\u001b[32m\u001b[1m No Changes\u001b[22m\u001b[39m to `~/Packages/MIPLearn/dev/docs/jump-tutorials/Project.toml`\n",
"\u001b[32m\u001b[1m No Changes\u001b[22m\u001b[39m to `~/Packages/MIPLearn/dev/docs/jump-tutorials/Manifest.toml`\n"
]
}
],
"source": [
"using Pkg\n",
"Pkg.add(PackageSpec(url=\"https://github.com/ANL-CEEESA/MIPLearn.jl.git\"))"
]
},
{
"cell_type": "markdown",
"id": "bc99f8c1",
"metadata": {},
"source": [
"In addition to MIPLearn itself, we will also install a few other packages that are required for this tutorial:\n",
"\n",
"- [**SCIP**](https://www.scipopt.org/), one of the fastest non-commercial MIP solvers currently available\n",
"- [**JuMP**](https://jump.dev/), an open source modeling language for Julia\n",
"- [**Distributions.jl**](https://github.com/JuliaStats/Distributions.jl), a statistics package that we will use to generate random inputs\n",
"- [**Glob.jl**](https://github.com/vtjnash/Glob.jl), a package that retrieves all files in a directory matching a certain pattern"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d068ed52",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m\u001b[1m Resolving\u001b[22m\u001b[39m package versions...\n",
"\u001b[32m\u001b[1m No Changes\u001b[22m\u001b[39m to `~/Packages/MIPLearn/dev/docs/jump-tutorials/Project.toml`\n",
"\u001b[32m\u001b[1m No Changes\u001b[22m\u001b[39m to `~/Packages/MIPLearn/dev/docs/jump-tutorials/Manifest.toml`\n"
]
}
],
"source": [
"using Pkg\n",
"Pkg.add([\n",
" PackageSpec(url=\"https://github.com/scipopt/SCIP.jl.git\", rev=\"7aa79aaa\"),\n",
" PackageSpec(name=\"JuMP\", version=\"0.21\"),\n",
" PackageSpec(name=\"Distributions\", version=\"0.25\"),\n",
" PackageSpec(name=\"Glob\", version=\"1\"),\n",
"])"
]
},
{
"cell_type": "markdown",
"id": "9689bce5",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
" \n",
"Note\n",
" \n",
"In the code above, we install specific version of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Julia projects.\n",
" \n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "9801f40d",
"metadata": {},
"source": [
"## Modeling a simple optimization problem\n",
"\n",
"To illustrate how can MIPLearn be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the **unit commitment problem,** a practical optimization problem solved daily by electric grid operators around the world. \n",
"\n",
"Suppose that you work at a utility company, and that it is your job to decide which electrical generators should be online at a certain hour of the day, as well as how much power should each generator produce. More specifically, assume that your company owns $n$ generators, denoted by $g_1, \\ldots, g_n$. Each generator can either be online or offline. An online generator $g_i$ can produce between $p^\\text{min}_i$ to $p^\\text{max}_i$ megawatts of power, and it costs your company $c^\\text{fix}_i + c^\\text{var}_i y_i$, where $y_i$ is the amount of power produced. An offline generator produces nothing and costs nothing. You also know that the total amount of power to be produced needs to be exactly equal to the total demand $d$ (in megawatts). To minimize the costs to your company, which generators should be online, and how much power should they produce?\n",
"\n",
"This simple problem can be modeled as a *mixed-integer linear optimization* problem as follows. For each generator $g_i$, let $x_i \\in \\{0,1\\}$ be a decision variable indicating whether $g_i$ is online, and let $y_i \\geq 0$ be a decision variable indicating how much power does $g_i$ produce. The problem is then given by:\n",
"\n",
"$$\n",
"\\begin{align}\n",
"\\text{minimize } \\quad & \\sum_{i=1}^n \\left( c^\\text{fix}_i x_i + c^\\text{var}_i y_i \\right) \\\\\n",
"\\text{subject to } \\quad & y_i \\leq p^\\text{max}_i x_i & i=1,\\ldots,n \\\\\n",
"& y_i \\geq p^\\text{min}_i x_i & i=1,\\ldots,n \\\\\n",
"& \\sum_{i=1}^n y_i = d \\\\\n",
"& x_i \\in \\{0,1\\} & i=1,\\ldots,n \\\\\n",
"& y_i \\geq 0 & i=1,\\ldots,n\n",
"\\end{align}\n",
"$$\n",
"\n",
"<div class=\"alert alert-info\">\n",
" \n",
"Note\n",
" \n",
"We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem. See benchmarks for more details.\n",
" \n",
"</div>\n",
"\n",
"Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Julia and JuMP. We start by defining a data structure that holds all the input data."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8ae84f9d",
"metadata": {},
"outputs": [],
"source": [
"Base.@kwdef struct UnitCommitmentData\n",
" demand::Float64\n",
" pmin::Vector{Float64}\n",
" pmax::Vector{Float64}\n",
" cfix::Vector{Float64}\n",
" cvar::Vector{Float64}\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "3d59466c",
"metadata": {},
"source": [
"Next, we create a function that converts this data structure into a concrete JuMP model. For more details on the JuMP syntax, see [the official JuMP documentation](https://jump.dev/JuMP.jl/stable/)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e07e3633",
"metadata": {},
"outputs": [],
"source": [
"using JuMP\n",
"\n",
"function build_uc_model(data::UnitCommitmentData)::Model\n",
" model = Model()\n",
" n = length(data.pmin)\n",
" @variable(model, x[1:n], Bin)\n",
" @variable(model, y[1:n] >= 0)\n",
" @objective(\n",
" model,\n",
" Min,\n",
" sum(\n",
" data.cfix[i] * x[i] +\n",
" data.cvar[i] * y[i]\n",
" for i in 1:n\n",
" )\n",
" )\n",
" @constraint(model, eq_max_power[i in 1:n], y[i] <= data.pmax[i] * x[i])\n",
" @constraint(model, eq_min_power[i in 1:n], y[i] >= data.pmin[i] * x[i])\n",
" @constraint(model, eq_demand, sum(y[i] for i in 1:n) == data.demand)\n",
" return model\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "6e92b1d8",
"metadata": {},
"source": [
"At this point, we can already use JuMP and any mixed-integer linear programming solver to find optimal solutions to any instance of this problem. To illustrate this, let us solve a small instance with three generators, using SCIP:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e2b828da",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"obj = 1320.0\n",
" x = [0.0, 1.0, 1.0]\n",
" y = [0.0, 60.0, 40.0]\n"
]
}
],
"source": [
"using SCIP\n",
"\n",
"model = build_uc_model(\n",
" UnitCommitmentData(\n",
" demand = 100.0,\n",
" pmin = [10, 20, 30],\n",
" pmax = [50, 60, 70],\n",
" cfix = [700, 600, 500],\n",
" cvar = [1.5, 2.0, 2.5],\n",
" )\n",
")\n",
"\n",
"scip = optimizer_with_attributes(SCIP.Optimizer, \"limits/gap\" => 1e-4)\n",
"set_optimizer(model, scip)\n",
"set_silent(model)\n",
"optimize!(model)\n",
"\n",
"println(\"obj = \", objective_value(model))\n",
"println(\" x = \", round.(value.(model[:x])))\n",
"println(\" y = \", round.(value.(model[:y]), digits=2));"
]
},
{
"cell_type": "markdown",
"id": "92e27701",
"metadata": {},
"source": [
"Running the code above, we found that the optimal solution for our small problem instance costs \\$1320. It is achieve by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power."
]
},
{
"cell_type": "markdown",
"id": "06e538e5",
"metadata": {},
"source": [
"## Generating training data\n",
"\n",
"Although SCIP could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as it is often the case in practice, it could make sense to spend some time upfront generating a **trained** version of SCIP, which can solve new instances (similar to the ones it was trained on) faster.\n",
"\n",
"In the following, we will use MIPLearn to train machine learning models that can be used to accelerate SCIP's performance on a particular set of instances. More specifically, MIPLearn will train a model that is able to predict the optimal solution for instances that follow a given probability distribution, then it will provide this predicted solution to SCIP as a warm start.\n",
"\n",
"Before we can train the model, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "859e0e29",
"metadata": {},
"outputs": [],
"source": [
"using Distributions\n",
"using Random\n",
"\n",
"function random_uc_data(; samples::Int, n::Int, seed=42)\n",
" Random.seed!(seed)\n",
" pmin = rand(Uniform(100, 500.0), n)\n",
" pmax = pmin .* rand(Uniform(2.0, 2.5), n)\n",
" cfix = pmin .* rand(Uniform(100.0, 125.0), n)\n",
" cvar = rand(Uniform(1.25, 1.5), n)\n",
" return [\n",
" UnitCommitmentData(;\n",
" pmin,\n",
" pmax,\n",
" cfix,\n",
" cvar,\n",
" demand = sum(pmax) * rand(Uniform(0.5, 0.75)),\n",
" )\n",
" for i in 1:samples\n",
" ]\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "92cb1931",
"metadata": {},
"source": [
"In this example, for simplicity, only the demands change from one instance to the next. We could also have made the prices and the production limits random. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.\n",
"\n",
"Now we generate 100 instances of this problem, each one with 1,000 generators. We will use the first 90 instances for training, and the remaining 10 instances to evaluate SCIP's performance."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "cf7d09ba",
"metadata": {},
"outputs": [],
"source": [
"data = random_uc_data(samples=100, n=1000);\n",
"train_data = data[1:90]\n",
"test_data = data[91:100];"
]
},
{
"cell_type": "markdown",
"id": "8a03a93d",
"metadata": {},
"source": [
"Next, we write these data structures to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold the entire training data, as well as the concrete JuMP models, in memory. Files also make it much easier to solve multiple instances simultaneously, potentially even on multiple machines. We will cover parallel and distributed computing in a future tutorial.\n",
"\n",
"The code below generates the files `uc/train/000001.jld2`, `uc/train/000002.jld2`, etc., which contain the input data in [JLD2 format](https://github.com/JuliaIO/JLD2.jl)."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2433ed6f",
"metadata": {},
"outputs": [],
"source": [
"using MIPLearn\n",
"MIPLearn.save(data[1:90], \"uc/train/\")\n",
"MIPLearn.save(data[91:100], \"uc/test/\")\n",
"\n",
"using Glob\n",
"train_files = glob(\"uc/train/*.jld2\")\n",
"test_files = glob(\"uc/test/*.jld2\");"
]
},
{
"cell_type": "markdown",
"id": "de0861c2",
"metadata": {},
"source": [
"Finally, we use `MIPLearn.LearningSolver` and `MIPLearn.solve!` to solve all the training instances. `LearningSolver` is the main component provided by MIPLearn, which integrates MIP solvers and ML. The `solve!` function can be used to solve either one or multiple instances, and requires: (i) the list of files containing the training data; and (ii) the function that converts the data structure into a concrete JuMP model:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "bd7432e7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"103.808547 seconds (93.52 M allocations: 3.604 GiB, 1.19% gc time, 0.52% compilation time)\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING: Dual bound 1.98665e+07 is larger than the objective of the primal solution 1.98665e+07. The solution might not be optimal.\n"
]
}
],
"source": [
"using Glob\n",
"solver = LearningSolver(scip)\n",
"@time solve!(solver, train_files, build_uc_model);"
]
},
{
"cell_type": "markdown",
"id": "ee2a0bac",
"metadata": {},
"source": [
"The macro `@time` shows us how long did the code take to run. We can see that SCIP was able to solve all training instances in about 2 minutes. The solutions, and other useful training data, are stored by MIPLearn in `.h5` files, stored side-by-side with the original `.jld2` files."
]
},
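{
"cell_type": "markdown",
"id": "4f21c3aa",
"metadata": {},
"source": [
"To peek at the collected training data, we can open one of these `.h5` files directly with [HDF5.jl](https://github.com/JuliaIO/HDF5.jl) and list its contents. The snippet below is only an illustrative sketch: it assumes that the HDF5.jl package is installed, and the exact keys stored by MIPLearn may vary between versions."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b8d9e0c",
"metadata": {},
"outputs": [],
"source": [
"using HDF5\n",
"h5open(replace(train_files[1], \".jld2\" => \".h5\"), \"r\") do h5\n",
"    println(keys(h5))\n",
"end"
]
},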
{
"cell_type": "markdown",
"id": "f14f08f3",
"metadata": {},
"source": [
"## Solving new instances\n",
"\n",
"With training data in hand, we can now fit the ML models using `MIPLearn.fit!`, then solve the test instances with `MIPLearn.solve!`, as shown below:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "fd8161d0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" 5.951264 seconds (9.33 M allocations: 334.657 MiB, 1.51% gc time)\n"
]
}
],
"source": [
"solver_ml = LearningSolver(scip)\n",
"fit!(solver_ml, train_files, build_uc_model)\n",
"@time solve!(solver_ml, test_files, build_uc_model);"
]
},
{
"cell_type": "markdown",
"id": "cf0ed97d",
"metadata": {},
"source": [
"The trained MIP solver was able to solve all test instances in about 6 seconds. To see that ML is being helpful here, let us repeat the code above, but remove the `fit!` line:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "d0812fbf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" 10.390325 seconds (8.17 M allocations: 278.042 MiB, 0.89% gc time)\n"
]
}
],
"source": [
"solver_baseline = LearningSolver(scip)\n",
"@time solve!(solver_baseline, test_files, build_uc_model);"
]
},
{
"cell_type": "markdown",
"id": "ca64c7ca",
"metadata": {},
"source": [
"Without the help of the ML models, SCIP took around 10 seconds to solve the same test instances.\n",
"\n",
"<div class=\"alert alert-info\">\n",
"Note\n",
" \n",
"Note that is is not necessary to specify what ML models to use. MIPLearn, by default, will try a number of classical ML models and will choose the one that performs the best, based on k-fold cross validation. MIPLearn is also able to automatically collect features based on the MIP formulation of the problem and the solution to the LP relaxation, among other things, so it does not require handcrafted features. If you do want to customize the models and features, however, that is also possible, as we will see in a later tutorial.\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "a80dc6a5",
"metadata": {},
"source": [
"## Understanding the acceleration\n",
"\n",
"Let us go a bit deeper and try to understand how exactly did MIPLearn accelerate SCIP's performance. First, we are going to solve one of the test instances again, using the trained solver, but this time using the `tee=true` parameter, so that we can see SCIP's log:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e5ec48e6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"presolving:\n",
"(round 1, fast) 861 del vars, 861 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
"(round 2, fast) 861 del vars, 1722 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
"(round 3, fast) 862 del vars, 1722 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
"presolving (4 rounds: 4 fast, 1 medium, 1 exhaustive):\n",
" 862 deleted vars, 1722 deleted constraints, 0 added constraints, 2000 tightened bounds, 0 added holes, 0 changed sides, 0 changed coefficients\n",
" 0 implications, 0 cliques\n",
"presolved problem has 1138 variables (0 bin, 0 int, 0 impl, 1138 cont) and 279 constraints\n",
" 279 constraints of type <linear>\n",
"Presolving Time: 0.03\n",
"\n",
" time | node | left |LP iter|LP it/n|mem/heur|mdpt |vars |cons |rows |cuts |sepa|confs|strbr| dualbound | primalbound | gap | compl. \n",
"* 0.0s| 1 | 0 | 203 | - | LP | 0 |1138 | 279 | 279 | 0 | 0 | 0 | 0 | 1.705035e+07 | 1.705035e+07 | 0.00%| unknown\n",
" 0.0s| 1 | 0 | 203 | - | 8950k | 0 |1138 | 279 | 279 | 0 | 0 | 0 | 0 | 1.705035e+07 | 1.705035e+07 | 0.00%| unknown\n",
"\n",
"SCIP Status : problem is solved [optimal solution found]\n",
"Solving Time (sec) : 0.04\n",
"Solving Nodes : 1\n",
"Primal Bound : +1.70503465600131e+07 (1 solutions)\n",
"Dual Bound : +1.70503465600131e+07\n",
"Gap : 0.00 %\n",
"\n",
"violation: integrality condition of variable <> = 0.338047247943162\n",
"all 1 solutions given by solution candidate storage are infeasible\n",
"\n",
"feasible solution found by completesol heuristic after 0.1 seconds, objective value 1.705169e+07\n",
"presolving:\n",
"(round 1, fast) 0 del vars, 0 del conss, 0 add conss, 3000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
"(round 2, exhaustive) 0 del vars, 0 del conss, 0 add conss, 3000 chg bounds, 0 chg sides, 0 chg coeffs, 1000 upgd conss, 0 impls, 0 clqs\n",
"(round 3, exhaustive) 0 del vars, 0 del conss, 0 add conss, 3000 chg bounds, 0 chg sides, 0 chg coeffs, 2000 upgd conss, 1000 impls, 0 clqs\n",
" (0.1s) probing: 51/1000 (5.1%) - 0 fixings, 0 aggregations, 0 implications, 0 bound changes\n",
" (0.1s) probing aborted: 50/50 successive totally useless probings\n",
" (0.1s) symmetry computation started: requiring (bin +, int -, cont +), (fixed: bin -, int +, cont -)\n",
" (0.1s) no symmetry present\n",
"presolving (4 rounds: 4 fast, 3 medium, 3 exhaustive):\n",
" 0 deleted vars, 0 deleted constraints, 0 added constraints, 3000 tightened bounds, 0 added holes, 0 changed sides, 0 changed coefficients\n",
" 2000 implications, 0 cliques\n",
"presolved problem has 2000 variables (1000 bin, 0 int, 0 impl, 1000 cont) and 2001 constraints\n",
" 2000 constraints of type <varbound>\n",
" 1 constraints of type <linear>\n",
"Presolving Time: 0.11\n",
"transformed 1/1 original solutions to the transformed problem space\n",
"\n",
" time | node | left |LP iter|LP it/n|mem/heur|mdpt |vars |cons |rows |cuts |sepa|confs|strbr| dualbound | primalbound | gap | compl. \n",
" 0.2s| 1 | 0 | 1201 | - | 20M | 0 |2000 |2001 |2001 | 0 | 0 | 0 | 0 | 1.705035e+07 | 1.705169e+07 | 0.01%| unknown\n",
"\n",
"SCIP Status : solving was interrupted [gap limit reached]\n",
"Solving Time (sec) : 0.21\n",
"Solving Nodes : 1\n",
"Primal Bound : +1.70516871251443e+07 (1 solutions)\n",
"Dual Bound : +1.70503465600130e+07\n",
"Gap : 0.01 %\n",
"\n"
]
}
],
"source": [
"solve!(solver_ml, test_files[1], build_uc_model, tee=true);"
]
},
{
"cell_type": "markdown",
"id": "714805e0",
"metadata": {},
"source": [
"The log above is quite complicated if you have never seen it before, but the important line is the one starting with `feasible solution found [...] objective value 1.705169e+07`. This line indicates that MIPLearn was able to construct a warm start with value `1.705169e+07`. Using this warm start, SCIP then used the branch-and-cut method to either prove its optimality or to find an even better solution. Very quickly, however, SCIP proved that the solution produced by MIPLearn was indeed optimal. It was able to do this without generating a single cutting plane or running any other heuristics; it could tell the optimality by the root LP relaxation alone, which was very fast. \n",
"\n",
"Let us now repeat the process, but using the untrained solver this time:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "7f1da1e6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"presolving:\n",
"(round 1, fast) 861 del vars, 861 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
"(round 2, fast) 861 del vars, 1722 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
"(round 3, fast) 862 del vars, 1722 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
"presolving (4 rounds: 4 fast, 1 medium, 1 exhaustive):\n",
" 862 deleted vars, 1722 deleted constraints, 0 added constraints, 2000 tightened bounds, 0 added holes, 0 changed sides, 0 changed coefficients\n",
" 0 implications, 0 cliques\n",
"presolved problem has 1138 variables (0 bin, 0 int, 0 impl, 1138 cont) and 279 constraints\n",
" 279 constraints of type <linear>\n",
"Presolving Time: 0.03\n",
"\n",
" time | node | left |LP iter|LP it/n|mem/heur|mdpt |vars |cons |rows |cuts |sepa|confs|strbr| dualbound | primalbound | gap | compl. \n",
"* 0.0s| 1 | 0 | 203 | - | LP | 0 |1138 | 279 | 279 | 0 | 0 | 0 | 0 | 1.705035e+07 | 1.705035e+07 | 0.00%| unknown\n",
" 0.0s| 1 | 0 | 203 | - | 8950k | 0 |1138 | 279 | 279 | 0 | 0 | 0 | 0 | 1.705035e+07 | 1.705035e+07 | 0.00%| unknown\n",
"\n",
"SCIP Status : problem is solved [optimal solution found]\n",
"Solving Time (sec) : 0.04\n",
"Solving Nodes : 1\n",
"Primal Bound : +1.70503465600131e+07 (1 solutions)\n",
"Dual Bound : +1.70503465600131e+07\n",
"Gap : 0.00 %\n",
"\n",
"violation: integrality condition of variable <> = 0.338047247943162\n",
"all 1 solutions given by solution candidate storage are infeasible\n",
"\n",
"presolving:\n",
"(round 1, fast) 0 del vars, 0 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 0 upgd conss, 0 impls, 0 clqs\n",
"(round 2, exhaustive) 0 del vars, 0 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 1000 upgd conss, 0 impls, 0 clqs\n",
"(round 3, exhaustive) 0 del vars, 0 del conss, 0 add conss, 2000 chg bounds, 0 chg sides, 0 chg coeffs, 2000 upgd conss, 1000 impls, 0 clqs\n",
" (0.0s) probing: 51/1000 (5.1%) - 0 fixings, 0 aggregations, 0 implications, 0 bound changes\n",
" (0.0s) probing aborted: 50/50 successive totally useless probings\n",
" (0.0s) symmetry computation started: requiring (bin +, int -, cont +), (fixed: bin -, int +, cont -)\n",
" (0.0s) no symmetry present\n",
"presolving (4 rounds: 4 fast, 3 medium, 3 exhaustive):\n",
" 0 deleted vars, 0 deleted constraints, 0 added constraints, 2000 tightened bounds, 0 added holes, 0 changed sides, 0 changed coefficients\n",
" 2000 implications, 0 cliques\n",
"presolved problem has 2000 variables (1000 bin, 0 int, 0 impl, 1000 cont) and 2001 constraints\n",
" 2000 constraints of type <varbound>\n",
" 1 constraints of type <linear>\n",
"Presolving Time: 0.03\n",
"\n",
" time | node | left |LP iter|LP it/n|mem/heur|mdpt |vars |cons |rows |cuts |sepa|confs|strbr| dualbound | primalbound | gap | compl. \n",
"p 0.0s| 1 | 0 | 1 | - | locks| 0 |2000 |2001 |2001 | 0 | 0 | 0 | 0 | 0.000000e+00 | 2.335200e+07 | Inf | unknown\n",
"p 0.0s| 1 | 0 | 2 | - | vbounds| 0 |2000 |2001 |2001 | 0 | 0 | 0 | 0 | 0.000000e+00 | 1.839873e+07 | Inf | unknown\n",
" 0.1s| 1 | 0 | 1204 | - | 20M | 0 |2000 |2001 |2001 | 0 | 0 | 0 | 0 | 1.705035e+07 | 1.839873e+07 | 7.91%| unknown\n",
" 0.1s| 1 | 0 | 1207 | - | 22M | 0 |2000 |2001 |2002 | 1 | 1 | 0 | 0 | 1.705036e+07 | 1.839873e+07 | 7.91%| unknown\n",
"r 0.1s| 1 | 0 | 1207 | - |shifting| 0 |2000 |2001 |2002 | 1 | 1 | 0 | 0 | 1.705036e+07 | 1.711399e+07 | 0.37%| unknown\n",
" 0.1s| 1 | 0 | 1209 | - | 22M | 0 |2000 |2001 |2003 | 2 | 2 | 0 | 0 | 1.705037e+07 | 1.711399e+07 | 0.37%| unknown\n",
"r 0.1s| 1 | 0 | 1209 | - |shifting| 0 |2000 |2001 |2003 | 2 | 2 | 0 | 0 | 1.705037e+07 | 1.706492e+07 | 0.09%| unknown\n",
" 0.1s| 1 | 0 | 1210 | - | 22M | 0 |2000 |2001 |2004 | 3 | 3 | 0 | 0 | 1.705037e+07 | 1.706492e+07 | 0.09%| unknown\n",
" 0.1s| 1 | 0 | 1211 | - | 23M | 0 |2000 |2001 |2005 | 4 | 4 | 0 | 0 | 1.705037e+07 | 1.706492e+07 | 0.09%| unknown\n",
" 0.1s| 1 | 0 | 1212 | - | 23M | 0 |2000 |2001 |2006 | 5 | 5 | 0 | 0 | 1.705037e+07 | 1.706492e+07 | 0.09%| unknown\n",
"r 0.1s| 1 | 0 | 1212 | - |shifting| 0 |2000 |2001 |2006 | 5 | 5 | 0 | 0 | 1.705037e+07 | 1.706228e+07 | 0.07%| unknown\n",
" 0.1s| 1 | 0 | 1214 | - | 24M | 0 |2000 |2001 |2007 | 6 | 7 | 0 | 0 | 1.705037e+07 | 1.706228e+07 | 0.07%| unknown\n",
" 0.2s| 1 | 0 | 1216 | - | 24M | 0 |2000 |2001 |2009 | 8 | 8 | 0 | 0 | 1.705037e+07 | 1.706228e+07 | 0.07%| unknown\n",
" 0.2s| 1 | 0 | 1220 | - | 25M | 0 |2000 |2001 |2011 | 10 | 9 | 0 | 0 | 1.705037e+07 | 1.706228e+07 | 0.07%| unknown\n",
" 0.2s| 1 | 0 | 1223 | - | 25M | 0 |2000 |2001 |2014 | 13 | 10 | 0 | 0 | 1.705037e+07 | 1.706228e+07 | 0.07%| unknown\n",
" time | node | left |LP iter|LP it/n|mem/heur|mdpt |vars |cons |rows |cuts |sepa|confs|strbr| dualbound | primalbound | gap | compl. \n",
" 0.2s| 1 | 0 | 1229 | - | 26M | 0 |2000 |2001 |2015 | 14 | 11 | 0 | 0 | 1.705038e+07 | 1.706228e+07 | 0.07%| unknown\n",
"r 0.2s| 1 | 0 | 1403 | - |intshift| 0 |2000 |2001 |2015 | 14 | 11 | 0 | 0 | 1.705038e+07 | 1.705687e+07 | 0.04%| unknown\n",
"L 0.6s| 1 | 0 | 1707 | - | rens| 0 |2000 |2001 |2015 | 14 | 11 | 0 | 0 | 1.705038e+07 | 1.705332e+07 | 0.02%| unknown\n",
"L 0.7s| 1 | 0 | 1707 | - | alns| 0 |2000 |2001 |2015 | 14 | 11 | 0 | 0 | 1.705038e+07 | 1.705178e+07 | 0.01%| unknown\n",
"\n",
"SCIP Status : solving was interrupted [gap limit reached]\n",
"Solving Time (sec) : 0.68\n",
"Solving Nodes : 1\n",
"Primal Bound : +1.70517823853380e+07 (13 solutions)\n",
"Dual Bound : +1.70503798271962e+07\n",
"Gap : 0.01 %\n",
"\n"
]
}
],
"source": [
"solve!(solver_baseline, test_files[1], build_uc_model, tee=true);"
]
},
{
"cell_type": "markdown",
"id": "033da453",
"metadata": {},
"source": [
"In this log file, notice how the previous line about warm starts is missing. Since no warm starts were provided, SCIP had to find an initial solution using its own internal heuristics, which are not specifically tailored for this problem. The initial solution found by SCIP's heuristics has value `2.335200e+07`, which is significantly worse than the one constructed by MIPLearn. SCIP then proceeded to improve this solution, by generating cutting planes and repeatedly running additional primal heuristics. In the end, it was able to find the optimal solution, as expected, but it took longer.\n",
"\n",
"In summary, MIPLearn accelerated the solution process by constructing a high-quality initial solution. In the following tutorials, we will see other strategies that MIPLearn can use to accelerate MIP performance, besides warm starts."
]
},
{
"cell_type": "markdown",
"id": "76510d4a",
"metadata": {},
"source": [
"## Accessing the solution\n",
"\n",
"In the example above, we used `MIPLearn.solve!` together with data files to solve both the training and the test instances. The solutions were saved to a `.h5` files in the train/test folders, and could be retrieved by reading theses files, but that is not very convenient. In this section we will use an easier method.\n",
"\n",
"We can use the function `MIPLearn.load!` to obtain a regular JuMP model:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "49fe62d5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"A JuMP Model\n",
"Minimization problem with:\n",
"Variables: 2000\n",
"Objective function type: AffExpr\n",
"`AffExpr`-in-`MathOptInterface.EqualTo{Float64}`: 1 constraint\n",
"`AffExpr`-in-`MathOptInterface.GreaterThan{Float64}`: 1000 constraints\n",
"`AffExpr`-in-`MathOptInterface.LessThan{Float64}`: 1000 constraints\n",
"`VariableRef`-in-`MathOptInterface.GreaterThan{Float64}`: 1000 constraints\n",
"`VariableRef`-in-`MathOptInterface.ZeroOne`: 1000 constraints\n",
"Model mode: AUTOMATIC\n",
"CachingOptimizer state: NO_OPTIMIZER\n",
"Solver name: No optimizer attached.\n",
"Names registered in the model: eq_demand, eq_max_power, eq_min_power, x, y"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model = MIPLearn.load(\"uc/test/000001.jld2\", build_uc_model)"
]
},
{
"cell_type": "markdown",
"id": "9414f027",
"metadata": {},
"source": [
"We can then solve this model as before, with `MIPLearn.solve!`:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "4b15a28d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"obj = 1.7051217395548128e7\n",
" x = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]\n",
" y = [767.11, 646.61, 230.28, 365.46, 1150.99, 1103.36, 0.0, 0.0, 0.0, 0.0]\n"
]
}
],
"source": [
"solve!(solver_ml, model)\n",
"println(\"obj = \", objective_value(model))\n",
"println(\" x = \", round.(value.(model[:x][1:10])))\n",
"println(\" y = \", round.(value.(model[:y][1:10]), digits=2))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Julia 1.6.0",
"language": "julia",
"name": "julia-1.6"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.6.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,75 +0,0 @@
#!/bin/bash
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
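#
# Usage: ./juliaw <script.jl> [args...]
#
# Runs the given Julia script with a project-specific system image, rebuilding
# the image whenever Project.toml is newer than build/sysimage.so. Example:
#   ./juliaw test/runtests.jl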
if [ ! -e Project.toml ]; then
echo "juliaw: Project.toml not found"
exit 1
fi
if [ ! -e Manifest.toml ]; then
julia --project=. -e 'using Pkg; Pkg.instantiate()' || exit 1
fi
if [ ! -e build/sysimage.so -o Project.toml -nt build/sysimage.so ]; then
echo "juliaw: rebuilding system image..."
# Generate temporary project folder
rm -rf $HOME/.juliaw
mkdir -p $HOME/.juliaw/src
cp Project.toml Manifest.toml $HOME/.juliaw
NAME=$(julia -e 'using TOML; toml = TOML.parsefile("Project.toml"); "name" in keys(toml) && print(toml["name"])')
if [ ! -z "$NAME" ]; then
cat > $HOME/.juliaw/src/$NAME.jl << EOF
module $NAME
end
EOF
fi
# Add PackageCompiler dependencies to temporary project
julia --project=$HOME/.juliaw -e 'using Pkg; Pkg.add(["PackageCompiler", "TOML", "Logging"])'
# Generate system image scripts
cat > $HOME/.juliaw/sysimage.jl << EOF
using PackageCompiler
using TOML
using Logging
Logging.disable_logging(Logging.Info)
mkpath("$PWD/build")
println("juliaw: generating precompilation statements...")
run(\`julia --project="$PWD" --trace-compile="$PWD"/build/precompile.jl \$(ARGS)\`)
println("juliaw: finding dependencies...")
project = TOML.parsefile("Project.toml")
manifest = TOML.parsefile("Manifest.toml")
deps = Symbol[]
for dep in keys(project["deps"])
if dep in keys(manifest)
# Up to Julia 1.6
dep_entry = manifest[dep][1]
else
# Julia 1.7+
dep_entry = manifest["deps"][dep][1]
end
if "path" in keys(dep_entry)
println(" - \$(dep) [skip]")
else
println(" - \$(dep)")
push!(deps, Symbol(dep))
end
end
println("juliaw: building system image...")
create_sysimage(
deps,
precompile_statements_file = "$PWD/build/precompile.jl",
sysimage_path = "$PWD/build/sysimage.so",
)
EOF
julia --project=$HOME/.juliaw $HOME/.juliaw/sysimage.jl "$@"
else
julia --project=. --sysimage build/sysimage.so "$@"
fi

@@ -0,0 +1,98 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2023, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using CPLEX
using JuMP
using HDF5
struct CplexBlackBoxCuts end
function collect(
mps_filename::String,
::CplexBlackBoxCuts,
)::Nothing
tempdir = mktempdir()
isfile(mps_filename) || error("file not found: $mps_filename")
h5_filename = replace(mps_filename, ".mps.gz" => ".h5")
# Initialize CPLEX
status_p = [Cint(0)]
env = CPXopenCPLEX(status_p)
# Parameter: Disable presolve
CPXsetintparam(env, CPX_PARAM_AGGFILL, 0)
CPXsetintparam(env, CPX_PARAM_AGGIND, 0)
CPXsetintparam(env, CPX_PARAM_PREIND, 0)
CPXsetintparam(env, CPX_PARAM_PREPASS, 0)
CPXsetintparam(env, CPX_PARAM_REDUCE, 0)
CPXsetintparam(env, CPX_PARAM_PREDUAL, -1)
CPXsetintparam(env, CPX_PARAM_PRESLVND, -1)
# Parameter: Enable logging
CPXsetintparam(env, CPX_PARAM_SCRIND, 1)
# Parameter: Stop processing at the root node
CPXsetintparam(env, CPX_PARAM_NODELIM, 0)
# Load problem
lp = CPXcreateprob(env, status_p, "problem")
CPXreadcopyprob(env, lp, mps_filename, "mps")
# Define callback
function solve_callback(env, cbdata, wherefrom, cbhandle, useraction_p)::Int32
nodelp_p = [CPXLPptr(0)]
CPXgetcallbacknodelp(env, cbdata, wherefrom, nodelp_p)
CPXwriteprob(env, nodelp_p[1], "$tempdir/root.mps", C_NULL)
return 0
end
c_solve_callback = @cfunction($solve_callback, Cint, (
CPXENVptr, # env
Ptr{Cvoid}, # cbdata
Cint, # wherefrom
Ptr{Cvoid}, # cbhandle
Ptr{Cint}, # useraction_p
))
CPXsetsolvecallbackfunc(env, c_solve_callback, C_NULL)
# Run optimization
CPXmipopt(env, lp)
# Load generated MPS file
model = JuMP.read_from_file("$tempdir/root.mps")
# Parse cuts
cuts_lhs::Vector{Vector{Float64}} = []
cuts_rhs::Vector{Float64} = []
nvars = num_variables(model)
constraints = all_constraints(model, GenericAffExpr{Float64,VariableRef}, MOI.LessThan{Float64})
for conRef in constraints
if name(conRef)[begin] in ['i', 'f', 'm', 'r', 'L', 'z', 'v'] &&
isdigit(name(conRef)[begin+1])
c = constraint_object(conRef)
cset = MOI.get(conRef.model.moi_backend, MOI.ConstraintSet(), conRef.index)
lhs = zeros(nvars)
for (key, val) in c.func.terms
lhs[key.index.value] = val
end
push!(cuts_lhs, lhs)
push!(cuts_rhs, cset.upper)
end
end
cuts_lhs_matrix::Matrix{Float64} = vcat(cuts_lhs'...)
# Store cuts in HDF5 file
h5open(h5_filename, "r+") do h5
for key in ["cuts_cpx_lhs", "cuts_cpx_rhs"]
if haskey(h5, key)
delete_object(h5, key)
end
end
write(h5, "cuts_cpx_lhs", cuts_lhs_matrix)
write(h5, "cuts_cpx_rhs", cuts_rhs)
end
return
end
export CplexBlackBoxCuts, collect
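# Example usage (a sketch; the filename below is hypothetical). Note that the
# matching .h5 file must already exist next to the MPS file, since it is
# opened in "r+" mode:
#
#     collect("instances/000001.mps.gz", CplexBlackBoxCuts())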

@@ -1,91 +1,9 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Copyright (C) 2020-2023, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
module MIPLearn
using PyCall
using Requires
using SparseArrays
global DynamicLazyConstraintsComponent = PyNULL()
global JuMPSolver = PyNULL()
global MinPrecisionThreshold = PyNULL()
global miplearn = PyNULL()
global ObjectiveValueComponent = PyNULL()
global PrimalSolutionComponent = PyNULL()
global PyFileInstance = PyNULL()
global PyJuMPInstance = PyNULL()
global StaticLazyConstraintsComponent = PyNULL()
global traceback = PyNULL()
global UserCutsComponent = PyNULL()
global MemorySample = PyNULL()
global Hdf5Sample = PyNULL()
to_str_array(values) = py"to_str_array"(values)
from_str_array(values) = py"from_str_array"(values)
include("solvers/structs.jl")
include("utils/log.jl")
include("utils/exceptions.jl")
include("instance/abstract_instance.jl")
include("instance/jump_instance.jl")
include("instance/file_instance.jl")
include("solvers/jump_solver.jl")
include("solvers/learning_solver.jl")
include("solvers/macros.jl")
include("utils/benchmark.jl")
include("utils/parse.jl")
include("bb/BB.jl")
include("cuts/Cuts.jl")
function __init__()
copy!(miplearn, pyimport("miplearn"))
copy!(traceback, pyimport("traceback"))
copy!(DynamicLazyConstraintsComponent, miplearn.DynamicLazyConstraintsComponent)
copy!(UserCutsComponent, miplearn.UserCutsComponent)
copy!(ObjectiveValueComponent, miplearn.ObjectiveValueComponent)
copy!(PrimalSolutionComponent, miplearn.PrimalSolutionComponent)
copy!(StaticLazyConstraintsComponent, miplearn.StaticLazyConstraintsComponent)
copy!(MinPrecisionThreshold, miplearn.MinPrecisionThreshold)
copy!(MemorySample, miplearn.features.sample.MemorySample)
copy!(Hdf5Sample, miplearn.features.sample.Hdf5Sample)
__init_PyFileInstance__()
__init_PyJuMPInstance__()
__init_JuMPSolver__()
py"""
import numpy as np
def to_str_array(values):
if values is None:
return None
return np.array(values, dtype="S")
def from_str_array(values):
return [v.decode() for v in values]
"""
end
# Converts a SciPy sparse matrix (a PyObject) into a Julia SparseMatrixCSC,
# shifting the indices from 0-based to 1-based.
function Base.convert(::Type{SparseMatrixCSC}, o::PyObject)
    I, J, V = pyimport("scipy.sparse").find(o)
    return sparse(I .+ 1, J .+ 1, V, o.shape...)
end
# Converts a Julia SparseMatrixCSC into a SciPy COO matrix (a PyObject),
# shifting the indices from 1-based to 0-based.
function PyCall.PyObject(m::SparseMatrixCSC)
    return pyimport("scipy.sparse").csc_matrix(
        (m.nzval, m.rowval .- 1, m.colptr .- 1),
        shape = size(m),
    ).tocoo()
end
export DynamicLazyConstraintsComponent,
UserCutsComponent,
ObjectiveValueComponent,
PrimalSolutionComponent,
StaticLazyConstraintsComponent,
MinPrecisionThreshold,
Hdf5Sample
include("Cuts/BlackBox/cplex.jl")
end # module

@@ -1,29 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
module BB
using Requires
frac(x) = x - floor(x)
include("structs.jl")
include("collect.jl")
include("nodepool.jl")
include("optimize.jl")
include("log.jl")
include("lp.jl")
include("varbranch/hybrid.jl")
include("varbranch/infeasibility.jl")
include("varbranch/pseudocost.jl")
include("varbranch/random.jl")
include("varbranch/reliability.jl")
include("varbranch/strong.jl")
function __init__()
@require CPLEX = "a076750e-1247-5638-91d2-ce28b192dca0" include("cplex.jl")
end
end # module

@@ -1,63 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Printf
using Base.Threads
import Base.Threads: @threads, nthreads, threadid
import ..load_data, ..Hdf5Sample
function collect!(
optimizer,
filename::String;
time_limit::Float64 = Inf,
node_limit::Int = typemax(Int),
gap_limit::Float64 = 1e-4,
print_interval::Int = 5,
branch_rule::VariableBranchingRule = ReliabilityBranching(collect = true),
enable_plunging = true,
)::NodePool
model = read_from_file(filename)
mip = init(optimizer)
load!(mip, model)
h5 = Hdf5Sample(replace(filename, ".mps.gz" => ".h5"), "r")
primal_bound = h5.get_scalar("mip_upper_bound")
if primal_bound === nothing
primal_bound = h5.get_scalar("mip_obj_value")
end
h5.file.close()
pool = solve!(
mip;
initial_primal_bound = primal_bound,
time_limit,
node_limit,
gap_limit,
print_interval,
branch_rule,
enable_plunging,
)
h5 = Hdf5Sample(replace(filename, ".mps.gz" => ".h5"))
pseudocost_up = [NaN for _ = 1:mip.nvars]
pseudocost_down = [NaN for _ = 1:mip.nvars]
priorities = [0.0 for _ = 1:mip.nvars]
for (var, var_hist) in pool.var_history
pseudocost_up[var.index] = var_hist.pseudocost_up
pseudocost_down[var.index] = var_hist.pseudocost_down
x = mean(var_hist.fractional_values)
f_up = x - floor(x)
f_down = ceil(x) - x
priorities[var.index] =
var_hist.pseudocost_up * f_up * var_hist.pseudocost_down * f_down
end
h5.put_array("bb_var_pseudocost_up", pseudocost_up)
h5.put_array("bb_var_pseudocost_down", pseudocost_down)
h5.put_array("bb_var_priority", priorities)
collect!(branch_rule, h5)
h5.file.close()
return pool
end
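# Example usage (a sketch; the optimizer and filename below are hypothetical).
# A matching .h5 file containing "mip_upper_bound" or "mip_obj_value" must
# already exist next to the MPS file:
#
#     using CPLEX
#     pool = collect!(CPLEX.Optimizer, "instances/000001.mps.gz", time_limit = 3600.0)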

@@ -1,33 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using CPLEX
function _probe(
mip::MIP,
cpx::CPLEX.Optimizer,
var::Variable,
::Float64,
::Float64,
::Float64,
itlim::Int,
)::Tuple{Float64,Float64}
indices = [var.index - Cint(1)]
downobj, upobj, cnt = [0.0], [0.0], 1
status = CPXlpopt(cpx.env, cpx.lp)
status == 0 || error("CPXlpopt failed ($status)")
status = CPXstrongbranch(cpx.env, cpx.lp, indices, cnt, downobj, upobj, itlim)
status == 0 || error("CPXstrongbranch failed ($status)")
return upobj[1] * mip.sense, downobj[1] * mip.sense
end
function _relax_integrality!(cpx::CPLEX.Optimizer)::Nothing
status = CPXchgprobtype(cpx.env, cpx.lp, CPLEX.CPXPROB_LP)
status == 0 || error("CPXchgprobtype failed ($status)")
return
end

@@ -1,68 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Printf
function print_progress_header()
@printf(
"%8s %9s %9s %13s %13s %9s %6s %13s %6s %-24s %9s %9s %6s %6s",
"time",
"processed",
"pending",
"primal-bound",
"dual-bound",
"gap",
"node",
"obj",
"parent",
"branch-var",
"branch-lb",
"branch-ub",
"depth",
"iinfes"
)
println()
flush(stdout)
end
function print_progress(
pool::NodePool,
node::Node;
time_elapsed::Float64,
print_interval::Int,
primal_update::Bool,
)::Nothing
if (pool.processed % print_interval == 0) || isempty(pool.pending) || primal_update
if isempty(node.branch_vars)
branch_var_name = "---"
branch_lb = "---"
branch_ub = "---"
else
branch_var_name = name(node.mip, last(node.branch_vars))
L = min(24, length(branch_var_name))
branch_var_name = branch_var_name[1:L]
branch_lb = @sprintf("%9.2f", last(node.branch_lb))
branch_ub = @sprintf("%9.2f", last(node.branch_ub))
end
@printf(
"%8.2f %9d %9d %13.6e %13.6e %9.2e %6d %13.6e %6s %-24s %9s %9s %6d %6d",
time_elapsed,
pool.processed,
length(pool.processing) + length(pool.pending),
pool.primal_bound * node.mip.sense,
pool.dual_bound * node.mip.sense,
pool.gap,
node.index,
node.obj * node.mip.sense,
node.parent === nothing ? "---" : @sprintf("%d", node.parent.index),
branch_var_name,
branch_lb,
branch_ub,
length(node.branch_vars),
length(node.fractional_variables)
)
println()
flush(stdout)
end
end

@@ -1,269 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import Base: values, convert
using Base.Threads
import Base.Threads: @threads, nthreads, threadid
using JuMP
using MathOptInterface
const MOI = MathOptInterface
function init(constructor)::MIP
return MIP(
constructor = constructor,
optimizers = Any[nothing for t = 1:nthreads()],
int_vars = Variable[],
int_vars_lb = Float64[],
int_vars_ub = Float64[],
sense = 1.0,
lp_iterations = 0,
nvars = 0,
)
end
function read!(mip::MIP, filename::AbstractString)::Nothing
load!(mip, read_from_file(filename))
return
end
function load!(mip::MIP, prototype::JuMP.Model)
mip.nvars = num_variables(prototype)
_replace_zero_one!(backend(prototype))
_assert_supported(backend(prototype))
mip.int_vars, mip.int_vars_lb, mip.int_vars_ub = _get_int_variables(backend(prototype))
mip.sense = _get_objective_sense(backend(prototype))
_relax_integrality!(backend(prototype))
@threads for t = 1:nthreads()
model = Model(mip.constructor)
MOI.copy_to(model, backend(prototype))
mip.optimizers[t] = backend(model)
set_silent(model)
end
return
end
function _assert_supported(optimizer::MOI.AbstractOptimizer)::Nothing
types = MOI.get(optimizer, MOI.ListOfConstraintTypesPresent())
for (F, S) in types
_assert_supported(F, S)
end
end
function _assert_supported(F::Type, S::Type)::Nothing
if F in [MOI.ScalarAffineFunction{Float64}, MOI.VariableIndex] && S in [
MOI.LessThan{Float64},
MOI.GreaterThan{Float64},
MOI.EqualTo{Float64},
MOI.Interval{Float64},
]
return
end
if F in [MOI.VariableIndex] && S in [MOI.Integer, MOI.ZeroOne]
return
end
error("MOI constraint not supported: $F in $S")
end
function _get_objective_sense(optimizer::MOI.AbstractOptimizer)::Float64
sense = MOI.get(optimizer, MOI.ObjectiveSense())
if sense == MOI.MIN_SENSE
return 1.0
elseif sense == MOI.MAX_SENSE
return -1.0
else
error("objective sense not supported: $sense")
end
end
_interval_index(v::Variable) =
MOI.ConstraintIndex{MOI.VariableIndex,MOI.Interval{Float64}}(v.index)
_lower_bound_index(v::Variable) =
MOI.ConstraintIndex{MOI.VariableIndex,MOI.GreaterThan{Float64}}(v.index)
_upper_bound_index(v::Variable) =
MOI.ConstraintIndex{MOI.VariableIndex,MOI.LessThan{Float64}}(v.index)
function _replace_zero_one!(optimizer::MOI.AbstractOptimizer)::Nothing
constrs_to_delete = MOI.ConstraintIndex[]
funcs = MOI.VariableIndex[]
sets = Union{MOI.Interval,MOI.Integer}[]
for ci in
MOI.get(optimizer, MOI.ListOfConstraintIndices{MOI.VariableIndex,MOI.ZeroOne}())
func = MOI.get(optimizer, MOI.ConstraintFunction(), ci)
var = func.value
push!(constrs_to_delete, ci)
push!(funcs, MOI.VariableIndex(var))
push!(funcs, MOI.VariableIndex(var))
push!(sets, MOI.Interval{Float64}(0.0, 1.0))
push!(sets, MOI.Integer())
end
MOI.delete(optimizer, constrs_to_delete)
MOI.add_constraints(optimizer, funcs, sets)
return
end
function _get_int_variables(
optimizer::MOI.AbstractOptimizer,
)::Tuple{Vector{Variable},Vector{Float64},Vector{Float64}}
vars = Variable[]
lb = Float64[]
ub = Float64[]
for ci in
MOI.get(optimizer, MOI.ListOfConstraintIndices{MOI.VariableIndex,MOI.Integer}())
func = MOI.get(optimizer, MOI.ConstraintFunction(), ci)
var = Variable(func.value)
var_lb, var_ub = -Inf, Inf
if MOI.is_valid(optimizer, _interval_index(var))
constr = MOI.get(optimizer, MOI.ConstraintSet(), _interval_index(var))
var_ub = constr.upper
var_lb = constr.lower
else
# If interval constraint is not found, query individual lower/upper bound
# constraints and replace them by an interval constraint.
if MOI.is_valid(optimizer, _lower_bound_index(var))
constr = MOI.get(optimizer, MOI.ConstraintSet(), _lower_bound_index(var))
var_lb = constr.lower
MOI.delete(optimizer, _lower_bound_index(var))
end
if MOI.is_valid(optimizer, _upper_bound_index(var))
constr = MOI.get(optimizer, MOI.ConstraintSet(), _upper_bound_index(var))
var_ub = constr.upper
MOI.delete(optimizer, _upper_bound_index(var))
end
MOI.add_constraint(optimizer, var, MOI.Interval(var_lb, var_ub))
end
push!(vars, var)
push!(lb, var_lb)
push!(ub, var_ub)
end
return vars, lb, ub
end
function _relax_integrality!(optimizer::MOI.AbstractOptimizer)::Nothing
indices =
MOI.get(optimizer, MOI.ListOfConstraintIndices{MOI.VariableIndex,MOI.Integer}())
MOI.delete(optimizer, indices)
end
"""
solve_relaxation!(mip::MIP)::Tuple{Symbol, Float64}
Solves the linear relaxation of `mip` and returns a tuple containing the
solution status (either :Optimal or :Infeasible) and the optimal objective
value. If the problem is infeasible, the optimal value is Inf for minimization
problems and -Inf for maximization problems.
"""
function solve_relaxation!(mip::MIP)::Tuple{Symbol,Float64}
t = threadid()
MOI.optimize!(mip.optimizers[t])
try
mip.lp_iterations += MOI.get(mip.optimizers[t], MOI.SimplexIterations())
catch
# ignore
end
status = MOI.get(mip.optimizers[t], MOI.TerminationStatus())
if status == MOI.OPTIMAL
obj = MOI.get(mip.optimizers[t], MOI.ObjectiveValue())
return :Optimal, obj * mip.sense
elseif status in [MOI.INFEASIBLE, MOI.INFEASIBLE_OR_UNBOUNDED]
return :Infeasible, Inf * mip.sense
end
error("unknown status: $status")
end
"""
values(mip::MIP, vars::Vector{Variable})::Array{Float64}
Returns a vector `vals` which describes the current primal values for the
decision variables `vars`. More specifically, `vals[j]` is the current
primal value of `vars[j]`.
"""
function values(mip::MIP, vars::Vector{Variable})::Array{Float64}
return MOI.get(
mip.optimizers[threadid()],
MOI.VariablePrimal(),
[MOI.VariableIndex(v.index) for v in vars],
)
end
values(mip::MIP) =
values(mip, MOI.get(mip.optimizers[threadid()], MOI.ListOfVariableIndices()))
"""
set_bounds!(mip::MIP,
vars::Vector{Variable},
lb::Array{Float64},
ub::Array{Float64})
Modify the bounds of the given variables. More specifically, sets
upper and lower bounds of `vars[j]` to `ub[j]` and `lb[j]`, respectively.
"""
function set_bounds!(
mip::MIP,
vars::Vector{Variable},
lb::Array{Float64},
ub::Array{Float64},
)::Nothing
t = threadid()
for j = 1:length(vars)
MOI.delete(mip.optimizers[t], _interval_index(vars[j]))
MOI.add_constraint(
mip.optimizers[t],
MOI.VariableIndex(vars[j].index),
MOI.Interval(lb[j], ub[j]),
)
end
return
end
"""
name(mip::MIP, var::Variable)::String
Return the name of the decision variable `var`.
"""
function name(mip::MIP, var::Variable)::String
t = threadid()
return MOI.get(mip.optimizers[t], MOI.VariableName(), MOI.VariableIndex(var.index))
end
"""
probe(mip::MIP, var, x, lb, ub, max_iterations)::Tuple{Float64, Float64}
Suppose that the LP relaxation of `mip` has been solved and that `var` holds
a fractional value `x`. This function returns two numbers corresponding,
respectively, to the optimal values of the LP relaxations having the
constraints `ceil(x) <= var <= ub` and `lb <= var <= floor(x)` enforced.
If any branch is infeasible, the optimal value for that branch is Inf for
minimization problems and -Inf for maximization problems.
"""
function probe(
mip::MIP,
var::Variable,
x::Float64,
lb::Float64,
ub::Float64,
max_iterations::Int,
)::Tuple{Float64,Float64}
return _probe(mip, mip.optimizers[threadid()], var, x, lb, ub, max_iterations)
end
function _probe(
mip::MIP,
_,
var::Variable,
x::Float64,
lb::Float64,
ub::Float64,
::Int,
)::Tuple{Float64,Float64}
set_bounds!(mip, [var], [ceil(x)], [ceil(x)])
_, obj_up = solve_relaxation!(mip)
set_bounds!(mip, [var], [floor(x)], [floor(x)])
_, obj_down = solve_relaxation!(mip)
set_bounds!(mip, [var], [lb], [ub])
return obj_up * mip.sense, obj_down * mip.sense
end
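# Example usage (a sketch; `mip` and `var` are hypothetical, with `var` sitting
# at the fractional value 0.5 in the current LP relaxation):
#
#     obj_up, obj_down = probe(mip, var, 0.5, 0.0, 1.0, 100)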

@@ -1,186 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Statistics
using DataStructures
import Base.Threads: threadid
function take(
pool::NodePool;
suggestions::Array{Node} = [],
time_remaining::Float64,
gap_limit::Float64,
node_limit::Int,
)::Union{Symbol,Node}
t = threadid()
lock(pool.lock) do
n_processing = length(pool.processing)
if (
(pool.gap < gap_limit) ||
(n_processing + pool.processed >= node_limit) ||
(time_remaining < 0)
)
return :END
end
if isempty(pool.pending)
if isempty(pool.processing)
return :END
else
return :WAIT
end
else
# If one of the suggested nodes is still pending, return it.
# This is known in the literature as plunging.
for s in suggestions
if s in keys(pool.pending)
delete!(pool.pending, s)
pool.processing[s] = s.obj
return s
end
end
# If all suggestions have already been processed
# or pruned, find another node based on best bound.
node = dequeue!(pool.pending)
pool.processing[node] = node.obj
return node
end
end
end
function offer(
pool::NodePool;
parent_node::Union{Nothing,Node},
child_nodes::Vector{Node},
time_elapsed::Float64 = 0.0,
print_interval::Int = 100,
)::Nothing
lock(pool.lock) do
primal_update = false
# Update node.processing and node.processed
if parent_node !== nothing
pool.processed += 1
delete!(pool.processing, parent_node)
end
# Queue child nodes
for node in child_nodes
if node.status == :Infeasible
continue
end
if node.obj >= pool.primal_bound - 1e-6
continue
end
if isempty(node.fractional_variables)
pool.primal_bound = min(pool.primal_bound, node.obj)
primal_update = true
continue
end
pool.pending[node] = node.obj
end
# Update dual bound
pool.dual_bound = pool.primal_bound
if !isempty(pool.pending)
pool.dual_bound = min(pool.dual_bound, peek(pool.pending)[2])
end
if !isempty(pool.processing)
pool.dual_bound = min(pool.dual_bound, peek(pool.processing)[2])
end
# Update gap
if pool.primal_bound == pool.dual_bound
pool.gap = 0
else
pool.gap = abs((pool.primal_bound - pool.dual_bound) / pool.primal_bound)
end
if parent_node !== nothing
# Update branching variable history
branch_var = child_nodes[1].branch_vars[end]
offset = findfirst(isequal(branch_var), parent_node.fractional_variables)
x = parent_node.fractional_values[offset]
obj_change_up = child_nodes[1].obj - parent_node.obj
obj_change_down = child_nodes[2].obj - parent_node.obj
_update_var_history(
pool = pool,
var = branch_var,
x = x,
obj_change_down = obj_change_down,
obj_change_up = obj_change_up,
)
# Update global history
pool.history.avg_pseudocost_up =
mean(vh.pseudocost_up for vh in values(pool.var_history))
pool.history.avg_pseudocost_down =
mean(vh.pseudocost_down for vh in values(pool.var_history))
end
for node in child_nodes
print_progress(
pool,
node,
time_elapsed = time_elapsed,
print_interval = print_interval,
primal_update = isfinite(node.obj) && isempty(node.fractional_variables),
)
end
end
return
end
function _update_var_history(;
pool::NodePool,
var::Variable,
x::Float64,
obj_change_down::Float64,
obj_change_up::Float64,
)::Nothing
# Create new history entry
if var ∉ keys(pool.var_history)
pool.var_history[var] = VariableHistory()
end
varhist = pool.var_history[var]
# Push fractional value
push!(varhist.fractional_values, x)
# Push objective value changes
push!(varhist.obj_change_up, obj_change_up)
push!(varhist.obj_change_down, obj_change_down)
# Push objective change ratios
f_up = x - floor(x)
f_down = ceil(x) - x
if isfinite(obj_change_up)
push!(varhist.obj_ratio_up, obj_change_up / f_up)
end
if isfinite(obj_change_down)
push!(varhist.obj_ratio_down, obj_change_down / f_down)
end
# Update variable pseudocosts
varhist.pseudocost_up = 0.0
varhist.pseudocost_down = 0.0
if !isempty(varhist.obj_ratio_up)
varhist.pseudocost_up = sum(varhist.obj_ratio_up) / length(varhist.obj_ratio_up)
end
if !isempty(varhist.obj_ratio_down)
varhist.pseudocost_down =
sum(varhist.obj_ratio_down) / length(varhist.obj_ratio_down)
end
return
end
function generate_indices(pool::NodePool, n::Int)::Vector{Int}
lock(pool.lock) do
result = Int[]
for i = 1:n
push!(result, pool.next_index)
pool.next_index += 1
end
return result
end
end

@@ -1,169 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Printf
using Base.Threads
import Base.Threads: @threads, nthreads, threadid
import ..load_data, ..Hdf5Sample
function solve!(
mip::MIP;
time_limit::Float64 = Inf,
node_limit::Int = typemax(Int),
gap_limit::Float64 = 1e-4,
print_interval::Int = 5,
initial_primal_bound::Float64 = Inf,
branch_rule::VariableBranchingRule = ReliabilityBranching(),
enable_plunging = true,
)::NodePool
time_initial = time()
pool = NodePool(mip = mip)
pool.primal_bound = initial_primal_bound
root_node = _create_node(mip)
if isempty(root_node.fractional_variables)
println("root relaxation is integer feasible")
pool.dual_bound = root_node.obj
pool.primal_bound = root_node.obj
return pool
else
print_progress_header()
end
offer(
pool,
parent_node = nothing,
child_nodes = [root_node],
print_interval = print_interval,
)
@threads for t = 1:nthreads()
child_one, child_zero, suggestions = nothing, nothing, Node[]
while true
time_elapsed = time() - time_initial
if enable_plunging && (child_one !== nothing)
suggestions = Node[child_one, child_zero]
end
node = take(
pool,
suggestions = suggestions,
time_remaining = time_limit - time_elapsed,
node_limit = node_limit,
gap_limit = gap_limit,
)
if node == :END
break
elseif node == :WAIT
sleep(0.1)
continue
else
# Assert node is feasible
_set_node_bounds(node)
status, _ = solve_relaxation!(mip)
@assert status == :Optimal
_unset_node_bounds(node)
# Find branching variable
ids = generate_indices(pool, 2)
branch_var = find_branching_var(branch_rule, node, pool)
# Find current variable lower and upper bounds
offset = findfirst(isequal(branch_var), mip.int_vars)
var_lb = mip.int_vars_lb[offset]
var_ub = mip.int_vars_ub[offset]
for (offset, v) in enumerate(node.branch_vars)
if v == branch_var
var_lb = max(var_lb, node.branch_lb[offset])
var_ub = min(var_ub, node.branch_ub[offset])
end
end
# Query current fractional value
offset = findfirst(isequal(branch_var), node.fractional_variables)
var_value = node.fractional_values[offset]
child_zero = _create_node(
mip,
index = ids[2],
parent = node,
branch_var = branch_var,
branch_var_lb = var_lb,
branch_var_ub = floor(var_value),
)
child_one = _create_node(
mip,
index = ids[1],
parent = node,
branch_var = branch_var,
branch_var_lb = ceil(var_value),
branch_var_ub = var_ub,
)
offer(
pool,
parent_node = node,
child_nodes = [child_one, child_zero],
time_elapsed = time_elapsed,
print_interval = print_interval,
)
end
end
end
return pool
end
function _create_node(
mip;
index::Int = 0,
parent::Union{Nothing,Node} = nothing,
branch_var::Union{Nothing,Variable} = nothing,
branch_var_lb::Union{Nothing,Float64} = nothing,
branch_var_ub::Union{Nothing,Float64} = nothing,
)::Node
if parent === nothing
branch_vars = Variable[]
branch_lb = Float64[]
branch_ub = Float64[]
depth = 1
else
branch_vars = [parent.branch_vars; branch_var]
branch_lb = [parent.branch_lb; branch_var_lb]
branch_ub = [parent.branch_ub; branch_var_ub]
depth = parent.depth + 1
end
set_bounds!(mip, branch_vars, branch_lb, branch_ub)
status, obj = solve_relaxation!(mip)
if status == :Optimal
vals = values(mip, mip.int_vars)
fractional_indices = [
j for j in 1:length(mip.int_vars) if 1e-6 < vals[j] - floor(vals[j]) < 1 - 1e-6
]
fractional_values = vals[fractional_indices]
fractional_variables = mip.int_vars[fractional_indices]
else
fractional_variables = Variable[]
fractional_values = Float64[]
end
set_bounds!(mip, mip.int_vars, mip.int_vars_lb, mip.int_vars_ub)
return Node(
mip,
index,
depth,
obj,
status,
branch_vars,
branch_lb,
branch_ub,
fractional_variables,
fractional_values,
parent,
)
end
function _set_node_bounds(node::Node)
set_bounds!(node.mip, node.branch_vars, node.branch_lb, node.branch_ub)
end
function _unset_node_bounds(node::Node)
set_bounds!(node.mip, node.mip.int_vars, node.mip.int_vars_lb, node.mip.int_vars_ub)
end
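# Usage sketch (hypothetical; assumes a `mip::MIP` built elsewhere in the
# package): solve with a 60-second limit and pseudocost branching, then
# inspect the resulting bounds.
#
#   pool = solve!(mip; time_limit = 60.0, branch_rule = PseudocostBranching())
#   @show pool.primal_bound pool.dual_bound pool.gap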

@ -1,74 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using DataStructures
abstract type VariableBranchingRule end
struct Variable
index::Any
end
Base.@kwdef mutable struct MIP
constructor::Any
optimizers::Vector
int_vars::Vector{Variable}
int_vars_lb::Vector{Float64}
int_vars_ub::Vector{Float64}
sense::Float64
lp_iterations::Int64
nvars::Int
end
struct Node
mip::MIP
index::Int
depth::Int
obj::Float64
status::Symbol
branch_vars::Array{Variable}
branch_lb::Array{Float64}
branch_ub::Array{Float64}
fractional_variables::Array{Variable}
fractional_values::Array{Float64}
parent::Union{Nothing,Node}
end
Base.@kwdef mutable struct History
avg_pseudocost_up::Float64 = 1.0
avg_pseudocost_down::Float64 = 1.0
end
mutable struct VariableHistory
fractional_values::Array{Float64}
obj_change_up::Array{Float64}
obj_change_down::Array{Float64}
obj_ratio_up::Array{Float64}
obj_ratio_down::Array{Float64}
pseudocost_up::Float64
pseudocost_down::Float64
VariableHistory() = new(
Float64[], # fractional_values
Float64[], # obj_change_up
Float64[], # obj_change_down
Float64[], # obj_ratio_up
Float64[], # obj_ratio_down
0.0, # pseudocost_up
0.0, # pseudocost_down
)
end
Base.@kwdef mutable struct NodePool
mip::MIP
pending::PriorityQueue{Node,Float64} = PriorityQueue{Node,Float64}()
processing::PriorityQueue{Node,Float64} = PriorityQueue{Node,Float64}()
processed::Int = 0
next_index::Int = 1
lock::ReentrantLock = ReentrantLock()
primal_bound::Float64 = Inf
dual_bound::Float64 = Inf
gap::Float64 = Inf
history::History = History()
var_history::Dict{Variable,VariableHistory} = Dict()
end

@ -1,31 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
"""
HybridBranching(depth_cutoff::Int,
shallow_rule::VariableBranchingRule,
deep_rule::VariableBranchingRule)
Branching strategy that switches between two variable branching strategies,
according to the depth of the node.
More specifically, if `node.depth <= depth_cutoff`, then `shallow_rule` is
applied. Otherwise, `deep_rule` is applied.
"""
mutable struct HybridBranching <: VariableBranchingRule
depth_cutoff::Int
shallow_rule::VariableBranchingRule
deep_rule::VariableBranchingRule
end
HybridBranching(depth_cutoff::Int = 10) =
HybridBranching(depth_cutoff, StrongBranching(), PseudocostBranching())
function find_branching_var(rule::HybridBranching, node::Node, pool::NodePool)::Variable
if node.depth <= rule.depth_cutoff
return find_branching_var(rule.shallow_rule, node, pool)
else
return find_branching_var(rule.deep_rule, node, pool)
end
end
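# Example (sketch): strong branching near the root, cheaper most-infeasible
# branching deeper in the tree. Both rules are defined in this package.
#
#   rule = HybridBranching(5, StrongBranching(), MostInfeasibleBranching())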

@ -1,54 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
"""
FirstInfeasibleBranching()
Branching rule that always selects the first fractional variable.
"""
struct FirstInfeasibleBranching <: VariableBranchingRule end
function find_branching_var(
rule::FirstInfeasibleBranching,
node::Node,
pool::NodePool,
)::Variable
return node.fractional_variables[1]
end
"""
LeastInfeasibleBranching()
Branching strategy that selects the fractional variable whose value is closest
to an integral value.
"""
struct LeastInfeasibleBranching <: VariableBranchingRule end
function find_branching_var(
rule::LeastInfeasibleBranching,
node::Node,
pool::NodePool,
)::Variable
scores = [max(v - floor(v), ceil(v) - v) for v in node.fractional_values]
_, max_offset = findmax(scores)
return node.fractional_variables[max_offset]
end
"""
MostInfeasibleBranching()
Branching strategy that selects the fractional variable whose value is closest
to 1/2.
"""
struct MostInfeasibleBranching <: VariableBranchingRule end
function find_branching_var(
rule::MostInfeasibleBranching,
node::Node,
pool::NodePool,
)::Variable
scores = [min(v - floor(v), ceil(v) - v) for v in node.fractional_values]
_, max_offset = findmax(scores)
return node.fractional_variables[max_offset]
end
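# Worked example (illustrative): for fractional values [0.9, 0.5, 0.2],
# LeastInfeasibleBranching scores max(f, 1 - f) = [0.9, 0.5, 0.8] and picks
# the first variable (closest to integral), while MostInfeasibleBranching
# scores min(f, 1 - f) = [0.1, 0.5, 0.2] and picks the second (closest to 1/2).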

@ -1,45 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
"""
PseudocostBranching()
Branching strategy that uses historical changes in objective value to estimate
strong branching scores at lower computational cost.
"""
struct PseudocostBranching <: VariableBranchingRule end
function find_branching_var(rule::PseudocostBranching, node::Node, pool::NodePool)::Variable
scores = [
_pseudocost_score(
node,
pool,
node.fractional_variables[j],
node.fractional_values[j],
) for j = 1:length(node.fractional_variables)
]
_, max_offset = findmax(scores)
return node.fractional_variables[max_offset]
end
function _pseudocost_score(
node::Node,
pool::NodePool,
var::Variable,
x::Float64,
)::Tuple{Float64,Int}
f_up = x - floor(x)
f_down = ceil(x) - x
pseudo_up = pool.history.avg_pseudocost_up * f_up
pseudo_down = pool.history.avg_pseudocost_down * f_down
if var in keys(pool.var_history)
if isfinite(pool.var_history[var].pseudocost_up)
pseudo_up = pool.var_history[var].pseudocost_up * f_up
end
if isfinite(pool.var_history[var].pseudocost_down)
pseudo_down = pool.var_history[var].pseudocost_down * f_down
end
end
# pseudo_up and pseudo_down already include the f_up/f_down factors above
return (pseudo_up * pseudo_down, var.index)
end
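# Worked example (illustrative): with pseudocost_up = 4.0, pseudocost_down = 0.9
# and x = 2.3, we get f_up = 0.3 and f_down = 0.7, so the estimated objective
# changes are 4.0 * 0.3 = 1.2 (up) and 0.9 * 0.7 = 0.63 (down), giving the
# score 1.2 * 0.63 = 0.756. The variable index breaks ties deterministically.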

@ -1,17 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Random
"""
RandomBranching()
Branching strategy that picks a fractional variable randomly.
"""
struct RandomBranching <: VariableBranchingRule end
function find_branching_var(rule::RandomBranching, node::Node, pool::NodePool)::Variable
return shuffle(node.fractional_variables)[1]
end

@ -1,191 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import ..to_str_array
Base.@kwdef mutable struct ReliabilityBranchingStats
branched_count::Vector{Int} = []
num_strong_branch_calls::Int = 0
score_var_names::Vector{String} = []
score_features::Vector{Vector{Float32}} = []
score_targets::Vector{Float32} = []
end
"""
ReliabilityBranching
Branching strategy that uses pseudocosts if there are enough observations
to make an accurate prediction of strong branching scores, or runs the
actual strong branching routine if not enough observations are available.
"""
Base.@kwdef mutable struct ReliabilityBranching <: VariableBranchingRule
min_samples::Int = 8
max_sb_calls::Int = 100
look_ahead::Int = 10
side_effect::Bool = true
max_iterations::Int = 1_000_000
aggregation::Symbol = :prod
stats::ReliabilityBranchingStats = ReliabilityBranchingStats()
collect::Bool = false
end
function _strong_branch_score(;
node::Node,
pool::NodePool,
var::Variable,
x::Float64,
side_effect::Bool,
max_iterations::Int,
aggregation::Symbol,
)::Tuple{Float64,Int}
# Find current variable lower and upper bounds
offset = findfirst(isequal(var), node.mip.int_vars)
var_lb = node.mip.int_vars_lb[offset]
var_ub = node.mip.int_vars_ub[offset]
for (offset, v) in enumerate(node.branch_vars)
if v == var
var_lb = max(var_lb, node.branch_lb[offset])
var_ub = min(var_ub, node.branch_ub[offset])
end
end
obj_up, obj_down = probe(node.mip, var, x, var_lb, var_ub, max_iterations)
obj_change_up = obj_up - node.obj
obj_change_down = obj_down - node.obj
if side_effect
_update_var_history(
pool = pool,
var = var,
x = x,
obj_change_down = obj_change_down,
obj_change_up = obj_change_up,
)
end
if aggregation == :prod
return (obj_change_up * obj_change_down, var.index)
elseif aggregation == :min
sense = node.mip.sense
return (min(sense * obj_up, sense * obj_down), var.index)
else
error("Unknown aggregation: $aggregation")
end
end
function find_branching_var(
rule::ReliabilityBranching,
node::Node,
pool::NodePool,
)::Variable
stats = rule.stats
# Initialize statistics
if isempty(stats.branched_count)
stats.branched_count = zeros(node.mip.nvars)
end
# Sort variables by pseudocost score
nfrac = length(node.fractional_variables)
pseudocost_scores = [
_pseudocost_score(
node,
pool,
node.fractional_variables[j],
node.fractional_values[j],
) for j = 1:nfrac
]
σ = sortperm(pseudocost_scores, rev = true)
sorted_vars = node.fractional_variables[σ]
if rule.collect
# Compute dynamic features for all fractional variables
features = []
for (i, var) in enumerate(sorted_vars)
branched_count = stats.branched_count[var.index]
branched_count_rel = 0.0
branched_count_sum = sum(stats.branched_count)
if branched_count_sum > 0
branched_count_rel = branched_count / branched_count_sum
end
push!(
features,
Float32[
nfrac,
node.fractional_values[σ[i]],
node.depth,
pseudocost_scores[σ[i]][1],
branched_count,
branched_count_rel,
],
)
end
end
_set_node_bounds(node)
no_improv_count, n_sb_calls = 0, 0
max_score, max_var = (-Inf, -Inf), sorted_vars[1]
for (i, var) in enumerate(sorted_vars)
# Decide whether to use strong branching
use_strong_branch = true
if n_sb_calls >= rule.max_sb_calls
use_strong_branch = false
else
if var in keys(pool.var_history)
varhist = pool.var_history[var]
hlength = min(length(varhist.obj_ratio_up), length(varhist.obj_ratio_down))
if hlength >= rule.min_samples
use_strong_branch = false
end
end
end
if use_strong_branch
# Compute strong branching score
n_sb_calls += 1
score = _strong_branch_score(
node = node,
pool = pool,
var = var,
x = node.fractional_values[σ[i]],
side_effect = rule.side_effect,
max_iterations = rule.max_iterations,
aggregation = rule.aggregation,
)
if rule.collect
# Store training data
push!(stats.score_var_names, name(node.mip, var))
push!(stats.score_features, features[i])
push!(stats.score_targets, score[1])
end
else
score = pseudocost_scores[σ[i]]
end
if score > max_score
max_score, max_var = score, var
no_improv_count = 0
else
no_improv_count += 1
end
no_improv_count <= rule.look_ahead || break
end
_unset_node_bounds(node)
# Update statistics
stats.branched_count[max_var.index] += 1
stats.num_strong_branch_calls += n_sb_calls
return max_var
end
function collect!(rule::ReliabilityBranching, h5)
if rule.stats.num_strong_branch_calls == 0
return
end
h5.put_array("bb_score_var_names", to_str_array(rule.stats.score_var_names))
h5.put_array("bb_score_features", vcat(rule.stats.score_features'...))
h5.put_array("bb_score_targets", rule.stats.score_targets)
end
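# Usage sketch (hypothetical): enable training-data collection during solve,
# then append the collected strong branching scores to an HDF5 sample. The
# `h5` handle below is assumed to follow the Hdf5Sample interface used above.
#
#   rule = ReliabilityBranching(collect = true)
#   solve!(mip; branch_rule = rule)
#   collect!(rule, h5)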

@ -1,32 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Random
"""
StrongBranching(look_ahead::Int, max_calls::Int)
Branching strategy that selects a subset of fractional variables
as candidates (according to pseudocosts), then solves two linear
programming problems for each candidate.
"""
Base.@kwdef struct StrongBranching <: VariableBranchingRule
look_ahead::Int = 10
max_calls::Int = 100
side_effect::Bool = true
max_iterations::Int = 1_000_000
aggregation::Symbol = :prod
end
function find_branching_var(rule::StrongBranching, node::Node, pool::NodePool)::Variable
rb_rule = ReliabilityBranching(
min_samples = typemax(Int),
max_sb_calls = rule.max_calls,
look_ahead = rule.look_ahead,
side_effect = rule.side_effect,
max_iterations = rule.max_iterations,
aggregation = rule.aggregation,
)
return find_branching_var(rb_rule, node, pool)
end

@ -1,14 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
module Cuts
include("tableau/structs.jl")
include("tableau/moi.jl")
include("tableau/transform.jl")
include("tableau/tableau.jl")
include("tableau/gmi.jl")
include("tableau/collect.jl")
end # module

@ -1,201 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import ..Hdf5Sample
using OrderedCollections
function collect_gmi(
mps_filename;
optimizer,
max_rounds=10,
max_cuts_per_round=100,
)
@info mps_filename
reset_timer!()
# Open HDF5 file
h5_filename = replace(mps_filename, ".mps.gz" => ".h5")
h5 = Hdf5Sample(h5_filename)
# Read optimal solution
sol_opt_dict = Dict(
zip(
h5.get_array("static_var_names"),
convert(Array{Float64}, h5.get_array("mip_var_values")),
)
)
# Read optimal value
obj_mip = h5.get_scalar("mip_lower_bound")
if obj_mip === nothing
obj_mip = h5.get_scalar("mip_obj_value")
end
obj_lp = nothing
h5.file.close()
# Define relative MIP gap
gap(v) = 100 * abs(obj_mip - v) / abs(v)
# Initialize stats
stats_obj = []
stats_gap = []
stats_ncuts = []
stats_time_convert = 0
stats_time_solve = 0
stats_time_select = 0
stats_time_tableau = 0
stats_time_gmi = 0
all_cuts = nothing
# Read problem
model = read_from_file(mps_filename)
for round in 1:max_rounds
@info "Round $(round)..."
stats_time_convert = @elapsed begin
# Extract problem data
data = ProblemData(model)
# Construct optimal solution vector (with correct variable sequence)
sol_opt = [sol_opt_dict[n] for n in data.var_names]
# Assert optimal solution is feasible for the original problem
@assert all(data.constr_lb .- 1e-3 .<= data.constr_lhs * sol_opt)
@assert all(data.constr_lhs * sol_opt .<= data.constr_ub .+ 1e-3)
# Convert to standard form
data_s, transforms = convert_to_standard_form(data)
model_s = to_model(data_s)
set_optimizer(model_s, optimizer)
relax_integrality(model_s)
# Convert optimal solution to standard form
sol_opt_s = forward(transforms, sol_opt)
# Assert converted solution is feasible for standard form problem
@assert data_s.constr_lhs * sol_opt_s ≈ data_s.constr_lb
end
# Optimize standard form
optimize!(model_s)
stats_time_solve += solve_time(model_s)
obj = objective_value(model_s) + data_s.obj_offset
if obj_lp === nothing
obj_lp = obj
push!(stats_obj, obj)
push!(stats_gap, gap(obj))
push!(stats_ncuts, 0)
end
if termination_status(model_s) != MOI.OPTIMAL
return
end
# Select tableau rows
basis = get_basis(model_s)
sol_frac = get_x(model_s)
stats_time_select += @elapsed begin
selected_rows = select_gmi_rows(
data_s,
basis,
sol_frac,
max_rows=max_cuts_per_round,
)
end
# Compute selected tableau rows
stats_time_tableau += @elapsed begin
tableau = compute_tableau(
data_s,
basis,
sol_frac,
rows=selected_rows,
)
# Assert tableau rows have been computed correctly
@assert tableau.lhs * sol_frac ≈ tableau.rhs
@assert tableau.lhs * sol_opt_s ≈ tableau.rhs
end
# Compute GMI cuts
stats_time_gmi += @elapsed begin
cuts_s = compute_gmi(data_s, tableau)
# Assert cuts have been generated correctly
try
assert_cuts_off(cuts_s, sol_frac)
assert_does_not_cut_off(cuts_s, sol_opt_s)
catch
@warn "Invalid cuts detected. Discarding round $round cuts and aborting."
break
end
# Abort if no cuts are left
if length(cuts_s.lb) == 0
@info "No cuts generated. Aborting."
break
end
end
# Add GMI cuts to original problem
cuts = backwards(transforms, cuts_s)
assert_does_not_cut_off(cuts, sol_opt)
constrs = add_constraint_set(model, cuts)
# Optimize original form
set_optimizer(model, optimizer)
undo_relax = relax_integrality(model)
optimize!(model)
obj = objective_value(model)
push!(stats_obj, obj)
push!(stats_gap, gap(obj))
# Store useful cuts; drop useless ones from the problem
useful = [
abs(shadow_price(c)) > 1e-3
for c in constrs
]
drop = findall(useful .== false)
keep = findall(useful .== true)
delete.(model, constrs[drop])
if all_cuts === nothing
all_cuts = cuts
else
all_cuts.lhs = [all_cuts.lhs; cuts.lhs[keep, :]]
all_cuts.lb = [all_cuts.lb; cuts.lb[keep]]
all_cuts.ub = [all_cuts.ub; cuts.ub[keep]]
end
push!(stats_ncuts, length(all_cuts.lb))
undo_relax()
end
# Store cuts
if all_cuts !== nothing
@info "Storing $(length(all_cuts.ub)) GMI cuts..."
h5 = Hdf5Sample(h5_filename)
h5.put_sparse("cuts_lhs", all_cuts.lhs)
h5.put_array("cuts_lb", all_cuts.lb)
h5.put_array("cuts_ub", all_cuts.ub)
h5.file.close()
end
return OrderedDict(
"instance" => mps_filename,
"max_rounds" => max_rounds,
"rounds" => length(stats_obj) - 1,
"time_convert" => stats_time_convert,
"time_solve" => stats_time_solve,
"time_tableau" => stats_time_tableau,
"time_gmi" => stats_time_gmi,
"obj_mip" => obj_mip,
"obj_lp" => obj_lp,
"stats_obj" => stats_obj,
"stats_gap" => stats_gap,
"stats_ncuts" => stats_ncuts,
)
end
export collect_gmi

@ -1,91 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using SparseArrays
@inline frac(x::Float64) = x - floor(x)
function select_gmi_rows(data, basis, x; max_rows=10, atol=0.001)
candidate_rows = [
r
for r in 1:length(basis.var_basic)
if (data.var_types[basis.var_basic[r]] != 'C') && (frac(x[basis.var_basic[r]]) > atol)
]
candidate_vals = frac.(x[basis.var_basic[candidate_rows]])
score = abs.(candidate_vals .- 0.5)
perm = sortperm(score)
return [candidate_rows[perm[i]] for i in 1:min(length(perm), max_rows)]
end
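# Worked example (illustrative): if the basic integer variables take values
# [2.5, 3.9, 1.02], their fractional parts are [0.5, 0.9, 0.02] and the
# scores |f - 0.5| are [0.0, 0.4, 0.48], so the rows are returned in that
# order: the closer a value is to 0.5, the earlier its row is selected.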
function compute_gmi(
data::ProblemData,
tableau::Tableau,
tol=1e-8,
)::ConstraintSet
nrows, ncols = size(tableau.lhs)
ub = Float64[Inf for _ in 1:nrows]
lb = Float64[0.999 for _ in 1:nrows]
tableau_I, tableau_J, tableau_V = findnz(tableau.lhs)
lhs_I = Int[]
lhs_J = Int[]
lhs_V = Float64[]
@timeit "Compute coefficients" begin
for k in 1:nnz(tableau.lhs)
i::Int = tableau_I[k]
j::Int = tableau_J[k]
v::Float64 = 0.0
beta = frac(tableau.rhs[i])
# The GMI coefficient depends on the type of the column variable j
if data.var_types[j] == 'C'
# Continuous variables use the raw tableau coefficient
alpha_j = tableau_V[k]
if alpha_j >= 0
v = alpha_j / beta
else
v = -alpha_j / (1 - beta)
end
else
# Integer variables use the fractional part of the coefficient
alpha_j = frac(tableau_V[k])
if alpha_j <= beta
v = alpha_j / beta
else
v = (1 - alpha_j) / (1 - beta)
end
end
if abs(v) > tol
push!(lhs_I, i)
push!(lhs_J, tableau_J[k])
push!(lhs_V, v)
end
end
lhs = sparse(lhs_I, lhs_J, lhs_V, nrows, ncols)
end
return ConstraintSet(; lhs, ub, lb)
end
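# Worked example (illustrative) of the coefficient formulas above: take a
# tableau row with rhs fractionality beta = 0.3. An integer variable with
# coefficient fractionality alpha = 0.2 <= beta contributes 0.2 / 0.3 ≈ 0.667;
# one with alpha = 0.8 > beta contributes (1 - 0.8) / (1 - 0.3) ≈ 0.286. Every
# generated cut then reads lhs' * x >= 0.999 (the slightly relaxed GMI rhs).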
function assert_cuts_off(
cuts::ConstraintSet,
x::Vector{Float64},
tol=1e-6
)
for i in 1:length(cuts.lb)
val = cuts.lhs[i, :]' * x
if (val <= cuts.ub[i] - tol) && (val >= cuts.lb[i] + tol)
throw(ErrorException("inequality fails to cut off fractional solution"))
end
end
end
function assert_does_not_cut_off(
cuts::ConstraintSet,
x::Vector{Float64};
tol=1e-6
)
for i in 1:length(cuts.lb)
val = cuts.lhs[i, :]' * x
ub = cuts.ub[i]
lb = cuts.lb[i]
if (val >= ub + tol) || (val <= lb - tol)
throw(ErrorException("inequality $i cuts off integer solution ($lb <= $val <= $ub)"))
end
end
end
export compute_gmi, frac, select_gmi_rows, assert_cuts_off, assert_does_not_cut_off

@ -1,177 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using JuMP
using LinearAlgebra
using SparseArrays
function ProblemData(model::Model)::ProblemData
vars = all_variables(model)
# Objective function
obj = objective_function(model)
obj = [v ∈ keys(obj.terms) ? obj.terms[v] : 0.0 for v in vars]
# Variable types, lower bounds and upper bounds
var_lb = [is_binary(v) ? 0.0 : has_lower_bound(v) ? lower_bound(v) : -Inf for v in vars]
var_ub = [is_binary(v) ? 1.0 : has_upper_bound(v) ? upper_bound(v) : Inf for v in vars]
var_types = [is_binary(v) || is_integer(v) ? 'I' : 'C' for v in vars]
var_names = [name(v) for v in vars]
# Constraints
constr_lb = Float64[]
constr_ub = Float64[]
constr_lhs_rows = Int[]
constr_lhs_cols = Int[]
constr_lhs_values = Float64[]
constr_index = 1
for (ftype, stype) in list_of_constraint_types(model)
for constr in all_constraints(model, ftype, stype)
cset = MOI.get(constr.model.moi_backend, MOI.ConstraintSet(), constr.index)
cf = MOI.get(
constr.model.moi_backend,
MOI.ConstraintFunction(),
constr.index,
)
if ftype == VariableRef
var_idx = cf.value
if stype == MOI.Integer || stype == MOI.ZeroOne
# nop
elseif stype == MOI.EqualTo{Float64}
var_lb[var_idx] = max(var_lb[var_idx], cset.value)
var_ub[var_idx] = min(var_ub[var_idx], cset.value)
elseif stype == MOI.LessThan{Float64}
var_ub[var_idx] = min(var_ub[var_idx], cset.upper)
elseif stype == MOI.GreaterThan{Float64}
var_lb[var_idx] = max(var_lb[var_idx], cset.lower)
elseif stype == MOI.Interval{Float64}
var_lb[var_idx] = max(var_lb[var_idx], cset.lower)
var_ub[var_idx] = min(var_ub[var_idx], cset.upper)
else
error("Unsupported set: $stype")
end
elseif ftype == AffExpr
if stype == MOI.EqualTo{Float64}
push!(constr_lb, cset.value)
push!(constr_ub, cset.value)
elseif stype == MOI.LessThan{Float64}
push!(constr_lb, -Inf)
push!(constr_ub, cset.upper)
elseif stype == MOI.GreaterThan{Float64}
push!(constr_lb, cset.lower)
push!(constr_ub, Inf)
elseif stype == MOI.Interval{Float64}
push!(constr_lb, cset.lower)
push!(constr_ub, cset.upper)
else
error("Unsupported set: $stype")
end
for term in cf.terms
push!(constr_lhs_cols, term.variable.value)
push!(constr_lhs_rows, constr_index)
push!(constr_lhs_values, term.coefficient)
end
constr_index += 1
else
error("Unsupported constraint type: ($ftype, $stype)")
end
end
end
n = length(vars)
m = constr_index - 1
constr_lhs = sparse(
constr_lhs_rows,
constr_lhs_cols,
constr_lhs_values,
m,
n,
)
@assert length(obj) == n
@assert length(var_lb) == n
@assert length(var_ub) == n
@assert length(var_types) == n
@assert length(var_names) == n
@assert length(constr_lb) == m
@assert length(constr_ub) == m
return ProblemData(
obj_offset=0.0;
obj,
constr_lb,
constr_ub,
constr_lhs,
var_lb,
var_ub,
var_types,
var_names
)
end
function to_model(data::ProblemData, tol=1e-6)::Model
model = Model()
# Variables
nvars = length(data.obj)
@variable(model, x[1:nvars])
for i = 1:nvars
set_name(x[i], data.var_names[i])
if data.var_types[i] == 'B'
set_binary(x[i])
elseif data.var_types[i] == 'I'
set_integer(x[i])
end
if isfinite(data.var_lb[i])
set_lower_bound(x[i], data.var_lb[i])
end
if isfinite(data.var_ub[i])
set_upper_bound(x[i], data.var_ub[i])
end
set_objective_coefficient(model, x[i], data.obj[i])
end
# Constraints
lhs = data.constr_lhs * x
for (j, lhs_expr) in enumerate(lhs)
lb = data.constr_lb[j]
ub = data.constr_ub[j]
if abs(lb - ub) < tol
@constraint(model, lb == lhs_expr)
elseif isfinite(lb) && !isfinite(ub)
@constraint(model, lb <= lhs_expr)
elseif !isfinite(lb) && isfinite(ub)
@constraint(model, lhs_expr <= ub)
else
@constraint(model, lb <= lhs_expr <= ub)
end
end
return model
end
function add_constraint_set(model::JuMP.Model, cs::ConstraintSet)
vars = all_variables(model)
nrows, _ = size(cs.lhs)
constrs = []
for i in 1:nrows
c = nothing
if isinf(cs.ub[i])
c = @constraint(model, cs.lb[i] <= dot(cs.lhs[i, :], vars))
elseif isinf(cs.lb[i])
c = @constraint(model, dot(cs.lhs[i, :], vars) <= cs.ub[i])
else
c = @constraint(model, cs.lb[i] <= dot(cs.lhs[i, :], vars) <= cs.ub[i])
end
push!(constrs, c)
end
return constrs
end
function set_warm_start(model::JuMP.Model, x::Vector{Float64})
vars = all_variables(model)
for (i, xi) in enumerate(x)
set_start_value(vars[i], xi)
end
end
export to_model, ProblemData, add_constraint_set, set_warm_start
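# Round-trip sketch (hypothetical `model::JuMP.Model`): extract the data,
# rebuild a JuMP model from it, and warm start it with an arbitrary point.
#
#   data = ProblemData(model)
#   model2 = to_model(data)
#   set_warm_start(model2, zeros(length(data.obj)))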

@ -1,39 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using SparseArrays
Base.@kwdef mutable struct ProblemData
obj::Vector{Float64}
obj_offset::Float64
constr_lb::Vector{Float64}
constr_ub::Vector{Float64}
constr_lhs::SparseMatrixCSC
var_lb::Vector{Float64}
var_ub::Vector{Float64}
var_types::Vector{Char}
var_names::Vector{String}
end
Base.@kwdef mutable struct Tableau
obj
lhs
rhs
z
end
Base.@kwdef mutable struct Basis
var_basic
var_nonbasic
constr_basic
constr_nonbasic
end
Base.@kwdef mutable struct ConstraintSet
lhs::SparseMatrixCSC
ub::Vector{Float64}
lb::Vector{Float64}
end
export ProblemData, Tableau, Basis, ConstraintSet

@ -1,130 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using KLU
using LinearAlgebra
using SparseArrays
using TimerOutputs
function get_basis(model::JuMP.Model)::Basis
var_basic = Int[]
var_nonbasic = Int[]
constr_basic = Int[]
constr_nonbasic = Int[]
# Variables
for (i, var) in enumerate(all_variables(model))
bstatus = MOI.get(model, MOI.VariableBasisStatus(), var)
if bstatus == MOI.BASIC
push!(var_basic, i)
elseif bstatus == MOI.NONBASIC_AT_LOWER
push!(var_nonbasic, i)
else
error("Unknown basis status: $bstatus")
end
end
# Constraints
constr_index = 1
for (ftype, stype) in list_of_constraint_types(model)
for constr in all_constraints(model, ftype, stype)
if ftype == VariableRef
# nop
elseif ftype == AffExpr
bstatus = MOI.get(model, MOI.ConstraintBasisStatus(), constr)
if bstatus == MOI.BASIC
push!(constr_basic, constr_index)
elseif bstatus == MOI.NONBASIC
push!(constr_nonbasic, constr_index)
else
error("Unknown basis status: $bstatus")
end
constr_index += 1
else
error("Unsupported constraint type: ($ftype, $stype)")
end
end
end
return Basis(; var_basic, var_nonbasic, constr_basic, constr_nonbasic)
end
function get_x(model::JuMP.Model)
return JuMP.value.(all_variables(model))
end
function compute_tableau(
data::ProblemData,
basis::Basis,
x::Vector{Float64};
rows::Union{Vector{Int},Nothing}=nothing,
tol=1e-8
)::Tableau
@timeit "Split data" begin
nrows, ncols = size(data.constr_lhs)
lhs_slacks = sparse(I, nrows, nrows)
lhs_b = [data.constr_lhs[:, basis.var_basic] lhs_slacks[:, basis.constr_basic]]
obj_b = [data.obj[basis.var_basic]; zeros(length(basis.constr_basic))]
if rows === nothing
rows = 1:nrows
end
end
@timeit "Factorize basis matrix" begin
factor = klu(sparse(lhs_b'))
end
@timeit "Compute tableau LHS" begin
tableau_lhs_I = Int[]
tableau_lhs_J = Int[]
tableau_lhs_V = Float64[]
for k in 1:length(rows)
@timeit "Prepare inputs" begin
i = rows[k]
e = zeros(nrows)
e[i] = 1.0
end
@timeit "Solve" begin
sol = factor \ e
end
@timeit "Multiply" begin
row = sol' * data.constr_lhs
end
@timeit "Sparsify & copy" begin
for (j, v) in enumerate(row)
if abs(v) < tol
continue
end
push!(tableau_lhs_I, k)
push!(tableau_lhs_J, j)
push!(tableau_lhs_V, v)
end
end
end
tableau_lhs = sparse(
tableau_lhs_I,
tableau_lhs_J,
tableau_lhs_V,
length(rows),
ncols,
)
end
@timeit "Compute tableau RHS" begin
tableau_rhs = [x[basis.var_basic]; zeros(length(basis.constr_basic))][rows]
end
@timeit "Compute tableau objective row" begin
sol = factor \ obj_b
tableau_obj = -data.obj' + sol' * data.constr_lhs
tableau_obj[abs.(tableau_obj).<tol] .= 0
end
return Tableau(
obj=tableau_obj,
lhs=tableau_lhs,
rhs=tableau_rhs,
z=dot(data.obj, x),
)
end
export get_basis, get_x, compute_tableau

@ -1,314 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using LinearAlgebra
using TimerOutputs
abstract type Transform end
function _isbounded(x)
isfinite(x) || return false
abs(x) < 1e15 || return false
return true
end
function backwards!(transforms::Vector{Transform}, m::ConstraintSet; tol=1e-8)
for t in reverse(transforms)
backwards!(t, m)
end
for (idx, val) in enumerate(m.lhs.nzval)
if abs(val) < tol
m.lhs.nzval[idx] = 0
end
end
end
function backwards(transforms::Vector{Transform}, m::ConstraintSet; tol=1e-8)
m2 = deepcopy(m)
backwards!(transforms, m2; tol)
return m2
end
function forward(transforms::Vector{Transform}, p::Vector{Float64})::Vector{Float64}
for t in transforms
p = forward(t, p)
end
return p
end
# -----------------------------------------------------------------------------
Base.@kwdef mutable struct ShiftVarLowerBoundsToZero <: Transform
lb::Vector{Float64} = []
end
function forward!(t::ShiftVarLowerBoundsToZero, data::ProblemData)
t.lb = copy(data.var_lb)
data.obj_offset += dot(data.obj, t.lb)
data.var_ub -= t.lb
data.var_lb -= t.lb
data.constr_lb -= data.constr_lhs * t.lb
data.constr_ub -= data.constr_lhs * t.lb
end
function backwards!(t::ShiftVarLowerBoundsToZero, c::ConstraintSet)
c.lb += c.lhs * t.lb
c.ub += c.lhs * t.lb
end
function forward(t::ShiftVarLowerBoundsToZero, p::Vector{Float64})::Vector{Float64}
return p - t.lb
end
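# Worked example (illustrative): a variable with bounds [2, 5] and objective
# coefficient 3 is shifted to [0, 3]; the objective offset grows by 3 * 2 = 6,
# and each constraint row loses (row coefficient) * 2 from both of its bounds.
# `forward` maps a point of the original problem by subtracting the stored lb.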
# -----------------------------------------------------------------------------
Base.@kwdef mutable struct MoveVarUpperBoundsToConstrs <: Transform end
function forward!(t::MoveVarUpperBoundsToConstrs, data::ProblemData)
_, ncols = size(data.constr_lhs)
data.constr_lhs = [data.constr_lhs; I]
data.constr_lb = [data.constr_lb; [-Inf for _ in 1:ncols]]
data.constr_ub = [data.constr_ub; data.var_ub]
data.var_ub .= Inf
end
function backwards!(::MoveVarUpperBoundsToConstrs, ::ConstraintSet)
# nop
end
function forward(t::MoveVarUpperBoundsToConstrs, p::Vector{Float64})::Vector{Float64}
return p
end
# -----------------------------------------------------------------------------
Base.@kwdef mutable struct AddSlackVariables <: Transform
M1::SparseMatrixCSC = spzeros(0, 0)
M2::Vector{Float64} = []
ncols_orig::Int = 0
GE::Int = 0
LE::Int = 0
lhs_ge::SparseMatrixCSC = spzeros(0, 0)
lhs_le::SparseMatrixCSC = spzeros(0, 0)
rhs_le::Vector{Float64} = []
rhs_ge::Vector{Float64} = []
end
function forward!(t::AddSlackVariables, data::ProblemData)
nrows, ncols = size(data.constr_lhs)
isequality = abs.(data.constr_ub .- data.constr_lb) .< 1e-6
eq = [i for i in 1:nrows if isequality[i]]
ge = [i for i in 1:nrows if isfinite(data.constr_lb[i]) && !isequality[i]]
le = [i for i in 1:nrows if isfinite(data.constr_ub[i]) && !isequality[i]]
EQ, GE, LE = length(eq), length(ge), length(le)
t.M1 = [
I spzeros(ncols, GE + LE)
data.constr_lhs[ge, :] spzeros(GE, GE + LE)
-data.constr_lhs[le, :] spzeros(LE, GE + LE)
]
t.M2 = [
zeros(ncols)
data.constr_lb[ge]
-data.constr_ub[le]
]
t.ncols_orig = ncols
t.GE, t.LE = GE, LE
t.lhs_ge = data.constr_lhs[ge, :]
t.lhs_le = data.constr_lhs[le, :]
t.rhs_ge = data.constr_lb[ge]
t.rhs_le = data.constr_ub[le]
data.constr_lhs = [
data.constr_lhs[eq, :] spzeros(EQ, GE) spzeros(EQ, LE)
data.constr_lhs[ge, :] -I spzeros(GE, LE)
data.constr_lhs[le, :] spzeros(LE, GE) I
]
data.obj = [data.obj; zeros(GE + LE)]
data.var_lb = [data.var_lb; zeros(GE + LE)]
data.var_ub = [data.var_ub; [Inf for _ = 1:(GE+LE)]]
data.var_names = [data.var_names; ["__s$i" for i in 1:(GE+LE)]]
data.var_types = [data.var_types; ['C' for _ in 1:(GE+LE)]]
data.constr_lb = [
data.constr_lb[eq]
data.constr_lb[ge]
data.constr_ub[le]
]
data.constr_ub = copy(data.constr_lb)
end
function backwards!(t::AddSlackVariables, c::ConstraintSet)
c.lb += c.lhs * t.M2
c.ub += c.lhs * t.M2
c.lhs = (c.lhs*t.M1)[:, 1:t.ncols_orig]
end
function forward(t::AddSlackVariables, x::Vector{Float64})::Vector{Float64}
return [
x
t.lhs_ge * x - t.rhs_ge
t.rhs_le - t.lhs_le * x
]
end
# -----------------------------------------------------------------------------
Base.@kwdef mutable struct SplitFreeVars <: Transform
F::Int = 0
B::Int = 0
free::Vector{Int}=[]
others::Vector{Int}=[]
end
function forward!(t::SplitFreeVars, data::ProblemData)
lhs = data.constr_lhs
_, ncols = size(lhs)
free = [i for i in 1:ncols if !isfinite(data.var_lb[i]) && !isfinite(data.var_ub[i])]
others = [i for i in 1:ncols if isfinite(data.var_lb[i]) || isfinite(data.var_ub[i])]
t.F = length(free)
t.B = length(others)
t.free, t.others = free, others
data.obj = [
data.obj[others]
data.obj[free]
-data.obj[free]
]
data.constr_lhs = [lhs[:, others] lhs[:, free] -lhs[:, free]]
data.var_lb = [
data.var_lb[others]
[0.0 for _ in free]
[0.0 for _ in free]
]
data.var_ub = [
data.var_ub[others]
[Inf for _ in free]
[Inf for _ in free]
]
data.var_types = [
data.var_types[others]
data.var_types[free]
data.var_types[free]
]
data.var_names = [
data.var_names[others]
["$(v)_p" for v in data.var_names[free]]
["$(v)_m" for v in data.var_names[free]]
]
end
function backwards!(t::SplitFreeVars, c::ConstraintSet)
# Convert GE constraints into LE
nrows, _ = size(c.lhs)
ge = [i for i in 1:nrows if isfinite(c.lb[i])]
c.ub[ge], c.lb[ge] = -c.lb[ge], -c.ub[ge]
c.lhs[ge, :] *= -1
# Assert only LE constraints are left (EQ constraints are not supported)
@assert all(c.lb .== -Inf)
# Take minimum (weakest) coefficient
B, F = t.B, t.F
for i in 1:F
c.lhs[:, B + i] = min.(c.lhs[:, B + i], -c.lhs[:, B + F + i])
end
c.lhs = c.lhs[:, 1:(B+F)]
end
function forward(t::SplitFreeVars, p::Vector{Float64})::Vector{Float64}
return [
p[t.others]
max.(p[t.free], 0)
max.(-p[t.free], 0)
]
end
# -----------------------------------------------------------------------------
Base.@kwdef mutable struct FlipUnboundedBelowVars <: Transform
flip_idx::Vector{Int} = []
end
function forward!(t::FlipUnboundedBelowVars, data::ProblemData)
_, ncols = size(data.constr_lhs)
for i in 1:ncols
if isfinite(data.var_lb[i])
continue
end
data.obj[i] *= -1
data.constr_lhs[:, i] *= -1
data.var_lb[i], data.var_ub[i] = -data.var_ub[i], -data.var_lb[i]
push!(t.flip_idx, i)
end
end
function backwards!(t::FlipUnboundedBelowVars, c::ConstraintSet)
for i in t.flip_idx
c.lhs[:, i] *= -1
end
end
function forward(t::FlipUnboundedBelowVars, p::Vector{Float64})::Vector{Float64}
p2 = copy(p)
p2[t.flip_idx] *= -1
return p2
end
# -----------------------------------------------------------------------------
function _assert_standard_form(data::ProblemData)
# Check sizes
nrows, ncols = size(data.constr_lhs)
@assert length(data.constr_lb) == nrows
@assert length(data.constr_ub) == nrows
@assert length(data.obj) == ncols
@assert length(data.var_lb) == ncols
@assert length(data.var_ub) == ncols
@assert length(data.var_names) == ncols
@assert length(data.var_types) == ncols
# Check standard form
@assert all(data.var_lb .== 0.0)
@assert all(data.var_ub .== Inf)
@assert all(data.constr_lb .== data.constr_ub)
end
function convert_to_standard_form!(data::ProblemData)::Vector{Transform}
transforms = []
function _apply!(t)
push!(transforms, t)
forward!(t, data)
end
@timeit "Split free vars" begin
_apply!(SplitFreeVars())
end
@timeit "Flip unbounded-below vars" begin
_apply!(FlipUnboundedBelowVars())
end
@timeit "Shift var lower bounds to zero" begin
_apply!(ShiftVarLowerBoundsToZero())
end
@timeit "Move var upper bounds to constrs" begin
_apply!(MoveVarUpperBoundsToConstrs())
end
@timeit "Add slack vars" begin
_apply!(AddSlackVariables())
end
_assert_standard_form(data)
return transforms
end
function convert_to_standard_form(data::ProblemData)::Tuple{ProblemData,Vector{Transform}}
data2 = deepcopy(data)
transforms = convert_to_standard_form!(data2)
return (data2, transforms)
end
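# Usage sketch (hypothetical `data::ProblemData` and point `x`): produce a
# standard-form copy plus the transform list, then map points forward and
# cuts backwards, as done in collect_gmi above.
#
#   data_s, transforms = convert_to_standard_form(data)
#   x_s = forward(transforms, x)          # original point -> standard form
#   cuts = backwards(transforms, cuts_s)  # standard-form cuts -> original vars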
export convert_to_standard_form!,
convert_to_standard_form,
forward!,
backwards!,
backwards,
AddSlackVariables,
SplitFreeVars,
forward

@ -1,5 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
abstract type Instance end

@ -1,204 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using JLD2
using Distributed
using ProgressBars
import Base: flush
mutable struct FileInstance <: Instance
py::Union{Nothing,PyCall.PyObject}
loaded::Union{Nothing,JuMPInstance}
filename::AbstractString
sample::Union{Nothing,PyCall.PyObject}
build_model::Function
mode::String
function FileInstance(
filename::AbstractString,
build_model::Function;
mode::String = "a",
)::FileInstance
instance = new(nothing, nothing, filename, nothing, build_model, mode)
instance.py = PyFileInstance(instance)
h5_filename = replace(filename, ".jld2" => ".h5")
if mode != "r" || isfile(h5_filename)
instance.sample = Hdf5Sample(h5_filename, mode = mode)
end
instance.filename = filename
return instance
end
end
function _load!(instance::FileInstance)
if instance.loaded === nothing
data = load_data(instance.filename)
instance.loaded = JuMPInstance(instance.build_model(data))
end
end
function free(instance::FileInstance)
instance.loaded = nothing
end
function to_model(instance::FileInstance)
_load!(instance)
return to_model(instance.loaded)
end
function get_instance_features(instance::FileInstance)
_load!(instance)
return get_instance_features(instance.loaded)
end
function get_variable_features(instance::FileInstance, names)
_load!(instance)
return get_variable_features(instance.loaded, names)
end
function get_variable_categories(instance::FileInstance, names)
_load!(instance)
return get_variable_categories(instance.loaded, names)
end
function get_constraint_features(instance::FileInstance, names)
_load!(instance)
return get_constraint_features(instance.loaded, names)
end
function get_constraint_categories(instance::FileInstance, names)
_load!(instance)
return get_constraint_categories(instance.loaded, names)
end
function find_violated_lazy_constraints(instance::FileInstance, solver)
_load!(instance)
return find_violated_lazy_constraints(instance.loaded, solver)
end
function enforce_lazy_constraint(instance::FileInstance, solver, violation)
_load!(instance)
return enforce_lazy_constraint(instance.loaded, solver, violation)
end
function get_samples(instance::FileInstance)
return [instance.sample]
end
function create_sample!(instance::FileInstance)
if instance.mode == "r"
return MemorySample()
else
return instance.sample
end
end
function save_data(filename::AbstractString, data)::Nothing
jldsave(filename, data = data)
end
function load_data(filename::AbstractString)
jldopen(filename, "r") do file
return file["data"]
end
end
function load(filename::AbstractString, build_model::Function)
jldopen(filename, "r") do file
return build_model(file["data"])
end
end
function save(data::AbstractVector, dirname::String)::Vector{String}
mkpath(dirname)
filenames = []
for (i, d) in enumerate(data)
filename = joinpath(dirname, @sprintf("%06d.jld2", i))
push!(filenames, filename)
jldsave(filename, data = d)
end
return filenames
end
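# Usage sketch (hypothetical `data_vector` and `build_model`): write one JLD2
# file per instance datum, then rebuild a model from the first file with the
# user-provided builder.
#
#   filenames = save(data_vector, "instances/")
#   model = load(filenames[1], build_model)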
function fit!(
solver::LearningSolver,
filenames::Vector,
build_model::Function;
tee::Bool = false,
)
instances = [FileInstance(f, build_model) for f in filenames]
fit!(solver, instances)
end
function solve!(
solver::LearningSolver,
filenames::Vector,
build_model::Function;
tee::Bool = false,
progress::Bool = false,
)
if progress
filenames = ProgressBar(filenames)
end
return [solve!(solver, f, build_model; tee) for f in filenames]
end
function solve!(
solver::LearningSolver,
filename::AbstractString,
build_model::Function;
tee::Bool = false,
)
instance = FileInstance(filename, build_model)
stats = solve!(solver, instance; tee)
instance.sample.file.close()
return stats
end
function parallel_solve!(
solver::LearningSolver,
filenames::Vector,
build_model::Function;
tee::Bool = false,
)
solver_filename = tempname()
save(solver_filename, solver)
@sync @distributed for filename in filenames
local_solver = load_solver(solver_filename)
solve!(local_solver, filename, build_model; tee)
end
end
function __init_PyFileInstance__()
@pydef mutable struct Class <: miplearn.Instance
function __init__(self, jl)
self.jl = jl
end
to_model(self) = to_model(self.jl)
get_instance_features(self) = get_instance_features(self.jl)
get_variable_features(self, names) =
get_variable_features(self.jl, from_str_array(names))
get_variable_categories(self, names) =
to_str_array(get_variable_categories(self.jl, from_str_array(names)))
get_samples(self) = get_samples(self.jl)
create_sample(self) = create_sample!(self.jl)
find_violated_lazy_constraints(self, solver, _) =
find_violated_lazy_constraints(self.jl, solver)
enforce_lazy_constraint(self, solver, _, violation) =
enforce_lazy_constraint(self.jl, solver, violation)
free(self) = free(self.jl)
# FIXME: The two functions below are disabled because they break lazy loading
# of FileInstance.
# get_constraint_features(self, names) =
# get_constraint_features(self.jl, from_str_array(names))
# get_constraint_categories(self, names) =
# to_str_array(get_constraint_categories(self.jl, from_str_array(names)))
end
copy!(PyFileInstance, Class)
end
export FileInstance

@ -1,113 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using JuMP
import JSON
Base.@kwdef mutable struct JuMPInstance <: Instance
py::Union{Nothing,PyCall.PyObject} = nothing
model::Union{Nothing,JuMP.Model} = nothing
samples::Vector{PyCall.PyObject} = []
function JuMPInstance(model::JuMP.Model)::JuMPInstance
init_miplearn_ext(model)
instance = new(nothing, model, [])
py = PyJuMPInstance(instance)
instance.py = py
return instance
end
end
function to_model(instance::JuMPInstance)::JuMP.Model
return instance.model
end
function get_instance_features(instance::JuMPInstance)::Union{Vector{Float64},Nothing}
return instance.model.ext[:miplearn]["instance_features"]
end
function _concat_features(dict, names)::Matrix{Float64}
if isempty(dict)
return zeros(length(names), 1)
end
ncols = length(first(dict).second)
return vcat([n in keys(dict) ? dict[n]' : zeros(ncols)' for n in names]...)
end
function _concat_categories(dict, names)::Vector{String}
return String[n in keys(dict) ? dict[n] : n for n in names]
end
function get_variable_features(
instance::JuMPInstance,
names::Vector{String},
)::Matrix{Float64}
return _concat_features(instance.model.ext[:miplearn]["variable_features"], names)
end
function get_variable_categories(instance::JuMPInstance, names::Vector{String})
return _concat_categories(instance.model.ext[:miplearn]["variable_categories"], names)
end
function get_constraint_features(
instance::JuMPInstance,
names::Vector{String},
)::Matrix{Float64}
return _concat_features(instance.model.ext[:miplearn]["constraint_features"], names)
end
function get_constraint_categories(instance::JuMPInstance, names::Vector{String})
return _concat_categories(instance.model.ext[:miplearn]["constraint_categories"], names)
end
get_samples(instance::JuMPInstance) = instance.samples
function create_sample!(instance::JuMPInstance)
sample = MemorySample()
push!(instance.samples, sample)
return sample
end
function find_violated_lazy_constraints(instance::JuMPInstance, solver)::Vector{String}
if "lazy_find_cb" keys(instance.model.ext[:miplearn])
return instance.model.ext[:miplearn]["lazy_find_cb"](instance.model, solver.data)
else
return []
end
end
function enforce_lazy_constraint(instance::JuMPInstance, solver, violation::String)::Nothing
instance.model.ext[:miplearn]["lazy_enforce_cb"](instance.model, solver.data, violation)
end
function solve!(solver::LearningSolver, model::JuMP.Model; kwargs...)
solve!(solver, JuMPInstance(model); kwargs...)
end
function __init_PyJuMPInstance__()
@pydef mutable struct Class <: miplearn.Instance
function __init__(self, jl)
self.jl = jl
end
to_model(self) = to_model(self.jl)
get_instance_features(self) = get_instance_features(self.jl)
get_variable_features(self, names) =
get_variable_features(self.jl, from_str_array(names))
get_variable_categories(self, names) =
to_str_array(get_variable_categories(self.jl, from_str_array(names)))
get_constraint_features(self, names) =
get_constraint_features(self.jl, from_str_array(names))
get_constraint_categories(self, names) =
to_str_array(get_constraint_categories(self.jl, from_str_array(names)))
get_samples(self) = get_samples(self.jl)
create_sample(self) = create_sample!(self.jl)
find_violated_lazy_constraints(self, solver, _) =
find_violated_lazy_constraints(self.jl, solver)
enforce_lazy_constraint(self, solver, _, violation) =
enforce_lazy_constraint(self.jl, solver, violation)
end
copy!(PyJuMPInstance, Class)
end
export JuMPInstance, save, load_instance

@ -1,704 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Cbc
using Clp
using JuMP
using MathOptInterface
using TimerOutputs
using SparseArrays
const MOI = MathOptInterface
import JuMP: value
Base.@kwdef mutable struct JuMPSolverData
optimizer_factory::Any
basis_status::Dict = Dict()
bin_vars::Vector{JuMP.VariableRef} = []
int_vars::Vector{JuMP.VariableRef} = []
cb_data::Any = nothing
cname_to_constr::Dict{String,JuMP.ConstraintRef} = Dict()
dual_values::Dict{JuMP.ConstraintRef,Float64} = Dict()
instance::Union{Nothing,PyObject} = nothing
model::Union{Nothing,JuMP.Model} = nothing
reduced_costs::Vector{Float64} = []
sensitivity_report::Any = nothing
solution::Dict{JuMP.VariableRef,Float64} = Dict()
varname_to_var::Dict{String,VariableRef} = Dict()
x::Vector{Float64} = Float64[]
end
"""
_optimize_and_capture_output!(model; tee=tee)
Optimizes a given JuMP model while capturing the solver log, then returns that log.
If tee=true, prints the solver log to the standard output as the optimization takes place.
"""
function _optimize_and_capture_output!(model; tee::Bool = false)
logname = tempname()
logfile = open(logname, "w")
redirect_stdout(logfile) do
JuMP.optimize!(model)
Base.Libc.flush_cstdio()
end
close(logfile)
log = String(read(logname))
rm(logname)
if tee
println(log)
flush(stdout)
Base.Libc.flush_cstdio()
end
return log
end
function _update_solution!(data::JuMPSolverData)
vars = JuMP.all_variables(data.model)
data.solution = Dict(var => JuMP.value(var) for var in vars)
data.x = JuMP.value.(vars)
if has_duals(data.model)
data.reduced_costs = []
data.basis_status = Dict()
# Reduced costs
for var in vars
rc = 0.0
if has_upper_bound(var)
rc += shadow_price(UpperBoundRef(var))
end
if has_lower_bound(var)
# FIXME: Remove negative sign
rc -= shadow_price(LowerBoundRef(var))
end
if is_fixed(var)
rc += shadow_price(FixRef(var))
end
push!(data.reduced_costs, rc)
# Basis status
data.basis_status[var] = MOI.get(data.model, MOI.VariableBasisStatus(), var)
end
try
data.sensitivity_report = lp_sensitivity_report(data.model)
catch
@warn "Sensitivity analysis is unavailable; ignoring" maxlog = 1
end
data.dual_values = Dict()
for (ftype, stype) in JuMP.list_of_constraint_types(data.model)
ftype != VariableRef || continue
for constr in JuMP.all_constraints(data.model, ftype, stype)
# Dual values (FIXME: Remove negative sign)
data.dual_values[constr] = -JuMP.dual(constr)
# Basis status
data.basis_status[constr] =
MOI.get(data.model, MOI.ConstraintBasisStatus(), constr)
end
end
else
data.reduced_costs = []
data.dual_values = Dict()
data.sensitivity_report = nothing
data.basis_status = Dict()
end
end
function add_constraints(
data::JuMPSolverData;
lhs::SparseMatrixCSC,
rhs::Vector{Float64},
senses::Vector{String},
names::Vector{String},
)::Nothing
lhs_exprs = lhs * JuMP.all_variables(data.model)
for (i, lhs_expr) in enumerate(lhs_exprs)
if senses[i] == ">"
constr = @constraint(data.model, lhs_expr >= rhs[i])
elseif senses[i] == "<"
constr = @constraint(data.model, lhs_expr <= rhs[i])
elseif senses[i] == "="
constr = @constraint(data.model, lhs_expr == rhs[i])
else
error("unknown sense: $sense")
end
set_name(constr, names[i])
data.cname_to_constr[names[i]] = constr
end
data.solution = Dict()
data.x = Float64[]
return
end
function are_constraints_satisfied(
data::JuMPSolverData;
lhs::SparseMatrixCSC,
rhs::Vector{Float64},
senses::Vector{String},
tol::Float64 = 1e-5,
)::Vector{Bool}
result = Bool[]
lhs_value = lhs * data.x
for (i, sense) in enumerate(senses)
if sense == "<"
push!(result, lhs_value[i] <= rhs[i] + tol)
elseif sense == ">"
push!(result, lhs_value[i] >= rhs[i] - tol)
elseif sense == "="
push!(result, abs(rhs[i] - lhs_value[i]) <= tol)
else
error("unknown sense: $sense")
end
end
return result
end
function build_test_instance_knapsack()
weights = [23.0, 26.0, 20.0, 18.0]
prices = [505.0, 352.0, 458.0, 220.0]
capacity = 67.0
model = Model()
n = length(weights)
@variable(model, x[0:n-1], Bin)
@variable(model, z, lower_bound = 0.0, upper_bound = capacity)
@objective(model, Max, sum(x[i-1] * prices[i] for i = 1:n))
@constraint(model, eq_capacity, sum(x[i-1] * weights[i] for i = 1:n) - z == 0)
return JuMPInstance(model).py
end
function build_test_instance_infeasible()
model = Model()
@variable(model, x, Bin)
@objective(model, Max, x)
@constraint(model, x >= 2)
return JuMPInstance(model).py
end
function remove_constraints(data::JuMPSolverData, names::Vector{String})::Nothing
for name in names
constr = data.cname_to_constr[name]
delete(data.model, constr)
delete!(data.cname_to_constr, name)
end
return
end
function solve(
data::JuMPSolverData;
tee::Bool = false,
iteration_cb = nothing,
lazy_cb = nothing,
)
model = data.model
wallclock_time = 0
log = ""
if lazy_cb !== nothing
function lazy_cb_wrapper(cb_data)
data.cb_data = cb_data
lazy_cb(nothing, nothing)
data.cb_data = nothing
end
MOI.set(model, MOI.LazyConstraintCallback(), lazy_cb_wrapper)
end
while true
wallclock_time += @elapsed begin
log *= _optimize_and_capture_output!(model, tee = tee)
end
if is_infeasible(data)
break
end
if iteration_cb !== nothing
iteration_cb() || break
else
break
end
end
if is_infeasible(data)
data.solution = Dict()
data.x = Float64[]
primal_bound = nothing
dual_bound = nothing
else
_update_solution!(data)
primal_bound = JuMP.objective_value(model)
dual_bound = JuMP.objective_bound(model)
end
if JuMP.objective_sense(model) == MOI.MIN_SENSE
sense = "min"
lower_bound = dual_bound
upper_bound = primal_bound
else
sense = "max"
lower_bound = primal_bound
upper_bound = dual_bound
end
return miplearn.solvers.internal.MIPSolveStats(
mip_lower_bound = lower_bound,
mip_upper_bound = upper_bound,
mip_sense = sense,
mip_wallclock_time = wallclock_time,
mip_nodes = 1,
mip_log = log,
mip_warm_start_value = nothing,
)
end
function solve_lp(data::JuMPSolverData; tee::Bool = false)
for var in data.bin_vars
~is_fixed(var) || continue
unset_binary(var)
set_upper_bound(var, 1.0)
set_lower_bound(var, 0.0)
end
for var in data.int_vars
~is_fixed(var) || continue
unset_integer(var)
end
# If the optimizer is Cbc, we need to replace it by Clp,
# otherwise dual values are not available.
# https://github.com/jump-dev/Cbc.jl/issues/50
is_cbc = (data.optimizer_factory == Cbc.Optimizer)
if is_cbc
set_optimizer(data.model, Clp.Optimizer)
end
wallclock_time = @elapsed begin
log = _optimize_and_capture_output!(data.model, tee = tee)
end
if is_infeasible(data)
data.solution = Dict()
obj_value = nothing
else
_update_solution!(data)
obj_value = objective_value(data.model)
end
if is_cbc
set_optimizer(data.model, data.optimizer_factory)
end
for var in data.bin_vars
~is_fixed(var) || continue
set_binary(var)
end
for var in data.int_vars
~is_fixed(var) || continue
set_integer(var)
end
return miplearn.solvers.internal.LPSolveStats(
lp_value = obj_value,
lp_log = log,
lp_wallclock_time = wallclock_time,
)
end
function set_instance!(
data::JuMPSolverData,
instance;
model::Union{Nothing,JuMP.Model},
)::Nothing
data.instance = instance
if model === nothing
model = instance.to_model()
end
data.model = model
data.bin_vars = [var for var in JuMP.all_variables(model) if JuMP.is_binary(var)]
data.int_vars = [var for var in JuMP.all_variables(model) if JuMP.is_integer(var)]
data.varname_to_var = Dict(JuMP.name(var) => var for var in JuMP.all_variables(model))
JuMP.set_optimizer(model, data.optimizer_factory)
data.cname_to_constr = Dict()
for (ftype, stype) in JuMP.list_of_constraint_types(model)
for constr in JuMP.all_constraints(model, ftype, stype)
name = JuMP.name(constr)
length(name) > 0 || continue
data.cname_to_constr[name] = constr
end
end
return
end
function fix!(data::JuMPSolverData, solution)
for (varname, value) in solution
value !== nothing || continue
var = data.varname_to_var[varname]
JuMP.fix(var, value, force = true)
end
end
function set_warm_start!(data::JuMPSolverData, solution)
for (varname, value) in solution
value !== nothing || continue
var = data.varname_to_var[varname]
JuMP.set_start_value(var, value)
end
end
function is_infeasible(data::JuMPSolverData)
return JuMP.termination_status(data.model) in
[MOI.INFEASIBLE, MOI.INFEASIBLE_OR_UNBOUNDED]
end
function get_variables(data::JuMPSolverData; with_static::Bool, with_sa::Bool)
vars = JuMP.all_variables(data.model)
lb, ub, types = nothing, nothing, nothing
sa_obj_down, sa_obj_up = nothing, nothing
sa_lb_down, sa_lb_up = nothing, nothing
sa_ub_down, sa_ub_up = nothing, nothing
basis_status = nothing
values, rc = nothing, nothing
# Variable names
names = JuMP.name.(vars)
# Primal values
if !isempty(data.solution)
values = [data.solution[v] for v in vars]
end
# Objective function coefficients
obj = objective_function(data.model)
obj_coeffs = [v ∈ keys(obj.terms) ? obj.terms[v] : 0.0 for v in vars]
if with_static
# Lower bounds
lb = [
JuMP.is_binary(v) ? 0.0 : JuMP.has_lower_bound(v) ? JuMP.lower_bound(v) : -Inf for v in vars
]
# Upper bounds
ub = [
JuMP.is_binary(v) ? 1.0 : JuMP.has_upper_bound(v) ? JuMP.upper_bound(v) : Inf for v in vars
]
# Variable types
types = [JuMP.is_binary(v) ? "B" : JuMP.is_integer(v) ? "I" : "C" for v in vars]
end
# Sensitivity analysis
if data.sensitivity_report !== nothing
sa_obj_down, sa_obj_up = Float64[], Float64[]
sa_lb_down, sa_lb_up = Float64[], Float64[]
sa_ub_down, sa_ub_up = Float64[], Float64[]
for (i, v) in enumerate(vars)
# Objective function
(delta_down, delta_up) = data.sensitivity_report[v]
push!(sa_obj_down, delta_down + obj_coeffs[i])
push!(sa_obj_up, delta_up + obj_coeffs[i])
# Lower bound
if has_lower_bound(v)
constr = LowerBoundRef(v)
(delta_down, delta_up) = data.sensitivity_report[constr]
push!(sa_lb_down, lower_bound(v) + delta_down)
push!(sa_lb_up, lower_bound(v) + delta_up)
else
push!(sa_lb_down, -Inf)
push!(sa_lb_up, -Inf)
end
# Upper bound
if has_upper_bound(v)
constr = JuMP.UpperBoundRef(v)
(delta_down, delta_up) = data.sensitivity_report[constr]
push!(sa_ub_down, upper_bound(v) + delta_down)
push!(sa_ub_up, upper_bound(v) + delta_up)
else
push!(sa_ub_down, Inf)
push!(sa_ub_up, Inf)
end
end
end
# Basis status
if !isempty(data.basis_status)
basis_status = []
for v in vars
bstatus = data.basis_status[v]
if bstatus == MOI.BASIC
basis_status_v = "B"
elseif bstatus == MOI.NONBASIC_AT_LOWER
basis_status_v = "L"
elseif bstatus == MOI.NONBASIC_AT_UPPER
basis_status_v = "U"
else
error("Unknown basis status: $(bstatus)")
end
push!(basis_status, basis_status_v)
end
end
rc = isempty(data.reduced_costs) ? nothing : data.reduced_costs
vf = miplearn.solvers.internal.Variables(
basis_status = to_str_array(basis_status),
lower_bounds = lb,
names = to_str_array(names),
obj_coeffs = with_static ? obj_coeffs : nothing,
reduced_costs = rc,
sa_lb_down = with_sa ? sa_lb_down : nothing,
sa_lb_up = with_sa ? sa_lb_up : nothing,
sa_obj_down = with_sa ? sa_obj_down : nothing,
sa_obj_up = with_sa ? sa_obj_up : nothing,
sa_ub_down = with_sa ? sa_ub_down : nothing,
sa_ub_up = with_sa ? sa_ub_up : nothing,
types = to_str_array(types),
upper_bounds = ub,
values = values,
)
return vf
end
function get_constraints(
data::JuMPSolverData;
with_static::Bool,
with_sa::Bool,
with_lhs::Bool,
)
names = String[]
senses, rhs = String[], Float64[]
lhs_rows, lhs_cols, lhs_values = Int[], Int[], Float64[]
dual_values, slacks = nothing, nothing
basis_status = nothing
sa_rhs_up, sa_rhs_down = nothing, nothing
if !isempty(data.dual_values)
dual_values = Float64[]
end
if !isempty(data.basis_status)
basis_status = []
end
if data.sensitivity_report !== nothing
sa_rhs_up, sa_rhs_down = Float64[], Float64[]
end
constr_index = 1
for (ftype, stype) in JuMP.list_of_constraint_types(data.model)
for constr in JuMP.all_constraints(data.model, ftype, stype)
cset = MOI.get(constr.model.moi_backend, MOI.ConstraintSet(), constr.index)
cf = MOI.get(constr.model.moi_backend, MOI.ConstraintFunction(), constr.index)
# Names
name = JuMP.name(constr)
length(name) > 0 || continue
push!(names, name)
# LHS, RHS and sense
if ftype == VariableRef
# nop
elseif ftype == AffExpr
if stype == MOI.EqualTo{Float64}
rhs_c = cset.value
push!(senses, "=")
elseif stype == MOI.LessThan{Float64}
rhs_c = cset.upper
push!(senses, "<")
elseif stype == MOI.GreaterThan{Float64}
rhs_c = cset.lower
push!(senses, ">")
else
error("Unsupported set: $stype")
end
push!(rhs, rhs_c)
for term in cf.terms
push!(lhs_cols, term.variable.value)
push!(lhs_rows, constr_index)
push!(lhs_values, term.coefficient)
end
constr_index += 1
else
error("Unsupported constraint type: ($ftype, $stype)")
end
# Dual values
if !isempty(data.dual_values)
push!(dual_values, data.dual_values[constr])
end
# Basis status
if !isempty(data.basis_status)
b = data.basis_status[constr]
if b == MOI.NONBASIC
push!(basis_status, "N")
elseif b == MOI.BASIC
push!(basis_status, "B")
else
error("Unknown basis status: $b")
end
end
# Sensitivity analysis
if data.sensitivity_report !== nothing
(delta_down, delta_up) = data.sensitivity_report[constr]
push!(sa_rhs_down, rhs_c + delta_down)
push!(sa_rhs_up, rhs_c + delta_up)
end
end
end
lhs =
sparse(lhs_rows, lhs_cols, lhs_values, length(rhs), JuMP.num_variables(data.model))
if !isempty(data.x)
lhs_value = lhs * data.x
slacks = abs.(lhs_value - rhs)
end
return miplearn.solvers.internal.Constraints(
basis_status = to_str_array(basis_status),
dual_values = dual_values,
lhs = (with_static && with_lhs) ? sparse(lhs_rows, lhs_cols, lhs_values) : nothing,
names = to_str_array(names),
rhs = with_static ? rhs : nothing,
sa_rhs_down = with_sa ? sa_rhs_down : nothing,
sa_rhs_up = with_sa ? sa_rhs_up : nothing,
senses = with_static ? to_str_array(senses) : nothing,
slacks = slacks,
)
end
function __init_JuMPSolver__()
@pydef mutable struct Class <: miplearn.solvers.internal.InternalSolver
function __init__(self, optimizer_factory)
self.data = JuMPSolverData(optimizer_factory = optimizer_factory)
end
function add_constraints(self, cf)
add_constraints(
self.data,
lhs = convert(SparseMatrixCSC, cf.lhs),
rhs = cf.rhs,
senses = from_str_array(cf.senses),
names = from_str_array(cf.names),
)
end
function are_constraints_satisfied(self, cf; tol = 1e-5)
return are_constraints_satisfied(
self.data,
lhs = convert(SparseMatrixCSC, cf.lhs),
rhs = cf.rhs,
senses = from_str_array(cf.senses),
tol = tol,
)
end
build_test_instance_infeasible(self) = build_test_instance_infeasible()
build_test_instance_knapsack(self) = build_test_instance_knapsack()
clone(self) = JuMPSolver(self.data.optimizer_factory)
fix(self, solution) = fix!(self.data, solution)
get_solution(self) = isempty(self.data.solution) ? nothing : self.data.solution
get_constraints(self; with_static = true, with_sa = true, with_lhs = true) =
get_constraints(
self.data,
with_static = with_static,
with_sa = with_sa,
with_lhs = with_lhs,
)
function get_constraint_attrs(self)
return [
"categories",
"dual_values",
"lazy",
"lhs",
"names",
"rhs",
"senses",
"user_features",
"slacks",
"basis_status",
"sa_rhs_down",
"sa_rhs_up",
]
end
get_variables(self; with_static = true, with_sa = true) =
get_variables(self.data; with_static = with_static, with_sa = with_sa)
function get_variable_attrs(self)
return [
"basis_status",
"names",
"categories",
"lower_bounds",
"obj_coeffs",
"reduced_costs",
"types",
"upper_bounds",
"user_features",
"values",
"sa_obj_down",
"sa_obj_up",
"sa_lb_down",
"sa_lb_up",
"sa_ub_down",
"sa_ub_up",
]
end
is_infeasible(self) = is_infeasible(self.data)
remove_constraints(self, names) = remove_constraints(self.data, [n for n in names])
set_instance(self, instance, model = nothing) =
set_instance!(self.data, instance, model = model)
set_warm_start(self, solution) = set_warm_start!(self.data, solution)
solve(
self;
tee = false,
iteration_cb = nothing,
lazy_cb = nothing,
user_cut_cb = nothing,
) = solve(self.data, tee = tee, iteration_cb = iteration_cb, lazy_cb = lazy_cb)
solve_lp(self; tee = false) = solve_lp(self.data, tee = tee)
end
copy!(JuMPSolver, Class)
end
function value(solver::JuMPSolverData, var::VariableRef)
if solver.cb_data !== nothing
return JuMP.callback_value(solver.cb_data, var)
else
return JuMP.value(var)
end
end
function submit(solver::JuMPSolverData, con::AbstractConstraint, name::String = "")
if solver.cb_data !== nothing
MOI.submit(solver.model, MOI.LazyConstraint(solver.cb_data), con)
else
JuMP.add_constraint(solver.model, con, name)
end
end
export JuMPSolver, submit
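# Illustrative sketch (not part of the original file): the `value` and `submit`
# helpers above accept the JuMPSolverData as a callback handle, so user lazy
# callbacks can query fractional values and add cuts with the same code inside
# and outside a MIP callback. The constraint below is a hypothetical example:
#
#     function enforce_lazy(model, cb_data, violation)
#         x = variable_by_name(model, "x")
#         con = @build_constraint(x <= 1)
#         submit(cb_data, con)
#     end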

@ -1,84 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Distributed
using JLD2
function LearningSolver(
optimizer_factory;
components = nothing,
mode::AbstractString = "exact",
simulate_perfect::Bool = false,
solve_lp::Bool = true,
extract_sa::Bool = true,
extract_lhs::Bool = true,
)::LearningSolver
return LearningSolver(
miplearn.LearningSolver(
solver = JuMPSolver(optimizer_factory),
mode = mode,
solve_lp = solve_lp,
simulate_perfect = simulate_perfect,
components = components,
extract_lhs = extract_lhs,
extract_sa = extract_sa,
),
optimizer_factory,
)
end
function solve!(
solver::LearningSolver,
instance::Instance;
tee::Bool = false,
discard_output::Bool = false,
)
return @python_call solver.py.solve(
instance.py,
tee = tee,
discard_output = discard_output,
)
end
function fit!(solver::LearningSolver, instances::Vector{<:Instance})
@python_call solver.py.fit([instance.py for instance in instances])
return
end
function save(filename::AbstractString, solver::LearningSolver)
internal_solver = solver.py.internal_solver
internal_solver_prototype = solver.py.internal_solver_prototype
solver.py.internal_solver = nothing
solver.py.internal_solver_prototype = nothing
solver_py_filename = tempname()
miplearn.write_pickle_gz(solver.py, solver_py_filename)
solver_py = read(solver_py_filename)
solver.py.internal_solver = internal_solver
solver.py.internal_solver_prototype = internal_solver_prototype
jldsave(
filename;
miplearn_version = "0.2",
solver_py = solver_py,
optimizer_factory = solver.optimizer_factory,
)
return
end
function load_solver(filename::AbstractString)::LearningSolver
jldopen(filename, "r") do file
solve_py_filename = tempname()
write(solve_py_filename, file["solver_py"])
solver_py = miplearn.read_pickle_gz(solve_py_filename)
internal_solver = JuMPSolver(file["optimizer_factory"])
solver_py.internal_solver_prototype = internal_solver
return LearningSolver(solver_py, file["optimizer_factory"])
end
end
export Instance, LearningSolver, solve!, fit!, parallel_solve!, save, load_solver
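# Illustrative usage sketch (assumed workflow built from the functions above;
# `train_instances` and the Cbc optimizer are placeholders):
#
#     solver = LearningSolver(Cbc.Optimizer)
#     for instance in train_instances
#         solve!(solver, instance)          # collect training data
#     end
#     fit!(solver, train_instances)         # train the ML components
#     save("solver.jld2", solver)           # persist (internal solver excluded)
#     solver = load_solver("solver.jld2")   # restore with a fresh JuMPSolver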

@ -1,93 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
function init_miplearn_ext(model)::Dict
if :miplearn ∉ keys(model.ext)
model.ext[:miplearn] = Dict()
model.ext[:miplearn]["instance_features"] = [0.0]
model.ext[:miplearn]["variable_features"] = Dict{AbstractString,Vector{Float64}}()
model.ext[:miplearn]["variable_categories"] = Dict{AbstractString,String}()
model.ext[:miplearn]["constraint_features"] = Dict{AbstractString,Vector{Float64}}()
model.ext[:miplearn]["constraint_categories"] = Dict{AbstractString,String}()
end
return model.ext[:miplearn]
end
function set_features!(m::Model, f::Array{Float64})::Nothing
ext = init_miplearn_ext(m)
ext["instance_features"] = f
return
end
function set_features!(v::VariableRef, f::Array{Float64})::Nothing
ext = init_miplearn_ext(v.model)
n = _get_and_check_name(v)
ext["variable_features"][n] = f
return
end
function set_category!(v::VariableRef, category::String)::Nothing
ext = init_miplearn_ext(v.model)
n = _get_and_check_name(v)
ext["variable_categories"][n] = category
return
end
function set_features!(c::ConstraintRef, f::Array{Float64})::Nothing
ext = init_miplearn_ext(c.model)
n = _get_and_check_name(c)
ext["constraint_features"][n] = f
return
end
function set_category!(c::ConstraintRef, category::String)::Nothing
ext = init_miplearn_ext(c.model)
n = _get_and_check_name(c)
ext["constraint_categories"][n] = category
return
end
function set_lazy_callback!(model::Model, find_cb::Function, enforce_cb::Function)::Nothing
ext = init_miplearn_ext(model)
ext["lazy_find_cb"] = find_cb
ext["lazy_enforce_cb"] = enforce_cb
return
end
macro feature(obj, features)
quote
set_features!($(esc(obj)), $(esc(features)))
end
end
macro category(obj, category)
quote
set_category!($(esc(obj)), $(esc(category)))
end
end
macro lazycb(obj, find_cb, enforce_cb)
quote
set_lazy_callback!($(esc(obj)), $(esc(find_cb)), $(esc(enforce_cb)))
end
end
function _get_and_check_name(obj)
n = name(obj)
length(n) > 0 || error(
"Features and categories can only be assigned to variables and " *
"constraints that have names. Unnamed model element detected.",
)
return n
end
export @feature, @category, @lazycb
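# Illustrative sketch (assumed usage of the macros above; names are
# placeholders). Features and categories may only be attached to named
# variables and constraints, as enforced by _get_and_check_name:
#
#     model = Model()
#     @variable(model, x[1:3], Bin)
#     @feature(model, [1.0, 2.0])    # instance-level features
#     @feature(x[1], [0.5])          # features for variable "x[1]"
#     @category(x[1], "item")        # ML category for variable "x[1]"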

@ -1,8 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
struct LearningSolver
py::PyCall.PyObject
optimizer_factory::Any
end

@ -1,63 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using CSV
using DataFrames
using OrderedCollections
function run_benchmarks(;
optimizer,
train_instances::Vector{<:AbstractString},
test_instances::Vector{<:AbstractString},
build_model::Function,
progress::Bool = false,
output_filename::String,
)
solvers = OrderedDict(
"baseline" => LearningSolver(optimizer),
"ml-exact" => LearningSolver(optimizer),
"ml-heuristic" => LearningSolver(optimizer, mode = "heuristic"),
)
#solve!(solvers["baseline"], train_instances, build_model; progress)
fit!(solvers["ml-exact"], train_instances, build_model)
fit!(solvers["ml-heuristic"], train_instances, build_model)
stats = OrderedDict()
for (solver_name, solver) in solvers
stats[solver_name] = solve!(solver, test_instances, build_model; progress)
end
results = nothing
for (solver_name, solver_stats) in stats
for (i, s) in enumerate(solver_stats)
s["Solver"] = solver_name
s["Instance"] = test_instances[i]
s = Dict(k => isnothing(v) ? missing : v for (k, v) in s)
if results === nothing
results = DataFrame(s)
else
push!(results, s, cols = :union)
end
end
end
CSV.write(output_filename, results)
# fig_filename = "$(tempname()).svg"
# df = pyimport("pandas").read_csv(csv_filename)
# miplearn.benchmark.plot(df, output=fig_filename)
# open(fig_filename) do f
# display("image/svg+xml", read(f, String))
# end
return
end
function run_benchmarks(; solvers, instance_filenames, build_model, output_filename)
runner = BenchmarkRunner(; solvers)
instances = [FileInstance(f, build_model) for f in instance_filenames]
solve!(runner, instances)
write_csv!(runner, output_filename)
end
export BenchmarkRunner, solve!, fit!, write_csv!
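# Illustrative usage sketch (assumed arguments; the filenames and the model
# builder are placeholders):
#
#     run_benchmarks(
#         optimizer = Cbc.Optimizer,
#         train_instances = ["train1.jld2", "train2.jld2"],
#         test_instances = ["test1.jld2"],
#         build_model = build_knapsack_model,
#         output_filename = "results.csv",
#     )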

@ -1,20 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using PyCall
const traceback = PyCall.pyimport("traceback")  # used in the catch block below
macro python_call(expr)
quote
try
return $(esc(expr))
catch e
if isa(e, PyCall.PyError)
printstyled("Uncaught Python exception:\n", bold = true, color = :red)
traceback.print_exception(e.T, e.val, e.traceback)
end
rethrow()
end
end
end
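# Illustrative usage sketch (assumed; `solver_py` is a placeholder PyObject).
# Python exceptions are printed with their original traceback, then rethrown:
#
#     stats = @python_call solver_py.solve(instance.py)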

@ -1,74 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import Logging: min_enabled_level, shouldlog, handle_message
using Base.CoreLogging, Logging, Printf
struct TimeLogger <: AbstractLogger
initial_time::Float64
file::Union{Nothing,IOStream}
screen_log_level::Any
io_log_level::Any
end
function TimeLogger(;
initial_time::Float64,
file::Union{Nothing,IOStream} = nothing,
screen_log_level = CoreLogging.Info,
io_log_level = CoreLogging.Info,
)::TimeLogger
return TimeLogger(initial_time, file, screen_log_level, io_log_level)
end
min_enabled_level(logger::TimeLogger) = logger.io_log_level
shouldlog(logger::TimeLogger, level, _module, group, id) = true
function handle_message(
logger::TimeLogger,
level,
message,
_module,
group,
id,
filepath,
line;
kwargs...,
)
elapsed_time = time() - logger.initial_time
time_string = @sprintf("[%12.3f] ", elapsed_time)
if level >= Logging.Error
color = :light_red
elseif level >= Logging.Warn
color = :light_yellow
else
color = :light_green
end
flush(stdout)
flush(stderr)
Base.Libc.flush_cstdio()
if level >= logger.screen_log_level
printstyled(time_string, color = color)
println(message)
end
if logger.file !== nothing && level >= logger.io_log_level
write(logger.file, time_string)
write(logger.file, message)
write(logger.file, "\n")
flush(logger.file)
end
flush(stdout)
flush(stderr)
Base.Libc.flush_cstdio()
end
function setup_logger()
initial_time = time()
global_logger(TimeLogger(initial_time = initial_time))
miplearn = pyimport("miplearn")
miplearn.setup_logger(initial_time)
end
export TimeLogger
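# Illustrative sketch (mirrors setup_logger above): install the logger, then
# log as usual; messages are prefixed with the elapsed wall-clock time:
#
#     global_logger(TimeLogger(initial_time = time()))
#     @info "Starting solve"    # prints, e.g., "[       0.001] Starting solve"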

@ -1,20 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
function parse_name(name::AbstractString)::Vector{String}
# x
m = match(r"^[-a-z0-9_]*$", name)
if m !== nothing
return [name]
end
# x[1,2,3]
m = match(r"^([-a-z0-9_]*)\[([-a-z0-9_,]*)\]$"i, name)
if m !== nothing
return [m[1], split(m[2], ",")...]
end
# unknown
error("Could not parse: $(name)")
end
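# Illustrative examples (matching the unit tests later in this diff):
#
#     parse_name("x")         # => ["x"]
#     parse_name("x[1,2]")    # => ["x", "1", "2"]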

@ -0,0 +1,20 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2023, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using HDF5
function test_cuts_blackbox_cplex()
# Prepare filenames
mps_filename = joinpath(@__DIR__, "../../fixtures/bell5.mps.gz")
h5_filename = replace(mps_filename, ".mps.gz" => ".h5")
# Run collector
MIPLearn.collect(mps_filename, CplexBlackBoxCuts())
# Read HDF5 file
h5open(h5_filename, "r+") do h5
@test size(h5["cuts_cpx_lhs"]) == (12, 104)
@test size(h5["cuts_cpx_rhs"]) == (12,)
end
end

@ -0,0 +1,5 @@
[deps]
HDF5 = "f67ccb44-e63f-5c2f-98bd-6dc0ccc4ba2f"
MIPLearn = "2b1277c3-b477-4c49-a15e-7ba350325c68"
Revise = "295af30f-e4ad-537b-8983-00126c2a3abe"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

@ -1,146 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Clp
using JuMP
using Test
using MIPLearn.BB
using MIPLearn
basepath = @__DIR__
function runtests(optimizer_name, optimizer; large = true)
@testset "Solve ($optimizer_name)" begin
@testset "interface" begin
filename = "$basepath/../fixtures/danoint.mps.gz"
mip = BB.init(optimizer)
BB.read!(mip, filename)
@test mip.sense == 1.0
@test length(mip.int_vars) == 56
status, obj = BB.solve_relaxation!(mip)
@test status == :Optimal
@test round(obj, digits = 6) == 62.637280
@test BB.name(mip, mip.int_vars[1]) == "xab"
@test BB.name(mip, mip.int_vars[2]) == "xac"
@test BB.name(mip, mip.int_vars[3]) == "xad"
@test mip.int_vars_lb[1] == 0.0
@test mip.int_vars_ub[1] == 1.0
vals = BB.values(mip, mip.int_vars)
@test round(vals[1], digits = 6) == 0.046933
@test round(vals[2], digits = 6) == 0.000841
@test round(vals[3], digits = 6) == 0.248696
# Probe (up and down are feasible)
probe_up, probe_down = BB.probe(mip, mip.int_vars[1], 0.5, 0.0, 1.0, 1_000_000)
@test round(probe_down, digits = 6) == 62.690000
@test round(probe_up, digits = 6) == 62.714100
# Fix one variable to zero
BB.set_bounds!(mip, mip.int_vars[1:1], [0.0], [0.0])
status, obj = BB.solve_relaxation!(mip)
@test status == :Optimal
@test round(obj, digits = 6) == 62.690000
# Fix one variable to one and another variable to zero
BB.set_bounds!(mip, mip.int_vars[1:2], [1.0, 0.0], [1.0, 0.0])
status, obj = BB.solve_relaxation!(mip)
@test status == :Optimal
@test round(obj, digits = 6) == 62.714777
# Fix all binary variables to one, making problem infeasible
N = length(mip.int_vars)
BB.set_bounds!(mip, mip.int_vars, ones(N), ones(N))
status, obj = BB.solve_relaxation!(mip)
@test status == :Infeasible
@test obj == Inf
# Restore original problem
N = length(mip.int_vars)
BB.set_bounds!(mip, mip.int_vars, zeros(N), ones(N))
status, obj = BB.solve_relaxation!(mip)
@test status == :Optimal
@test round(obj, digits = 6) == 62.637280
end
@testset "varbranch" begin
for instance in ["bell5", "vpm2"]
for branch_rule in [
BB.RandomBranching(),
BB.FirstInfeasibleBranching(),
BB.LeastInfeasibleBranching(),
BB.MostInfeasibleBranching(),
BB.PseudocostBranching(),
BB.StrongBranching(),
BB.ReliabilityBranching(),
BB.HybridBranching(),
BB.StrongBranching(aggregation = :min),
BB.ReliabilityBranching(aggregation = :min, collect = true),
]
h5 = Hdf5Sample("$basepath/../fixtures/$instance.h5")
mip_lower_bound = h5.get_scalar("mip_lower_bound")
mip_upper_bound = h5.get_scalar("mip_upper_bound")
mip_sense = h5.get_scalar("mip_sense")
mip_primal_bound =
mip_sense == "min" ? mip_upper_bound : mip_lower_bound
h5.file.close()
mip = BB.init(optimizer)
BB.read!(mip, "$basepath/../fixtures/$instance.mps.gz")
@info optimizer_name, branch_rule, instance
@time BB.solve!(
mip,
initial_primal_bound = mip_primal_bound,
print_interval = 10,
node_limit = 100,
branch_rule = branch_rule,
)
end
end
end
@testset "collect" begin
rule = BB.ReliabilityBranching(collect = true)
BB.collect!(
optimizer,
"$basepath/../fixtures/bell5.mps.gz",
node_limit = 100,
print_interval = 10,
branch_rule = rule,
)
n_sb = rule.stats.num_strong_branch_calls
h5 = Hdf5Sample("$basepath/../fixtures/bell5.h5")
@test size(h5.get_array("bb_var_pseudocost_up")) == (104,)
@test size(h5.get_array("bb_score_var_names")) == (n_sb,)
@test size(h5.get_array("bb_score_features")) == (n_sb, 6)
@test size(h5.get_array("bb_score_targets")) == (n_sb,)
h5.file.close()
end
end
end
@testset "BB" begin
@time runtests("Clp", optimizer_with_attributes(Clp.Optimizer))
if is_gurobi_available
using Gurobi
@time runtests(
"Gurobi",
optimizer_with_attributes(Gurobi.Optimizer, "Threads" => 1),
)
end
if is_cplex_available
using CPLEX
@time runtests(
"CPLEX",
optimizer_with_attributes(CPLEX.Optimizer, "CPXPARAM_Threads" => 1),
)
end
end

Binary file not shown.

@ -1,46 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using JuMP
using MIPLearn
using Cbc
@testset "FileInstance" begin
@testset "Solve (knapsack)" begin
data = KnapsackData()
basename = tempname()
MIPLearn.save_data("$basename.jld2", data)
instance = FileInstance("$basename.jld2", build_knapsack_model)
solver = LearningSolver(Cbc.Optimizer)
solve!(solver, instance)
h5 = Hdf5Sample("$basename.h5")
@test h5.get_scalar("mip_wallclock_time") > 0
end
@testset "Solve (danoint)" begin
data = Dict("filename" => joinpath(@__DIR__, "../fixtures/danoint.mps.gz"))
build_model(data) = read_from_file(data["filename"])
basename = tempname()
MIPLearn.save_data("$basename.jld2", data)
instance = FileInstance("$basename.jld2", build_model)
solver = LearningSolver(optimizer_with_attributes(Cbc.Optimizer, "seconds" => 1.0))
solve!(solver, instance)
h5 = Hdf5Sample("$basename.h5")
@test h5.get_scalar("mip_wallclock_time") > 0
end
@testset "Save and load data" begin
filename = tempname()
data = KnapsackData(
weights = [5.0, 5.0, 5.0],
prices = [1.0, 1.0, 1.0],
capacity = 3.0,
)
MIPLearn.save_data(filename, data)
loaded = MIPLearn.load_data(filename)
@test loaded.weights == [5.0, 5.0, 5.0]
@test loaded.prices == [1.0, 1.0, 1.0]
@test loaded.capacity == 3.0
end
end

@ -1,62 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Cbc
using JuMP
using MathOptInterface
using MIPLearn
const MOI = MathOptInterface
function find_lazy(model::Model, cb_data)::Vector{String}
x = variable_by_name(model, "x")
y = variable_by_name(model, "y")
x_val = value(cb_data, x)
y_val = value(cb_data, y)
if x_val + y_val > 1 + 1e-6
return ["con"]
end
return []
end
function enforce_lazy(model::Model, cb_data, violation::String)::Nothing
if violation == "con"
x = variable_by_name(model, "x")
y = variable_by_name(model, "y")
con = @build_constraint(x + y <= 1)
submit(cb_data, con)
end
return
end
function build_model(data)
model = Model()
@variable(model, x, Bin)
@variable(model, y, Bin)
@objective(model, Max, 2 * x + y)
@constraint(model, c1, x + y <= 2)
@lazycb(model, find_lazy, enforce_lazy)
return model
end
@testset "Lazy callback" begin
@testset "JuMPInstance" begin
model = build_model(nothing)
instance = JuMPInstance(model)
solver = LearningSolver(Cbc.Optimizer)
solve!(solver, instance)
@test value(model[:x]) == 1.0
@test value(model[:y]) == 0.0
end
@testset "FileInstance" begin
data = nothing
basename = tempname()
MIPLearn.save_data("$basename.jld2", data)
instance = FileInstance("$basename.jld2", build_model)
solver = LearningSolver(Cbc.Optimizer)
solve!(solver, instance)
h5 = MIPLearn.Hdf5Sample("$basename.h5")
@test h5.get_array("mip_var_values") == [1.0, 0.0]
end
end

@ -1,28 +1,13 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Copyright (C) 2020-2023, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Revise
using Test
using Requires
using MIPLearn
MIPLearn.setup_logger()
is_cplex_available = false
@require CPLEX = "a076750e-1247-5638-91d2-ce28b192dca0" begin
is_cplex_available = true
end
is_gurobi_available = false
@require Gurobi = "2e9cd046-0924-5485-92f1-d5272153d98b" begin
is_gurobi_available = true
end
includet("Cuts/BlackBox/test_cplex.jl")
@testset "MIPLearn" begin
include("fixtures/knapsack.jl")
include("instance/file_instance_test.jl")
include("instance/jump_instance_test.jl")
include("solvers/jump_solver_test.jl")
include("solvers/learning_solver_test.jl")
include("utils/parse_test.jl")
include("bb/lp_test.jl")
function runtests()
test_cuts_blackbox_cplex()
end

@ -1,34 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Cbc
using JuMP
using MIPLearn
using PyCall
using Test
if is_gurobi_available
using Gurobi
end
miplearn_tests = pyimport("miplearn.solvers.tests")
traceback = pyimport("traceback")
function _test_solver(optimizer_factory)
MIPLearn.@python_call miplearn_tests.run_internal_solver_tests(
JuMPSolver(optimizer_factory),
)
end
@testset "JuMPSolver" begin
@testset "Cbc" begin
_test_solver(Cbc.Optimizer)
end
if is_gurobi_available
using Gurobi
@testset "Gurobi" begin
_test_solver(Gurobi.Optimizer)
end
end
end

@ -1,46 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Cbc
using JuMP
using MIPLearn
@testset "LearningSolver" begin
@testset "Model with annotations" begin
model = build_knapsack_model()
solver = LearningSolver(Cbc.Optimizer)
instance = JuMPInstance(model)
stats = solve!(solver, instance)
@test stats["mip_lower_bound"] == 11.0
@test length(instance.samples) == 1
fit!(solver, [instance])
solve!(solver, instance)
end
@testset "Model without annotations" begin
model = build_knapsack_model()
solver = LearningSolver(Cbc.Optimizer)
instance = JuMPInstance(model)
stats = solve!(solver, instance)
@test stats["mip_lower_bound"] == 11.0
end
@testset "Save and load" begin
solver = LearningSolver(Cbc.Optimizer)
solver.py.components = "Placeholder"
filename = tempname()
save(filename, solver)
@test isfile(filename)
loaded = load_solver(filename)
@test loaded.py.components == "Placeholder"
end
# @testset "Discard output" begin
# instance = build_knapsack_file_instance()
# solver = LearningSolver(Cbc.Optimizer)
# solve!(solver, instance, discard_output = true)
# loaded = load_instance(instance.filename)
# @test length(loaded.samples) == 0
# end
end

@ -1,36 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using Cbc
using CSV
using DataFrames
@testset "BenchmarkRunner" begin
@info "Building training data..."
instances = [build_knapsack_file_instance(), build_knapsack_file_instance()]
stats = parallel_solve!(LearningSolver(Cbc.Optimizer), instances)
@test length(stats) == 2
@test stats[1] !== nothing
@test stats[2] !== nothing
benchmark = BenchmarkRunner(
solvers = Dict(
"baseline" => LearningSolver(Cbc.Optimizer, components = []),
"ml-exact" => LearningSolver(Cbc.Optimizer),
"ml-heur" => LearningSolver(Cbc.Optimizer, mode = "heuristic"),
),
)
@info "Fitting..."
fit!(benchmark, instances)
@info "Benchmarking..."
parallel_solve!(benchmark, instances, n_trials = 2)
csv_filename = tempname()
write_csv!(benchmark, csv_filename)
@test isfile(csv_filename)
csv = DataFrame(CSV.File(csv_filename))
@test size(csv)[1] == 12
end

@ -1,12 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using MIPLearn
@testset "Parse" begin
@test MIPLearn.parse_name("x") == ["x"]
@test MIPLearn.parse_name("x[3]") == ["x", "3"]
@test MIPLearn.parse_name("test_eq[x]") == ["test_eq", "x"]
@test MIPLearn.parse_name("test_eq[x,y,z]") == ["test_eq", "x", "y", "z"]
end