MIPLearn

MIPLearn is an extensible framework for Learning-Enhanced Mixed-Integer Optimization, an approach targeted at discrete optimization problems that need to be repeatedly solved with only minor changes to input data.

The package uses Machine Learning (ML) to automatically identify patterns in previously solved instances of the problem, or in the solution process itself, and to produce hints that can guide a conventional MIP solver to the optimal solution more quickly. For particular classes of problems, this approach has been shown to provide significant performance benefits (see benchmarks and references).
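
For illustration, the sketch below shows the train-then-solve workflow this enables. It assumes an API along the lines of the package's 0.x documentation (a LearningSolver class with solve and fit methods); the instance lists are placeholders, and the exact interface should be checked against the official documentation.

```python
from miplearn import LearningSolver

# Placeholder lists of user-defined problem instances (see the problem
# specification sketch in the Features section below).
training_instances = [...]
test_instances = [...]

solver = LearningSolver()

# Solve the training instances once with a conventional MIP solver,
# collecting training data along the way.
for instance in training_instances:
    solver.solve(instance)

# Train the internal ML components on the collected data.
solver.fit(training_instances)

# Solving new, similar instances now benefits from the learned hints.
for instance in test_instances:
    solver.solve(instance)
```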

Features

  • MIPLearn proposes a flexible problem specification format, which allows users to describe their particular optimization problems to a Learning-Enhanced MIP solver, from both the MIP and the ML perspectives, without making any assumptions about the problem being modeled, its mathematical formulation, or its ML encoding (see the sketch after this list).

  • MIPLearn provides a reference implementation of a Learning-Enhanced Solver, which can use the above problem specification format to automatically predict, based on previously solved instances, a number of hints to accelerate MIP performance.

  • MIPLearn provides a set of benchmark problems and random instance generators, covering applications from different domains, which can be used to quickly evaluate new learning-enhanced MIP techniques in a measurable and reproducible way.

  • MIPLearn is customizable and extensible. For MIP and ML researchers exploring new techniques to accelerate MIP performance based on historical data, each component of the reference solver can be individually replaced, extended or customized.
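
As a rough illustration of the problem specification format mentioned above, the sketch below encodes a toy knapsack problem. The Instance base class, the Pyomo model returned by to_model, and the feature methods follow the pattern in the package's 0.x documentation, but the exact names and signatures are assumptions here and should be verified against the official documentation.

```python
import numpy as np
import pyomo.environ as pe
from miplearn import Instance

class KnapsackInstance(Instance):
    """Toy knapsack instance; the feature methods below are illustrative only."""

    def __init__(self, weights, prices, capacity):
        self.weights = weights
        self.prices = prices
        self.capacity = capacity

    def to_model(self):
        # Standard Pyomo formulation of the 0-1 knapsack problem.
        items = range(len(self.weights))
        model = pe.ConcreteModel()
        model.x = pe.Var(items, domain=pe.Binary)
        model.obj = pe.Objective(
            expr=sum(self.prices[i] * model.x[i] for i in items),
            sense=pe.maximize,
        )
        model.budget = pe.Constraint(
            expr=sum(self.weights[i] * model.x[i] for i in items) <= self.capacity
        )
        return model

    def get_instance_features(self):
        # Numerical features describing the instance as a whole.
        return np.array([self.capacity, np.mean(self.weights), np.mean(self.prices)])

    def get_variable_features(self, var, index):
        # Numerical features describing an individual decision variable.
        return np.array([self.weights[index], self.prices[index]])
```

The instance features describe the problem as a whole, while the variable features describe individual decision variables; both are plain numerical vectors consumed by the ML components of the solver.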

Documentation

For installation instructions, basic usage and benchmark results, see the official documentation.

Acknowledgments

  • Based upon work supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-06CH11357, and the U.S. Department of Energy Advanced Grid Modeling Program under Grant DE-OE0000875.

Citing MIPLearn

If you use MIPLearn in your research (either the solver or the included problem generators), we kindly request that you cite the package as follows:

  • Alinson S. Xavier, Feng Qiu. MIPLearn: An Extensible Framework for Learning-Enhanced Optimization. Zenodo (2020). DOI: 10.5281/zenodo.4287567

If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed:

  • Alinson S. Xavier, Feng Qiu, Shabbir Ahmed. Learning to Solve Large-Scale Unit Commitment Problems. INFORMS Journal on Computing (2020). DOI: 10.1287/ijoc.2020.0976

License

Released under the modified BSD license. See LICENSE for more details.