Implement solver.save and solver.load; update README

pull/1/head
Alinson S. Xavier 6 years ago
parent 480da41fa9
commit 5817f273e9

@@ -22,8 +22,8 @@ The package is currently only available for Python and Pyomo. It can be installe
pip install git+ssh://git@github.com/iSoron/miplearn.git
```
Usage
-----
Typical Usage
-------------
### Using `LearningSolver`
@@ -41,8 +41,8 @@ for instance in all_instances:
During the first call to `solver.solve(instance)`, the solver will process the instance from scratch, since no historical information is available, but it will already start gathering information. By calling `solver.fit()`, we instruct the solver to train all the internal Machine Learning models based on the information gathered so far. As this operation can be expensive, it may be performed after a larger batch of instances has been solved, instead of after every solve. After the first call to `solver.fit()`, subsequent calls to `solver.solve(instance)` will automatically use the trained Machine Learning models to accelerate the solution process.
Selecting the internal MIP solver
---------------------------------
### Selecting the internal MIP solver
By default, `LearningSolver` uses Cbc as its internal MIP solver. Alternative solvers can be specified through the `parent_solver` argument; persistent Pyomo solvers are also supported. To select Gurobi, for example:
```python
from miplearn import LearningSolver
@@ -69,7 +69,6 @@ An optional method which can be implemented is `instance.get_variable_category(v
It is not necessary to have a one-to-one correspondence between features and problem instances. One important (and deliberate) limitation of MIPLearn, however, is that `get_instance_features()` must always return arrays of the same length for all relevant instances of the problem. Similarly, `get_variable_features(var, index)` must also always return arrays of the same length for all variables in each category. It is up to the user to decide how to encode variable-length characteristics of the problem into fixed-length vectors. In graph problems, for example, graph embeddings can be used to reduce the (variable-length) lists of nodes and edges into a fixed-length structure that still preserves some properties of the graph. Different instance encodings may have a significant impact on performance.
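As an illustration of this fixed-length requirement, the sketch below uses a hypothetical knapsack-style instance class (the class and its attribute names are illustrative, not part of MIPLearn's API): the instance features summarize a variable-length item list into a fixed-size vector, and the per-variable features have the same length for every variable, regardless of instance size.

```python
import numpy as np

# Hypothetical instance class, for illustration only. Instances may have any
# number of items, but the feature vectors returned below always have the
# same length.
class KnapsackExample:
    def __init__(self, weights, prices, capacity):
        self.weights = weights
        self.prices = prices
        self.capacity = capacity

    def get_instance_features(self):
        # Encode the variable-length item lists as fixed-length summary
        # statistics: always a vector of length 3.
        return np.array([
            self.capacity,
            np.mean(self.weights),
            np.mean(self.prices),
        ])

    def get_variable_features(self, var, index):
        # Always a vector of length 2, for every variable in the category.
        return np.array([
            self.weights[index],
            self.prices[index],
        ])
```

Two instances with different numbers of items still produce feature vectors of identical length, which is what the internal ML models require.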
### Obtaining heuristic solutions
By default, `LearningSolver` uses Machine Learning to accelerate the MIP solution process, but keeps all optimality guarantees typically provided by MIP solvers. In the default mode of operation, predicted optimal solutions, for example, are used only as MIP starts.
@@ -78,6 +77,29 @@ For more significant performance benefits, `LearningSolver` can also be configure
**Note:** *The heuristic mode should only be used if the solver is first trained on a large and statistically representative set of training instances.*
### Saving and loading solver state
After solving a large number of training instances, it may be desirable to save the current state of `LearningSolver` to disk, so that the solver can still use the acquired knowledge after the application restarts. This can be accomplished by using the methods `solver.save(filename)` and `solver.load(filename)`, as the following example illustrates:
```python
from miplearn import LearningSolver

# Solve training instances and fit the internal ML models
solver = LearningSolver()
for instance in some_instances:
    solver.solve(instance)
solver.fit()
solver.save("/tmp/miplearn.bin")

# Application restarts...

# Restore the previously acquired knowledge
solver = LearningSolver()
solver.load("/tmp/miplearn.bin")
for instance in more_instances:
    solver.solve(instance)
```
In addition to storing the training data, `solver.save` also serializes and stores the trained ML models themselves, so it is not necessary to call `solver.fit` again after the state is restored.
Current Limitations
-------------------

@@ -7,6 +7,7 @@ from .warmstart import KnnWarmStartPredictor
import pyomo.environ as pe
import numpy as np
from copy import deepcopy
import pickle
class LearningSolver:
@@ -87,6 +88,23 @@ class LearningSolver:
            y_train = y_train_dict[category]
            self.ws_predictors[category] = deepcopy(self.ws_predictor_prototype)
            self.ws_predictors[category].fit(x_train, y_train)
    def save(self, filename):
        with open(filename, "wb") as file:
            pickle.dump({
                "version": 1,
                "x_train": self.x_train,
                "y_train": self.y_train,
                "ws_predictors": self.ws_predictors,
            }, file)

    def load(self, filename):
        with open(filename, "rb") as file:
            data = pickle.load(file)
        assert data["version"] == 1
        self.x_train = data["x_train"]
        self.y_train = data["y_train"]
        self.ws_predictors = data["ws_predictors"]

    def _solve(self, model, tee=False):
        if hasattr(self.parent_solver, "set_instance"):
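The `hasattr` check above distinguishes persistent Pyomo solvers, which expose `set_instance()`, from one-shot solvers, which take the model as an argument to `solve()`. A minimal standalone sketch of this duck-typed dispatch, using hypothetical stand-in classes rather than real Pyomo solver interfaces:

```python
# Stand-in solver classes (hypothetical, not Pyomo's): a persistent solver
# keeps the model loaded and exposes set_instance(); a one-shot solver does not.
class OneShotSolver:
    def solve(self, model, tee=False):
        return f"solved {model} from scratch"

class PersistentSolver:
    def set_instance(self, model):
        self._model = model

    def solve(self, tee=False):
        return f"re-solved {self._model} incrementally"

def dispatch(parent_solver, model, tee=False):
    # Mirrors the check in _solve: persistent solvers receive the model via
    # set_instance() and are then solved without arguments.
    if hasattr(parent_solver, "set_instance"):
        parent_solver.set_instance(model)
        return parent_solver.solve(tee=tee)
    return parent_solver.solve(model, tee=tee)
```

Duck typing keeps `LearningSolver` agnostic to the concrete solver class: any object with the right methods works as `parent_solver`.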

@@ -14,3 +14,20 @@ def test_solver():
    solver.solve(instance)
    solver.fit()
    solver.solve(instance)
def test_solve_save_load():
    instance = KnapsackInstance2(weights=[23., 26., 20., 18.],
                                 prices=[505., 352., 458., 220.],
                                 capacity=67.)
    solver = LearningSolver()
    solver.solve(instance)
    solver.fit()
    solver.save("/tmp/knapsack_train.bin")
    prev_x_train_len = len(solver.x_train)
    prev_y_train_len = len(solver.y_train)
    solver = LearningSolver()
    solver.load("/tmp/knapsack_train.bin")
    assert len(solver.x_train) == prev_x_train_len
    assert len(solver.y_train) == prev_y_train_len