In all examples above, we have assumed that instances are available as `JuMPInstance` objects stored in memory. When problem instances are very large, or when there are many of them, this approach may require an excessive amount of memory. To reduce memory requirements, MIPLearn.jl can also operate on instances stored on disk, through the `FileInstance` class, as the next example illustrates.
```julia
using Cbc
using JuMP
using MIPLearn

# Create 600 problem instances and save them to files
for i in 1:600
    # Build JuMP model
    m = Model()
    @variable(m, x, Bin)
    @objective(m, Min, x)

    # Add ML features and categories
    @feature(x, [1.0])

    # Save instance to a file
    instance = JuMPInstance(m)
    save("instance-$i.bin", instance)
end
# Initialize training and test instances
training_instances = [FileInstance("instance-$i.bin") for i in 1:500]
test_instances = [FileInstance("instance-$i.bin") for i in 501:600]

# Initialize solver
solver = LearningSolver(Cbc.Optimizer)
# Solve training instances. Files are modified in-place, and at most one
# file is loaded to memory at a time.
for instance in training_instances
    solve!(solver, instance)
end

# Train ML models
fit!(solver, training_instances)

# Solve test instances
for instance in test_instances
    solve!(solver, instance)
end
```
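
If the instance files are already stored in a directory, the training and test lists can also be assembled programmatically instead of hard-coding the file names. The sketch below uses only Julia's standard library together with the `FileInstance` constructor shown above; the `instances/` directory name and the 500/100 split are illustrative assumptions, not part of MIPLearn.jl's API.

```julia
using MIPLearn

# Collect all saved instance files (the directory name is an assumption)
files = filter(f -> endswith(f, ".bin"), readdir("instances"; join=true))

# Illustrative split mirroring the example above: first 500 files for
# training, the remainder for testing
training_instances = [FileInstance(f) for f in files[1:500]]
test_instances = [FileInstance(f) for f in files[501:end]]
```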
### 1.6 Solving training instances in parallel
In many situations, instances can be solved in parallel to accelerate the training process. MIPLearn.jl provides the method `parallel_solve!(solver, instances)` to easily achieve this.
First, launch Julia in multi-process mode:
```
julia --procs 4
```
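Alternatively, worker processes can be added from within an already running Julia session using the standard library `Distributed`. The following sketch assumes this has the same effect as the `--procs` flag for the purposes of `parallel_solve!`.

```julia
# Add four worker processes from within Julia (assumed equivalent to
# launching with `julia --procs 4`)
using Distributed
addprocs(4)
```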
Then run the following script:
```julia
@everywhere using MIPLearn
@everywhere using Cbc
# Initialize training and test instances
training_instances = [...]
test_instances = [...]
# Initialize the solver
solver = LearningSolver(Cbc.Optimizer)
# Solve training instances in parallel. The number of instances solved
# simultaneously is the same as the `--procs` specified when running Julia.