465 Commits (v0.1...dev)

Author SHA1 Message Date
aa291410d8 docs: Minor updates 2025-09-24 10:31:08 -05:00
ca05429203 uc: Add quadratic terms 2025-09-23 11:39:39 -05:00
4eeb1c1ab3 Add maxcut to problems.ipynb 2025-09-23 11:27:03 -05:00
bfaae7c005 BasicCollector: Make log file optional 2025-07-22 12:25:47 -05:00
596f41c477 BasicCollector: save solver log to file 2025-06-12 11:16:16 -05:00
19e1f52b4f BasicCollector: store data_filename in HDF5 file 2025-06-12 11:15:09 -05:00
7ed213d4ce MaxCut: add w_jitter parameter to control edge weight randomization 2025-06-12 10:55:40 -05:00
daa801b5e9 Pyomo: implement build_maxcut_model; add support for quadratic objectives 2025-06-11 14:23:10 -05:00
2ca2794457 GurobiModel: Capture static_var_obj_coeffs_quad 2025-06-11 13:19:36 -05:00
1c6912cc51 Add MaxCut problem 2025-06-11 11:58:57 -05:00
eb914a4bdd Replace NamedTemporaryFile with TemporaryDirectory in tests for better compatibility 2025-06-11 11:14:34 -05:00
a306f0df26 Update docs dependencies; re-run notebooks 2025-06-10 12:28:39 -05:00
e0b4181579 Fix pyomo warning 2025-06-10 11:48:37 -05:00
332b2b9fca Update CHANGELOG 2025-06-10 11:31:32 -05:00
af65069202 Bump version to 0.4.3 2025-06-10 11:29:03 -05:00
dadd2216f1 Make compatible with Gurobi 12 2025-06-10 11:27:02 -05:00
5fefb49566 Update to Gurobi 11 2025-06-10 11:27:02 -05:00
3775c3f780 Update docs; fix Sphinx deps; bump to 0.4.2 2024-12-10 12:15:24 -06:00
e66e6d7660 Update CHANGELOG 2024-12-10 11:04:40 -06:00
8e05a69351 Update dependency: Gurobi 11 2024-12-10 10:58:15 -06:00
7ccb7875b9 Allow components to return stats, instead of modifying in-place (added for compatibility with Julia) 2024-08-20 16:46:20 -05:00
f085ab538b LearningSolver: return model 2024-05-31 11:53:56 -05:00
7f273ebb70 expert primal: Set value for int variables 2024-05-31 11:48:41 -05:00
26cfab0ebd h5: Store values using float64 2024-05-31 11:16:47 -05:00
52ed34784d Docs: Use single-thread example 2024-05-08 09:19:52 -05:00
0534d50af3 BasicCollector: Do not crash on exception 2024-02-26 16:41:50 -06:00
8a02e22a35 Update docs 2024-02-07 09:17:09 -06:00
702824a3b5 Bump version to 0.4 2024-02-06 16:17:27 -06:00
752885660d Update CHANGELOG 2024-02-06 16:10:22 -06:00
b55554d410 Add _gurobipy suffix to all build_model functions 2024-02-06 16:08:24 -06:00
fb3f219ea8 Add tutorial: Cuts and lazy constraints 2024-02-06 15:59:11 -06:00
714904ea35 Implement ExpertCutsComponent and ExpertLazyComponent 2024-02-06 11:57:11 -06:00
cec56cbd7b AbstractSolver: Fix field name 2024-02-06 11:56:54 -06:00
e75850fab8 LearningSolver: Keep original H5 file unmodified 2024-02-02 14:37:53 -06:00
687c271d4d Bump version to 0.4.0 2024-02-02 10:19:44 -06:00
60d9a68485 Solver: Make attributes private; ensure we're not calling them directly (helps with Julia/JuMP integration) 2024-02-02 10:15:06 -06:00
33f2cb3d9e Cuts: Do not access attributes directly 2024-02-01 12:02:39 -06:00
5b28595b0b BasicCollector: Make LP and MPS optional 2024-02-01 12:02:23 -06:00
60c7222fbe Cuts: Call set_cuts instead of setting cuts_aot_ directly 2024-02-01 10:18:24 -06:00
281508f44c Store cuts and lazy constraints as JSON in H5 2024-02-01 10:06:21 -06:00
2774edae8c tsp: Remove some code duplication 2024-01-30 16:32:39 -06:00
25bbe20748 Make lazy constr component compatible with Pyomo+Gurobi 2024-01-30 16:25:46 -06:00
c9eef36c4e Make cuts component compatible with Pyomo+Gurobi 2024-01-29 00:41:29 -06:00
d2faa15079 Reformat; remove unused imports 2024-01-28 20:47:16 -06:00
8c2c45417b Update mypy 2024-01-28 20:30:18 -06:00
8805a83c1c Implement MemorizingCutsComponent; STAB: switch to edge formulation 2023-11-07 15:36:31 -06:00
b81815d35b Lazy: Minor fixes; make it compatible with Pyomo 2023-10-27 10:44:21 -05:00
a42cd5ae35 Lazy: Simplify method signature; switch to AbstractModel 2023-10-27 09:14:51 -05:00
7079a36203 Lazy: Rename fields 2023-10-27 08:53:38 -05:00
c1adc0b79e Implement MemorizingLazyConstrComponent 2023-10-26 15:37:05 -05:00
2d07a44f7d Fix mypy errors 2023-10-26 13:41:50 -05:00
e555dffc0c Reformat source code 2023-10-26 13:40:09 -05:00
cd32b0e70d Add test fixtures 2023-10-26 13:39:39 -05:00
40c7f2ffb5 io: Simplify more extensions 2023-06-09 10:57:54 -05:00
25728f5512 Small updates to Makefile 2023-06-09 10:57:41 -05:00
8dd5bb416b Minor fixes to docs and setup.py 2023-06-08 12:37:11 -05:00
1ea989d48a MIPLearn v0.3 2023-06-08 11:25:39 -05:00
6cc253a903 Update 2022-06-01 11:40:48 -05:00
3fd252659e Update docs 2022-03-01 10:01:24 -06:00
f794c27634 Add progress arg to LearningSolver.solve 2022-02-25 09:35:05 -06:00
ce78d5114a Merge branch 'feature/new-py-api' into feature/docs 2022-02-25 08:36:43 -06:00
04dd3ad5d5 Implement load; update fit 2022-02-25 08:26:33 -06:00
522f3a7e18 Change LearningSolver.solve and fit 2022-02-22 15:21:56 -06:00
c98ff4eab4 Implement save function 2022-02-22 09:34:08 -06:00
87bba1b38e Make TravelingSalesmanGenerator return data class 2022-02-22 09:23:55 -06:00
03e5acb11a Make MultiKnapsackGenerator return data class 2022-02-22 09:20:17 -06:00
b0d63a0a2d Make MaxWeightStableSetGenerator return data class 2022-02-22 09:16:37 -06:00
08fc18beb0 feature/docs 2022-01-25 12:00:57 -06:00
1811492557 Fix failing Gurobi tests 2022-01-25 11:57:14 -06:00
2a76dd42ec Allow user to attach arbitrary data to violations 2022-01-25 11:39:03 -06:00
ba8f5bb2f4 Upgrade to Gurobi 9.5 2022-01-25 08:33:23 -06:00
5075a3c2f2 install-deps: Specify gurobi version 2021-12-03 12:40:58 -06:00
2601ef1f9b Make progress bars optional; other minor fixes 2021-09-10 16:41:07 -05:00
2fd04eb274 Add run_benchmarks method 2021-09-10 16:40:39 -05:00
beb15f7667 Remove obsolete benchmark files 2021-09-10 16:35:17 -05:00
2a405f7ce3 Docs: update benchmarks 2021-09-10 16:34:11 -05:00
4c5d0071ee Improve getting-started.ipynb 2021-09-04 07:25:14 -05:00
22c1e0d269 Remove outdated docs; switch to Jupyter notebooks; add first tutorial 2021-09-04 06:48:45 -05:00
9bd64c885a Minor fixes 2021-09-04 06:31:37 -05:00
65122c25b7 Bump version to 0.2.0.dev13 2021-08-30 09:30:21 -05:00
08d7904fda Merge tag 'v0.2.0.dev12' into dev 2021-08-30 09:28:14 -05:00
3220337e37 Bump version: miplearn-0.2.0.dev12 2021-08-26 06:16:16 -05:00
35272e08c6 Primal: Skip non-binary variables 2021-08-18 10:34:56 -05:00
c6b31a827d GurobiSolver: Accept non-binary integer variables 2021-08-13 10:15:23 -05:00
9e023a375a AlvLouWeh2017: Remove slow loop in M3 2021-08-13 06:07:46 -05:00
f2b710e9f9 AlvLouWeh2017: Implement remaining features 2021-08-13 05:56:38 -05:00
0480461a7f AlvLouWeh2017: Implement features 12-19 2021-08-12 20:46:26 -05:00
6a01c98c07 Merge branch 'feature/hdf5' into dev 2021-08-12 08:05:27 -05:00
cea2d8c134 Fix failing tests 2021-08-12 08:01:09 -05:00
78d2ad4857 AlvLouWeh2017: Add some assertions; replace non-finite by zero 2021-08-12 07:52:48 -05:00
ccb1a1ed25 GurobiSolver: Fix LHS extraction 2021-08-12 07:52:34 -05:00
2b00cf5b96 Hdf5Sample: Store all fp arrays as float32 2021-08-12 07:51:59 -05:00
53a7c8f84a AlvLouWeh2017: Implement M1 features 2021-08-12 07:17:53 -05:00
fabb13dc7a Extract LHS as a sparse matrix 2021-08-12 05:35:04 -05:00
5b3a56f053 Re-add sample.{get,put}_bytes 2021-08-11 06:24:10 -05:00
256d3d094f AlvLouWeh2017: Remove sample argument 2021-08-11 06:17:57 -05:00
a65ebfb17c Re-enable half-precision; minor changes to FeaturesExtractor benchmark 2021-08-10 17:30:16 -05:00
9cfb31bacb Remove {get,put}_set and deprecated functions 2021-08-10 17:27:06 -05:00
ed58242b5c Remove most usages of put_{vector,vector_list}; deprecate get_set 2021-08-10 11:52:02 -05:00
60b9a6775f Use NumPy to compute AlvLouWeh2017 features 2021-08-10 10:28:30 -05:00
e852d5cdca Use np.ndarray for constraint methods in Instance 2021-08-10 07:09:42 -05:00
895cb962b6 Make get_variable_{categories,features} return np.ndarray 2021-08-09 15:19:53 -05:00
56b39b6c9c Make get_instance_features return np.ndarray 2021-08-09 14:02:14 -05:00
47d3011808 Use np.ndarray in instance features 2021-08-09 10:01:58 -05:00
63eff336e2 Implement sample.{get,put}_sparse 2021-08-09 07:09:02 -05:00
5b54153a3a Use np in Constraints.lazy; replace some get_vector 2021-08-09 06:27:03 -05:00
f809dd7de4 Use np.ndarray in Constraints.{basis_status,senses} 2021-08-09 06:09:26 -05:00
9ddda7e1e2 Use np.ndarray for constraint names 2021-08-09 05:41:01 -05:00
45667ac2e4 Use np.ndarray for var_types, basis_status 2021-08-08 07:36:57 -05:00
7d55d6f34c Use np.array for Variables.names 2021-08-08 07:24:14 -05:00
f69067aafd Implement {get,put}_array; make other methods deprecated 2021-08-08 06:52:24 -05:00
0a32586bf8 Use np.ndarray in Constraints 2021-08-05 15:57:02 -05:00
0c4b0ea81a Use np.ndarray in Variables 2021-08-05 15:42:19 -05:00
b6426462a1 Fix failing tests 2021-08-05 14:05:50 -05:00
475fe3d985 Sample: do not check data by default; minor fixes 2021-08-05 12:34:55 -05:00
95b9ce29fd Hdf5Sample: Use latest HDF5 file format 2021-08-05 10:18:34 -05:00
4a52911924 AlvLouWeh2017: Replace non-finite features by constant 2021-08-04 13:54:14 -05:00
e72f3b553f Hdf5Sample: Use half-precision for floats 2021-08-04 13:44:42 -05:00
067f0f847c Add mip_ prefix to dynamic constraints 2021-08-04 13:38:23 -05:00
ca925119b3 Add static_ prefix to all static features 2021-08-04 13:35:16 -05:00
10eed9b306 Don't include intermediary features in sample; rename some keys 2021-08-04 13:22:12 -05:00
865a4b2f40 Hdf5Sample: Store string vectors as "S" dtype instead of obj 2021-08-04 11:34:56 -05:00
c513515725 Hdf5Sample: Enable compression 2021-07-28 10:14:55 -05:00
7163472cfc Bump version to 0.2.0.dev11 2021-07-28 09:33:40 -05:00
7d5ec1344a Make Hdf5Sample work with bytearray 2021-07-28 09:06:15 -05:00
a69cbed7b7 Improve error messages in assertions 2021-07-28 08:57:09 -05:00
fc55a077f2 Sample: Allow numpy arrays 2021-07-28 08:21:56 -05:00
6fd839351c GurobiSolver: Fix error messages 2021-07-27 11:50:03 -05:00
b6880f068c Hdf5Sample: store lengths as dataset instead of attr 2021-07-27 11:47:26 -05:00
728a6bc835 Remove debug statement 2021-07-27 11:24:41 -05:00
d30c3232e6 FileInstance.save: create file when it does not already exist 2021-07-27 11:22:40 -05:00
4f14b99a75 Add h5py to setup.py 2021-07-27 11:12:07 -05:00
15e08f6c36 Implement FileInstance 2021-07-27 11:02:04 -05:00
f1dc450cbf Do nothing on put_scalar(None) 2021-07-27 10:55:19 -05:00
6c98986675 Hdf5Sample: Return None for non-existing keys 2021-07-27 10:49:30 -05:00
a0f8bf15d6 Handle completely empty veclists 2021-07-27 10:45:11 -05:00
3da8d532a8 Sample: handle None in vectors 2021-07-27 10:37:02 -05:00
284ba15db6 Implement sample.{get,put}_bytes 2021-07-27 10:01:32 -05:00
962707e8b7 Replace push_sample by create_sample 2021-07-27 09:25:40 -05:00
4224586d10 Remove sample.{get,set} 2021-07-27 09:00:04 -05:00
ef9c48d79a Replace Hashable by str 2021-07-15 16:21:40 -05:00
8d89285cb9 Implement {get,put}_vector_list 2021-07-15 16:00:13 -05:00
8fc7c6ab71 Split Sample.{get,put} into {get,put}_{scalar,vector} 2021-07-14 10:50:54 -05:00
0a399deeee Implement Hdf5Sample 2021-07-14 09:56:25 -05:00
021a71f60c Reorganize feature tests; add basic sample tests 2021-07-14 08:39:19 -05:00
235c3e55c2 Make Sample abstract; create MemorySample 2021-07-14 08:31:01 -05:00
851b8001bb Move features to its own package 2021-07-14 08:23:52 -05:00
ed77d548aa Remove unused function 2021-07-14 08:16:49 -05:00
609c5c7694 Rename Variables and Constraints; move to internal.py 2021-07-06 17:08:22 -05:00
c8c29138ca Remove unused classes and functions 2021-07-06 17:04:32 -05:00
cd9e5d4144 Remove sample.after_load 2021-07-06 16:58:09 -05:00
b4a267a524 Remove sample.after_lp 2021-07-01 12:25:50 -05:00
4093ac62fd Remove sample.after_mip 2021-07-01 11:45:19 -05:00
7c4c301611 Extract instance, var and constr features into sample 2021-07-01 11:06:36 -05:00
061b1349fe Move user_cuts/lazy_enforced to sample.data 2021-07-01 08:46:27 -05:00
80281df8d8 Replace instance.samples by instance.get/push_sample 2021-06-29 16:49:24 -05:00
a5092cc2b9 Request constraint features/categories in bulk 2021-06-29 09:54:35 -05:00
8118ab4110 Remove EnforceOverrides 2021-06-29 09:05:14 -05:00
438859e493 Request variable features/categories in bulk 2021-06-29 09:02:46 -05:00
6969f2ffd2 Measure time extracting features 2021-06-29 07:52:04 -05:00
5b4b8adee5 LearningSolver: add extract_sa, extract_lhs arguments 2021-06-28 17:34:15 -05:00
101bd94a5b Make read/write_pickle_gz quiet 2021-06-28 10:17:41 -05:00
46a7d3fe26 BenchmarkRunner.fit: Only iterate through files twice 2021-06-28 09:32:30 -05:00
aaef8b8fb3 Bump version to 0.2.0.dev10 2021-06-28 09:32:30 -05:00
173d73b718 setup.py: Require numpy<1.21 2021-05-26 10:05:02 -05:00
343afaeec0 Fix MyPy errors 2021-05-26 09:49:58 -05:00
4c7e63409d Improve logging 2021-05-26 09:01:40 -05:00
476c27d0d9 Merge branch 'feature/sphinx' into dev 2021-05-24 09:34:11 -05:00
3f117e9171 Replace mkdocs by sphinx 2021-05-24 09:33:45 -05:00
ddd136c661 assert_equals: Handle ndarray with booleans 2021-05-20 11:38:35 -05:00
52093eb1c0 Combine np.ndarray conversion with rounding 2021-05-20 11:18:17 -05:00
34c71796e1 assert_equals: Recursively convert np.ndarray 2021-05-20 11:06:58 -05:00
cdd38cdfb8 Make assert_equals work with np.ndarray 2021-05-20 10:41:38 -05:00
310394b397 Bump to 0.2.0.dev9 2021-05-20 10:26:40 -05:00
81b7047c4c gurobi.py: Remove tuples 2021-05-20 10:25:56 -05:00
c494f3e804 Remove tuples from ConstraintFeatures 2021-05-20 10:23:53 -05:00
f9ac65bf9c Remove tuples from VariableFeatures 2021-05-20 10:03:18 -05:00
fa969cf066 Constraint features: Fix conversion to list 2021-05-20 08:54:18 -05:00
659131c8cf Only use p_tqdm if n_jobs>1 2021-05-20 08:39:51 -05:00
983e5fe117 Add docs-sphinx 2021-05-20 08:36:34 -05:00
13373c2573 Bump version to 0.2.0.dev6 2021-05-18 09:25:26 -05:00
4bf4d09cb5 Remove unused classes and methods 2021-05-15 14:29:11 -05:00
91c8db2225 Refactor StaticLazy; remove old constraint methods 2021-05-15 14:15:48 -05:00
53d3e9d98a Implement ConstraintFeatures.__getitem__ 2021-05-15 09:38:00 -05:00
83c46d70a3 Implement bulk constraint methods 2021-05-15 09:26:55 -05:00
8e61b7be5f Remove EnforceOverrides 2021-05-10 13:31:43 -05:00
17d4bc6ab9 Remove empty docstring 2021-05-10 10:52:02 -05:00
249002dcf3 Fix mypy issues 2021-04-30 11:55:08 -05:00
c3d26a1c75 Reduce memory consumption of parallel_solve 2021-04-30 11:54:55 -05:00
0ba8cc16fd GurobiSolver: Implement relax/enforce constraint 2021-04-15 15:22:12 -05:00
4dd4ef52bd Add with_lhs argument 2021-04-15 12:39:48 -05:00
18521331c9 Extract more features to ConstraintFeatures 2021-04-15 12:21:19 -05:00
230d13a5c0 Create ConstraintFeatures 2021-04-15 11:49:58 -05:00
0e9c8b0a49 Rename features.constraints to constraints_old 2021-04-15 11:00:52 -05:00
8f73d87d2d Fix failing test 2021-04-15 10:49:48 -05:00
39597287a6 Make extractor configurable 2021-04-15 09:57:10 -05:00
95e326f5f6 Use compact variable features everywhere 2021-04-15 09:49:35 -05:00
fec0113722 Rename features.variables to variables_old; update FeatureExtractor 2021-04-15 06:54:27 -05:00
08f0bedbe0 Implement more compact get_variables 2021-04-15 06:26:33 -05:00
e6eca2ee7f GurobiSolver: Performance improvements 2021-04-15 04:12:10 -05:00
e1f32b1798 Add n_jobs to BenchmarkRunner.fit 2021-04-13 19:30:42 -05:00
77b10b9609 Parallel processing 2021-04-13 19:28:18 -05:00
bec7dae6d9 Add pre argument to sample_xy 2021-04-13 19:19:49 -05:00
a01c179341 LearningSolver: Load each instance exactly twice during fit 2021-04-13 18:11:37 -05:00
ef7a50e871 Only include static features in after-load 2021-04-13 16:08:30 -05:00
8f41278713 GurobiSolver: Improve get_constraints 2021-04-13 15:35:20 -05:00
37a1bc9fe6 Fix mypy errors 2021-04-13 14:36:20 -05:00
61645491a4 GurobiSolver: Bulk query 2021-04-13 10:54:01 -05:00
25affca3ec GurobiSolver: Accept integer variables, as long as bounds=(0,1) 2021-04-13 10:39:36 -05:00
c4a6665825 Remove obsolete methods 2021-04-13 09:42:25 -05:00
c26b852c67 Update UserCutsComponent 2021-04-13 09:08:49 -05:00
a4433916e5 Update DynamicLazyConstraintsComponent 2021-04-13 08:42:06 -05:00
b5411b8950 Update ObjectiveValueComponent 2021-04-13 07:53:23 -05:00
a9dcdb8e4e Update PrimalSolutionComponent 2021-04-13 07:23:07 -05:00
d7aa31f3eb Fix mypy errors 2021-04-13 06:47:31 -05:00
9d404f29a7 Call new fit method 2021-04-12 10:30:47 -05:00
cb62345acf Refactor StaticLazy 2021-04-12 10:05:17 -05:00
e6672a45a0 Rename more methods to _old 2021-04-12 08:55:01 -05:00
08ede5db09 Component: add new callback methods 2021-04-12 08:34:46 -05:00
6f6cd3018b Rewrite DynamicLazy.sample_xy 2021-04-12 08:11:39 -05:00
bccf0e9860 Rewrite StaticLazy.sample_xy 2021-04-12 07:35:51 -05:00
2979bd157c Rewrite PrimalSolutionComponent.sample_xy 2021-04-11 21:52:59 -05:00
d90d7762e3 Rewrite ObjectiveValueComponent.sample_xy 2021-04-11 21:27:25 -05:00
2da60dd293 Rename methods that use TrainingSample to _old 2021-04-11 21:00:04 -05:00
5fd13981d4 Append sample 2021-04-11 17:39:55 -05:00
fde6dc5a60 Combine after_load, after_lp and after_mip into Sample dataclass 2021-04-11 17:20:17 -05:00
2d4ded1978 Fix some mypy issues 2021-04-11 17:07:45 -05:00
16630b3a36 GurobiPyomoSolver: Extract same features as GurobiSolver 2021-04-11 17:05:41 -05:00
6bc81417ac Sort methods 2021-04-11 16:50:00 -05:00
fcb511a2c6 Pyomo: Collect variable reduced costs 2021-04-11 16:30:00 -05:00
3cfadf4e97 Pyomo: Collect variable bounds, obj_coeff, value, type 2021-04-11 16:21:31 -05:00
6b15337e4c Add mip_stats to after-mip features 2021-04-11 09:14:05 -05:00
bd78518c1f Convert MIPSolveStats into dataclass 2021-04-11 09:10:14 -05:00
2bc1e21f8e Add lp_stats to after-lp features 2021-04-11 08:57:57 -05:00
945f6a091c Convert LPSolveStats into dataclass 2021-04-11 08:41:50 -05:00
6afdf2ed55 Collect features 3 times (after-load, after-lp, after-mip) 2021-04-11 08:03:46 -05:00
d85a63f869 Small fixes to Alvarez2017 features 2021-04-11 08:03:17 -05:00
c39231cb18 Implement a small subset of Alvarez2017 features 2021-04-10 19:48:58 -05:00
9ca4cc3c24 Include additional features in instance.features 2021-04-10 19:11:38 -05:00
733c8299e0 Add more variable features 2021-04-10 18:56:59 -05:00
5e1f26e4b0 Add more constraint features 2021-04-10 17:38:03 -05:00
b5e602cdc1 get_constraints: Fetch slack and dual values 2021-04-10 17:24:03 -05:00
088d679f61 Redesign InternalSolver constraint methods 2021-04-10 15:53:38 -05:00
f70363db0d Replace build_lazy_constraint by enforce_lazy_constraint 2021-04-10 10:05:30 -05:00
735884151d Reorganize callbacks 2021-04-10 09:04:34 -05:00
6ac738beb4 PyomoSolver: Implement missing constraint methods 2021-04-09 22:31:17 -05:00
9368b37139 Replace individual constraint methods by single get_constraints 2021-04-09 21:51:38 -05:00
626d75f25e Reorganize internal solver tests 2021-04-09 20:33:48 -05:00
a8224b5a38 Move instance fixtures into the main source; remove duplication 2021-04-09 19:07:46 -05:00
f3fd1e0cda Make internal_solvers into a fixture 2021-04-09 18:35:19 -05:00
31d0a0861d Bump version to 0.2.0.dev3 2021-04-09 09:06:28 -05:00
5d7c2ea089 Require Python 3.7+ 2021-04-09 09:04:34 -05:00
4e230c2120 Move all dependencies to setup.py 2021-04-09 09:01:09 -05:00
7d3b065a3e Add Overrides to setup.py; bump to 0.2.0.dev2 2021-04-09 08:29:15 -05:00
3f4336f902 Always remove .mypy_cache; fix more mypy tests 2021-04-09 08:18:54 -05:00
32b6a8f3fa Bump version to 0.2.0.dev1 2021-04-09 08:08:16 -05:00
166cdb81d7 Fix tests 2021-04-09 07:59:52 -05:00
57624bd75c Update gitignore 2021-04-09 07:53:14 -05:00
c66a59d668 Make version a pre-release 2021-04-09 07:53:14 -05:00
74ceb776c3 Skip extracting features if already computed 2021-04-09 07:53:14 -05:00
5aa434b439 Fix failing mypy tests 2021-04-09 07:41:23 -05:00
5116681291 Add some InternalSolver tests to main package 2021-04-08 11:23:56 -05:00
3edc8139e9 Improve logging 2021-04-08 11:23:30 -05:00
6330354c47 Remove EnforceOverrides; automatically convert np.ndarray features 2021-04-08 07:50:16 -05:00
157825a345 mypy: Disable implicit optionals 2021-04-07 21:36:37 -05:00
e9cd6d1715 Add types to remaining files; activate mypy's disallow_untyped_defs 2021-04-07 21:25:30 -05:00
f5606efb72 Add types to log.py 2021-04-07 21:01:21 -05:00
331ee5914d Add types to solvers 2021-04-07 20:58:44 -05:00
38212fb858 Add types to tsp.py 2021-04-07 20:33:28 -05:00
f7545204d7 Add types to stab.py 2021-04-07 20:25:59 -05:00
2c93ff38fc Add types to knapsack.py 2021-04-07 20:21:28 -05:00
0232219a0e Make InternalSolver clonable 2021-04-07 19:52:21 -05:00
ebccde6a03 Update CHANGELOG.md 2021-04-07 17:52:20 -05:00
0516d4a802 Update CHANGELOG.md 2021-04-07 16:44:03 -05:00
d76dc768b0 Add CHANGELOG.md 2021-04-07 15:32:43 -05:00
1380165e3d Benchmark: Reduce time limit during training 2021-04-07 12:07:17 -05:00
96093a9b8e Enforce more overrides 2021-04-07 12:01:05 -05:00
1cf6124757 Refer to variables by varname instead of (vname, index) 2021-04-07 11:56:05 -05:00
856b595d5e PickleGzInstance: Replace implicit load by load/free methods 2021-04-06 19:23:08 -05:00
f495297168 Remove experimental LP components 2021-04-06 17:00:51 -05:00
f90f295620 Reorganize instance package 2021-04-06 16:31:47 -05:00
3543a2ba92 Optimize imports 2021-04-06 16:23:55 -05:00
332cdbd839 Update copyright year 2021-04-06 16:22:56 -05:00
b0bf42e69d Remove obsolete extractor classes 2021-04-06 16:18:26 -05:00
9e7eed1dbd Finish rewrite of user cuts component 2021-04-06 16:17:05 -05:00
9f2d7439dc Add user cut callbacks; begin rewrite of UserCutsComponent 2021-04-06 12:46:37 -05:00
cfb17551f1 Make sample_xy an instance method 2021-04-06 11:24:56 -05:00
54c20382c9 Finish DynamicLazyConstraintsComponent rewrite 2021-04-06 08:19:29 -05:00
c6aee4f90d Make sample_ method accept instance 2021-04-06 06:48:47 -05:00
bb91c83187 LazyDynamic: Rewrite fit method 2021-04-06 06:28:23 -05:00
6e326d5d6e Move feature classes to features.py 2021-04-05 20:38:31 -05:00
b11779817a Convert TrainingSample to dataclass 2021-04-05 20:36:04 -05:00
aeed338837 Convert ConstraintFeatures to dataclass 2021-04-05 20:12:07 -05:00
94084e0669 Convert InstanceFeatures into dataclass 2021-04-05 20:02:24 -05:00
d79eec5da6 Convert VariableFeatures into dataclass 2021-04-04 22:56:26 -05:00
59f4f75a53 Convert Features into dataclass 2021-04-04 22:37:16 -05:00
f2520f33fb Correctly store features and training data for file-based instances 2021-04-04 22:00:21 -05:00
025e08f85e LazyStatic: Use dynamic thresholds 2021-04-04 20:42:04 -05:00
08e808690e Replace InstanceIterator by PickleGzInstance 2021-04-04 14:56:33 -05:00
b4770c6c0a Fix failing tests 2021-04-04 08:55:55 -05:00
96e7a0946e pre-commit: Use specific version of Black 2021-04-04 08:55:37 -05:00
b70aa1574e Update Makefile and GH actions 2021-04-04 08:47:49 -05:00
6e614264b5 StaticLazy: Refactor 2021-04-04 08:39:56 -05:00
168f56c296 Fix typos 2021-04-03 19:13:00 -05:00
ea5c35fe18 Objective: Refactoring 2021-04-03 19:10:29 -05:00
185b95118a Objective: Rewrite sample_evaluate 2021-04-03 18:37:49 -05:00
7af22bd16b Refactor ObjectiveValueComponent 2021-04-03 10:24:05 -05:00
8e1ed6afcb GitHub Actions: Run tests daily 2021-04-03 08:43:43 -05:00
c02b116d8e Fix decorator version 2021-04-03 08:36:51 -05:00
674c16cbed FeaturesExtractor: Fix assertion 2021-04-03 08:27:30 -05:00
ca555f785a GitHub Actions: Use specific version of Black 2021-04-03 08:00:22 -05:00
d8747289dd Remove benchmark results from repository 2021-04-03 07:58:17 -05:00
7a6b31ca9a Fix benchmark scripts; add more input checks 2021-04-03 07:57:22 -05:00
0bce2051a8 Redesign component.evaluate 2021-04-02 08:10:08 -05:00
0c687692f7 Make all before/solve callbacks receive same parameters 2021-04-02 07:05:16 -05:00
8eb2b63a85 Primal: Refactor stats 2021-04-02 06:32:44 -05:00
ef556f94f0 Rename xy_sample to xy 2021-04-02 06:26:48 -05:00
bc8fe4dc98 Components: Switch from factory methods to prototype objects 2021-04-01 08:34:56 -05:00
59c734f2a1 Add ScikitLearnRegressor; move sklean classes to their own file 2021-04-01 07:54:14 -05:00
820a6256c2 Make classifiers and regressors clonable 2021-04-01 07:41:59 -05:00
ac29b5213f Objective: Add tests 2021-04-01 07:21:44 -05:00
b83911a91d Primal: Add end-to-end tests 2021-03-31 12:51:18 -05:00
db2f426140 Primal: reactivate before_solve_mip 2021-03-31 12:08:49 -05:00
fe7bad885c Make xy_sample receive features, not instances 2021-03-31 10:05:59 -05:00
8fc9979b37 Use instance.features in LazyStatic and Objective 2021-03-31 09:21:34 -05:00
5db4addfa5 Add instance-level features to instance.features 2021-03-31 09:14:06 -05:00
0f5a6745a4 Primal: Refactoring 2021-03-31 09:08:01 -05:00
4f46866921 Primal: Use instance.features 2021-03-31 08:22:43 -05:00
12fca1f22b Extract all features ahead of time 2021-03-31 07:42:01 -05:00
b3c24814b0 Refactor PrimalSolutionComponent 2021-03-31 06:55:24 -05:00
ec69464794 Refactor primal 2021-03-30 21:44:13 -05:00
9cf28f3cdc Add variables to model features 2021-03-30 21:29:33 -05:00
1224613b1a Implement component.fit, component.fit_xy 2021-03-30 21:18:40 -05:00
205a972937 Add StaticLazyComponent.xy 2021-03-30 20:45:22 -05:00
07388d9490 Remove unused composite component 2021-03-30 17:25:50 -05:00
64a63264c7 Rename xy to xy_sample 2021-03-30 17:24:27 -05:00
e8adeb28a3 Add ObjectiveValueComponent.xy 2021-03-30 17:17:29 -05:00
9266743940 Add Component.xy and PrimalSolutionComponent.xy 2021-03-30 17:08:10 -05:00
75d1eee424 DropRedundant: Make x_y parallel 2021-03-30 10:06:55 -05:00
3b61a15ead Add after_solve_lp callback; make dict keys consistent 2021-03-30 10:05:28 -05:00
6ae052c8d0 Rename before/after_solve to before/after_solve_mip 2021-03-30 09:04:41 -05:00
bcaf26b18c Sklearn: Handle the special case when all labels are the same 2021-03-02 19:31:12 -06:00
b6ea0c5f1b ConstraintFeatures: Store lhs and sense 2021-03-02 18:14:36 -06:00
3a60deac63 LearningSolver: Handle exceptions in parallel_solve 2021-03-02 17:27:50 -06:00
bca6581b0f DropRedundant: Clear pool before each solve 2021-03-02 17:27:50 -06:00
1397937f03 Add first model feature (constraint RHS) 2021-03-02 17:21:05 -06:00
31ca45036a Organize test fixtures; handle infeasibility in DropRedundant 2021-02-02 10:24:51 -06:00
8153dfc825 DropRedundant: Update for new classifier interface 2021-02-02 09:26:16 -06:00
d3c5371fa5 VarIndex: Use tuples instead of lists 2021-02-02 09:10:49 -06:00
d1bbe48662 GurobiSolver: Small fix to _update_vars 2021-02-02 08:40:44 -06:00
Feng b97ead8aa2 Update about.md 2021-01-27 21:34:15 -06:00
Feng 7885ce83bd Update about.md 2021-01-27 21:32:33 -06:00
9abcea05cd Objective: Use LP value as feature 2021-01-26 22:28:20 -06:00
fe47b0825f Remove unused extractors 2021-01-26 22:20:18 -06:00
603902e608 Refactor ObjectiveComponent 2021-01-26 22:16:46 -06:00
2e845058fc Update benchmark script 2021-01-26 20:38:37 -06:00
4d4e2a3eef Fix tests on Python 3.7 2021-01-25 18:13:03 -06:00
edd0c8d750 Remove RelaxationComponent 2021-01-25 18:12:52 -06:00
a97089fc34 Primal: Add tolerance in binary check 2021-01-25 17:54:41 -06:00
a0062edb5a Update benchmark scripts 2021-01-25 17:54:23 -06:00
203afc6993 Primal: Compute statistics 2021-01-25 16:02:40 -06:00
b0b013dd0a Fix all tests 2021-01-25 15:19:58 -06:00
3ab3bb3c1f Refactor PrimalSolutionComponent 2021-01-25 14:54:58 -06:00
f68cc5bd59 Refactor thresholds 2021-01-25 09:52:49 -06:00
4da561a6a8 AdaptiveClassifier: Refactor and add tests 2021-01-25 08:59:06 -06:00
8dba65dd9c Start refactoring of classifiers 2021-01-22 11:35:29 -06:00
b87ef651e1 Document and simplify Classifier and Regressor 2021-01-22 09:06:04 -06:00
f90d78f802 Move tests to separate folder 2021-01-22 07:42:28 -06:00
e2048fc659 Docs: Minor fixes to CSS 2021-01-22 07:30:33 -06:00
ea4bdd38be Fix broken links in documentation 2021-01-22 07:24:39 -06:00
f755661fa6 Simplify BenchmarkRunner; update docs 2021-01-22 07:22:19 -06:00
aa9cefb9c9 GitHub Actions: Remove Python 3.9 (no xpress available) 2021-01-21 18:55:31 -06:00
c342a870d1 Minor fixes to docstrings; make some classes private 2021-01-21 18:54:05 -06:00
7dbbfdc418 Minor fixes 2021-01-21 18:21:53 -06:00
f7ce441fa6 Add types to internal solvers 2021-01-21 17:19:28 -06:00
d500294ebd Add more types to LearningSolver 2021-01-21 16:33:55 -06:00
fc0835e694 Add type annotations to components 2021-01-21 15:54:23 -06:00
a98a783969 Update tests 2021-01-21 14:38:12 -06:00
a42c5ebdc3 Remove unused methods 2021-01-21 14:27:28 -06:00
868675ecf2 Implement some constraint methods in Pyomo 2021-01-21 14:24:06 -06:00
13e142432a Add types to remaining InternalSolver methods 2021-01-21 14:02:18 -06:00
fb887d2444 Update README.md 2021-01-21 13:03:18 -06:00
0cf963e873 Fix tests for Python 3.6 2021-01-21 13:01:14 -06:00
6890840c6d InternalSolver: Better specify and test infeasibility 2021-01-21 09:15:14 -06:00
05497cab07 Merge branch 'feature/training_sample' into dev 2021-01-21 08:32:20 -06:00
372d6eb066 Instance: Reformat comments 2021-01-21 08:29:38 -06:00
a1b959755c Fix solve_lp_first=False and add tests 2021-01-21 08:25:57 -06:00
06402516e6 Move collected data to instance.training_data 2021-01-21 08:21:40 -06:00
23dd311d75 Reorganize imports; start moving data to instance.training_data 2021-01-20 12:02:25 -06:00
947189f25f Disallow untyped calls and incomplete defs 2021-01-20 10:48:03 -06:00
7555f561f8 Use TypedDict from typing_extensions 2021-01-20 10:25:22 -06:00
3b2413291e Add mypy to requirements 2021-01-20 10:13:11 -06:00
87dc9f5f11 Remove scipy requirement 2021-01-20 10:11:03 -06:00
1971389a57 Add types to InternalSolver 2021-01-20 10:07:28 -06:00
69a82172b9 Fix some _compute_gap corner cases; add tests 2021-01-20 08:56:02 -06:00
a536d2ecc6 Fix various warnings 2021-01-19 22:52:39 -06:00
36061d5a14 Make _compute_gap static 2021-01-19 22:32:29 -06:00
9ddb952db0 Make LearningSolver.add internal 2021-01-19 22:32:05 -06:00
4b8672870a Add XpressPyomoSolver 2021-01-19 22:28:39 -06:00
34e1711081 Remove incorrect import 2021-01-19 22:02:17 -06:00
0371b2c7a9 Simplify Pyomo solvers 2021-01-19 21:54:37 -06:00
185025e86c Remove methods to set solver parameters 2021-01-19 21:38:01 -06:00
ffc77075f5 Require a callable as the internal solver 2021-01-19 21:21:39 -06:00
3ff773402d Remove unused variable 2021-01-19 20:33:06 -06:00
fb006a7880 Merge branch 'dev' of github.com:iSoron/miplearn into dev 2021-01-19 09:52:51 -06:00
872ef0eb06 Benchmark: Move relative statistics to benchmark script 2021-01-19 09:47:29 -06:00
96a57efd25 Update .gitignore 2021-01-19 09:09:42 -06:00
23b38727a2 Benchmark: Update Makefile 2021-01-19 09:09:16 -06:00
d7aac56bd9 Benchmark: Remove unused save_chart; load multiple results 2021-01-19 09:09:03 -06:00
f05db85df8 Benchmark: Avoid loading instances to memory 2021-01-19 09:07:55 -06:00
aecc3a311f Merge branch 'feature/convert-ineqs' into dev 2021-01-19 07:22:14 -06:00
3efc92742d Merge pull request #3 from GregorCH/dev-robust-gap (make gap computation robust against missing upper/lower bounds) 2021-01-15 08:01:55 -06:00
Gregor Hendel 601bfa261a make gap computation robust against missing upper/lower bounds 2021-01-15 08:43:54 +01:00
088a4a0355 Fix formatting 2021-01-14 21:01:42 -06:00
5a062ad97e ConvertTight: Use x function from DropRedundant 2021-01-14 21:01:34 -06:00
fab7b5419b BenchmarkRunner: Create parent dirs in save_results 2021-01-14 21:00:52 -06:00
622d132ba2 Update package description 2021-01-14 18:35:17 -06:00
0ff16040b2 Update package description 2021-01-14 18:26:23 -06:00
137247aed9 GurobiSolver: Randomize seed 2021-01-14 11:31:40 -06:00
7e4b1d77a3 DropRedundant: Collect data from multiple runs 2021-01-14 11:27:47 -06:00
e12a896504 Add training_data argument to after_solve 2021-01-14 10:37:48 -06:00
30d6ea0a9b Benchmark: Include solver log in results file 2021-01-14 10:00:58 -06:00
beee252fa2 simulate_perfect: Do not overwrite original file 2021-01-13 11:04:33 -06:00
b01d97cc2b ConvertTight: Always check feasibility 2021-01-13 09:28:55 -06:00
d67af4a26b ConvertTight: Detect and fix sub-optimality 2021-01-12 11:56:25 -06:00
c9ad7a3f56 Benchmark: Add extra columns to CSV 2021-01-12 11:22:42 -06:00
f77d1d5de9 ConvertTight: Detect and fix infeasibility 2021-01-12 10:05:57 -06:00
e59386f941 Update .gitignore 2021-01-12 07:59:39 -06:00
dfe0239dff LearningSolver: Implement simulate_perfect 2021-01-12 07:54:58 -06:00
bdfe343fea Silence debug statements 2021-01-12 07:54:32 -06:00
7f55426909 Remove debug print statements 2021-01-11 10:26:44 -06:00
1a04482a20 Small improvements to benchmark scripts 2021-01-11 10:26:44 -06:00
3f1aec7fad RelaxationComponent: Always use np arrays 2021-01-07 12:29:43 -06:00
4057a65506 ConvertTightIneqs: Convert only inequalities, not equalities 2021-01-07 11:54:00 -06:00
1e3d4482f4 ConvertTightIneqs: Reduce default slack_tolerance to zero 2021-01-07 11:07:12 -06:00
317e16d471 ConvertTight: Don't take any action on constraints with negative slack 2021-01-07 11:03:02 -06:00
ec00f7555a Export steps 2021-01-07 10:34:38 -06:00
d8dc8471aa Implement tests for ConvertTightIneqsIntoEqsStep 2021-01-07 10:29:22 -06:00
0377b5b546 Minor changes to docstrings 2021-01-07 10:08:14 -06:00
191da25cfc Split relaxation.py into multiple files 2021-01-07 10:01:04 -06:00
144ee668e9 Fix failing tests 2021-01-07 09:41:55 -06:00
28e2ba7c01 Update README.md 2020-12-30 09:15:51 -06:00
c2b0fb5fb0 Create config.yml 2020-12-30 08:40:13 -06:00
8d832bf439 Update issue templates 2020-12-30 08:39:23 -06:00
6db5a7ccd2 Benchmark: Use default components to generate training data 2020-12-16 07:47:38 -06:00
c1b4ea448d PyomoSolver: Never query values of fixed variables 2020-12-08 13:12:47 -06:00
4a26de5ff1 RelaxationComponent: Convert tight inequalities into equalities 2020-12-05 21:11:08 -06:00
5b5f4b7671 InternalSolver: set_constraint_sense, set_constraint_rhs 2020-12-05 21:09:35 -06:00
8bb9996384 Break down RelaxationComponent into multiple steps 2020-12-05 20:34:29 -06:00
6540c88cc5 Component: Add default implementations to all methods 2020-12-05 20:34:00 -06:00
94b493ac4b Implement CompositeComponent 2020-12-05 20:16:22 -06:00
95672ad529 Update README.md 2020-12-05 11:31:49 -06:00
718ac0da06 Reformat additional files 2020-12-05 11:14:15 -06:00
d99600f101 Reformat source code with Black; add pre-commit hooks and CI checks 2020-12-05 10:59:33 -06:00
3823931382 RelaxationComponent: max_iterations 2020-12-04 10:30:55 -06:00
0b41c882ff Merge branch 'feature/files' into dev 2020-12-04 09:41:23 -06:00
388b10c63c Train without loading all instances to memory 2020-12-04 09:37:41 -06:00
54d80bfa85 RelaxationComponent: Implement check_dropped 2020-12-04 09:33:46 -06:00
51b5d8e549 Component: rename iteration_cb and lazy_cb 2020-12-04 08:35:43 -06:00
87a89eaf96 Update references; add DOI 2020-12-03 12:21:27 -06:00
e7426e445a Make tests compatible with Python 3.7+ 2020-12-03 12:00:32 -06:00
57d185dfc2 Merge branch 'gh-actions' into dev 2020-12-03 11:46:38 -06:00
272eb647fd Switch to GitHub runners; temporarily disable CPLEX 2020-12-03 11:43:13 -06:00
f34bfccf8b Set specific versions for all dependencies 2020-12-03 11:30:12 -06:00
f03cc15b75 Allow solve and parallel_solve to operate on files 2020-10-08 17:48:08 -05:00
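The pull request merged above ("Make gap computation robust against missing upper/lower bounds") names a common pattern in MIP tooling. A hypothetical sketch of such a robust relative-gap helper (illustration only, not the project's actual implementation) could look like:

```python
import math
from typing import Optional

def relative_gap(lb: Optional[float], ub: Optional[float]) -> float:
    """Relative MIP gap that tolerates missing bounds (hypothetical sketch).

    Returns infinity when either bound is unavailable, instead of
    crashing on None or dividing by zero."""
    if lb is None or ub is None:
        return math.inf  # no finite gap can be computed yet
    if lb == 0.0:
        return 0.0 if ub == 0.0 else math.inf
    return abs(ub - lb) / abs(lb)

# With both bounds present, this reproduces values like those in the
# benchmark result CSVs, e.g. (59167 - 59162) / 59162 ~= 8.45e-05.
print(relative_gap(59162.0, 59167.0))
print(relative_gap(None, 59167.0))
```

Returning `inf` (rather than raising) lets downstream reporting code treat "no bound yet" uniformly as "gap not closed".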
217 changed files with 12876 additions and 6287 deletions


@@ -1,18 +0,0 @@
name: Test
on: push
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v1
      - name: Run tests
        run: |
          rm -rf ~/.conda/envs/miplearn-test
          yes | conda create --name miplearn-test python=3.6
          (cd /opt/gurobi900/linux64 && ~/.conda/envs/miplearn-test/bin/python setup.py install)
          (cd /opt/cplex-12.8/cplex/python/3.6/x86-64_linux && ~/.conda/envs/miplearn-test/bin/python setup.py install)
          make install test \
            PYTHON=~/.conda/envs/miplearn-test/bin/python \
            PIP=~/.conda/envs/miplearn-test/bin/pip3 \
            PYTEST=~/.conda/envs/miplearn-test/bin/pytest

.gitignore (vendored)

@@ -1,5 +1,7 @@
TODO.md
.idea
*.gz
done
*.bin
*$py.class
*.cover
@@ -39,8 +41,8 @@ TODO.md
/site
ENV/
MANIFEST
__pycache__/
__pypackages__/
**/__pycache__/
**/__pypackages__/
build/
celerybeat-schedule
celerybeat.pid
@@ -56,7 +58,6 @@ eggs/
env.bak/
env/
htmlcov/
instance/
ipython_config.py
lib/
lib64/
@@ -75,3 +76,10 @@ venv.bak/
venv/
wheels/
notebooks/
.vscode
tmp
benchmark/data
benchmark/results
**/*.xz
**/*.h5
**/*.jld2

.mypy.ini (new file)

@@ -0,0 +1,7 @@
[mypy]
ignore_missing_imports = True
disallow_untyped_defs = True
disallow_untyped_calls = True
disallow_incomplete_defs = True
pretty = True
no_implicit_optional = True
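As a standalone illustration (hypothetical snippet, not part of the repository), `disallow_untyped_defs = True` together with `disallow_incomplete_defs = True` makes mypy reject any function whose parameters or return type are unannotated, so project code must be written in this fully-annotated style:

```python
# Hypothetical example of the fully-annotated style required by the
# strict mypy settings above (disallow_untyped_defs, etc.).
from typing import List

def mean_gap(gaps: List[float]) -> float:
    """Average a list of MIP gaps; annotations satisfy disallow_untyped_defs."""
    if not gaps:
        return 0.0
    return sum(gaps) / len(gaps)

# An unannotated version, e.g. `def mean_gap(gaps): ...`, would be
# flagged by mypy under this configuration.
print(mean_gap([0.002, 0.001, 0.003]))
```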

.zenodo.json (new file)

@@ -0,0 +1,27 @@
{
  "creators": [
    {
      "orcid": "0000-0002-5022-9802",
      "affiliation": "Argonne National Laboratory",
      "name": "Santos Xavier, Alinson"
    },
    {
      "affiliation": "Argonne National Laboratory",
      "name": "Qiu, Feng"
    },
    {
      "affiliation": "Georgia Institute of Technology",
      "name": "Gu, Xiaoyi"
    },
    {
      "affiliation": "Georgia Institute of Technology",
      "name": "Becu, Berkay"
    },
    {
      "affiliation": "Georgia Institute of Technology",
      "name": "Dey, Santanu S."
    }
  ],
  "title": "MIPLearn: An Extensible Framework for Learning-Enhanced Optimization",
  "description": "<b>MIPLearn</b> is an extensible framework for solving discrete optimization problems using a combination of Mixed-Integer Linear Programming (MIP) and Machine Learning (ML). MIPLearn uses ML methods to automatically identify patterns in previously solved instances of the problem, then uses these patterns to accelerate the performance of conventional state-of-the-art MIP solvers such as CPLEX, Gurobi or XPRESS."
}

CHANGELOG.md (new file)

@@ -0,0 +1,71 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to
[Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.4.3] - 2025-05-10
### Changed
- Update dependency: Gurobi 12
## [0.4.2] - 2024-12-10
### Changed
- H5File: Use float64 precision instead of float32
- LearningSolver: optimize now returns (model, stats) instead of just stats
- Update dependency: Gurobi 11
## [0.4.0] - 2024-02-06
### Added
- Add ML strategies for user cuts
- Add ML strategies for lazy constraints
### Changed
- LearningSolver.solve no longer generates HDF5 files; use a collector instead.
- Add `_gurobipy` suffix to all `build_model` functions; implement some `_pyomo`
and `_jump` functions.
## [0.3.0] - 2023-06-08
This is a complete rewrite of the original prototype package, with an entirely
new API, focused on performance, scalability and flexibility.
### Added
- Add support for Python/Gurobipy and Julia/JuMP, in addition to the existing
Python/Pyomo interface.
- Add six new random instance generators (bin packing, capacitated p-median, set
cover, set packing, unit commitment, vertex cover), in addition to the three
existing generators (multiknapsack, stable set, tsp).
- Collect some additional raw training data (e.g. basis status, reduced costs,
etc)
- Add new primal solution ML strategies (memorizing, independent vars and joint
vars)
- Add new primal solution actions (set warm start, fix variables, enforce
proximity)
- Add runnable tutorials and user guides to the documentation.
### Changed
- To support large-scale problems and datasets, switch from an in-memory
architecture to a file-based architecture, using HDF5 files.
- To accelerate development cycle, split training data collection from feature
extraction.
### Removed
- Temporarily remove ML strategies for lazy constraints
- Remove benchmarks from documentation. These will be published in a separate
paper.
## [0.1.0] - 2020-11-23
- Initial public release
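The 0.4.2 entry above ("Use float64 precision instead of float32") can be illustrated with a small standalone sketch (plain Python, unrelated to the actual H5File implementation): round-tripping a typical solver statistic through IEEE-754 single precision discards digits that float64 preserves.

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python float (float64) through IEEE-754 single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

# A wallclock time of the kind stored in the benchmark results below:
t = 900.0007548332214
print(to_float32(t))       # only ~7 significant digits survive
print(to_float32(t) == t)  # False: float32 storage loses information
```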


@@ -22,4 +22,4 @@ DISCLAIMER
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
********************************************************************************
********************************************************************************


@@ -1,13 +1,18 @@
PYTHON := python3
PYTEST := pytest
PIP := pip3
PYTEST_ARGS := -W ignore::DeprecationWarning -vv -x --log-level=DEBUG
VERSION := 0.2
PIP := $(PYTHON) -m pip
MYPY := $(PYTHON) -m mypy
PYTEST_ARGS := -W ignore::DeprecationWarning -vv --log-level=DEBUG
VERSION := 0.4
all: docs test
conda-create:
conda env remove -n miplearn
conda create -n miplearn python=3.12
clean:
rm -rf build
rm -rf build/* dist/*
develop:
$(PYTHON) setup.py develop
@@ -19,19 +24,29 @@ dist-upload:
$(PYTHON) -m twine upload dist/*
docs:
mkdocs build -d ../docs/$(VERSION)/
rm -rf ../docs/$(VERSION)
cd docs; make dirhtml
rsync -avP --delete-after docs/_build/dirhtml/ ../docs/$(VERSION)/
docs-dev:
mkdocs build -d ../docs/dev/
install-deps:
$(PIP) install --upgrade pip
$(PIP) install --upgrade -r requirements.txt
install:
$(PIP) install -r requirements.txt
$(PYTHON) setup.py install
uninstall:
$(PIP) uninstall miplearn
test:
$(PYTEST) $(PYTEST_ARGS)
reformat:
$(PYTHON) -m black .
.PHONY: test test-watch docs install
test:
# pyflakes miplearn tests
black --check .
# rm -rf .mypy_cache
$(MYPY) -p miplearn
$(MYPY) -p tests
$(PYTEST) $(PYTEST_ARGS)
.PHONY: test test-watch docs install dist


@@ -1,27 +1,69 @@
![Build status](https://img.shields.io/github/workflow/status/ANL-CEEESA/MIPLearn/Test)
![BSD License](https://img.shields.io/badge/license-BSD-blue)
<h1 align="center">MIPLearn</h1>
<p align="center">
<a href="https://github.com/ANL-CEEESA/MIPLearn/actions">
<img src="https://github.com/ANL-CEEESA/MIPLearn/workflows/Test/badge.svg">
</a>
<a href="https://doi.org/10.5281/zenodo.4287567">
<img src="https://zenodo.org/badge/DOI/10.5281/zenodo.4287567.svg">
</a>
<a href="https://github.com/ANL-CEEESA/MIPLearn/releases/">
<img src="https://img.shields.io/github/v/release/ANL-CEEESA/MIPLearn?include_prereleases&label=pre-release">
</a>
<a href="https://github.com/ANL-CEEESA/MIPLearn/discussions">
<img src="https://img.shields.io/badge/GitHub-Discussions-%23fc4ebc" />
</a>
</p>
MIPLearn
========
**MIPLearn** is an extensible framework for solving discrete optimization problems using a combination of Mixed-Integer Programming (MIP) and Machine Learning (ML). MIPLearn uses ML methods to automatically identify patterns in previously solved instances of the problem, then uses these patterns to accelerate the performance of conventional state-of-the-art MIP solvers such as CPLEX, Gurobi or XPRESS.
**MIPLearn** is an extensible framework for **Learning-Enhanced Mixed-Integer Optimization**, an approach targeted at discrete optimization problems that need to be repeatedly solved with only minor changes to input data.
The package uses Machine Learning (ML) to automatically identify patterns in previously solved instances of the problem, or in the solution process itself, and produces hints that can guide a conventional MIP solver towards the optimal solution faster. For particular classes of problems, this approach has been shown to provide significant performance benefits (see [benchmarks](https://anl-ceeesa.github.io/MIPLearn/0.1/problems/) and [references](https://anl-ceeesa.github.io/MIPLearn/0.1/about/)).
Features
--------
* **MIPLearn proposes a flexible problem specification format,** which allows users to describe their particular optimization problems to a Learning-Enhanced MIP solver, both from the MIP perspective and from the ML perspective, without making any assumptions on the problem being modeled, the mathematical formulation of the problem, or ML encoding.
* **MIPLearn provides a reference implementation of a *Learning-Enhanced Solver*,** which can use the above problem specification format to automatically predict, based on previously solved instances, a number of hints to accelerate MIP performance.
* **MIPLearn provides a set of benchmark problems and random instance generators,** covering applications from different domains, which can be used to quickly evaluate new learning-enhanced MIP techniques in a measurable and reproducible way.
* **MIPLearn is customizable and extensible**. For MIP and ML researchers exploring new techniques to accelerate MIP performance based on historical data, each component of the reference solver can be individually replaced, extended or customized.
Unlike pure ML methods, MIPLearn is not only able to find high-quality solutions to discrete optimization problems, but it can also prove the optimality and feasibility of these solutions. Unlike conventional MIP solvers, MIPLearn can take full advantage of very specific observations that happen to be true in a particular family of instances (such as the observation that a particular constraint is typically redundant, or that a particular variable typically assumes a certain value). For certain classes of problems, this approach may provide significant performance benefits.
Documentation
-------------
For installation instructions, basic usage and benchmarks results, see the [official documentation](https://anl-ceeesa.github.io/MIPLearn/).
- Tutorials:
1. [Getting started (Pyomo)](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/getting-started-pyomo/)
2. [Getting started (Gurobipy)](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/getting-started-gurobipy/)
3. [Getting started (JuMP)](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/getting-started-jump/)
4. [User cuts and lazy constraints](https://anl-ceeesa.github.io/MIPLearn/0.4/tutorials/cuts-gurobipy/)
- User Guide
1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/problems/)
2. [Training data collectors](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/collectors/)
3. [Feature extractors](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/features/)
4. [Primal components](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/primal/)
5. [Learning solver](https://anl-ceeesa.github.io/MIPLearn/0.4/guide/solvers/)
- Python API Reference
1. [Benchmark problems](https://anl-ceeesa.github.io/MIPLearn/0.4/api/problems/)
2. [Collectors & extractors](https://anl-ceeesa.github.io/MIPLearn/0.4/api/collectors/)
3. [Components](https://anl-ceeesa.github.io/MIPLearn/0.4/api/components/)
4. [Solvers](https://anl-ceeesa.github.io/MIPLearn/0.4/api/solvers/)
5. [Helpers](https://anl-ceeesa.github.io/MIPLearn/0.4/api/helpers/)
Authors
-------
- **Alinson S. Xavier** (Argonne National Laboratory)
- **Feng Qiu** (Argonne National Laboratory)
- **Xiaoyi Gu** (Georgia Institute of Technology)
- **Berkay Becu** (Georgia Institute of Technology)
- **Santanu S. Dey** (Georgia Institute of Technology)
Acknowledgments
---------------
* Based upon work supported by **Laboratory Directed Research and Development** (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy.
* Based upon work supported by the **U.S. Department of Energy Advanced Grid Modeling Program**.
Citing MIPLearn
---------------
If you use MIPLearn in your research (either the solver or the included problem generators), we kindly request that you cite the package as follows:
* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.4)*. Zenodo (2024). DOI: [10.5281/zenodo.4287567](https://doi.org/10.5281/zenodo.4287567)
If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed:
* **Alinson S. Xavier, Feng Qiu, Shabbir Ahmed.** *Learning to Solve Large-Scale Unit Commitment Problems.* INFORMS Journal on Computing (2020). DOI: [10.1287/ijoc.2020.0976](https://doi.org/10.1287/ijoc.2020.0976)
License
-------


@@ -1,47 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
# Written by Alinson S. Xavier <axavier@anl.gov>

DATAFILE := miplearn-train-data.tar.gz
CHALLENGES := \
	stab/ChallengeA \
	knapsack/ChallengeA \
	tsp/ChallengeA

main: $(addsuffix /performance.png, $(CHALLENGES))

%/train_instances.bin:
	python benchmark.py train $*

%/benchmark_baseline.csv: %/train_instances.bin
	python benchmark.py test-baseline $*

%/benchmark_ml.csv: %/benchmark_baseline.csv
	python benchmark.py test-ml $*

%/performance.png: %/benchmark_ml.csv
	python benchmark.py charts $*

clean:
	rm -rvf $(CHALLENGES)

clean-ml:
	rm -rvf */*/benchmark_ml.csv

clean-charts:
	rm -rfv */*/performance.png

training-data-push:
	tar -cvvzf $(DATAFILE) */*/*.bin
	rsync -avP $(DATAFILE) andromeda:/www/axavier.org/projects/miplearn/$(DATAFILE)
	rm -fv $(DATAFILE)

training-data-pull:
	wget https://axavier.org/projects/miplearn/$(DATAFILE)
	tar -xvvzf $(DATAFILE)
	rm -f $(DATAFILE)

.PHONY: clean clean-ml clean-charts
.SECONDARY:


@@ -1,201 +0,0 @@
#!/usr/bin/env python
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
"""Benchmark script

Usage:
    benchmark.py train <challenge>
    benchmark.py test-baseline <challenge>
    benchmark.py test-ml <challenge>
    benchmark.py charts <challenge>

Options:
    -h --help    Show this screen
"""
from docopt import docopt
import importlib, pathlib
from miplearn import (LearningSolver, BenchmarkRunner)
from numpy import median
import pyomo.environ as pe
import pickle
import logging
import sys

logging.basicConfig(format='%(asctime)s %(levelname).1s %(name)s: %(message)12s',
                    datefmt='%H:%M:%S',
                    level=logging.INFO,
                    stream=sys.stdout)
logging.getLogger('gurobipy').setLevel(logging.ERROR)
logging.getLogger('pyomo.core').setLevel(logging.ERROR)
logging.getLogger('miplearn').setLevel(logging.INFO)
logger = logging.getLogger("benchmark")

n_jobs = 10
train_time_limit = 3600
test_time_limit = 900
internal_solver = "gurobi"

args = docopt(__doc__)
basepath = args["<challenge>"]
pathlib.Path(basepath).mkdir(parents=True, exist_ok=True)


def save(obj, filename):
    logger.info("Writing %s..." % filename)
    with open(filename, "wb") as file:
        pickle.dump(obj, file)


def load(filename):
    import pickle
    with open(filename, "rb") as file:
        return pickle.load(file)


def train():
    problem_name, challenge_name = args["<challenge>"].split("/")
    pkg = importlib.import_module("miplearn.problems.%s" % problem_name)
    challenge = getattr(pkg, challenge_name)()
    train_instances = challenge.training_instances
    test_instances = challenge.test_instances
    solver = LearningSolver(time_limit=train_time_limit,
                            solver=internal_solver,
                            components={})
    solver.parallel_solve(train_instances, n_jobs=n_jobs)
    save(train_instances, "%s/train_instances.bin" % basepath)
    save(test_instances, "%s/test_instances.bin" % basepath)


def test_baseline():
    test_instances = load("%s/test_instances.bin" % basepath)
    solvers = {
        "baseline": LearningSolver(
            time_limit=test_time_limit,
            solver=internal_solver,
        ),
    }
    benchmark = BenchmarkRunner(solvers)
    benchmark.parallel_solve(test_instances, n_jobs=n_jobs)
    benchmark.save_results("%s/benchmark_baseline.csv" % basepath)


def test_ml():
    logger.info("Loading instances...")
    train_instances = load("%s/train_instances.bin" % basepath)
    test_instances = load("%s/test_instances.bin" % basepath)
    solvers = {
        "ml-exact": LearningSolver(
            time_limit=test_time_limit,
            solver=internal_solver,
        ),
        "ml-heuristic": LearningSolver(
            time_limit=test_time_limit,
            solver=internal_solver,
            mode="heuristic",
        ),
    }
    benchmark = BenchmarkRunner(solvers)
    logger.info("Loading results...")
    benchmark.load_results("%s/benchmark_baseline.csv" % basepath)
    logger.info("Fitting...")
    benchmark.fit(train_instances)
    logger.info("Solving...")
    benchmark.parallel_solve(test_instances, n_jobs=n_jobs)
    benchmark.save_results("%s/benchmark_ml.csv" % basepath)


def charts():
    import matplotlib.pyplot as plt
    import seaborn as sns
    sns.set_style("whitegrid")
    sns.set_palette("Blues_r")
    benchmark = BenchmarkRunner({})
    benchmark.load_results("%s/benchmark_ml.csv" % basepath)
    results = benchmark.raw_results()
    results["Gap (%)"] = results["Gap"] * 100.0
    sense = results.loc[0, "Sense"]
    if sense == "min":
        primal_column = "Relative Upper Bound"
        obj_column = "Upper Bound"
        predicted_obj_column = "Predicted UB"
    else:
        primal_column = "Relative Lower Bound"
        obj_column = "Lower Bound"
        predicted_obj_column = "Predicted LB"
    palette = {
        "baseline": "#9b59b6",
        "ml-exact": "#3498db",
        "ml-heuristic": "#95a5a6",
    }
    fig, (ax1, ax2, ax3, ax4) = plt.subplots(nrows=1,
                                             ncols=4,
                                             figsize=(12, 4),
                                             gridspec_kw={'width_ratios': [2, 1, 1, 2]})
    sns.stripplot(x="Solver",
                  y="Wallclock Time",
                  data=results,
                  ax=ax1,
                  jitter=0.25,
                  palette=palette,
                  size=4.0)
    sns.barplot(x="Solver",
                y="Wallclock Time",
                data=results,
                ax=ax1,
                errwidth=0.,
                alpha=0.4,
                palette=palette,
                estimator=median)
    ax1.set(ylabel='Wallclock Time (s)')
    ax2.set_ylim(-0.5, 5.5)
    sns.stripplot(x="Solver",
                  y="Gap (%)",
                  jitter=0.25,
                  data=results[results["Solver"] != "ml-heuristic"],
                  ax=ax2,
                  palette=palette,
                  size=4.0)
    ax3.set_ylim(0.95, 1.05)
    sns.stripplot(x="Solver",
                  y=primal_column,
                  jitter=0.25,
                  data=results[results["Solver"] == "ml-heuristic"],
                  ax=ax3,
                  palette=palette)
    sns.scatterplot(x=obj_column,
                    y=predicted_obj_column,
                    hue="Solver",
                    data=results[results["Solver"] == "ml-exact"],
                    ax=ax4,
                    palette=palette)
    xlim, ylim = ax4.get_xlim(), ax4.get_ylim()
    ax4.plot([-1e10, 1e10], [-1e10, 1e10], ls='-', color="#cccccc")
    ax4.set_xlim(xlim)
    ax4.set_ylim(ylim)
    ax4.get_legend().remove()
    fig.tight_layout()
    plt.savefig("%s/performance.png" % basepath,
                bbox_inches='tight',
                dpi=150)


if __name__ == "__main__":
    if args["train"]:
        train()
    if args["test-baseline"]:
        test_baseline()
    if args["test-ml"]:
        test_ml()
    if args["charts"]:
        charts()


@@ -1,51 +0,0 @@
,Solver,Instance,Wallclock Time,Lower Bound,Upper Bound,Gap,Nodes,Mode,Relative Lower Bound,Relative Upper Bound,Relative Wallclock Time,Relative Gap,Relative Nodes
0,baseline,0,662.7372989654541,59162.0,59167.0,8.451370812345763e-05,18688107.0,exact,1.0,1.0,1.0,1.0,1.0
1,baseline,1,900.0007548332214,59137.0,59256.0,0.002012276578115224,24175550.0,exact,1.0,1.0,1.0,1.0,1.0
2,baseline,2,900.0016160011292,59186.0,59285.0,0.0016726928665562802,24089218.0,exact,1.0,1.0,1.0,1.0,1.0
3,baseline,3,900.0023140907288,59145.0,59231.0,0.0014540535970918927,24595759.0,exact,1.0,1.0,1.0,1.0,1.0
4,baseline,4,900.0024960041046,59142.0,59213.0,0.0012005004903452706,25467171.0,exact,1.0,1.0,1.0,1.0,1.0
5,baseline,5,900.002925157547,59126.0,59244.0,0.0019957379156377904,23457042.0,exact,1.0,1.0,1.0,1.0,1.0
6,baseline,6,900.0031039714813,59125.0,59236.97169757604,0.0018938130668251741,24240772.0,exact,1.0,1.0,1.0,1.0,1.0
7,baseline,7,900.002781867981,59105.0,59212.0,0.001810337534895525,24042592.0,exact,1.0,1.0,1.0,1.0,1.0
8,baseline,8,900.0021660327911,59169.0,59251.0,0.0013858608392908448,25512146.0,exact,1.0,1.0,1.0,1.0,1.0
9,baseline,9,900.0015439987183,59130.0,59256.0,0.00213089802130898,23227790.0,exact,1.0,1.0,1.0,1.0,1.0
10,baseline,10,900.0024099349976,59127.0,59201.0,0.0012515432881762985,25015636.0,exact,1.0,1.0,1.0,1.0,1.0
11,baseline,11,900.0025849342346,59198.0,59289.0,0.0015372140950707794,24558832.0,exact,1.0,1.0,1.0,1.0,1.0
12,baseline,12,900.0022029876709,59102.0,59224.0,0.002064227944908802,24026788.0,exact,1.0,1.0,1.0,1.0,1.0
13,baseline,13,900.0011007785797,59150.0,59206.0,0.0009467455621301775,24953207.0,exact,1.0,1.0,1.0,1.0,1.0
14,baseline,14,900.0014700889587,59169.0,59250.0,0.0013689600973482736,25494260.0,exact,1.0,1.0,1.0,1.0,1.0
15,baseline,15,900.0013790130615,59083.0,59196.0,0.0019125636816004605,23792716.0,exact,1.0,1.0,1.0,1.0,1.0
16,baseline,16,900.0020098686218,59126.0,59233.0,0.0018096945506207082,23398798.0,exact,1.0,1.0,1.0,1.0,1.0
17,baseline,17,900.0023510456085,59156.0,59197.0,0.0006930826965988235,25573586.0,exact,1.0,1.0,1.0,1.0,1.0
18,baseline,18,900.002711057663,59118.0,59211.0,0.0015731249365675429,24489136.0,exact,1.0,1.0,1.0,1.0,1.0
19,baseline,19,724.1934628486633,59159.0,59164.0,8.451799388089724e-05,20931760.0,exact,1.0,1.0,1.0,1.0,1.0
20,baseline,20,900.0011439323425,59068.0,59191.0,0.0020823457709758246,23411794.0,exact,1.0,1.0,1.0,1.0,1.0
21,baseline,21,380.06568694114685,59175.0,59180.0,8.449514152936207e-05,11618526.0,exact,1.0,1.0,1.0,1.0,1.0
22,baseline,22,900.0016028881073,59121.0,59154.94711904252,0.0005741973079365614,26352886.0,exact,1.0,1.0,1.0,1.0,1.0
23,baseline,23,230.25152111053467,59193.0,59198.0,8.44694474008751e-05,6776049.0,exact,1.0,1.0,1.0,1.0,1.0
24,baseline,24,900.0010840892792,59162.0,59240.0,0.001318413846725939,24727727.0,exact,1.0,1.0,1.0,1.0,1.0
25,baseline,25,900.0015320777893,59096.0,59210.0,0.001929064572898335,23438919.0,exact,1.0,1.0,1.0,1.0,1.0
26,baseline,26,900.0015478134155,59089.0,59203.0,0.001929293100238623,23826788.0,exact,1.0,1.0,1.0,1.0,1.0
27,baseline,27,900.0010070800781,59153.0,59249.0,0.0016229100806383447,24336831.0,exact,1.0,1.0,1.0,1.0,1.0
28,baseline,28,900.001277923584,59112.0,59208.0,0.0016240357287860333,25111591.0,exact,1.0,1.0,1.0,1.0,1.0
29,baseline,29,900.0012440681458,59182.0,59263.0,0.0013686593896792944,24919871.0,exact,1.0,1.0,1.0,1.0,1.0
30,baseline,30,900.0012910366058,59134.0,59241.0,0.001809449724354855,23615391.0,exact,1.0,1.0,1.0,1.0,1.0
31,baseline,31,900.0023548603058,59082.0,59169.0,0.0014725297044785213,26213904.0,exact,1.0,1.0,1.0,1.0,1.0
32,baseline,32,875.9193549156189,59175.0,59180.0,8.449514152936207e-05,24935695.0,exact,1.0,1.0,1.0,1.0,1.0
33,baseline,33,900.0018489360809,59088.0,59177.0,0.0015062279989168698,25210167.0,exact,1.0,1.0,1.0,1.0,1.0
34,baseline,34,232.1541509628296,59190.0,59195.0,8.447372867038352e-05,7309410.0,exact,1.0,1.0,1.0,1.0,1.0
35,baseline,35,900.0025398731232,59183.0,59262.0,0.001334842775797104,23927493.0,exact,1.0,1.0,1.0,1.0,1.0
36,baseline,36,900.0010929107666,59166.0,59254.0,0.00148734070243045,25589946.0,exact,1.0,1.0,1.0,1.0,1.0
37,baseline,37,622.9371509552002,59202.0,59207.0,8.445660619573663e-05,18595087.0,exact,1.0,1.0,1.0,1.0,1.0
38,baseline,38,557.924427986145,59212.0,59217.0,8.444234276835777e-05,16270407.0,exact,1.0,1.0,1.0,1.0,1.0
39,baseline,39,900.0010092258453,59143.0,59185.0,0.0007101432122144632,26304077.0,exact,1.0,1.0,1.0,1.0,1.0
40,baseline,40,900.0011250972748,59158.0,59242.99535479154,0.0014367516615088902,23949337.0,exact,1.0,1.0,1.0,1.0,1.0
41,baseline,41,900.000893831253,59170.0,59257.0,0.0014703396991718777,24299427.0,exact,1.0,1.0,1.0,1.0,1.0
42,baseline,42,900.0017001628876,59089.0,59228.0,0.002352383692396216,23229681.0,exact,1.0,1.0,1.0,1.0,1.0
43,baseline,43,127.60789799690247,59232.0,59237.0,8.44138303619665e-05,4041704.0,exact,1.0,1.0,1.0,1.0,1.0
44,baseline,44,166.38699293136597,59201.0,59206.0,8.445803280349994e-05,5151689.0,exact,1.0,1.0,1.0,1.0,1.0
45,baseline,45,900.0007989406586,59135.0,59247.0,0.001893971421324089,26922402.0,exact,1.0,1.0,1.0,1.0,1.0
46,baseline,46,900.001415014267,59152.0,59254.0,0.001724371111712199,26485728.0,exact,1.0,1.0,1.0,1.0,1.0
47,baseline,47,900.0020279884338,59123.0,59235.0,0.0018943558344468312,28222784.0,exact,1.0,1.0,1.0,1.0,1.0
48,baseline,48,900.0011022090912,59176.0,59284.0,0.0018250642152223874,28675410.0,exact,1.0,1.0,1.0,1.0,1.0
49,baseline,49,900.0012428760529,59150.0,59206.0,0.0009467455621301775,30531240.0,exact,1.0,1.0,1.0,1.0,1.0


@@ -1,151 +0,0 @@
,Solver,Instance,Wallclock Time,Lower Bound,Upper Bound,Gap,Nodes,Mode,Relative Lower Bound,Relative Upper Bound,Relative Wallclock Time,Relative Gap,Relative Nodes,Predicted LB,Predicted UB,Sense
0,baseline,0,662.7372989654541,59162.0,59167.0,8.451370812345763e-05,18688107.0,exact,1.0,1.0004734608295711,8.70828628181939,1.0,5.622307810564691,,,
1,baseline,1,900.0007548332214,59137.0,59256.0,0.002012276578115224,24175550.0,exact,1.0,1.0019275641675967,5.256438928065297,23.8,2.9939288096459404,,,
2,baseline,2,900.0016160011293,59186.0,59285.0,0.0016726928665562802,24089218.0,exact,1.0,1.0015880792688077,6.615464163621432,19.8,3.971008553523482,,,
3,baseline,3,900.0023140907288,59145.0,59231.0,0.0014540535970918927,24595759.0,exact,1.0,1.0013693998309383,9.839168761119028,17.2,6.1066754492159765,,,
4,baseline,4,900.0024960041046,59142.0,59213.0,0.0012005004903452704,25467171.0,exact,1.0,1.0011162314042927,11.236705591195049,14.261938547000467,7.640381275476392,,,
5,baseline,5,900.002925157547,59126.0,59244.0,0.0019957379156377904,23457042.0,exact,1.0,1.0021143794719125,10.760170783867167,29.494511720731996,5.847345591041515,,,
6,baseline,6,900.0031039714813,59125.0,59236.97169757604,0.001893813066825174,24240772.0,exact,1.0,1.0018090934817527,5.582936655618509,22.394339515207683,3.3210931747954593,,,
7,baseline,7,900.002781867981,59105.0,59212.0,0.001810337534895525,24042592.0,exact,1.0,1.001725596345796,1.7540923744921773,21.400000000000002,2.390273708531383,,,
8,baseline,8,900.0021660327911,59169.0,59251.0,0.0013858608392908448,25512146.0,exact,1.0,1.0013181687594004,9.026100681465717,20.5,5.950915491512204,,,
9,baseline,9,900.0015439987183,59130.0,59256.0,0.00213089802130898,23227790.0,exact,1.0,1.0021478462345041,7.880275979497338,25.197442922374428,4.198068271097029,,,
10,baseline,10,900.0024099349976,59127.0,59201.0,0.0012515432881762985,25015636.0,exact,1.0,1.0011838122135597,13.04240249187423,18.5,8.325144625534431,,,
11,baseline,11,900.0025849342346,59198.0,59289.0,0.0015372140950707794,24558832.0,exact,1.0,1.0017741281427412,4.941352404443008,18.19415858643873,2.9831824806949396,,,
12,baseline,12,900.0022029876709,59102.0,59224.0,0.002064227944908802,24026788.0,exact,1.0,1.0019794609775492,5.378482195683288,24.400000000000002,2.947983888580514,,,
13,baseline,13,900.0011007785797,59150.0,59206.0,0.0009467455621301775,24953207.0,exact,1.0,1.0008790614328702,12.451848934586094,13.999999999999998,8.0844140865203,,,
14,baseline,14,900.0014700889587,59169.0,59250.0,0.0013689600973482733,25494260.0,exact,1.0,1.0014705136656357,10.286586243074352,20.246577599756627,6.714445655657852,,,
15,baseline,15,900.0013790130615,59083.0,59196.0,0.0019125636816004605,23792716.0,exact,1.0,1.0018277822908204,7.486704871117682,22.6,4.211841162852367,,,
16,baseline,16,900.0020098686218,59126.0,59233.0,0.0018096945506207078,23398798.0,exact,1.0,1.001724983511187,10.473065188128833,21.39999999999999,5.842727665213484,,,
17,baseline,17,900.0023510456085,59156.0,59197.0,0.0006930826965988236,25573586.0,exact,1.0,1.0006930826965987,12.267049016770867,10.249306917303404,7.807679578926801,,,
18,baseline,18,900.002711057663,59118.0,59211.0,0.0015731249365675427,24489136.0,exact,1.0,1.0014884224413512,19.84721287191386,18.599999999999998,12.51075303699155,,,
19,baseline,19,724.1934628486632,59159.0,59164.0,8.451799388089724e-05,20931760.0,exact,1.0,1.0,16.225906582646804,1.0,11.203195934004649,,,
20,baseline,20,900.0011439323425,59068.0,59191.0,0.0020823457709758246,23411794.0,exact,1.0,1.0020823457709758,4.553908811228339,24.597917654229025,2.465201726709367,,,
21,baseline,21,380.06568694114685,59175.0,59180.0,8.449514152936208e-05,11618526.0,exact,1.0,1.0000168978860744,7.912557532546788,1.2500000000000002,5.757968140817527,,,
22,baseline,22,900.0016028881073,59121.0,59154.94711904253,0.0005741973079365614,26352886.0,exact,1.0,1.0004895835849292,14.040381831195205,6.789423808503489,9.40925154833969,,,
23,baseline,23,230.2515211105347,59193.0,59198.0,8.44694474008751e-05,6776049.0,exact,1.0,1.0000337860666262,8.514542938541073,1.6666666666666665,5.846961043264265,,,
24,baseline,24,900.0010840892792,59162.0,59240.0,0.001318413846725939,24727727.0,exact,1.0,1.0015046237595306,8.166041819193602,15.595781075690478,4.940286958561307,,,
25,baseline,25,900.0015320777893,59096.0,59210.0,0.0019290645728983352,23438919.0,exact,1.0,1.0018443004348487,4.510705225924555,22.800000000000004,2.4862311177206236,,,
26,baseline,26,900.0015478134155,59089.0,59203.0,0.001929293100238623,23826788.0,exact,1.0,1.0018953816994127,8.894248013836016,28.49903535344988,4.939630736149807,,,
27,baseline,27,900.0010070800781,59153.0,59249.0,0.0016229100806383447,24336831.0,exact,1.0,1.001538253490652,10.091974748981741,19.2,6.084119530266811,,,
28,baseline,28,900.001277923584,59112.0,59208.0,0.0016240357287860333,25111591.0,exact,1.0,1.001759610178668,4.691858473111718,19.195777507105156,2.8027693088636902,,,
29,baseline,29,900.0012440681458,59182.0,59263.0,0.0013686593896792946,24919871.0,exact,1.0,1.0012840657576834,7.56448716001105,16.200000000000003,4.595493258310103,,,
30,baseline,30,900.0012910366057,59134.0,59241.0,0.001809449724354855,23615391.0,exact,1.0,1.0017247501648658,9.031820270959846,21.4,5.0375202116086095,,,
31,baseline,31,900.0023548603058,59082.0,59169.0,0.0014725297044785213,26213904.0,exact,1.0,1.0013883344383072,10.04484347330425,17.513740798402758,6.772202916754611,,,
32,baseline,32,875.9193549156189,59175.0,59180.0,8.449514152936208e-05,24935695.0,exact,1.0,1.0,17.593042802030894,1.0000000000000002,11.640863122484049,,,
33,baseline,33,900.0018489360809,59088.0,59177.0,0.0015062279989168698,25210167.0,exact,1.0,1.0017435758540136,6.884821185789175,17.794276333604117,4.149329955955837,,,
34,baseline,34,232.1541509628296,59190.0,59195.0,8.447372867038352e-05,7309410.0,exact,1.0,1.0000337877789605,7.0924172424290814,1.6666666666666667,5.371410472817808,,,
35,baseline,35,900.0025398731233,59183.0,59262.0,0.001334842775797104,23927493.0,exact,1.0,1.0012671701556084,12.033207650833896,19.75,7.544694464615838,,,
36,baseline,36,900.0010929107666,59166.0,59254.0,0.00148734070243045,25589946.0,exact,1.0,1.001402714167413,6.350860539510311,17.599999999999998,3.8428016019966393,,,
37,baseline,37,622.9371509552003,59202.0,59207.0,8.445660619573664e-05,18595087.0,exact,1.0,1.0007944557133197,5.973803009115679,1.0000000000000002,3.967704381695003,,,
38,baseline,38,557.924427986145,59212.0,59217.0,8.444234276835778e-05,16270407.0,exact,1.0,1.0000168873277493,11.776853444610294,1.25,8.26095170688216,,,
39,baseline,39,900.0010092258452,59143.0,59185.0,0.0007101432122144633,26304077.0,exact,1.0,1.0006424670735625,19.899221771336656,10.5,13.274381840230484,,,
40,baseline,40,900.0011250972748,59158.0,59242.995354791536,0.0014367516615088902,23949337.0,exact,1.0,1.0013521179587164,11.618267974647784,16.999070958308586,7.043833563281406,,,
41,baseline,41,900.000893831253,59170.0,59257.0,0.0014703396991718775,24299427.0,exact,1.0,1.0013857203210816,9.82799588949917,17.4,5.913918333433118,,,
42,baseline,42,900.0017001628876,59089.0,59228.0,0.002352383692396216,23229681.0,exact,1.0,1.002267573696145,5.808712131855351,27.799999999999997,2.898764108739463,,,
43,baseline,43,127.60789799690248,59232.0,59237.0,8.44138303619665e-05,4041704.0,exact,1.0,1.0000168816260382,7.326284526634964,1.25,5.803235519206498,,,
44,baseline,44,166.38699293136597,59201.0,59206.0,8.445803280349994e-05,5151689.0,exact,1.0,1.0000168904653324,11.73610231262945,1.25,8.75222174086627,,,
45,baseline,45,900.0007989406586,59135.0,59247.0,0.001893971421324089,26922402.0,exact,1.0,1.0020634249471458,4.661659000381325,22.394318085736028,2.560992168457917,,,
46,baseline,46,900.001415014267,59152.0,59254.0,0.001724371111712199,26485728.0,exact,1.0,1.001639704515104,10.21700063178535,20.4,6.123151372348114,,,
47,baseline,47,900.0020279884337,59123.0,59235.0,0.0018943558344468312,28222784.0,exact,1.0,1.0018435206169876,10.41166170663056,22.39924225766622,6.824112904473778,,,
48,baseline,48,900.0011022090912,59176.0,59284.0,0.0018250642152223876,28675410.0,exact,0.9998817227920179,1.0016219503953505,9.824748557639392,21.602555089901312,6.1253016948312595,,,
49,baseline,49,900.0012428760529,59150.0,59206.0,0.0009467455621301775,30531240.0,exact,1.0,1.001115995941833,12.802607222912103,11.197159763313609,9.326024324264884,,,
50,ml-exact,0,649.376060962677,59162.0,59167.0,8.451370812345763e-05,18101461.0,exact,1.0,1.0004734608295711,8.532721264142948,1.0,5.445815649649917,59126.38771406158,59263.992667692604,max
51,ml-exact,1,900.0008749961853,59137.0,59256.99395246509,0.0020290842021930574,23342139.0,exact,1.0,1.0019443703707194,5.256439629875021,23.998790493018166,2.8907182021033684,59159.91471896955,59292.24515179818,max
52,ml-exact,2,900.0023529529572,59186.0,59272.0,0.0014530463285236373,24785817.0,exact,1.0,1.0013684512848238,6.615469580587714,17.2,4.085840034868203,59194.00902156645,59323.12664303628,max
53,ml-exact,3,900.0030598640442,59145.0,59228.0,0.0014033307971933384,24207954.0,exact,1.0,1.0013186813186814,9.839176914197518,16.599999999999998,6.010390586749109,59141.813764752675,59274.22541262452,max
54,ml-exact,4,900.0010681152344,59142.0,59214.0,0.0012174089479557674,24895987.0,exact,1.0,1.0011331384387514,11.236687763725765,14.462810920901886,7.469020917529618,59144.93070046487,59273.654326628006,max
55,ml-exact,5,900.0023910999298,59126.0,59241.96239661033,0.0019612758618937826,23775703.0,exact,1.0,1.0020799133376805,10.760164398831064,28.98520564396274,5.926781054105736,59145.04845907292,59279.36037916677,max
56,ml-exact,6,900.0027949810028,59125.0,59236.0,0.0018773784355179705,23994400.0,exact,1.0,1.0017926602401488,5.582934738875933,22.200000000000003,3.2873391191217904,59136.974634353304,59268.30857737715,max
57,ml-exact,7,900.0025460720062,59105.0,59212.0,0.001810337534895525,24113420.0,exact,1.0,1.001725596345796,1.7540919149292407,21.400000000000002,2.397315308132119,59125.024194597165,59260.190615193496,max
58,ml-exact,8,900.0025360584259,59169.0,59247.0,0.0013182578715205597,26072662.0,exact,1.0,1.0012505703614825,9.026104392444157,19.5,6.081660406018433,59155.83957873982,59292.27671868388,max
59,ml-exact,9,900.0029451847076,59130.0,59260.0,0.002198545577541011,23411285.0,exact,1.0,1.0022154949348037,7.880288248067729,25.99736174530695,4.231232189722303,59169.22723451526,59303.04692137199,max
60,ml-exact,10,900.002711057663,59127.0,59199.0,0.0012177177939012634,25692461.0,exact,1.0,1.001149989007458,13.042406855599213,18.0,8.550390388271678,59122.74947289353,59256.99939978048,max
61,ml-exact,11,900.0020880699158,59198.0,59271.0,0.0012331497685732625,25044283.0,exact,1.0,1.0014699918896999,4.941349676471181,14.59531403087942,3.042150631885348,59194.32665087494,59329.026081343145,max
62,ml-exact,12,900.00151014328,59102.0,59222.0,0.0020303881425332475,23860011.0,exact,1.0,1.001945624037762,5.378478055192067,24.0,2.9275210656269923,59122.0422679371,59259.06427666924,max
63,ml-exact,13,900.0019900798798,59150.0,59203.96517006869,0.0009123443798594788,26120169.0,exact,1.0,1.0008446625768113,12.451861238399319,13.491292517172042,8.462489899830945,59136.5588570761,59273.99511476752,max
64,ml-exact,14,900.0015881061554,59169.0,59254.0,0.0014365630651185586,25996193.0,exact,1.0,1.001538123489343,10.28658759195445,21.246408592337204,6.8466401908701435,59154.03941864104,59289.795816422404,max
65,ml-exact,15,900.0014340877533,59083.0,59198.0,0.001946414366230557,23870719.0,exact,1.0,1.0018616301110208,7.486705329259161,23.0,4.2256494328382725,59098.66203099143,59228.46969562256,max
66,ml-exact,16,900.0031027793884,59126.0,59230.0,0.001758955451070595,23581309.0,exact,1.0,1.0016742487020345,10.47307790601788,20.799999999999997,5.888301034790237,59145.19863458289,59278.22282794401,max
67,ml-exact,17,900.001091003418,59156.0,59189.0,0.0005578470484819798,25974343.0,exact,1.0,1.000557847048482,12.267031842372049,8.249442152951518,7.930031690398847,59127.14368529404,59267.37100160227,max
68,ml-exact,18,900.0016350746155,59118.0,59207.0,0.00150546364897324,24423315.0,exact,1.0,1.001420766875835,19.84718914391357,17.8,12.477127094628871,59117.71413049835,59253.828178881624,max
69,ml-exact,19,765.9083199501038,59159.0,59164.0,8.451799388089724e-05,22220414.0,exact,1.0,1.0,17.160548234580467,1.0,11.892915444124142,59144.031479967074,59274.181598455296,max
70,ml-exact,20,900.0015428066254,59068.0,59191.0,0.0020823457709758246,23475405.0,exact,1.0,1.0020823457709758,4.55391082948923,24.597917654229025,2.471899801493286,59082.22244394464,59224.2971810249,max
71,ml-exact,21,428.18276500701904,59175.0,59180.0,8.449514152936207e-05,12999314.0,exact,1.0,1.0000168978860744,8.91430318224869,1.25,6.442266072691428,59137.98908684254,59265.43858161565,max
72,ml-exact,22,900.0029540061951,59121.0,59157.0,0.0006089206880803775,26135751.0,exact,1.0,1.0005243040286844,14.040402909173055,7.199999999999999,9.33172387888638,59089.5855153968,59226.64120575328,max
73,ml-exact,23,287.76060605049133,59193.0,59198.0,8.44694474008751e-05,7976958.0,exact,1.0,1.0000337860666262,10.641189358576659,1.6666666666666665,6.883209178350868,59140.59510257357,59271.35823619222,max
74,ml-exact,24,900.0017418861389,59162.0,59248.0,0.0014536357797234711,25158901.0,exact,1.0,1.001639870839039,8.166047787627152,17.195348365504884,5.026430067835795,59150.82751230766,59285.37521566199,max
75,ml-exact,25,900.0040528774261,59096.0,59209.0,0.001912142953837823,25156445.0,exact,1.0,1.0018273802473732,4.510717859885376,22.6,2.6684138620141735,59123.050715683305,59257.270874657275,max
76,ml-exact,26,900.0013389587402,59089.0,59199.0,0.0018615986054934083,23531336.0,exact,1.0,1.0018276894958622,8.894245949833698,27.499069200697253,4.878379350513735,59118.02329883662,59245.3612305393,max
77,ml-exact,27,900.0014967918396,59153.0,59246.0,0.0015721941406183963,25053692.0,exact,1.0,1.001487541837114,10.091980240263075,18.599999999999998,6.263332181683365,59150.16240797118,59285.64193535254,max
78,ml-exact,28,900.001298904419,59112.0,59215.0,0.0017424550006766815,24700106.0,exact,1.0,1.0018780454791554,4.691858582488349,20.59546961699824,2.756842408849359,59105.239747395564,59244.63088727762,max
79,ml-exact,29,900.0012950897217,59182.0,59264.0,0.0013855564191815079,24368468.0,exact,1.0,1.001300961359758,7.564487588846076,16.4,4.493808591920299,59171.61319933031,59313.94235456237,max
80,ml-exact,30,900.0028159618378,59134.0,59240.0,0.0017925389792674265,24191195.0,exact,1.0,1.001707840849524,9.031835574105251,21.2,5.160347916977751,59148.03866371211,59281.43814639009,max
81,ml-exact,31,900.0012450218201,59082.0,59151.0,0.0011678683863105513,27104772.0,exact,1.0,1.0010836987334635,10.044831086499027,13.890208219422876,7.002353254836392,59069.41377677144,59203.823126466195,max
82,ml-exact,32,900.0074319839478,59175.0,59197.964519141264,0.0003880780589989647,25690960.0,exact,1.0,1.0003035572683552,18.076857400376596,4.592903828252747,11.99344749946664,59157.64838384942,59290.523875622814,max
83,ml-exact,33,900.0013158321381,59088.0,59177.0,0.0015062279989168698,24765342.0,exact,1.0,1.0017435758540136,6.884817107658309,17.794276333604117,4.076116410894511,59081.01088134223,59219.82627379965,max
84,ml-exact,34,239.74270606040955,59190.0,59195.0,8.447372867038352e-05,7385996.0,exact,1.0,1.0000337877789605,7.324251128646419,1.6666666666666667,5.427690643511643,59143.23414106734,59275.920682982185,max
85,ml-exact,35,900.0009059906006,59183.0,59259.953906894,0.0013002704643901015,23480392.0,exact,1.0,1.0012326001806815,12.033185805509241,19.238476723499844,7.403716868683651,59171.487583005524,59304.98468958104,max
86,ml-exact,36,900.0016748905182,59166.0,59246.0,0.0013521279113004091,25394548.0,exact,1.0,1.0012675128018793,6.350864646252256,16.0,3.813458994262065,59158.08309355407,59296.56640928705,max
87,ml-exact,37,662.0322141647339,59202.0,59207.0,8.445660619573663e-05,20024242.0,exact,1.0,1.0007944557133197,6.348714355925809,1.0,4.272648615385403,59175.74869590455,59307.463498356396,max
88,ml-exact,38,446.4717228412628,59212.0,59217.0,8.444234276835777e-05,13956868.0,exact,1.0,1.0000168873277493,9.424272864415235,1.25,7.086301684237463,59166.58497608687,59301.825104164076,max
89,ml-exact,39,900.00270819664,59143.0,59184.0,0.0006932350404950712,26147788.0,exact,1.0,1.000625560045311,19.899259335957453,10.249999999999998,13.195510421802544,59114.89526526199,59248.56148321837,max
90,ml-exact,40,900.0016450881958,59158.0,59249.0,0.0015382534906521518,24805820.0,exact,1.0,1.0014536112097088,11.618274687299241,18.2,7.2957371421479085,59155.37108495968,59280.88309711401,max
91,ml-exact,41,900.002336025238,59170.0,59245.0,0.0012675342234240324,25030081.0,exact,1.0,1.0011829319814112,9.82801163823526,15.0,6.09174261241699,59151.28565990848,59290.71555008509,max
92,ml-exact,42,900.0009651184082,59089.0,59214.0,0.002115452960787964,22815014.0,exact,1.0,1.0020306630114733,5.808707387795664,24.999999999999996,2.8470190237906574,59118.99827325628,59249.97235571583,max
93,ml-exact,43,155.91705012321472,59232.0,59237.0,8.44138303619665e-05,4841747.0,exact,1.0,1.0000168816260382,8.951582854095786,1.25,6.951968319652182,59161.307350621355,59293.05481512429,max
94,ml-exact,44,166.24281811714172,59201.0,59206.0,8.445803280349994e-05,5152172.0,exact,1.0,1.0000168904653324,11.72593294577673,1.25,8.753042311188128,59149.268537538504,59278.50295984831,max
95,ml-exact,45,900.0014069080353,59135.0,59246.0,0.0018770609622051238,26599239.0,exact,1.0,1.002046511627907,4.661662149419189,22.194368817113382,2.5302513039490457,59157.45644175721,59292.66862156513,max
96,ml-exact,46,900.0008680820465,59152.0,59255.0,0.0017412767108466324,25313316.0,exact,1.0,1.0016566086853627,10.21699442289862,20.599999999999998,5.852105164112592,59168.404124939494,59297.86061218224,max
97,ml-exact,47,900.0009942054749,59123.0,59235.0,0.0018943558344468312,26222277.0,exact,1.0,1.0018435206169876,10.411649747325901,22.39924225766622,6.340401388480525,59155.45167739807,59284.172466642885,max
98,ml-exact,48,900.0013608932495,59176.0,59288.0,0.00189265918615655,27293583.0,exact,0.9998817227920179,1.0016895316618233,9.82475138153239,22.40264972286062,5.830132165779587,59191.94394017174,59332.08571744459,max
99,ml-exact,49,900.0012040138245,59150.0,59203.0,0.0008960270498732037,30493377.0,exact,1.0,1.0010652688535677,12.802606670093036,10.59731191885038,9.314458752116828,59139.98187390398,59273.225027033564,max
100,ml-heuristic,0,76.10421586036682,59134.0,59139.0,8.455372543714276e-05,3323921.0,heuristic,0.9995267232345086,1.0,1.0,1.000473500862448,1.0,59126.38771406158,59263.992667692604,max
101,ml-heuristic,1,171.21872186660767,59137.0,59142.0,8.454943605526151e-05,8074858.0,heuristic,1.0,1.0,1.0,1.0,1.0,59159.91471896955,59292.24515179818,max
102,ml-heuristic,2,136.04512000083923,59186.0,59191.0,8.447943770486263e-05,6066272.0,heuristic,1.0,1.0,1.0,1.0,1.0,59194.00902156645,59323.12664303628,max
103,ml-heuristic,3,91.47137689590454,59145.0,59150.0,8.453799983092401e-05,4027684.0,heuristic,1.0,1.0,1.0,1.0,1.0,59141.813764752675,59274.22541262452,max
104,ml-heuristic,4,80.09487199783325,59142.0,59146.97828536885,8.417512713219175e-05,3333233.0,heuristic,1.0,1.0,1.0,1.0,1.0,59144.93070046487,59273.654326628006,max
105,ml-heuristic,5,83.6420669555664,59115.0,59119.0,6.766472130592912e-05,4011571.0,heuristic,0.999813956634983,1.0,1.0,1.0,1.0,59145.04845907292,59279.36037916677,max
106,ml-heuristic,6,161.20603895187378,59125.0,59130.0,8.456659619450317e-05,7299034.0,heuristic,1.0,1.0,1.0,1.0,1.0,59136.974634353304,59268.30857737715,max
107,ml-heuristic,7,513.0874490737915,59105.0,59110.0,8.459521191100583e-05,10058510.0,heuristic,1.0,1.0,1.0,1.0,1.0,59125.024194597165,59260.190615193496,max
108,ml-heuristic,8,99.7110710144043,59169.0,59173.0,6.760296777028511e-05,4287096.0,heuristic,1.0,1.0,1.0,1.0,1.0,59155.83957873982,59292.27671868388,max
109,ml-heuristic,9,114.2093939781189,59124.0,59129.0,8.456802652053312e-05,5532971.0,heuristic,0.999898528665652,1.0,1.0,1.0,1.0,59169.22723451526,59303.04692137199,max
110,ml-heuristic,10,69.00587606430054,59127.0,59131.0,6.765098855007019e-05,3004829.0,heuristic,1.0,1.0,1.0,1.0,1.0,59122.74947289353,59256.99939978048,max
111,ml-heuristic,11,182.13689517974854,59179.0,59184.0,8.448943037226044e-05,8232427.0,heuristic,0.9996790432109193,1.0,1.0,1.0,1.0,59194.32665087494,59329.026081343145,max
112,ml-heuristic,12,167.33386301994324,59102.0,59107.0,8.459950593888532e-05,8150244.0,heuristic,1.0,1.0,1.0,1.0,1.0,59122.0422679371,59259.06427666924,max
113,ml-heuristic,13,72.27851104736328,59150.0,59154.0,6.76246830092984e-05,3086582.0,heuristic,1.0,1.0,1.0,1.0,1.0,59136.5588570761,59273.99511476752,max
114,ml-heuristic,14,87.49272584915161,59159.0,59163.0,6.761439510471779e-05,3796927.0,heuristic,0.9998309925805743,1.0,1.0,1.0,1.0,59154.03941864104,59289.795816422404,max
115,ml-heuristic,15,120.21328401565552,59083.0,59088.0,8.462671157524161e-05,5649006.0,heuristic,1.0,1.0,1.0,1.0,1.0,59098.66203099143,59228.46969562256,max
116,ml-heuristic,16,85.93491911888123,59126.0,59131.0,8.456516591685553e-05,4004773.0,heuristic,1.0,1.0,1.0,1.0,1.0,59145.19863458289,59278.22282794401,max
117,ml-heuristic,17,73.36747002601624,59152.0,59156.0,6.76223965377333e-05,3275440.0,heuristic,0.9999323821759416,1.0,1.0,1.0,1.0,59127.14368529404,59267.37100160227,max
118,ml-heuristic,18,45.34655404090881,59118.0,59123.0,8.457660949287865e-05,1957447.0,heuristic,1.0,1.0,1.0,1.0,1.0,59117.71413049835,59253.828178881624,max
119,ml-heuristic,19,44.6319260597229,59159.0,59164.0,8.451799388089724e-05,1868374.0,heuristic,1.0,1.0,1.0,1.0,1.0,59144.031479967074,59274.181598455296,max
120,ml-heuristic,20,197.63266706466675,59063.0,59068.0,8.465536799688468e-05,9496908.0,heuristic,0.9999153517979278,1.0,1.0,1.0,1.0,59082.22244394464,59224.2971810249,max
121,ml-heuristic,21,48.03322887420654,59175.0,59179.0,6.759611322348965e-05,2017817.0,heuristic,1.0,1.0,1.0,1.0,1.0,59137.98908684254,59265.43858161565,max
122,ml-heuristic,22,64.1009349822998,59121.0,59126.0,8.457231778894132e-05,2800742.0,heuristic,1.0,1.0,1.0,1.0,1.0,59089.5855153968,59226.64120575328,max
123,ml-heuristic,23,27.042146921157837,59193.0,59196.0,5.068166844052506e-05,1158901.0,heuristic,1.0,1.0,1.0,1.0,1.0,59140.59510257357,59271.35823619222,max
124,ml-heuristic,24,110.21264696121216,59146.0,59151.0,8.453657052040713e-05,5005322.0,heuristic,0.9997295561340049,1.0,1.0,1.0,1.0,59150.82751230766,59285.37521566199,max
125,ml-heuristic,25,199.52568101882935,59096.0,59101.0,8.460809530255854e-05,9427490.0,heuristic,1.0,1.0,1.0,1.0,1.0,59123.050715683305,59257.270874657275,max
126,ml-heuristic,26,101.18916702270508,59087.0,59091.0,6.769678609508014e-05,4823597.0,heuristic,0.9999661527526273,1.0,1.0,1.0,1.0,59118.02329883662,59245.3612305393,max
127,ml-heuristic,27,89.17987108230591,59153.0,59158.0,8.452656669991379e-05,4000058.0,heuristic,1.0,1.0,1.0,1.0,1.0,59150.16240797118,59285.64193535254,max
128,ml-heuristic,28,191.82191514968872,59099.0,59104.0,8.46038004027141e-05,8959564.0,heuristic,0.9997800784950602,1.0,1.0,1.0,1.0,59105.239747395564,59244.63088727762,max
129,ml-heuristic,29,118.9771659374237,59182.0,59187.0,8.448514751106756e-05,5422676.0,heuristic,1.0,1.0,1.0,1.0,1.0,59171.61319933031,59313.94235456237,max
130,ml-heuristic,30,99.64783000946045,59134.0,59139.0,8.455372543714276e-05,4687900.0,heuristic,1.0,1.0,1.0,1.0,1.0,59148.03866371211,59281.43814639009,max
131,ml-heuristic,31,89.59844493865967,59082.0,59086.96752812557,8.407853704291519e-05,3870809.0,heuristic,1.0,1.0,1.0,1.0,1.0,59069.41377677144,59203.823126466195,max
132,ml-heuristic,32,49.78782606124878,59175.0,59180.0,8.449514152936207e-05,2142083.0,heuristic,1.0,1.0,1.0,1.0,1.0,59157.64838384942,59290.523875622814,max
133,ml-heuristic,33,130.72261786460876,59069.0,59074.0,8.464676903282602e-05,6075720.0,heuristic,0.9996784457080964,1.0,1.0,1.0,1.0,59081.01088134223,59219.82627379965,max
134,ml-heuristic,34,32.732726097106934,59190.0,59193.0,5.0684237202230105e-05,1360799.0,heuristic,1.0,1.0,1.0,1.0,1.0,59143.23414106734,59275.920682982185,max
135,ml-heuristic,35,74.79323601722717,59183.0,59187.0,6.758697598972678e-05,3171433.0,heuristic,1.0,1.0,1.0,1.0,1.0,59171.487583005524,59304.98468958104,max
136,ml-heuristic,36,141.71325087547302,59166.0,59171.0,8.450799445627557e-05,6659190.0,heuristic,1.0,1.0,1.0,1.0,1.0,59158.08309355407,59296.56640928705,max
137,ml-heuristic,37,104.27815413475037,59155.0,59160.0,8.452370890034655e-05,4686611.0,heuristic,0.9992061079017601,1.0,1.0,1.0007945228636634,1.0,59175.74869590455,59307.463498356396,max
138,ml-heuristic,38,47.3746600151062,59212.0,59216.0,6.755387421468622e-05,1969556.0,heuristic,1.0,1.0,1.0,1.0,1.0,59166.58497608687,59301.825104164076,max
139,ml-heuristic,39,45.22795009613037,59143.0,59147.0,6.763268687756793e-05,1981567.0,heuristic,1.0,1.0,1.0,1.0,1.0,59114.89526526199,59248.56148321837,max
140,ml-heuristic,40,77.46431112289429,59158.0,59163.0,8.451942256330505e-05,3400043.0,heuristic,1.0,1.0,1.0,1.0,1.0,59155.37108495968,59280.88309711401,max
141,ml-heuristic,41,91.57522082328796,59170.0,59175.0,8.450228156160216e-05,4108854.0,heuristic,1.0,1.0,1.0,1.0,1.0,59151.28565990848,59290.71555008509,max
142,ml-heuristic,42,154.93997287750244,59089.0,59094.0,8.461811843151856e-05,8013650.0,heuristic,1.0,1.0,1.0,1.0,1.0,59118.99827325628,59249.97235571583,max
143,ml-heuristic,43,17.417819023132324,59232.0,59236.0,6.75310642895732e-05,696457.0,heuristic,1.0,1.0,1.0,1.0,1.0,59161.307350621355,59293.05481512429,max
144,ml-heuristic,44,14.177363872528076,59201.0,59205.0,6.756642624279995e-05,588615.0,heuristic,1.0,1.0,1.0,1.0,1.0,59149.268537538504,59278.50295984831,max
145,ml-heuristic,45,193.06448602676392,59120.0,59125.0,8.457374830852504e-05,10512489.0,heuristic,0.9997463431132155,1.0,1.0,1.0,1.0,59157.45644175721,59292.66862156513,max
146,ml-heuristic,46,88.08861303329468,59152.0,59157.0,8.452799567216662e-05,4325506.0,heuristic,1.0,1.0,1.0,1.0,1.0,59168.404124939494,59297.86061218224,max
147,ml-heuristic,47,86.44172787666321,59121.0,59126.0,8.457231778894132e-05,4135744.0,heuristic,0.999966172217242,1.0,1.0,1.0,1.0,59155.45167739807,59284.172466642885,max
148,ml-heuristic,48,91.60550999641418,59183.0,59188.0,8.448371998715847e-05,4681469.0,heuristic,1.0,1.0,1.0,1.0,1.0,59191.94394017174,59332.08571744459,max
149,ml-heuristic,49,70.29827809333801,59135.0,59140.0,8.45522955948254e-05,3273768.0,heuristic,0.9997464074387151,1.0,1.0,1.0,1.0,59139.98187390398,59273.225027033564,max
39 37 baseline 37 622.9371509552003 59202.0 59207.0 8.445660619573664e-05 18595087.0 exact 1.0 1.0007944557133197 5.973803009115679 1.0000000000000002 3.967704381695003
40 38 baseline 38 557.924427986145 59212.0 59217.0 8.444234276835778e-05 16270407.0 exact 1.0 1.0000168873277493 11.776853444610294 1.25 8.26095170688216
41 39 baseline 39 900.0010092258452 59143.0 59185.0 0.0007101432122144633 26304077.0 exact 1.0 1.0006424670735625 19.899221771336656 10.5 13.274381840230484
42 40 baseline 40 900.0011250972748 59158.0 59242.995354791536 0.0014367516615088902 23949337.0 exact 1.0 1.0013521179587164 11.618267974647784 16.999070958308586 7.043833563281406
43 41 baseline 41 900.000893831253 59170.0 59257.0 0.0014703396991718775 24299427.0 exact 1.0 1.0013857203210816 9.82799588949917 17.4 5.913918333433118
44 42 baseline 42 900.0017001628876 59089.0 59228.0 0.002352383692396216 23229681.0 exact 1.0 1.002267573696145 5.808712131855351 27.799999999999997 2.898764108739463
45 43 baseline 43 127.60789799690248 59232.0 59237.0 8.44138303619665e-05 4041704.0 exact 1.0 1.0000168816260382 7.326284526634964 1.25 5.803235519206498
46 44 baseline 44 166.38699293136597 59201.0 59206.0 8.445803280349994e-05 5151689.0 exact 1.0 1.0000168904653324 11.73610231262945 1.25 8.75222174086627
47 45 baseline 45 900.0007989406586 59135.0 59247.0 0.001893971421324089 26922402.0 exact 1.0 1.0020634249471458 4.661659000381325 22.394318085736028 2.560992168457917
48 46 baseline 46 900.001415014267 59152.0 59254.0 0.001724371111712199 26485728.0 exact 1.0 1.001639704515104 10.21700063178535 20.4 6.123151372348114
49 47 baseline 47 900.0020279884337 59123.0 59235.0 0.0018943558344468312 28222784.0 exact 1.0 1.0018435206169876 10.41166170663056 22.39924225766622 6.824112904473778
50 48 baseline 48 900.0011022090912 59176.0 59284.0 0.0018250642152223876 28675410.0 exact 0.9998817227920179 1.0016219503953505 9.824748557639392 21.602555089901312 6.1253016948312595
51 49 baseline 49 900.0012428760529 59150.0 59206.0 0.0009467455621301775 30531240.0 exact 1.0 1.001115995941833 12.802607222912103 11.197159763313609 9.326024324264884
52 50 ml-exact 0 649.376060962677 59162.0 59167.0 8.451370812345763e-05 18101461.0 exact 1.0 1.0004734608295711 8.532721264142948 1.0 5.445815649649917 59126.38771406158 59263.992667692604 max
53 51 ml-exact 1 900.0008749961853 59137.0 59256.99395246509 0.0020290842021930574 23342139.0 exact 1.0 1.0019443703707194 5.256439629875021 23.998790493018166 2.8907182021033684 59159.91471896955 59292.24515179818 max
54 52 ml-exact 2 900.0023529529572 59186.0 59272.0 0.0014530463285236373 24785817.0 exact 1.0 1.0013684512848238 6.615469580587714 17.2 4.085840034868203 59194.00902156645 59323.12664303628 max
55 53 ml-exact 3 900.0030598640442 59145.0 59228.0 0.0014033307971933384 24207954.0 exact 1.0 1.0013186813186814 9.839176914197518 16.599999999999998 6.010390586749109 59141.813764752675 59274.22541262452 max
56 54 ml-exact 4 900.0010681152344 59142.0 59214.0 0.0012174089479557674 24895987.0 exact 1.0 1.0011331384387514 11.236687763725765 14.462810920901886 7.469020917529618 59144.93070046487 59273.654326628006 max
57 55 ml-exact 5 900.0023910999298 59126.0 59241.96239661033 0.0019612758618937826 23775703.0 exact 1.0 1.0020799133376805 10.760164398831064 28.98520564396274 5.926781054105736 59145.04845907292 59279.36037916677 max
58 56 ml-exact 6 900.0027949810028 59125.0 59236.0 0.0018773784355179705 23994400.0 exact 1.0 1.0017926602401488 5.582934738875933 22.200000000000003 3.2873391191217904 59136.974634353304 59268.30857737715 max
59 57 ml-exact 7 900.0025460720062 59105.0 59212.0 0.001810337534895525 24113420.0 exact 1.0 1.001725596345796 1.7540919149292407 21.400000000000002 2.397315308132119 59125.024194597165 59260.190615193496 max
60 58 ml-exact 8 900.0025360584259 59169.0 59247.0 0.0013182578715205597 26072662.0 exact 1.0 1.0012505703614825 9.026104392444157 19.5 6.081660406018433 59155.83957873982 59292.27671868388 max
61 59 ml-exact 9 900.0029451847076 59130.0 59260.0 0.002198545577541011 23411285.0 exact 1.0 1.0022154949348037 7.880288248067729 25.99736174530695 4.231232189722303 59169.22723451526 59303.04692137199 max
62 60 ml-exact 10 900.002711057663 59127.0 59199.0 0.0012177177939012634 25692461.0 exact 1.0 1.001149989007458 13.042406855599213 18.0 8.550390388271678 59122.74947289353 59256.99939978048 max
63 61 ml-exact 11 900.0020880699158 59198.0 59271.0 0.0012331497685732625 25044283.0 exact 1.0 1.0014699918896999 4.941349676471181 14.59531403087942 3.042150631885348 59194.32665087494 59329.026081343145 max
64 62 ml-exact 12 900.00151014328 59102.0 59222.0 0.0020303881425332475 23860011.0 exact 1.0 1.001945624037762 5.378478055192067 24.0 2.9275210656269923 59122.0422679371 59259.06427666924 max
65 63 ml-exact 13 900.0019900798798 59150.0 59203.96517006869 0.0009123443798594788 26120169.0 exact 1.0 1.0008446625768113 12.451861238399319 13.491292517172042 8.462489899830945 59136.5588570761 59273.99511476752 max
66 64 ml-exact 14 900.0015881061554 59169.0 59254.0 0.0014365630651185586 25996193.0 exact 1.0 1.001538123489343 10.28658759195445 21.246408592337204 6.8466401908701435 59154.03941864104 59289.795816422404 max
67 65 ml-exact 15 900.0014340877533 59083.0 59198.0 0.001946414366230557 23870719.0 exact 1.0 1.0018616301110208 7.486705329259161 23.0 4.2256494328382725 59098.66203099143 59228.46969562256 max
68 66 ml-exact 16 900.0031027793884 59126.0 59230.0 0.001758955451070595 23581309.0 exact 1.0 1.0016742487020345 10.47307790601788 20.799999999999997 5.888301034790237 59145.19863458289 59278.22282794401 max
69 67 ml-exact 17 900.001091003418 59156.0 59189.0 0.0005578470484819798 25974343.0 exact 1.0 1.000557847048482 12.267031842372049 8.249442152951518 7.930031690398847 59127.14368529404 59267.37100160227 max
70 68 ml-exact 18 900.0016350746155 59118.0 59207.0 0.00150546364897324 24423315.0 exact 1.0 1.001420766875835 19.84718914391357 17.8 12.477127094628871 59117.71413049835 59253.828178881624 max
71 69 ml-exact 19 765.9083199501038 59159.0 59164.0 8.451799388089724e-05 22220414.0 exact 1.0 1.0 17.160548234580467 1.0 11.892915444124142 59144.031479967074 59274.181598455296 max
72 70 ml-exact 20 900.0015428066254 59068.0 59191.0 0.0020823457709758246 23475405.0 exact 1.0 1.0020823457709758 4.55391082948923 24.597917654229025 2.471899801493286 59082.22244394464 59224.2971810249 max
73 71 ml-exact 21 428.18276500701904 59175.0 59180.0 8.449514152936207e-05 12999314.0 exact 1.0 1.0000168978860744 8.91430318224869 1.25 6.442266072691428 59137.98908684254 59265.43858161565 max
74 72 ml-exact 22 900.0029540061951 59121.0 59157.0 0.0006089206880803775 26135751.0 exact 1.0 1.0005243040286844 14.040402909173055 7.199999999999999 9.33172387888638 59089.5855153968 59226.64120575328 max
75 73 ml-exact 23 287.76060605049133 59193.0 59198.0 8.44694474008751e-05 7976958.0 exact 1.0 1.0000337860666262 10.641189358576659 1.6666666666666665 6.883209178350868 59140.59510257357 59271.35823619222 max
76 74 ml-exact 24 900.0017418861389 59162.0 59248.0 0.0014536357797234711 25158901.0 exact 1.0 1.001639870839039 8.166047787627152 17.195348365504884 5.026430067835795 59150.82751230766 59285.37521566199 max
77 75 ml-exact 25 900.0040528774261 59096.0 59209.0 0.001912142953837823 25156445.0 exact 1.0 1.0018273802473732 4.510717859885376 22.6 2.6684138620141735 59123.050715683305 59257.270874657275 max
78 76 ml-exact 26 900.0013389587402 59089.0 59199.0 0.0018615986054934083 23531336.0 exact 1.0 1.0018276894958622 8.894245949833698 27.499069200697253 4.878379350513735 59118.02329883662 59245.3612305393 max
79 77 ml-exact 27 900.0014967918396 59153.0 59246.0 0.0015721941406183963 25053692.0 exact 1.0 1.001487541837114 10.091980240263075 18.599999999999998 6.263332181683365 59150.16240797118 59285.64193535254 max
80 78 ml-exact 28 900.001298904419 59112.0 59215.0 0.0017424550006766815 24700106.0 exact 1.0 1.0018780454791554 4.691858582488349 20.59546961699824 2.756842408849359 59105.239747395564 59244.63088727762 max
81 79 ml-exact 29 900.0012950897217 59182.0 59264.0 0.0013855564191815079 24368468.0 exact 1.0 1.001300961359758 7.564487588846076 16.4 4.493808591920299 59171.61319933031 59313.94235456237 max
82 80 ml-exact 30 900.0028159618378 59134.0 59240.0 0.0017925389792674265 24191195.0 exact 1.0 1.001707840849524 9.031835574105251 21.2 5.160347916977751 59148.03866371211 59281.43814639009 max
83 81 ml-exact 31 900.0012450218201 59082.0 59151.0 0.0011678683863105513 27104772.0 exact 1.0 1.0010836987334635 10.044831086499027 13.890208219422876 7.002353254836392 59069.41377677144 59203.823126466195 max
84 82 ml-exact 32 900.0074319839478 59175.0 59197.964519141264 0.0003880780589989647 25690960.0 exact 1.0 1.0003035572683552 18.076857400376596 4.592903828252747 11.99344749946664 59157.64838384942 59290.523875622814 max
85 83 ml-exact 33 900.0013158321381 59088.0 59177.0 0.0015062279989168698 24765342.0 exact 1.0 1.0017435758540136 6.884817107658309 17.794276333604117 4.076116410894511 59081.01088134223 59219.82627379965 max
86 84 ml-exact 34 239.74270606040955 59190.0 59195.0 8.447372867038352e-05 7385996.0 exact 1.0 1.0000337877789605 7.324251128646419 1.6666666666666667 5.427690643511643 59143.23414106734 59275.920682982185 max
87 85 ml-exact 35 900.0009059906006 59183.0 59259.953906894 0.0013002704643901015 23480392.0 exact 1.0 1.0012326001806815 12.033185805509241 19.238476723499844 7.403716868683651 59171.487583005524 59304.98468958104 max
88 86 ml-exact 36 900.0016748905182 59166.0 59246.0 0.0013521279113004091 25394548.0 exact 1.0 1.0012675128018793 6.350864646252256 16.0 3.813458994262065 59158.08309355407 59296.56640928705 max
89 87 ml-exact 37 662.0322141647339 59202.0 59207.0 8.445660619573663e-05 20024242.0 exact 1.0 1.0007944557133197 6.348714355925809 1.0 4.272648615385403 59175.74869590455 59307.463498356396 max
90 88 ml-exact 38 446.4717228412628 59212.0 59217.0 8.444234276835777e-05 13956868.0 exact 1.0 1.0000168873277493 9.424272864415235 1.25 7.086301684237463 59166.58497608687 59301.825104164076 max
91 89 ml-exact 39 900.00270819664 59143.0 59184.0 0.0006932350404950712 26147788.0 exact 1.0 1.000625560045311 19.899259335957453 10.249999999999998 13.195510421802544 59114.89526526199 59248.56148321837 max
92 90 ml-exact 40 900.0016450881958 59158.0 59249.0 0.0015382534906521518 24805820.0 exact 1.0 1.0014536112097088 11.618274687299241 18.2 7.2957371421479085 59155.37108495968 59280.88309711401 max
93 91 ml-exact 41 900.002336025238 59170.0 59245.0 0.0012675342234240324 25030081.0 exact 1.0 1.0011829319814112 9.82801163823526 15.0 6.09174261241699 59151.28565990848 59290.71555008509 max
94 92 ml-exact 42 900.0009651184082 59089.0 59214.0 0.002115452960787964 22815014.0 exact 1.0 1.0020306630114733 5.808707387795664 24.999999999999996 2.8470190237906574 59118.99827325628 59249.97235571583 max
95 93 ml-exact 43 155.91705012321472 59232.0 59237.0 8.44138303619665e-05 4841747.0 exact 1.0 1.0000168816260382 8.951582854095786 1.25 6.951968319652182 59161.307350621355 59293.05481512429 max
96 94 ml-exact 44 166.24281811714172 59201.0 59206.0 8.445803280349994e-05 5152172.0 exact 1.0 1.0000168904653324 11.72593294577673 1.25 8.753042311188128 59149.268537538504 59278.50295984831 max
97 95 ml-exact 45 900.0014069080353 59135.0 59246.0 0.0018770609622051238 26599239.0 exact 1.0 1.002046511627907 4.661662149419189 22.194368817113382 2.5302513039490457 59157.45644175721 59292.66862156513 max
98 96 ml-exact 46 900.0008680820465 59152.0 59255.0 0.0017412767108466324 25313316.0 exact 1.0 1.0016566086853627 10.21699442289862 20.599999999999998 5.852105164112592 59168.404124939494 59297.86061218224 max
99 97 ml-exact 47 900.0009942054749 59123.0 59235.0 0.0018943558344468312 26222277.0 exact 1.0 1.0018435206169876 10.411649747325901 22.39924225766622 6.340401388480525 59155.45167739807 59284.172466642885 max
100 98 ml-exact 48 900.0013608932495 59176.0 59288.0 0.00189265918615655 27293583.0 exact 0.9998817227920179 1.0016895316618233 9.82475138153239 22.40264972286062 5.830132165779587 59191.94394017174 59332.08571744459 max
101 99 ml-exact 49 900.0012040138245 59150.0 59203.0 0.0008960270498732037 30493377.0 exact 1.0 1.0010652688535677 12.802606670093036 10.59731191885038 9.314458752116828 59139.98187390398 59273.225027033564 max
102 100 ml-heuristic 0 76.10421586036682 59134.0 59139.0 8.455372543714276e-05 3323921.0 heuristic 0.9995267232345086 1.0 1.0 1.000473500862448 1.0 59126.38771406158 59263.992667692604 max
103 101 ml-heuristic 1 171.21872186660767 59137.0 59142.0 8.454943605526151e-05 8074858.0 heuristic 1.0 1.0 1.0 1.0 1.0 59159.91471896955 59292.24515179818 max
104 102 ml-heuristic 2 136.04512000083923 59186.0 59191.0 8.447943770486263e-05 6066272.0 heuristic 1.0 1.0 1.0 1.0 1.0 59194.00902156645 59323.12664303628 max
105 103 ml-heuristic 3 91.47137689590454 59145.0 59150.0 8.453799983092401e-05 4027684.0 heuristic 1.0 1.0 1.0 1.0 1.0 59141.813764752675 59274.22541262452 max
106 104 ml-heuristic 4 80.09487199783325 59142.0 59146.97828536885 8.417512713219175e-05 3333233.0 heuristic 1.0 1.0 1.0 1.0 1.0 59144.93070046487 59273.654326628006 max
107 105 ml-heuristic 5 83.6420669555664 59115.0 59119.0 6.766472130592912e-05 4011571.0 heuristic 0.999813956634983 1.0 1.0 1.0 1.0 59145.04845907292 59279.36037916677 max
108 106 ml-heuristic 6 161.20603895187378 59125.0 59130.0 8.456659619450317e-05 7299034.0 heuristic 1.0 1.0 1.0 1.0 1.0 59136.974634353304 59268.30857737715 max
109 107 ml-heuristic 7 513.0874490737915 59105.0 59110.0 8.459521191100583e-05 10058510.0 heuristic 1.0 1.0 1.0 1.0 1.0 59125.024194597165 59260.190615193496 max
110 108 ml-heuristic 8 99.7110710144043 59169.0 59173.0 6.760296777028511e-05 4287096.0 heuristic 1.0 1.0 1.0 1.0 1.0 59155.83957873982 59292.27671868388 max
111 109 ml-heuristic 9 114.2093939781189 59124.0 59129.0 8.456802652053312e-05 5532971.0 heuristic 0.999898528665652 1.0 1.0 1.0 1.0 59169.22723451526 59303.04692137199 max
112 110 ml-heuristic 10 69.00587606430054 59127.0 59131.0 6.765098855007019e-05 3004829.0 heuristic 1.0 1.0 1.0 1.0 1.0 59122.74947289353 59256.99939978048 max
113 111 ml-heuristic 11 182.13689517974854 59179.0 59184.0 8.448943037226044e-05 8232427.0 heuristic 0.9996790432109193 1.0 1.0 1.0 1.0 59194.32665087494 59329.026081343145 max
114 112 ml-heuristic 12 167.33386301994324 59102.0 59107.0 8.459950593888532e-05 8150244.0 heuristic 1.0 1.0 1.0 1.0 1.0 59122.0422679371 59259.06427666924 max
115 113 ml-heuristic 13 72.27851104736328 59150.0 59154.0 6.76246830092984e-05 3086582.0 heuristic 1.0 1.0 1.0 1.0 1.0 59136.5588570761 59273.99511476752 max
116 114 ml-heuristic 14 87.49272584915161 59159.0 59163.0 6.761439510471779e-05 3796927.0 heuristic 0.9998309925805743 1.0 1.0 1.0 1.0 59154.03941864104 59289.795816422404 max
117 115 ml-heuristic 15 120.21328401565552 59083.0 59088.0 8.462671157524161e-05 5649006.0 heuristic 1.0 1.0 1.0 1.0 1.0 59098.66203099143 59228.46969562256 max
118 116 ml-heuristic 16 85.93491911888123 59126.0 59131.0 8.456516591685553e-05 4004773.0 heuristic 1.0 1.0 1.0 1.0 1.0 59145.19863458289 59278.22282794401 max
119 117 ml-heuristic 17 73.36747002601624 59152.0 59156.0 6.76223965377333e-05 3275440.0 heuristic 0.9999323821759416 1.0 1.0 1.0 1.0 59127.14368529404 59267.37100160227 max
120 118 ml-heuristic 18 45.34655404090881 59118.0 59123.0 8.457660949287865e-05 1957447.0 heuristic 1.0 1.0 1.0 1.0 1.0 59117.71413049835 59253.828178881624 max
121 119 ml-heuristic 19 44.6319260597229 59159.0 59164.0 8.451799388089724e-05 1868374.0 heuristic 1.0 1.0 1.0 1.0 1.0 59144.031479967074 59274.181598455296 max
122 120 ml-heuristic 20 197.63266706466675 59063.0 59068.0 8.465536799688468e-05 9496908.0 heuristic 0.9999153517979278 1.0 1.0 1.0 1.0 59082.22244394464 59224.2971810249 max
123 121 ml-heuristic 21 48.03322887420654 59175.0 59179.0 6.759611322348965e-05 2017817.0 heuristic 1.0 1.0 1.0 1.0 1.0 59137.98908684254 59265.43858161565 max
124 122 ml-heuristic 22 64.1009349822998 59121.0 59126.0 8.457231778894132e-05 2800742.0 heuristic 1.0 1.0 1.0 1.0 1.0 59089.5855153968 59226.64120575328 max
125 123 ml-heuristic 23 27.042146921157837 59193.0 59196.0 5.068166844052506e-05 1158901.0 heuristic 1.0 1.0 1.0 1.0 1.0 59140.59510257357 59271.35823619222 max
126 124 ml-heuristic 24 110.21264696121216 59146.0 59151.0 8.453657052040713e-05 5005322.0 heuristic 0.9997295561340049 1.0 1.0 1.0 1.0 59150.82751230766 59285.37521566199 max
127 125 ml-heuristic 25 199.52568101882935 59096.0 59101.0 8.460809530255854e-05 9427490.0 heuristic 1.0 1.0 1.0 1.0 1.0 59123.050715683305 59257.270874657275 max
128 126 ml-heuristic 26 101.18916702270508 59087.0 59091.0 6.769678609508014e-05 4823597.0 heuristic 0.9999661527526273 1.0 1.0 1.0 1.0 59118.02329883662 59245.3612305393 max
129 127 ml-heuristic 27 89.17987108230591 59153.0 59158.0 8.452656669991379e-05 4000058.0 heuristic 1.0 1.0 1.0 1.0 1.0 59150.16240797118 59285.64193535254 max
130 128 ml-heuristic 28 191.82191514968872 59099.0 59104.0 8.46038004027141e-05 8959564.0 heuristic 0.9997800784950602 1.0 1.0 1.0 1.0 59105.239747395564 59244.63088727762 max
131 129 ml-heuristic 29 118.9771659374237 59182.0 59187.0 8.448514751106756e-05 5422676.0 heuristic 1.0 1.0 1.0 1.0 1.0 59171.61319933031 59313.94235456237 max
132 130 ml-heuristic 30 99.64783000946045 59134.0 59139.0 8.455372543714276e-05 4687900.0 heuristic 1.0 1.0 1.0 1.0 1.0 59148.03866371211 59281.43814639009 max
133 131 ml-heuristic 31 89.59844493865967 59082.0 59086.96752812557 8.407853704291519e-05 3870809.0 heuristic 1.0 1.0 1.0 1.0 1.0 59069.41377677144 59203.823126466195 max
134 132 ml-heuristic 32 49.78782606124878 59175.0 59180.0 8.449514152936207e-05 2142083.0 heuristic 1.0 1.0 1.0 1.0 1.0 59157.64838384942 59290.523875622814 max
135 133 ml-heuristic 33 130.72261786460876 59069.0 59074.0 8.464676903282602e-05 6075720.0 heuristic 0.9996784457080964 1.0 1.0 1.0 1.0 59081.01088134223 59219.82627379965 max
136 134 ml-heuristic 34 32.732726097106934 59190.0 59193.0 5.0684237202230105e-05 1360799.0 heuristic 1.0 1.0 1.0 1.0 1.0 59143.23414106734 59275.920682982185 max
137 135 ml-heuristic 35 74.79323601722717 59183.0 59187.0 6.758697598972678e-05 3171433.0 heuristic 1.0 1.0 1.0 1.0 1.0 59171.487583005524 59304.98468958104 max
138 136 ml-heuristic 36 141.71325087547302 59166.0 59171.0 8.450799445627557e-05 6659190.0 heuristic 1.0 1.0 1.0 1.0 1.0 59158.08309355407 59296.56640928705 max
139 137 ml-heuristic 37 104.27815413475037 59155.0 59160.0 8.452370890034655e-05 4686611.0 heuristic 0.9992061079017601 1.0 1.0 1.0007945228636634 1.0 59175.74869590455 59307.463498356396 max
140 138 ml-heuristic 38 47.3746600151062 59212.0 59216.0 6.755387421468622e-05 1969556.0 heuristic 1.0 1.0 1.0 1.0 1.0 59166.58497608687 59301.825104164076 max
141 139 ml-heuristic 39 45.22795009613037 59143.0 59147.0 6.763268687756793e-05 1981567.0 heuristic 1.0 1.0 1.0 1.0 1.0 59114.89526526199 59248.56148321837 max
142 140 ml-heuristic 40 77.46431112289429 59158.0 59163.0 8.451942256330505e-05 3400043.0 heuristic 1.0 1.0 1.0 1.0 1.0 59155.37108495968 59280.88309711401 max
143 141 ml-heuristic 41 91.57522082328796 59170.0 59175.0 8.450228156160216e-05 4108854.0 heuristic 1.0 1.0 1.0 1.0 1.0 59151.28565990848 59290.71555008509 max
144 142 ml-heuristic 42 154.93997287750244 59089.0 59094.0 8.461811843151856e-05 8013650.0 heuristic 1.0 1.0 1.0 1.0 1.0 59118.99827325628 59249.97235571583 max
145 143 ml-heuristic 43 17.417819023132324 59232.0 59236.0 6.75310642895732e-05 696457.0 heuristic 1.0 1.0 1.0 1.0 1.0 59161.307350621355 59293.05481512429 max
146 144 ml-heuristic 44 14.177363872528076 59201.0 59205.0 6.756642624279995e-05 588615.0 heuristic 1.0 1.0 1.0 1.0 1.0 59149.268537538504 59278.50295984831 max
147 145 ml-heuristic 45 193.06448602676392 59120.0 59125.0 8.457374830852504e-05 10512489.0 heuristic 0.9997463431132155 1.0 1.0 1.0 1.0 59157.45644175721 59292.66862156513 max
148 146 ml-heuristic 46 88.08861303329468 59152.0 59157.0 8.452799567216662e-05 4325506.0 heuristic 1.0 1.0 1.0 1.0 1.0 59168.404124939494 59297.86061218224 max
149 147 ml-heuristic 47 86.44172787666321 59121.0 59126.0 8.457231778894132e-05 4135744.0 heuristic 0.999966172217242 1.0 1.0 1.0 1.0 59155.45167739807 59284.172466642885 max
150 148 ml-heuristic 48 91.60550999641418 59183.0 59188.0 8.448371998715847e-05 4681469.0 heuristic 1.0 1.0 1.0 1.0 1.0 59191.94394017174 59332.08571744459 max
151 149 ml-heuristic 49 70.29827809333801 59135.0 59140.0 8.45522955948254e-05 3273768.0 heuristic 0.9997464074387151 1.0 1.0 1.0 1.0 59139.98187390398 59273.225027033564 max

Binary image file not shown (97 KiB).

@@ -1,51 +0,0 @@
,Solver,Instance,Wallclock Time,Lower Bound,Upper Bound,Gap,Nodes,Relative Lower Bound,Relative Upper Bound,Relative Wallclock Time,Relative Gap,Relative Nodes
0,baseline,0,89.5249240398407,8160.106459602758,8160.106459602758,0.0,50428.0,1.0,1.0,1.0,,1.0
1,baseline,1,68.46735715866089,8329.665354500348,8329.665354500348,0.0,36735.0,1.0,1.0,1.0,,1.0
2,baseline,2,131.6971151828766,8247.871141626507,8247.871141626507,0.0,77216.0,1.0,1.0,1.0,,1.0
3,baseline,3,32.94829607009888,8386.859108879815,8386.859108879815,0.0,17422.0,1.0,1.0,1.0,,1.0
4,baseline,4,80.09613800048828,8197.045478427175,8197.045478427175,0.0,47823.0,1.0,1.0,1.0,,1.0
5,baseline,5,70.24885201454163,8184.416683317542,8184.416683317542,0.0,37633.0,1.0,1.0,1.0,,1.0
6,baseline,6,76.99211096763611,8146.291920190363,8146.291920190363,0.0,38061.0,1.0,1.0,1.0,,1.0
7,baseline,7,90.94351601600647,8332.628442208696,8332.628442208696,0.0,49185.0,1.0,1.0,1.0,,1.0
8,baseline,8,91.29237294197083,8189.394992049158,8189.394992049159,1.110576181336875e-16,52509.0,1.0,1.0,1.0,1.0,1.0
9,baseline,9,59.57663106918335,8264.94306032112,8264.94306032112,0.0,35568.0,1.0,1.0,1.0,,1.0
10,baseline,10,74.4443690776825,8225.694775199881,8225.694775199881,0.0,38905.0,1.0,1.0,1.0,,1.0
11,baseline,11,47.8407769203186,8380.21322380759,8380.21322380759,0.0,32029.0,1.0,1.0,1.0,,1.0
12,baseline,12,40.67424297332764,8335.12040209855,8335.120402098551,2.182319289698426e-16,31346.0,1.0,1.0,1.0,1.0,1.0
13,baseline,13,82.80278611183167,8180.950128996085,8180.950128996086,1.1117225840912633e-16,45396.0,1.0,1.0,1.0,1.0,1.0
14,baseline,14,89.99744701385498,8335.244300219336,8335.244300219336,0.0,47528.0,1.0,1.0,1.0,,1.0
15,baseline,15,72.18464493751526,8281.242353501702,8281.242353501702,0.0,38504.0,1.0,1.0,1.0,,1.0
16,baseline,16,42.17434501647949,8269.820198565656,8269.820198565656,0.0,23531.0,1.0,1.0,1.0,,1.0
17,baseline,17,65.91456389427185,8349.788875581982,8349.788875581982,0.0,35240.0,1.0,1.0,1.0,,1.0
18,baseline,18,49.87329697608948,8354.975512102363,8354.975512102363,0.0,31665.0,1.0,1.0,1.0,,1.0
19,baseline,19,80.3313570022583,8148.698058722395,8148.698058722395,0.0,48047.0,1.0,1.0,1.0,,1.0
20,baseline,20,34.744563817977905,8254.22546708772,8254.22546708772,0.0,19831.0,1.0,1.0,1.0,,1.0
21,baseline,21,40.45663404464722,8337.747084077018,8337.747084077018,0.0,18857.0,1.0,1.0,1.0,,1.0
22,baseline,22,59.21903705596924,8372.097133312143,8372.097133312143,0.0,37278.0,1.0,1.0,1.0,,1.0
23,baseline,23,80.84772300720215,8163.180180623385,8163.180180623385,0.0,50384.0,1.0,1.0,1.0,,1.0
24,baseline,24,79.59622597694397,8251.926305990946,8251.926305990948,2.2043209501583402e-16,45222.0,1.0,1.0,1.0,1.0,1.0
25,baseline,25,43.39374899864197,8208.77608322561,8208.77608322561,0.0,28242.0,1.0,1.0,1.0,,1.0
26,baseline,26,73.40401291847229,8263.930518826672,8263.930518826672,0.0,41508.0,1.0,1.0,1.0,,1.0
27,baseline,27,68.43603801727295,8198.51655526816,8198.51655526816,0.0,34134.0,1.0,1.0,1.0,,1.0
28,baseline,28,38.52493691444397,8429.328796791307,8429.328796791307,0.0,23191.0,1.0,1.0,1.0,,1.0
29,baseline,29,63.41107797622681,8471.392061275592,8471.392061275594,2.1472142835423904e-16,36104.0,1.0,1.0,1.0,1.0,1.0
30,baseline,30,73.6661651134491,8300.292335288888,8300.292335288888,0.0,39931.0,1.0,1.0,1.0,,1.0
31,baseline,31,34.113643169403076,8472.780799342136,8472.780799342136,0.0,17604.0,1.0,1.0,1.0,,1.0
32,baseline,32,63.027442932128906,8176.089207977811,8176.089207977811,0.0,35832.0,1.0,1.0,1.0,,1.0
33,baseline,33,54.692622900009155,8349.997774829048,8349.997774829048,0.0,36893.0,1.0,1.0,1.0,,1.0
34,baseline,34,73.5447518825531,8228.164027545597,8228.164027545597,0.0,46086.0,1.0,1.0,1.0,,1.0
35,baseline,35,32.710362911224365,8348.576374334334,8348.576374334334,0.0,17965.0,1.0,1.0,1.0,,1.0
36,baseline,36,70.76628684997559,8200.622970997243,8200.622970997245,2.2181112459126466e-16,37770.0,1.0,1.0,1.0,1.0,1.0
37,baseline,37,36.678386926651,8449.787502150532,8449.787502150532,0.0,20885.0,1.0,1.0,1.0,,1.0
38,baseline,38,86.8393452167511,8323.602064698229,8323.602064698229,0.0,50488.0,1.0,1.0,1.0,,1.0
39,baseline,39,61.66756081581116,8230.716290385615,8230.716290385615,0.0,34925.0,1.0,1.0,1.0,,1.0
40,baseline,40,115.80898809432983,8028.769787381955,8028.769787381955,0.0,69443.0,1.0,1.0,1.0,,1.0
41,baseline,41,59.32002782821655,8214.630250558439,8214.630250558439,0.0,36252.0,1.0,1.0,1.0,,1.0
42,baseline,42,27.367344856262207,8482.332346423325,8482.332346423327,2.1444448640506932e-16,10937.0,1.0,1.0,1.0,1.0,1.0
43,baseline,43,42.98321795463562,8350.150643446867,8350.150643446867,0.0,31065.0,1.0,1.0,1.0,,1.0
44,baseline,44,64.18663907051086,8325.739376420757,8325.739376420757,0.0,37466.0,1.0,1.0,1.0,,1.0
45,baseline,45,63.78522491455078,8320.79317232281,8320.793440026451,3.217285123971039e-08,38840.0,1.0,1.0,1.0,1.0,1.0
46,baseline,46,31.455862998962402,8341.756982876166,8341.756982876166,0.0,16130.0,1.0,1.0,1.0,,1.0
47,baseline,47,39.206948041915894,8206.985832918781,8206.985832918781,0.0,25335.0,1.0,1.0,1.0,,1.0
48,baseline,48,62.641757011413574,8197.315974091358,8197.315974091358,0.0,54514.0,1.0,1.0,1.0,,1.0
49,baseline,49,49.18351912498474,8090.681320538064,8090.681320538064,0.0,38800.0,1.0,1.0,1.0,,1.0


@@ -1,151 +0,0 @@
,Solver,Instance,Wallclock Time,Lower Bound,Upper Bound,Gap,Nodes,Relative Lower Bound,Relative Upper Bound,Relative Wallclock Time,Relative Gap,Relative Nodes
0,baseline,0,89.5249240398407,8160.106459602757,8160.106459602757,0.0,50428.0,0.9999999999999999,1.0,924.2902114943435,,50428.0
1,baseline,1,68.46735715866089,8329.665354500348,8329.665354500348,0.0,36735.0,1.0,1.0090376984767917,344.32872346548237,,816.3333333333334
2,baseline,2,131.6971151828766,8247.871141626507,8247.871141626507,0.0,77216.0,1.0,1.0022162274368718,953.573952433317,,3676.9523809523807
3,baseline,3,32.94829607009888,8386.859108879815,8386.859108879815,0.0,17422.0,1.0,1.0,355.8521179348526,,17422.0
4,baseline,4,80.09613800048828,8197.045478427175,8197.045478427175,0.0,47823.0,1.0,1.0,311.613064562208,,1707.9642857142858
5,baseline,5,70.24885201454164,8184.416683317541,8184.416683317541,0.0,37633.0,0.9999999999999999,1.0,525.1624903084369,,4181.444444444444
6,baseline,6,76.99211096763611,8146.291920190362,8146.291920190362,0.0,38061.0,0.9999999999999999,1.0,769.5512234529302,,38061.0
7,baseline,7,90.94351601600648,8332.628442208696,8332.628442208696,0.0,49185.0,1.0,1.0048882560944687,958.3075896894786,,49185.0
8,baseline,8,91.29237294197084,8189.394992049158,8189.394992049159,1.1105761813368749e-16,52509.0,1.0,1.0000000000000002,1809.7036902252514,inf,52509.0
9,baseline,9,59.57663106918335,8264.94306032112,8264.94306032112,0.0,35568.0,1.0,1.0,592.7777627536799,,35568.0
10,baseline,10,74.44436907768251,8225.694775199881,8225.694775199881,0.0,38905.0,1.0,1.0023427358501626,585.1124155571589,,3536.818181818182
11,baseline,11,47.8407769203186,8380.21322380759,8380.21322380759,0.0,32029.0,1.0,1.0,544.8893215589155,,32029.0
12,baseline,12,40.674242973327644,8335.12040209855,8335.120402098551,2.1823192896984264e-16,31346.0,1.0,1.0019781200731899,345.41153746477056,inf,1362.8695652173913
13,baseline,13,82.80278611183168,8180.950128996085,8180.950128996086,1.111722584091263e-16,45396.0,1.0,1.001565824091247,307.7965272971074,inf,648.5142857142857
14,baseline,14,89.99744701385498,8335.244300219336,8335.244300219336,0.0,47528.0,1.0,1.0,770.7049722222789,,6789.714285714285
15,baseline,15,72.18464493751527,8281.242353501702,8281.242353501702,0.0,38504.0,1.0,1.0,1800.7002920237667,,38504.0
16,baseline,16,42.17434501647949,8269.820198565656,8269.820198565656,0.0,23531.0,1.0,1.000403224416854,1410.6559487069069,,23531.0
17,baseline,17,65.91456389427185,8349.788875581982,8349.788875581982,0.0,35240.0,1.0,1.0,1182.0078197481776,,35240.0
18,baseline,18,49.87329697608948,8354.975512102363,8354.975512102363,0.0,31665.0,1.0,1.0,843.6224093499328,,31665.0
19,baseline,19,80.3313570022583,8148.698058722395,8148.698058722395,0.0,48047.0,1.0,1.000966789196668,580.6596204121938,,1779.5185185185185
20,baseline,20,34.744563817977905,8254.22546708772,8254.22546708772,0.0,19831.0,1.0,1.004384894337412,508.78858964332596,,19831.0
21,baseline,21,40.45663404464722,8337.747084077018,8337.747084077018,0.0,18857.0,1.0,1.0,462.30308297552364,,18857.0
22,baseline,22,59.21903705596924,8372.097133312143,8372.097133312143,0.0,37278.0,1.0,1.0,514.5235539407097,,1433.7692307692307
23,baseline,23,80.84772300720216,8163.180180623385,8163.180180623385,0.0,50384.0,1.0,1.0036539041416717,1353.8923034540035,,50384.0
24,baseline,24,79.59622597694397,8251.926305990946,8251.926305990948,2.20432095015834e-16,45222.0,1.0,1.002542812972684,327.00203830964284,inf,674.955223880597
25,baseline,25,43.39374899864197,8208.77608322561,8208.77608322561,0.0,28242.0,1.0,1.0,203.83643836690354,,641.8636363636364
26,baseline,26,73.4040129184723,8263.930518826672,8263.930518826672,0.0,41508.0,1.0,1.0,1158.157296819456,,41508.0
27,baseline,27,68.43603801727295,8198.51655526816,8198.51655526816,0.0,34134.0,1.0,1.0,465.76709499137564,,2007.8823529411766
28,baseline,28,38.52493691444397,8429.328796791307,8429.328796791307,0.0,23191.0,1.0,1.0,212.3627067143475,,1449.4375
29,baseline,29,63.411077976226814,8471.392061275592,8471.392061275594,2.1472142835423904e-16,36104.0,1.0,1.00713405678795,422.9829560183529,inf,1504.3333333333333
30,baseline,30,73.6661651134491,8300.292335288888,8300.292335288888,0.0,39931.0,1.0,1.0,752.156311010492,,3327.5833333333335
31,baseline,31,34.113643169403076,8472.780799342136,8472.780799342136,0.0,17604.0,0.9999999880202372,1.000000008626077,605.9064481022414,,17604.0
32,baseline,32,63.0274429321289,8176.089207977811,8176.089207977811,0.0,35832.0,1.0,1.0,938.911818608021,,35832.0
33,baseline,33,54.692622900009155,8349.997774829048,8349.997774829048,0.0,36893.0,1.0,1.0,470.66514905927494,,36893.0
34,baseline,34,73.54475188255309,8228.164027545597,8228.164027545597,0.0,46086.0,1.0,1.0,433.74928744081916,,2425.5789473684213
35,baseline,35,32.710362911224365,8348.576374334334,8348.576374334334,0.0,17965.0,1.0,1.0,544.8268431962766,,17965.0
36,baseline,36,70.7662868499756,8200.622970997243,8200.622970997245,2.2181112459126463e-16,37770.0,1.0,1.0039798679142482,716.773814957293,inf,37770.0
37,baseline,37,36.678386926651,8449.787502150532,8449.787502150532,0.0,20885.0,1.0,1.0,239.07327432143046,,835.4
38,baseline,38,86.8393452167511,8323.602064698229,8323.602064698229,0.0,50488.0,1.0,1.0,791.2621177625805,,50488.0
39,baseline,39,61.66756081581116,8230.716290385615,8230.716290385615,0.0,34925.0,1.0,1.0,310.21299967617745,,1343.2692307692307
40,baseline,40,115.80898809432985,8028.769787381955,8028.769787381955,0.0,69443.0,1.0,1.0059686353257602,962.877706084664,,69443.0
41,baseline,41,59.320027828216546,8214.630250558439,8214.630250558439,0.0,36252.0,1.0,1.0,279.402791487412,,2132.470588235294
42,baseline,42,27.36734485626221,8482.332346423325,8482.332346423327,2.1444448640506927e-16,10937.0,1.0,1.0,296.90942199552,1.0,10937.0
43,baseline,43,42.98321795463562,8350.150643446867,8350.150643446867,0.0,31065.0,1.0,1.0,786.6613272710613,,31065.0
44,baseline,44,64.18663907051085,8325.739376420757,8325.739376420757,0.0,37466.0,1.0,1.0,601.5284689994793,,37466.0
45,baseline,45,63.78522491455078,8320.793172322808,8320.793440026451,3.2172851239710385e-08,38840.0,0.9999999999999998,1.0000000321728513,716.7763545319855,inf,38840.0
46,baseline,46,31.4558629989624,8341.756982876166,8341.756982876166,0.0,16130.0,1.0,1.0,613.653265116279,,16130.0
47,baseline,47,39.20694804191589,8206.985832918781,8206.985832918781,0.0,25335.0,1.0,1.0,280.93451131197753,,25335.0
48,baseline,48,62.641757011413574,8197.315974091358,8197.315974091358,0.0,54514.0,1.0,1.0073005569898068,261.05295308194985,,746.7671232876712
49,baseline,49,49.18351912498474,8090.681320538064,8090.681320538064,0.0,38800.0,1.0,1.0026687299831225,375.8371656618988,,38800.0
50,ml-exact,0,34.49193096160889,8160.106459602758,8160.106459602758,0.0,32951.0,1.0,1.0000000000000002,356.1081397753119,,32951.0
51,ml-exact,1,39.43942403793335,8329.665354500348,8329.665354500348,0.0,33716.0,1.0,1.0090376984767917,198.34454105955817,,749.2444444444444
52,ml-exact,2,54.0330810546875,8247.871141626507,8247.871141626507,0.0,48098.0,1.0,1.0022162274368718,391.2351351957892,,2290.3809523809523
53,ml-exact,3,15.311645030975342,8386.859108879815,8386.859108879815,0.0,10140.0,1.0,1.0,165.37065533668084,,10140.0
54,ml-exact,4,24.112047910690308,8197.045478427175,8197.045478427175,0.0,32151.0,1.0,1.0,93.80763330031203,,1148.25
55,ml-exact,5,33.69559407234192,8184.416683317542,8184.416683317542,0.0,31637.0,1.0,1.0000000000000002,251.89966224345207,,3515.222222222222
56,ml-exact,6,25.395578861236572,8146.291920190363,8146.291920190363,0.0,18684.0,1.0,1.0000000000000002,253.83378293361804,,18684.0
57,ml-exact,7,45.65329885482788,8332.628442208696,8332.628442208696,0.0,34261.0,1.0,1.0048882560944687,481.0667621344588,,34261.0
58,ml-exact,8,45.959444999694824,8189.394992049158,8189.394992049158,0.0,32915.0,1.0,1.0,911.0616203340486,,32915.0
59,ml-exact,9,27.292019844055176,8264.94306032112,8264.94306032112,0.0,22256.0,1.0,1.0,271.551146378204,,22256.0
60,ml-exact,10,33.28360414505005,8225.694775199881,8225.694775199883,2.2113504734336021e-16,32743.0,1.0,1.0023427358501629,261.6000412259086,inf,2976.6363636363635
61,ml-exact,11,13.287060976028442,8380.21322380759,8380.21322380759,0.0,15760.0,1.0,1.0,151.33486759210984,,15760.0
62,ml-exact,12,30.385483980178833,8335.12040209855,8335.12040209855,0.0,26800.0,1.0,1.0019781200731896,258.03791222585767,,1165.2173913043478
63,ml-exact,13,53.78090000152588,8180.950128996085,8180.950128996085,0.0,38849.0,1.0,1.0015658240912468,199.91566748763452,,554.9857142857143
64,ml-exact,14,32.64224600791931,8335.244300219336,8335.244300219336,0.0,30763.0,1.0,1.0,279.53616616406106,,4394.714285714285
65,ml-exact,15,33.97071599960327,8281.242353501702,8281.242353501702,0.0,30903.0,1.0,1.0,847.425075979707,,30903.0
66,ml-exact,16,34.40068793296814,8269.820198565656,8269.820198565656,0.0,25773.0,1.0,1.000403224416854,1150.6411078414953,,25773.0
67,ml-exact,17,29.94601798057556,8349.788875581982,8349.788875581982,0.0,26524.0,1.0,1.0,537.0046516599328,,26524.0
68,ml-exact,18,26.21188998222351,8354.975512102363,8354.975512102363,0.0,23595.0,1.0,1.0,443.3823132050057,,23595.0
69,ml-exact,19,44.91053318977356,8148.698058722395,8148.698058722396,1.1161227170509796e-16,36233.0,1.0,1.0009667891966683,324.6270712662061,inf,1341.962962962963
70,ml-exact,20,24.929107904434204,8254.22546708772,8254.225467087721,2.20370695082023e-16,19171.0,1.0,1.0043848943374123,365.0541051029243,inf,19171.0
71,ml-exact,21,23.808892011642456,8337.747084077018,8337.74708407702,2.1816317827865812e-16,16213.0,1.0,1.0000000000000002,272.0672255399839,inf,16213.0
72,ml-exact,22,29.496449947357178,8372.097133312143,8372.097133312145,2.1726807209488663e-16,23405.0,1.0,1.0000000000000002,256.27938261145164,inf,900.1923076923077
73,ml-exact,23,54.53324818611145,8163.180180623385,8163.180180623385,0.0,44205.0,1.0,1.0036539041416717,913.2247916857979,,44205.0
74,ml-exact,24,35.66223120689392,8251.926305990946,8251.926305990948,2.2043209501583402e-16,30816.0,1.0,1.002542812972684,146.50973902584275,inf,459.94029850746267
75,ml-exact,25,29.14737296104431,8208.77608322561,8208.776083225612,2.2159081757180675e-16,25610.0,1.0,1.0000000000000002,136.9159574646799,inf,582.0454545454545
76,ml-exact,26,27.671333074569702,8263.930518826672,8263.930518826672,0.0,19654.0,1.0,1.0,436.59406398705966,,19654.0
77,ml-exact,27,33.428922176361084,8198.51655526816,8198.51655526816,0.0,34427.0,1.0,1.0,227.5130533834623,,2025.1176470588234
78,ml-exact,28,26.386518955230713,8429.328796791307,8429.328796791307,0.0,21051.0,1.0,1.0,145.45157072019325,,1315.6875
79,ml-exact,29,32.452534914016724,8471.392061275592,8471.392061275594,2.1472142835423904e-16,25241.0,1.0,1.00713405678795,216.47430679803114,inf,1051.7083333333333
80,ml-exact,30,33.65191102027893,8300.292335288888,8300.292335288888,0.0,27290.0,1.0,1.0,343.59732466710483,,2274.1666666666665
81,ml-exact,31,26.8163058757782,8472.780900844042,8472.780900844042,0.0,17085.0,1.0,1.00000002060584,476.29543885799944,,17085.0
82,ml-exact,32,27.298824071884155,8176.089207977811,8176.089207977811,0.0,21127.0,1.0,1.0,406.66711773146375,,21127.0
83,ml-exact,33,27.07152795791626,8349.997774829048,8349.997774829048,0.0,20768.0,1.0,1.0,232.96788608711708,,20768.0
84,ml-exact,34,51.8715980052948,8228.164027545597,8228.164027545597,0.0,42602.0,1.0,1.0,305.92622991159624,,2242.2105263157896
85,ml-exact,35,26.559547185897827,8348.576374334334,8348.576374334334,0.0,15315.0,1.0,1.0,442.37828511067517,,15315.0
86,ml-exact,36,42.17573404312134,8200.622970997243,8200.622970997245,2.2181112459126466e-16,32284.0,1.0,1.0039798679142482,427.1873392594525,inf,32284.0
87,ml-exact,37,20.249451875686646,8449.787502150532,8449.787502150532,0.0,13815.0,1.0,1.0,131.9878862943405,,552.6
88,ml-exact,38,34.309616804122925,8323.602064698229,8323.602064698229,0.0,31546.0,1.0,1.0,312.62211828396147,,31546.0
89,ml-exact,39,28.144772052764893,8230.716290385615,8230.716290385615,0.0,22759.0,1.0,1.0,141.57969032969933,,875.3461538461538
90,ml-exact,40,54.61736702919006,8028.769787381955,8028.769787381955,0.0,46343.0,1.0,1.0059686353257602,454.1084931561159,,46343.0
91,ml-exact,41,30.99381184577942,8214.630250558439,8214.630250558439,0.0,28492.0,1.0,1.0,145.98370677815547,,1676.0
92,ml-exact,42,19.046553134918213,8482.332346423325,8482.332346423327,2.1444448640506932e-16,9292.0,1.0,1.0,206.6368188802036,1.0000000000000002,9292.0
93,ml-exact,43,29.105360984802246,8350.150643446867,8350.150643446867,0.0,24245.0,1.0,1.0,532.674448133975,,24245.0
94,ml-exact,44,28.813607215881348,8325.739376420757,8325.739376420757,0.0,22941.0,1.0,1.0,270.0282377440192,,22941.0
95,ml-exact,45,39.90794801712036,8320.79317232281,8320.79317232281,0.0,33304.0,1.0,1.0,448.459240127851,,33304.0
96,ml-exact,46,23.966022968292236,8341.756982876166,8341.756982876166,0.0,16789.0,1.0,1.0,467.53853953488374,,16789.0
97,ml-exact,47,27.642159938812256,8206.985832918781,8206.985832918781,0.0,24637.0,1.0,1.0,198.0678701569822,,24637.0
98,ml-exact,48,33.94082999229431,8197.315974091358,8197.315974091358,0.0,33963.0,1.0,1.0073005569898068,141.44484960609344,,465.24657534246575
99,ml-exact,49,37.428488969802856,8090.681320538064,8090.681320538064,0.0,39891.0,1.0,1.0026687299831225,286.0107910064622,,39891.0
100,ml-heuristic,0,0.09685802459716797,8160.106459602758,8160.106459602758,0.0,1.0,1.0,1.0000000000000002,1.0,,1.0
101,ml-heuristic,1,0.19884300231933594,8255.05862375065,8255.05862375065,0.0,45.0,0.9910432499296759,1.0,1.0,,1.0
102,ml-heuristic,2,0.1381089687347412,8229.632404496291,8229.632404496293,2.210292409357245e-16,21.0,0.9977886733658864,1.0,1.0,inf,1.0
103,ml-heuristic,3,0.0925898551940918,8386.859108879815,8386.859108879815,0.0,1.0,1.0,1.0,1.0,,1.0
104,ml-heuristic,4,0.2570371627807617,8197.045478427175,8197.045478427175,0.0,28.0,1.0,1.0,1.0,,1.0
105,ml-heuristic,5,0.13376593589782715,8184.416683317542,8184.416683317542,0.0,9.0,1.0,1.0000000000000002,1.0,,1.0
106,ml-heuristic,6,0.10004806518554688,8146.291920190363,8146.291920190363,0.0,1.0,1.0,1.0000000000000002,1.0,,1.0
107,ml-heuristic,7,0.09490013122558594,8292.094560437725,8292.094560437725,0.0,1.0,0.9951355227162599,1.0,1.0,,1.0
108,ml-heuristic,8,0.0504460334777832,8189.394992049158,8189.39499204916,2.22115236267375e-16,1.0,1.0,1.0000000000000002,1.0,inf,1.0
109,ml-heuristic,9,0.10050415992736816,8264.94306032112,8265.728238597903,9.500105095127722e-05,1.0,1.0,1.0000950010509513,1.0,inf,1.0
110,ml-heuristic,10,0.12723088264465332,8206.469185635438,8206.469185635438,0.0,11.0,0.9976627397332555,1.0,1.0,,1.0
111,ml-heuristic,11,0.087799072265625,8380.21322380759,8380.213223807592,2.1705765175261136e-16,1.0,1.0,1.0000000000000002,1.0,inf,1.0
112,ml-heuristic,12,0.11775588989257812,8318.665083714313,8318.665083714313,0.0,23.0,0.9980257851608126,1.0,1.0,,1.0
113,ml-heuristic,13,0.26901793479919434,8168.160226931591,8168.160226931591,0.0,70.0,0.9984366238807443,1.0,1.0,,1.0
114,ml-heuristic,14,0.11677289009094238,8335.244300219336,8335.244300219336,0.0,7.0,1.0,1.0,1.0,,1.0
115,ml-heuristic,15,0.040086984634399414,8281.242353501702,8281.242353501702,0.0,1.0,1.0,1.0,1.0,,1.0
116,ml-heuristic,16,0.029896974563598633,8266.48694918614,8266.48694918614,0.0,1.0,0.9995969381075426,1.0,1.0,,1.0
117,ml-heuristic,17,0.05576491355895996,8349.788875581982,8349.788875581982,0.0,1.0,1.0,1.0,1.0,,1.0
118,ml-heuristic,18,0.059118032455444336,8354.975512102363,8354.975512102363,0.0,1.0,1.0,1.0,1.0,,1.0
119,ml-heuristic,19,0.13834500312805176,8140.8275945520445,8140.8275945520445,0.0,27.0,0.9990341445819156,1.0,1.0,,1.0
120,ml-heuristic,20,0.06828880310058594,8218.189574160206,8218.189574160206,0.0,1.0,0.9956342490193416,1.0,1.0,,1.0
121,ml-heuristic,21,0.08751106262207031,8337.747084077018,8337.747084077018,0.0,1.0,1.0,1.0,1.0,,1.0
122,ml-heuristic,22,0.11509490013122559,8372.097133312143,8372.097133312143,0.0,26.0,1.0,1.0,1.0,,1.0
123,ml-heuristic,23,0.05971503257751465,8133.46129271979,8133.46129271979,0.0,1.0,0.9963593982680748,1.0,1.0,,1.0
124,ml-heuristic,24,0.24341201782226562,8230.99642151221,8230.99642151221,0.0,67.0,0.9974636365252632,1.0,1.0,,1.0
125,ml-heuristic,25,0.21288514137268066,8208.77608322561,8208.77608322561,0.0,44.0,1.0,1.0,1.0,,1.0
126,ml-heuristic,26,0.06338000297546387,8263.930518826672,8263.930518826672,0.0,1.0,1.0,1.0,1.0,,1.0
127,ml-heuristic,27,0.14693188667297363,8198.51655526816,8198.51655526816,0.0,17.0,1.0,1.0,1.0,,1.0
128,ml-heuristic,28,0.1814110279083252,8429.328796791307,8429.328796791307,0.0,16.0,1.0,1.0,1.0,,1.0
129,ml-heuristic,29,0.14991402626037598,8411.384764698932,8411.384764698932,0.0,24.0,0.99291647746408,1.0,1.0,,1.0
130,ml-heuristic,30,0.09793996810913086,8300.292335288888,8300.292335288888,0.0,12.0,1.0,1.0,1.0,,1.0
131,ml-heuristic,31,0.05630183219909668,8472.780726255278,8472.780726255278,0.0,1.0,0.9999999793941604,1.0,1.0,,1.0
132,ml-heuristic,32,0.06712818145751953,8176.089207977811,8176.089207977811,0.0,1.0,1.0,1.0,1.0,,1.0
133,ml-heuristic,33,0.11620283126831055,8349.997774829048,8349.997774829048,0.0,1.0,1.0,1.0,1.0,,1.0
134,ml-heuristic,34,0.1695559024810791,8228.164027545597,8228.164027545597,0.0,19.0,1.0,1.0,1.0,,1.0
135,ml-heuristic,35,0.060038089752197266,8348.576374334334,8348.576374334334,0.0,1.0,1.0,1.0,1.0,,1.0
136,ml-heuristic,36,0.09872889518737793,8168.114952378382,8168.114952378382,0.0,1.0,0.9960359086457419,1.0,1.0,,1.0
137,ml-heuristic,37,0.15341901779174805,8449.787502150532,8449.787502150532,0.0,25.0,1.0,1.0,1.0,,1.0
138,ml-heuristic,38,0.10974788665771484,8323.602064698229,8323.602064698229,0.0,1.0,1.0,1.0,1.0,,1.0
139,ml-heuristic,39,0.1987910270690918,8230.716290385615,8230.716290385615,0.0,26.0,1.0,1.0,1.0,,1.0
140,ml-heuristic,40,0.12027382850646973,7981.13331314949,7981.13331314949,0.0,1.0,0.9940667779131829,1.0,1.0,,1.0
141,ml-heuristic,41,0.2123100757598877,8214.630250558439,8214.630250558439,0.0,17.0,1.0,1.0,1.0,,1.0
142,ml-heuristic,42,0.09217405319213867,8482.332346423325,8482.332346423327,2.1444448640506932e-16,1.0,1.0,1.0,1.0,1.0000000000000002,1.0
143,ml-heuristic,43,0.05464005470275879,8350.150643446867,8350.150643446867,0.0,1.0,1.0,1.0,1.0,,1.0
144,ml-heuristic,44,0.10670590400695801,8325.739376420757,8325.739376420757,0.0,1.0,1.0,1.0,1.0,,1.0
145,ml-heuristic,45,0.0889890193939209,8320.79317232281,8320.79317232281,0.0,1.0,1.0,1.0,1.0,,1.0
146,ml-heuristic,46,0.05125999450683594,8341.756982876166,8341.756982876166,0.0,1.0,1.0,1.0,1.0,,1.0
147,ml-heuristic,47,0.13955903053283691,8206.985832918781,8206.985832918781,0.0,1.0,1.0,1.0,1.0,,1.0
148,ml-heuristic,48,0.2399580478668213,8137.904736782855,8137.904736782855,0.0,73.0,0.9927523548566044,1.0,1.0,,1.0
149,ml-heuristic,49,0.13086390495300293,8069.146946144667,8069.146946144667,0.0,1.0,0.9973383731801755,1.0,1.0,,1.0
106 104 ml-heuristic 4 0.2570371627807617 8197.045478427175 8197.045478427175 0.0 28.0 1.0 1.0 1.0 1.0
107 105 ml-heuristic 5 0.13376593589782715 8184.416683317542 8184.416683317542 0.0 9.0 1.0 1.0000000000000002 1.0 1.0
108 106 ml-heuristic 6 0.10004806518554688 8146.291920190363 8146.291920190363 0.0 1.0 1.0 1.0000000000000002 1.0 1.0
109 107 ml-heuristic 7 0.09490013122558594 8292.094560437725 8292.094560437725 0.0 1.0 0.9951355227162599 1.0 1.0 1.0
110 108 ml-heuristic 8 0.0504460334777832 8189.394992049158 8189.39499204916 2.22115236267375e-16 1.0 1.0 1.0000000000000002 1.0 inf 1.0
111 109 ml-heuristic 9 0.10050415992736816 8264.94306032112 8265.728238597903 9.500105095127722e-05 1.0 1.0 1.0000950010509513 1.0 inf 1.0
112 110 ml-heuristic 10 0.12723088264465332 8206.469185635438 8206.469185635438 0.0 11.0 0.9976627397332555 1.0 1.0 1.0
113 111 ml-heuristic 11 0.087799072265625 8380.21322380759 8380.213223807592 2.1705765175261136e-16 1.0 1.0 1.0000000000000002 1.0 inf 1.0
114 112 ml-heuristic 12 0.11775588989257812 8318.665083714313 8318.665083714313 0.0 23.0 0.9980257851608126 1.0 1.0 1.0
115 113 ml-heuristic 13 0.26901793479919434 8168.160226931591 8168.160226931591 0.0 70.0 0.9984366238807443 1.0 1.0 1.0
116 114 ml-heuristic 14 0.11677289009094238 8335.244300219336 8335.244300219336 0.0 7.0 1.0 1.0 1.0 1.0
117 115 ml-heuristic 15 0.040086984634399414 8281.242353501702 8281.242353501702 0.0 1.0 1.0 1.0 1.0 1.0
118 116 ml-heuristic 16 0.029896974563598633 8266.48694918614 8266.48694918614 0.0 1.0 0.9995969381075426 1.0 1.0 1.0
119 117 ml-heuristic 17 0.05576491355895996 8349.788875581982 8349.788875581982 0.0 1.0 1.0 1.0 1.0 1.0
120 118 ml-heuristic 18 0.059118032455444336 8354.975512102363 8354.975512102363 0.0 1.0 1.0 1.0 1.0 1.0
121 119 ml-heuristic 19 0.13834500312805176 8140.8275945520445 8140.8275945520445 0.0 27.0 0.9990341445819156 1.0 1.0 1.0
122 120 ml-heuristic 20 0.06828880310058594 8218.189574160206 8218.189574160206 0.0 1.0 0.9956342490193416 1.0 1.0 1.0
123 121 ml-heuristic 21 0.08751106262207031 8337.747084077018 8337.747084077018 0.0 1.0 1.0 1.0 1.0 1.0
124 122 ml-heuristic 22 0.11509490013122559 8372.097133312143 8372.097133312143 0.0 26.0 1.0 1.0 1.0 1.0
125 123 ml-heuristic 23 0.05971503257751465 8133.46129271979 8133.46129271979 0.0 1.0 0.9963593982680748 1.0 1.0 1.0
126 124 ml-heuristic 24 0.24341201782226562 8230.99642151221 8230.99642151221 0.0 67.0 0.9974636365252632 1.0 1.0 1.0
127 125 ml-heuristic 25 0.21288514137268066 8208.77608322561 8208.77608322561 0.0 44.0 1.0 1.0 1.0 1.0
128 126 ml-heuristic 26 0.06338000297546387 8263.930518826672 8263.930518826672 0.0 1.0 1.0 1.0 1.0 1.0
129 127 ml-heuristic 27 0.14693188667297363 8198.51655526816 8198.51655526816 0.0 17.0 1.0 1.0 1.0 1.0
130 128 ml-heuristic 28 0.1814110279083252 8429.328796791307 8429.328796791307 0.0 16.0 1.0 1.0 1.0 1.0
131 129 ml-heuristic 29 0.14991402626037598 8411.384764698932 8411.384764698932 0.0 24.0 0.99291647746408 1.0 1.0 1.0
132 130 ml-heuristic 30 0.09793996810913086 8300.292335288888 8300.292335288888 0.0 12.0 1.0 1.0 1.0 1.0
133 131 ml-heuristic 31 0.05630183219909668 8472.780726255278 8472.780726255278 0.0 1.0 0.9999999793941604 1.0 1.0 1.0
134 132 ml-heuristic 32 0.06712818145751953 8176.089207977811 8176.089207977811 0.0 1.0 1.0 1.0 1.0 1.0
135 133 ml-heuristic 33 0.11620283126831055 8349.997774829048 8349.997774829048 0.0 1.0 1.0 1.0 1.0 1.0
136 134 ml-heuristic 34 0.1695559024810791 8228.164027545597 8228.164027545597 0.0 19.0 1.0 1.0 1.0 1.0
137 135 ml-heuristic 35 0.060038089752197266 8348.576374334334 8348.576374334334 0.0 1.0 1.0 1.0 1.0 1.0
138 136 ml-heuristic 36 0.09872889518737793 8168.114952378382 8168.114952378382 0.0 1.0 0.9960359086457419 1.0 1.0 1.0
139 137 ml-heuristic 37 0.15341901779174805 8449.787502150532 8449.787502150532 0.0 25.0 1.0 1.0 1.0 1.0
140 138 ml-heuristic 38 0.10974788665771484 8323.602064698229 8323.602064698229 0.0 1.0 1.0 1.0 1.0 1.0
141 139 ml-heuristic 39 0.1987910270690918 8230.716290385615 8230.716290385615 0.0 26.0 1.0 1.0 1.0 1.0
142 140 ml-heuristic 40 0.12027382850646973 7981.13331314949 7981.13331314949 0.0 1.0 0.9940667779131829 1.0 1.0 1.0
143 141 ml-heuristic 41 0.2123100757598877 8214.630250558439 8214.630250558439 0.0 17.0 1.0 1.0 1.0 1.0
144 142 ml-heuristic 42 0.09217405319213867 8482.332346423325 8482.332346423327 2.1444448640506932e-16 1.0 1.0 1.0 1.0 1.0000000000000002 1.0
145 143 ml-heuristic 43 0.05464005470275879 8350.150643446867 8350.150643446867 0.0 1.0 1.0 1.0 1.0 1.0
146 144 ml-heuristic 44 0.10670590400695801 8325.739376420757 8325.739376420757 0.0 1.0 1.0 1.0 1.0 1.0
147 145 ml-heuristic 45 0.0889890193939209 8320.79317232281 8320.79317232281 0.0 1.0 1.0 1.0 1.0 1.0
148 146 ml-heuristic 46 0.05125999450683594 8341.756982876166 8341.756982876166 0.0 1.0 1.0 1.0 1.0 1.0
149 147 ml-heuristic 47 0.13955903053283691 8206.985832918781 8206.985832918781 0.0 1.0 1.0 1.0 1.0 1.0
150 148 ml-heuristic 48 0.2399580478668213 8137.904736782855 8137.904736782855 0.0 73.0 0.9927523548566044 1.0 1.0 1.0
151 149 ml-heuristic 49 0.13086390495300293 8069.146946144667 8069.146946144667 0.0 1.0 0.9973383731801755 1.0 1.0 1.0

Binary file not shown (image, 68 KiB).

@@ -1,51 +0,0 @@
,Solver,Instance,Wallclock Time,Lower Bound,Upper Bound,Gap,Nodes,Mode,Sense,Predicted LB,Predicted UB,Relative Lower Bound,Relative Upper Bound,Relative Wallclock Time,Relative Gap,Relative Nodes
0,baseline,0,29.597511053085327,13540.0,13540.0,0.0,1488.0,exact,min,,,1.0,1.0,1.0,,1.0
1,baseline,1,100.47623896598816,13567.0,13567.0,0.0,5209.0,exact,min,,,1.0,1.0,1.0,,1.0
2,baseline,2,95.63535189628601,13562.0,13562.0,0.0,5738.0,exact,min,,,1.0,1.0,1.0,,1.0
3,baseline,3,116.40385484695435,13522.0,13522.0,0.0,4888.0,exact,min,,,1.0,1.0,1.0,,1.0
4,baseline,4,52.82231903076172,13534.0,13534.0,0.0,2432.0,exact,min,,,1.0,1.0,1.0,,1.0
5,baseline,5,130.4400429725647,13532.0,13532.0,0.0,5217.0,exact,min,,,1.0,1.0,1.0,,1.0
6,baseline,6,138.90338110923767,13535.0,13535.0,0.0,5910.0,exact,min,,,1.0,1.0,1.0,,1.0
7,baseline,7,162.50647616386414,13613.0,13613.0,0.0,5152.0,exact,min,,,1.0,1.0,1.0,,1.0
8,baseline,8,135.88944792747498,13579.999997631374,13579.999997631372,-1.3394620057902246e-16,6720.0,exact,min,,,1.0,1.0,1.0,1.0,1.0
9,baseline,9,62.36928915977478,13583.999994506432,13583.999994506432,0.0,3583.0,exact,min,,,1.0,1.0,1.0,,1.0
10,baseline,10,248.86321592330933,13577.0,13578.0,7.365397363187744e-05,13577.0,exact,min,,,1.0,1.0,1.0,1.0,1.0
11,baseline,11,64.44093084335327,13574.999997985586,13574.999997985586,0.0,3149.0,exact,min,,,1.0,1.0,1.0,,1.0
12,baseline,12,74.64304614067078,13544.0,13544.0,0.0,4925.0,exact,min,,,1.0,1.0,1.0,,1.0
13,baseline,13,60.252323150634766,13534.0,13534.0,0.0,4007.0,exact,min,,,1.0,1.0,1.0,,1.0
14,baseline,14,151.05377101898193,13550.0,13551.0,7.380073800738008e-05,5389.0,exact,min,,,1.0,1.0,1.0,1.0,1.0
15,baseline,15,94.33260798454285,13593.0,13594.0,7.356727727506805e-05,4240.0,exact,min,,,1.0,1.0,1.0,1.0,1.0
16,baseline,16,112.65512180328369,13594.0,13594.0,0.0,5678.0,exact,min,,,1.0,1.0,1.0,,1.0
17,baseline,17,94.68812704086304,13543.0,13543.0,0.0,4110.0,exact,min,,,1.0,1.0,1.0,,1.0
18,baseline,18,119.84407782554626,13525.0,13525.0,0.0,4925.0,exact,min,,,1.0,1.0,1.0,,1.0
19,baseline,19,96.70060396194458,13564.0,13564.0,0.0,4242.0,exact,min,,,1.0,1.0,1.0,,1.0
20,baseline,20,206.73002099990845,13569.0,13569.0,0.0,5164.0,exact,min,,,1.0,1.0,1.0,,1.0
21,baseline,21,101.60346388816833,13566.0,13566.0,0.0,3797.0,exact,min,,,1.0,1.0,1.0,,1.0
22,baseline,22,39.24613690376282,13565.0,13565.0,0.0,1434.0,exact,min,,,1.0,1.0,1.0,,1.0
23,baseline,23,89.74621176719666,13580.0,13580.0,0.0,3758.0,exact,min,,,1.0,1.0,1.0,,1.0
24,baseline,24,69.45808696746826,13542.999999999996,13542.999999999998,1.343121467581671e-16,3608.0,exact,min,,,1.0,1.0,1.0,1.0,1.0
25,baseline,25,130.97386503219604,13542.0,13542.0,0.0,6687.0,exact,min,,,1.0,1.0,1.0,,1.0
26,baseline,26,98.3358142375946,13531.999999377458,13531.99999937746,1.3442132749257606e-16,5284.0,exact,min,,,1.0,1.0,1.0,1.0,1.0
27,baseline,27,101.37863302230835,13521.0,13522.0,7.395902669920864e-05,3512.0,exact,min,,,1.0,1.0,1.0,1.0,1.0
28,baseline,28,47.17776012420654,13571.0,13571.0,0.0,2742.0,exact,min,,,1.0,1.0,1.0,,1.0
29,baseline,29,122.19579315185547,13594.0,13594.9999861121,7.356084390904645e-05,5138.0,exact,min,,,1.0,1.0,1.0,1.0,1.0
30,baseline,30,159.65594601631165,13577.0,13577.0,0.0,5170.0,exact,min,,,1.0,1.0,1.0,,1.0
31,baseline,31,64.20995998382568,13582.0,13582.0,0.0,2716.0,exact,min,,,1.0,1.0,1.0,,1.0
32,baseline,32,73.25116801261902,13523.0,13524.0,7.394808844191378e-05,2705.0,exact,min,,,1.0,1.0,1.0,1.0,1.0
33,baseline,33,73.00323796272278,13548.0,13548.0,0.0,3823.0,exact,min,,,1.0,1.0,1.0,,1.0
34,baseline,34,75.30102896690369,13557.0,13557.0,0.0,2495.0,exact,min,,,1.0,1.0,1.0,,1.0
35,baseline,35,95.78053402900696,13567.999997100109,13567.999997100109,0.0,5380.0,exact,min,,,1.0,1.0,1.0,,1.0
36,baseline,36,59.77940106391907,13553.999999666667,13553.999999666667,0.0,2236.0,exact,min,,,1.0,1.0,1.0,,1.0
37,baseline,37,111.62521696090698,13532.0,13532.0,0.0,4730.0,exact,min,,,1.0,1.0,1.0,,1.0
38,baseline,38,101.59809303283691,13514.0,13514.0,0.0,4724.0,exact,min,,,1.0,1.0,1.0,,1.0
39,baseline,39,136.7306661605835,13538.0,13538.0,0.0,5301.0,exact,min,,,1.0,1.0,1.0,,1.0
40,baseline,40,96.18307614326477,13578.0,13578.0,0.0,5286.0,exact,min,,,1.0,1.0,1.0,,1.0
41,baseline,41,193.25571990013123,13526.0,13526.0,0.0,8946.0,exact,min,,,1.0,1.0,1.0,,1.0
42,baseline,42,98.80436420440674,13529.0,13529.0,0.0,2757.0,exact,min,,,1.0,1.0,1.0,,1.0
43,baseline,43,91.02266597747803,13565.0,13565.0,0.0,4119.0,exact,min,,,1.0,1.0,1.0,,1.0
44,baseline,44,44.981120109558105,13553.0,13553.0,0.0,1975.0,exact,min,,,1.0,1.0,1.0,,1.0
45,baseline,45,99.74598288536072,13521.0,13521.0,0.0,5262.0,exact,min,,,1.0,1.0,1.0,,1.0
46,baseline,46,70.65784502029419,13542.99999940547,13542.99999940547,0.0,3270.0,exact,min,,,1.0,1.0,1.0,,1.0
47,baseline,47,62.16441297531128,13564.0,13564.0,0.0,3631.0,exact,min,,,1.0,1.0,1.0,,1.0
48,baseline,48,190.54906916618347,13552.0,13552.0,0.0,9373.0,exact,min,,,1.0,1.0,1.0,,1.0
49,baseline,49,73.46178817749023,13524.0,13524.0,0.0,4053.0,exact,min,,,1.0,1.0,1.0,,1.0
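The benchmark CSVs above share a fixed schema (Solver, Instance, Wallclock Time, Lower Bound, Upper Bound, Gap, Nodes, ...), and the Gap column is consistent with (UB − LB) / LB. A minimal pandas sketch of how such a file might be summarized — the inline fragment reuses a few rows from the tables here, and the column subset and variable names are illustrative, not part of the repository:

```python
import io

import pandas as pd

# Tiny fragment with the same header and a few rows taken from the
# benchmark CSVs above (columns beyond Gap omitted for brevity).
csv_text = """,Solver,Instance,Wallclock Time,Lower Bound,Upper Bound,Gap
0,baseline,10,248.86321592330933,13577.0,13578.0,7.365397363187744e-05
1,ml-exact,10,14.660763025283813,13578.0,13578.0,0.0
2,ml-heuristic,10,2.931243896484375,13577.0,13578.0,7.365397363187744e-05
"""

# In practice this would be pd.read_csv("<results file>"); the path is
# hypothetical, so an in-memory buffer is used instead.
df = pd.read_csv(io.StringIO(csv_text), index_col=0)

# The stored Gap column agrees with (UB - LB) / LB.
gap = (df["Upper Bound"] - df["Lower Bound"]) / df["Lower Bound"]

# Mean wallclock time per solver: a typical first cut at these results.
summary = df.groupby("Solver")["Wallclock Time"].mean()
print(summary)
```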


@@ -1,151 +0,0 @@
,Solver,Instance,Wallclock Time,Lower Bound,Upper Bound,Gap,Nodes,Mode,Sense,Predicted LB,Predicted UB,Relative Lower Bound,Relative Upper Bound,Relative Wallclock Time,Relative Gap,Relative Nodes
0,baseline,0,29.597511053085327,13540.0,13540.0,0.0,1488.0,exact,min,,,1.0,1.0,26.86009340160744,,1488.0
1,baseline,1,100.47623896598816,13567.0,13567.0,0.0,5209.0,exact,min,,,1.0,1.0,199.7436260690364,,5209.0
2,baseline,2,95.635351896286,13562.0,13562.0,0.0,5738.0,exact,min,,,0.9999262699992627,1.0,52.66283965751092,,14.133004926108374
3,baseline,3,116.40385484695436,13522.0,13522.0,0.0,4888.0,exact,min,,,1.0,1.0,32.32019218153368,,4888.0
4,baseline,4,52.82231903076172,13534.0,13534.0,0.0,2432.0,exact,min,,,1.0,1.0,58.92863357290153,,10.57391304347826
5,baseline,5,130.4400429725647,13532.0,13532.0,0.0,5217.0,exact,min,,,1.0,1.0,77.40752944139346,,115.93333333333334
6,baseline,6,138.9033811092377,13535.0,13535.0,0.0,5910.0,exact,min,,,1.0,1.0000000000688192,77.01429912590677,,8.288920056100983
7,baseline,7,162.5064761638641,13613.0,13613.0,0.0,5152.0,exact,min,,,1.0,1.0,99.48098992115096,,20.363636363636363
8,baseline,8,135.88944792747498,13579.999997631374,13579.999997631372,-1.3394620057902246e-16,6720.0,exact,min,,,0.9999999998255799,1.0,59.017368737577044,1.0,11.893805309734514
9,baseline,9,62.36928915977478,13583.999994506434,13583.999994506434,0.0,3583.0,exact,min,,,0.9999999995955855,1.0,10.957478805772942,,13.941634241245136
10,baseline,10,248.86321592330933,13577.0,13578.0,7.365397363187744e-05,13577.0,exact,min,,,0.9999263514508764,1.0,84.90020779976263,inf,20.95216049382716
11,baseline,11,64.44093084335327,13574.999997985586,13574.999997985586,0.0,3149.0,exact,min,,,1.0,1.0,40.90397152451879,,3149.0
12,baseline,12,74.64304614067079,13544.0,13544.0,0.0,4925.0,exact,min,,,1.0,1.0,86.04782361214066,,4925.0
13,baseline,13,60.25232315063477,13534.0,13534.0,0.0,4007.0,exact,min,,,1.0,1.0,25.318911215893348,,4007.0
14,baseline,14,151.0537710189819,13550.0,13551.0,7.380073800738008e-05,5389.0,exact,min,,,0.9999262047081396,1.0,66.91125769399989,inf,74.84722222222223
15,baseline,15,94.33260798454285,13593.0,13594.0,7.356727727506805e-05,4240.0,exact,min,,,0.9999264381344711,1.0,38.51179927436251,inf,12.011331444759207
16,baseline,16,112.65512180328369,13594.0,13594.0,0.0,5678.0,exact,min,,,1.0,1.0,77.31705863554674,,14.159600997506235
17,baseline,17,94.68812704086305,13543.0,13543.0,0.0,4110.0,exact,min,,,1.0,1.0,42.677072694980595,,37.706422018348626
18,baseline,18,119.84407782554626,13525.0,13525.0,0.0,4925.0,exact,min,,,1.0,1.0,277.4617640532422,,4925.0
19,baseline,19,96.70060396194458,13564.0,13564.0,0.0,4242.0,exact,min,,,1.0,1.0,65.06829984584466,,212.1
20,baseline,20,206.73002099990845,13569.0,13569.0,0.0,5164.0,exact,min,,,1.0,1.0,165.1539992327885,,5164.0
21,baseline,21,101.60346388816832,13566.0,13566.0,0.0,3797.0,exact,min,,,1.0,1.0,70.84710384515891,,15.434959349593496
22,baseline,22,39.246136903762824,13565.0,13565.0,0.0,1434.0,exact,min,,,1.0,1.0,30.91644064571905,,1434.0
23,baseline,23,89.74621176719666,13580.0,13580.0,0.0,3758.0,exact,min,,,1.0,1.0,44.49311196559062,,67.10714285714286
24,baseline,24,69.45808696746826,13542.999999999995,13542.999999999998,1.343121467581671e-16,3608.0,exact,min,,,0.9999999999999996,1.0,42.55165506627291,inf,3608.0
25,baseline,25,130.97386503219604,13542.0,13542.0,0.0,6687.0,exact,min,,,1.0,1.0,96.75237348307111,,6687.0
26,baseline,26,98.33581423759459,13531.999999377458,13531.999999377458,1.3442132749257606e-16,5284.0,exact,min,,,0.9999999999539948,1.0,43.94880897162418,inf,5284.0
27,baseline,27,101.37863302230836,13521.0,13522.0,7.395902669920864e-05,3512.0,exact,min,,,0.9999260464428339,1.0,71.28055311506921,inf,3512.0
28,baseline,28,47.17776012420654,13571.0,13571.0,0.0,2742.0,exact,min,,,1.0,1.0,42.99836374991145,,182.8
29,baseline,29,122.19579315185548,13594.0,13594.9999861121,7.356084390904645e-05,5138.0,exact,min,,,0.9999264435454212,1.0,105.57202248105209,inf,5138.0
30,baseline,30,159.65594601631162,13577.0,13577.0,0.0,5170.0,exact,min,,,1.0,1.0,86.70719628520685,,206.8
31,baseline,31,64.20995998382568,13582.0,13582.0,0.0,2716.0,exact,min,,,1.0,1.0,22.238063526379513,,90.53333333333333
32,baseline,32,73.25116801261902,13523.0,13524.0,7.394808844191378e-05,2705.0,exact,min,,,0.9999260573794735,1.0,97.84536245319715,inf,40.984848484848484
33,baseline,33,73.00323796272278,13548.0,13548.0,0.0,3823.0,exact,min,,,1.0,1.0,35.56819949839414,,4.0115424973767055
34,baseline,34,75.30102896690369,13557.0,13557.0,0.0,2495.0,exact,min,,,1.0,1.0,33.9975137664807,,31.1875
35,baseline,35,95.78053402900696,13567.999997100107,13567.999997100107,0.0,5380.0,exact,min,,,0.9999999997862696,1.0,91.7838074273266,,12.089887640449438
36,baseline,36,59.77940106391907,13553.999999666668,13553.999999666668,0.0,2236.0,exact,min,,,0.9999999999754071,1.000000000068143,80.66210177722816,,17.746031746031747
37,baseline,37,111.62521696090698,13532.0,13532.0,0.0,4730.0,exact,min,,,1.0,1.0,44.52758005552085,,44.205607476635514
38,baseline,38,101.59809303283691,13514.0,13514.0,0.0,4724.0,exact,min,,,1.0,1.0,67.8739169651946,,4724.0
39,baseline,39,136.7306661605835,13538.0,13538.0,0.0,5301.0,exact,min,,,1.0,1.0,80.14397099282577,,35.10596026490066
40,baseline,40,96.18307614326477,13578.0,13578.0,0.0,5286.0,exact,min,,,1.0,1.0,51.5351421556022,,5286.0
41,baseline,41,193.25571990013125,13526.0,13526.0,0.0,8946.0,exact,min,,,1.0,1.0,76.43245706873643,,8946.0
42,baseline,42,98.80436420440674,13529.0,13529.0,0.0,2757.0,exact,min,,,0.9999999999999999,1.0,35.10803321142842,,58.659574468085104
43,baseline,43,91.02266597747804,13565.0,13565.0,0.0,4119.0,exact,min,,,1.0,1.0,12.480728782988091,,15.426966292134832
44,baseline,44,44.981120109558105,13553.0,13553.0,0.0,1975.0,exact,min,,,1.0,1.0,25.092447494113404,,1975.0
45,baseline,45,99.74598288536072,13521.0,13521.0,0.0,5262.0,exact,min,,,1.0,1.0,39.85221202580209,,5262.0
46,baseline,46,70.65784502029419,13542.99999940547,13542.99999940547,0.0,3270.0,exact,min,,,0.9999999999561006,1.0,45.453685199539756,,3270.0
47,baseline,47,62.16441297531128,13564.0,13564.0,0.0,3631.0,exact,min,,,1.0,1.0,20.033164659276355,,3631.0
48,baseline,48,190.54906916618347,13552.0,13552.0,0.0,9373.0,exact,min,,,1.0,1.0,103.71179496429484,,9373.0
49,baseline,49,73.46178817749023,13524.0,13524.0,0.0,4053.0,exact,min,,,1.0,1.0,26.241432260088718,,8.966814159292035
50,ml-exact,0,11.3649320602417,13540.0,13540.0,0.0,1.0,exact,min,13534.675569817877,13534.83622755677,1.0,1.0,10.31381105301398,,1.0
51,ml-exact,1,10.329864025115967,13567.0,13567.0,0.0,1.0,exact,min,13566.029921819729,13566.142424385062,1.0,1.0,20.535447170501705,,1.0
52,ml-exact,2,12.315430164337158,13562.0,13562.0,0.0,406.0,exact,min,13545.26825630499,13545.412645404165,0.9999262699992627,1.0,6.78165041689932,,1.0
53,ml-exact,3,12.996630907058716,13522.0,13522.0,0.0,37.0,exact,min,13513.490196843653,13513.683391861978,1.0,1.0,3.6085884714116796,,37.0
54,ml-exact,4,11.032249212265015,13534.0,13534.0,0.0,230.0,exact,min,13552.471283116225,13552.604609540394,1.0,1.0,12.307588595947369,,1.0
55,ml-exact,5,13.653040885925293,13532.0,13532.0,0.0,45.0,exact,min,13557.55577263004,13557.681290107144,1.0,1.0,8.102175836940628,,1.0
56,ml-exact,6,16.461652040481567,13535.0,13535.0,0.0,1805.0,exact,min,13536.370399655816,13536.528454412353,1.0,1.0000000000688192,9.127082323181316,,2.5315568022440393
57,ml-exact,7,13.48779296875,13613.0,13613.0,0.0,253.0,exact,min,13595.689443983643,13595.75639435777,1.0,1.0,8.256772456439219,,1.0
58,ml-exact,8,14.816275835037231,13580.0,13580.0,0.0,565.0,exact,min,13588.910124631891,13588.987486935437,1.0,1.0000000001744203,6.434771997460219,-0.0,1.0
59,ml-exact,9,14.60462999343872,13584.0,13584.0,0.0,257.0,exact,min,13569.84328895509,13569.949934810124,1.0,1.0000000004044145,2.565844917829724,,1.0
60,ml-exact,10,14.660763025283813,13578.0,13578.0,0.0,648.0,exact,min,13568.148459117152,13568.25770795454,1.0,1.0,5.0015500391718986,,1.0
61,ml-exact,11,10.747740983963013,13574.0,13574.999999323794,7.36702021360194e-05,1.0,exact,min,13564.758799441275,13564.873254243374,0.9999263353233345,1.000000000098579,6.822143712194244,inf,1.0
62,ml-exact,12,11.216827154159546,13544.0,13544.0,0.0,1.0,exact,min,13538.912644412721,13539.06679469573,1.0,1.0,12.930656160923881,,1.0
63,ml-exact,13,10.66540789604187,13534.0,13534.0,0.0,1.0,exact,min,13559.674309927463,13559.796573676624,1.0,1.0,4.4817610588402195,,1.0
64,ml-exact,14,12.637185096740723,13551.0,13551.0,0.0,72.0,exact,min,13548.657915980866,13548.797099115332,1.0,1.0,5.597807607388606,,1.0
65,ml-exact,15,15.559112071990967,13594.0,13594.0,0.0,353.0,exact,min,13560.52172484643,13560.642687104417,1.0,1.0,6.352091962749635,,1.0
66,ml-exact,16,14.185301065444946,13594.0,13594.0,0.0,500.0,exact,min,13552.89499057571,13553.027666254291,1.0,1.0,9.735604885812853,,1.2468827930174564
67,ml-exact,17,12.099143028259277,13543.0,13543.0,0.0,109.0,exact,min,13535.522984736846,13535.682340984562,1.0,1.0,5.453228643345678,,1.0
68,ml-exact,18,9.592709064483643,13525.0,13525.0,0.0,1.0,exact,min,13525.777713168703,13525.952036564957,1.0,1.0,22.208940377976713,,1.0
69,ml-exact,19,15.68299388885498,13564.0,13564.0,0.0,20.0,exact,min,13560.098017386947,13560.21963039052,1.0,1.0,10.55283738705663,,1.0
70,ml-exact,20,11.181609153747559,13569.0,13569.0,0.0,1.0,exact,min,13549.92903835932,13550.06626925702,1.0,1.0,8.932846137524376,,1.0
71,ml-exact,21,12.961982011795044,13566.0,13566.0,0.0,246.0,exact,min,13553.742405494679,13553.873779682082,1.0,1.0,9.038263563922284,,1.0
72,ml-exact,22,10.162704944610596,13564.0,13565.0,7.372456502506635e-05,1.0,exact,min,13551.200160737772,13551.335439398708,0.9999262808698858,1.0,8.005747546324356,inf,1.0
73,ml-exact,23,14.439340114593506,13580.0,13580.0,0.0,56.0,exact,min,13560.945432305916,13561.065743818312,1.0,1.0,7.158532530536033,,1.0
74,ml-exact,24,8.9430251121521,13543.0,13543.0,0.0,1.0,exact,min,13545.691963764473,13545.835702118062,1.0,1.0000000000000002,5.478707180627428,,1.0
75,ml-exact,25,11.13078498840332,13542.0,13542.0,0.0,1.0,exact,min,13528.31995792561,13528.490376848333,1.0,1.0,8.222479088427512,,1.0
76,ml-exact,26,10.45563006401062,13532.0,13532.0,0.0,1.0,exact,min,13538.488936953237,13538.643737981833,1.0,1.0000000000460052,4.67289046136253,,1.0
77,ml-exact,27,12.658456087112427,13522.0,13522.0,0.0,1.0,exact,min,13537.641522034268,13537.797624554041,1.0,1.0,8.900314835312852,,1.0
78,ml-exact,28,11.49683690071106,13571.0,13571.0,0.0,15.0,exact,min,13560.098017386947,13560.21963039052,1.0,1.0,10.478351955003774,,1.0
79,ml-exact,29,10.038163900375366,13594.0,13595.0,7.356186552890981e-05,1.0,exact,min,13571.961826252513,13572.065218379603,0.9999264435454212,1.0000000010215446,8.672551138007995,inf,1.0
80,ml-exact,30,10.994755983352661,13576.0,13577.0,7.365939893930465e-05,25.0,exact,min,13559.250602467977,13559.373516962729,0.9999263460263681,1.0,5.971117825195893,inf,1.0
81,ml-exact,31,12.409696102142334,13581.0,13582.0,7.363228039172373e-05,30.0,exact,min,13583.401927658593,13583.48774965479,0.9999263731409218,1.0,4.297894132499397,inf,1.0
82,ml-exact,32,11.40560007095337,13524.0,13524.0,0.0,98.0,exact,min,13535.522984736846,13535.682340984562,1.0,1.0,15.235048166691241,,1.4848484848484849
83,ml-exact,33,14.968575954437256,13548.0,13548.0,0.0,953.0,exact,min,13552.89499057571,13553.027666254291,1.0,1.0,7.292899748174851,,1.0
84,ml-exact,34,10.275269031524658,13557.0,13557.0,0.0,80.0,exact,min,13543.573426467052,13543.720418548583,1.0,1.0,4.63916104661852,,1.0
85,ml-exact,35,13.136114120483398,13568.0,13568.0,0.0,445.0,exact,min,13544.42084138602,13544.566531976374,1.0,1.0000000002137304,12.587970833538003,,1.0
86,ml-exact,36,11.606173992156982,13553.999998743056,13553.999998743056,0.0,164.0,exact,min,13525.354005709218,13525.528979851062,0.9999999999072641,1.0,15.660551479908223,,1.3015873015873016
87,ml-exact,37,15.051767110824585,13532.0,13532.0,0.0,107.0,exact,min,13539.336351872207,13539.489851409624,1.0,1.0,6.004187792432415,,1.0
88,ml-exact,38,10.445327997207642,13514.0,13514.0,0.0,1.0,exact,min,13529.591080304062,13529.75954699002,1.0,1.0,6.978136144027363,,1.0
89,ml-exact,39,12.747802019119263,13538.0,13538.0,0.0,151.0,exact,min,13537.217814574784,13537.374567840145,1.0,1.0,7.472058053477855,,1.0
90,ml-exact,40,14.315036058425903,13578.0,13578.0,0.0,1.0,exact,min,13566.877336738698,13566.988537812853,1.0,1.0,7.670033521642671,,1.0
91,ml-exact,41,10.27357292175293,13525.0,13526.0,7.393715341959335e-05,1.0,exact,min,13521.540638573859,13521.721469426,0.9999260683128789,1.0,4.063188513593281,inf,1.0
92,ml-exact,42,12.76089596748352,13529.0,13529.0,0.0,47.0,exact,min,13530.014787763548,13530.182603703915,0.9999999999999999,1.0,4.534313469262858,,1.0
93,ml-exact,43,16.610208988189697,13565.0,13565.0,0.0,267.0,exact,min,13560.945432305916,13561.065743818312,1.0,1.0,2.2775372615612164,,1.0
94,ml-exact,44,9.052951097488403,13553.0,13553.0,0.0,1.0,exact,min,13571.114411333543,13571.219104951811,1.0,1.0,5.050134356975125,,1.0
95,ml-exact,45,12.605960130691528,13521.0,13521.0,0.0,1.0,exact,min,13518.998393816952,13519.183129142624,1.0,1.0,5.036547652194804,,1.0
96,ml-exact,46,12.235252141952515,13543.0,13543.0,0.0,1.0,exact,min,13543.149719007566,13543.297361834688,1.0,1.0000000000438993,7.870849996027641,,1.0
97,ml-exact,47,11.854049921035767,13564.0,13564.0,0.0,1.0,exact,min,13567.301044198182,13567.41159452675,1.0,1.0,3.8200977469489614,,1.0
98,ml-exact,48,11.03400993347168,13551.999999999978,13552.000000000004,1.8791212846548136e-15,1.0,exact,min,13558.403187549009,13558.527403534938,0.9999999999999983,1.0000000000000002,6.005576310930073,inf,1.0
99,ml-exact,49,10.517628908157349,13524.0,13524.0,0.0,547.0,exact,min,13501.626387978087,13501.837803872895,1.0,1.0,3.757023254910798,,1.2101769911504425
100,ml-heuristic,0,1.1019139289855957,13540.0,13540.0,0.0,787.0,heuristic,min,13534.675569817877,13534.83622755677,1.0,1.0,1.0,,787.0
101,ml-heuristic,1,0.503026008605957,13567.0,13567.0,0.0,142.0,heuristic,min,13566.029921819729,13566.142424385062,1.0,1.0,1.0,,142.0
102,ml-heuristic,2,1.815993070602417,13563.0,13563.0,0.0,1640.0,heuristic,min,13545.26825630499,13545.412645404165,1.0,1.000073735437251,1.0,,4.039408866995074
103,ml-heuristic,3,3.6015830039978027,13522.0,13522.0,0.0,1.0,heuristic,min,13513.490196843653,13513.683391861978,1.0,1.0,1.0,,1.0
104,ml-heuristic,4,0.8963778018951416,13534.0,13534.0,0.0,261.0,heuristic,min,13552.471283116225,13552.604609540394,1.0,1.0,1.0,,1.1347826086956523
105,ml-heuristic,5,1.685107946395874,13532.0,13532.0,0.0,265.0,heuristic,min,13557.55577263004,13557.681290107144,1.0,1.0,1.0,,5.888888888888889
106,ml-heuristic,6,1.803605079650879,13534.999999068532,13534.999999068534,1.343915333336563e-16,713.0,heuristic,min,13536.370399655816,13536.528454412353,0.9999999999311808,1.0,1.0,inf,1.0
107,ml-heuristic,7,1.6335430145263672,13613.0,13613.0,0.0,519.0,heuristic,min,13595.689443983643,13595.75639435777,1.0,1.0,1.0,,2.0513833992094863
108,ml-heuristic,8,2.3025331497192383,13580.0,13580.0,0.0,1442.0,heuristic,min,13588.910124631891,13588.987486935437,1.0,1.0000000001744203,1.0,-0.0,2.552212389380531
109,ml-heuristic,9,5.6919379234313965,13584.0,13584.0,0.0,1142.0,heuristic,min,13569.84328895509,13569.949934810124,1.0,1.0000000004044145,1.0,,4.443579766536965
110,ml-heuristic,10,2.931243896484375,13577.0,13578.0,7.365397363187744e-05,1123.0,heuristic,min,13568.148459117152,13568.25770795454,0.9999263514508764,1.0,1.0,inf,1.7330246913580247
111,ml-heuristic,11,1.5754199028015137,13574.0,13574.999998324447,7.367012851385044e-05,3.0,heuristic,min,13564.758799441275,13564.873254243374,0.9999263353233345,1.0000000000249623,1.0,inf,3.0
112,ml-heuristic,12,0.8674600124359131,13544.0,13544.0,0.0,200.0,heuristic,min,13538.912644412721,13539.06679469573,1.0,1.0,1.0,,200.0
113,ml-heuristic,13,2.3797359466552734,13534.0,13534.0,0.0,39.0,heuristic,min,13559.674309927463,13559.796573676624,1.0,1.0,1.0,,39.0
114,ml-heuristic,14,2.257524013519287,13551.0,13551.0,0.0,690.0,heuristic,min,13548.657915980866,13548.797099115332,1.0,1.0,1.0,,9.583333333333334
115,ml-heuristic,15,2.4494469165802,13593.0,13594.0,7.356727727506805e-05,1161.0,heuristic,min,13560.52172484643,13560.642687104417,0.9999264381344711,1.0,1.0,inf,3.2889518413597734
116,ml-heuristic,16,1.4570538997650146,13594.0,13594.0,0.0,401.0,heuristic,min,13552.89499057571,13553.027666254291,1.0,1.0,1.0,,1.0
117,ml-heuristic,17,2.2187118530273438,13543.0,13543.0,0.0,234.0,heuristic,min,13535.522984736846,13535.682340984562,1.0,1.0,1.0,,2.146788990825688
118,ml-heuristic,18,0.4319300651550293,13525.0,13525.0,0.0,1.0,heuristic,min,13525.777713168703,13525.952036564957,1.0,1.0,1.0,,1.0
119,ml-heuristic,19,1.4861400127410889,13564.0,13564.0,0.0,466.0,heuristic,min,13560.098017386947,13560.21963039052,1.0,1.0,1.0,,23.3
120,ml-heuristic,20,1.2517409324645996,13569.0,13569.0,0.0,274.0,heuristic,min,13549.92903835932,13550.06626925702,1.0,1.0,1.0,,274.0
121,ml-heuristic,21,1.4341230392456055,13566.0,13566.0,0.0,476.0,heuristic,min,13553.742405494679,13553.873779682082,1.0,1.0,1.0,,1.934959349593496
122,ml-heuristic,22,1.2694261074066162,13564.0,13565.0,7.372456502506635e-05,22.0,heuristic,min,13551.200160737772,13551.335439398708,0.9999262808698858,1.0,1.0,inf,22.0
123,ml-heuristic,23,2.0170810222625732,13580.0,13580.0,0.0,306.0,heuristic,min,13560.945432305916,13561.065743818312,1.0,1.0,1.0,,5.464285714285714
124,ml-heuristic,24,1.632323980331421,13543.0,13543.0,0.0,328.0,heuristic,min,13545.691963764473,13545.835702118062,1.0,1.0000000000000002,1.0,,328.0
125,ml-heuristic,25,1.3537018299102783,13542.0,13542.0,0.0,153.0,heuristic,min,13528.31995792561,13528.490376848333,1.0,1.0,1.0,,153.0
126,ml-heuristic,26,2.2375080585479736,13532.0,13532.0,0.0,1.0,heuristic,min,13538.488936953237,13538.643737981833,1.0,1.0000000000460052,1.0,,1.0
127,ml-heuristic,27,1.422248125076294,13522.0,13522.0,0.0,258.0,heuristic,min,13537.641522034268,13537.797624554041,1.0,1.0,1.0,,258.0
128,ml-heuristic,28,1.0971989631652832,13570.0,13571.0,7.369196757553427e-05,130.0,heuristic,min,13560.098017386947,13560.21963039052,0.9999263134625304,1.0,1.0,inf,8.666666666666666
129,ml-heuristic,29,1.157463788986206,13595.0,13595.0,0.0,1.0,heuristic,min,13571.961826252513,13572.065218379603,1.0,1.0000000010215446,1.0,,1.0
130,ml-heuristic,30,1.841322898864746,13577.0,13577.0,0.0,207.0,heuristic,min,13559.250602467977,13559.373516962729,1.0,1.0,1.0,,8.28
131,ml-heuristic,31,2.887389898300171,13582.0,13582.0,0.0,1061.0,heuristic,min,13583.401927658593,13583.48774965479,1.0,1.0,1.0,,35.36666666666667
132,ml-heuristic,32,0.7486422061920166,13523.0,13524.0,7.394808844191378e-05,66.0,heuristic,min,13535.522984736846,13535.682340984562,0.9999260573794735,1.0,1.0,inf,1.0
133,ml-heuristic,33,2.0524861812591553,13548.0,13548.0,0.0,1437.0,heuristic,min,13552.89499057571,13553.027666254291,1.0,1.0,1.0,,1.5078698845750262
134,ml-heuristic,34,2.214898109436035,13557.0,13557.0,0.0,373.0,heuristic,min,13543.573426467052,13543.720418548583,1.0,1.0,1.0,,4.6625
135,ml-heuristic,35,1.0435450077056885,13568.0,13568.0,0.0,623.0,heuristic,min,13544.42084138602,13544.566531976374,1.0,1.0000000002137304,1.0,,1.4
136,ml-heuristic,36,0.7411088943481445,13554.0,13554.0,0.0,126.0,heuristic,min,13525.354005709218,13525.528979851062,1.0,1.000000000092736,1.0,,1.0
137,ml-heuristic,37,2.506878137588501,13532.0,13532.0,0.0,733.0,heuristic,min,13539.336351872207,13539.489851409624,1.0,1.0,1.0,,6.850467289719626
138,ml-heuristic,38,1.4968650341033936,13514.0,13514.0,0.0,87.0,heuristic,min,13529.591080304062,13529.75954699002,1.0,1.0,1.0,,87.0
139,ml-heuristic,39,1.7060630321502686,13538.0,13538.0,0.0,235.0,heuristic,min,13537.217814574784,13537.374567840145,1.0,1.0,1.0,,1.5562913907284768
140,ml-heuristic,40,1.866358995437622,13577.0,13578.000000000002,7.365397363201142e-05,15.0,heuristic,min,13566.877336738698,13566.988537812853,0.9999263514508764,1.0000000000000002,1.0,inf,15.0
141,ml-heuristic,41,2.5284509658813477,13526.0,13526.0,0.0,217.0,heuristic,min,13521.540638573859,13521.721469426,1.0,1.0,1.0,,217.0
142,ml-heuristic,42,2.8142950534820557,13529.000000000002,13529.000000000002,0.0,201.0,heuristic,min,13530.014787763548,13530.182603703915,1.0,1.0000000000000002,1.0,,4.276595744680851
143,ml-heuristic,43,7.293056964874268,13565.0,13565.0,0.0,1485.0,heuristic,min,13560.945432305916,13561.065743818312,1.0,1.0,1.0,,5.561797752808989
144,ml-heuristic,44,1.7926158905029297,13553.0,13553.0,0.0,1.0,heuristic,min,13571.114411333543,13571.219104951811,1.0,1.0,1.0,,1.0
145,ml-heuristic,45,2.502897024154663,13521.0,13521.0,0.0,68.0,heuristic,min,13518.998393816952,13519.183129142624,1.0,1.0,1.0,,68.0
146,ml-heuristic,46,1.554502010345459,13543.0,13543.0,0.0,157.0,heuristic,min,13543.149719007566,13543.297361834688,1.0,1.0000000000438993,1.0,,157.0
147,ml-heuristic,47,3.1030750274658203,13564.0,13564.0,0.0,137.0,heuristic,min,13567.301044198182,13567.41159452675,1.0,1.0,1.0,,137.0
148,ml-heuristic,48,1.837294101715088,13552.0,13552.0,0.0,48.0,heuristic,min,13558.403187549009,13558.527403534938,1.0,1.0,1.0,,48.0
149,ml-heuristic,49,2.7994580268859863,13524.0,13524.0,0.0,452.0,heuristic,min,13501.626387978087,13501.837803872895,1.0,1.0,1.0,,1.0
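The "Relative" columns in the table above appear to normalize each run against the ml-heuristic run on the same instance (for instance 0, the baseline wallclock time divided by the ml-heuristic wallclock time gives the reported 26.86). A minimal sketch of that normalization for the wallclock column only; the column names and CSV layout are assumptions inferred from the file, and the sample values are copied from the instance-0 rows:

```python
import csv
import io

# Tiny inline sample in the same shape as the benchmark table above
# (values taken from the instance-0 rows; column names are assumptions).
data = """Solver,Instance,Wallclock Time
baseline,0,29.597511053085327
ml-exact,0,11.3649320602417
ml-heuristic,0,1.1019139289855957
"""

rows = list(csv.DictReader(io.StringIO(data)))

# Index the ml-heuristic time per instance, then divide every run by it.
heuristic_time = {
    r["Instance"]: float(r["Wallclock Time"])
    for r in rows
    if r["Solver"] == "ml-heuristic"
}
for r in rows:
    r["Relative Wallclock Time"] = (
        float(r["Wallclock Time"]) / heuristic_time[r["Instance"]]
    )

for r in rows:
    print(f"{r['Solver']:13s} {r['Relative Wallclock Time']:.2f}")
# baseline      26.86
# ml-exact      10.31
# ml-heuristic  1.00
```

The computed ratios match the "Relative Wallclock Time" column in the table (26.86 and 10.31 for instance 0), which supports this reading of the data.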

Binary file not shown.


docs/Makefile Normal file

@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

docs/_static/custom.css vendored Normal file

@@ -0,0 +1,130 @@
h1.site-logo {
font-size: 30px !important;
}
h1.site-logo small {
font-size: 20px !important;
}
code {
display: inline-block;
border-radius: 4px;
padding: 0 4px;
background-color: #eee;
color: rgb(232, 62, 140);
}
.right-next, .left-prev {
border-radius: 8px;
border-width: 0px !important;
box-shadow: 2px 2px 6px rgba(0, 0, 0, 0.2);
}
.right-next:hover, .left-prev:hover {
text-decoration: none;
}
.admonition {
border-radius: 8px;
border-width: 0;
box-shadow: 0 0 0 !important;
}
.note { background-color: rgba(0, 123, 255, 0.1); }
.note * { color: rgb(69 94 121); }
.warning { background-color: rgb(220 150 40 / 10%); }
.warning * { color: rgb(105 72 28); }
.input_area, .output_area, .output_area img {
border-radius: 8px !important;
border-width: 0 !important;
margin: 8px 0 8px 0;
}
.output_area {
padding: 4px;
background-color: hsl(227 60% 11% / 0.7) !important;
}
.output_area pre {
color: #fff;
line-height: 20px !important;
}
.input_area pre {
background-color: rgba(0 0 0 / 3%) !important;
padding: 12px !important;
line-height: 20px;
}
.ansi-green-intense-fg {
color: #64d88b !important;
}
#site-navigation {
background-color: #fafafa;
}
.container, .container-lg, .container-md, .container-sm, .container-xl {
max-width: inherit !important;
}
h1, h2 {
font-weight: bold !important;
}
#main-content .section {
max-width: 900px !important;
margin: 0 auto !important;
font-size: 16px;
}
p.caption {
font-weight: bold;
}
h2 {
padding-bottom: 5px;
border-bottom: 1px solid #ccc;
}
h3 {
margin-top: 1.5rem;
}
tbody, thead, pre {
border: 1px solid rgba(0, 0, 0, 0.25);
}
table td, th {
padding: 8px;
}
table p {
margin-bottom: 0;
}
table td code {
white-space: nowrap;
}
table tr,
table th {
border-bottom: 1px solid rgba(0, 0, 0, 0.1);
}
table tr:last-child {
border-bottom: 0;
}
@media (min-width: 960px) {
.bd-page-width {
max-width: 100rem;
}
}
.bd-sidebar-primary .sidebar-primary-items__end {
margin-bottom: 0;
margin-top: 0;
}


@@ -1,43 +0,0 @@
# About
### Authors
* **Alinson S. Xavier,** Argonne National Laboratory <<axavier@anl.gov>>
* **Feng Qiu,** Argonne National Laboratory <<fqiu@anl.gov>>
### Acknowledgments
* Based upon work supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-06CH11357.
### References
* **Learning to Solve Large-Scale Security-Constrained Unit Commitment Problems.** *Alinson S. Xavier, Feng Qiu, Shabbir Ahmed*. INFORMS Journal on Computing (to appear). [arXiv:1902.01697](https://arxiv.org/abs/1902.01697)
### License
```text
MIPLearn, an extensible framework for Learning-Enhanced Mixed-Integer Optimization
Copyright © 2020, UChicago Argonne, LLC. All Rights Reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted
provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of
conditions and the following disclaimer in the documentation and/or other materials provided
with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to
endorse or promote products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
```

docs/api/collectors.rst Normal file

@@ -0,0 +1,42 @@
Collectors & Extractors
=======================
miplearn.classifiers.minprob
----------------------------
.. automodule:: miplearn.classifiers.minprob
:members:
:undoc-members:
:show-inheritance:
miplearn.classifiers.singleclass
--------------------------------
.. automodule:: miplearn.classifiers.singleclass
:members:
:undoc-members:
:show-inheritance:
miplearn.collectors.basic
-------------------------
.. automodule:: miplearn.collectors.basic
:members:
:undoc-members:
:show-inheritance:
miplearn.extractors.fields
--------------------------
.. automodule:: miplearn.extractors.fields
:members:
:undoc-members:
:show-inheritance:
miplearn.extractors.AlvLouWeh2017
---------------------------------
.. automodule:: miplearn.extractors.AlvLouWeh2017
:members:
:undoc-members:
:show-inheritance:

docs/api/components.rst Normal file

@@ -0,0 +1,44 @@
Components
==========
miplearn.components.primal.actions
----------------------------------
.. automodule:: miplearn.components.primal.actions
:members:
:undoc-members:
:show-inheritance:
miplearn.components.primal.expert
----------------------------------
.. automodule:: miplearn.components.primal.expert
:members:
:undoc-members:
:show-inheritance:
miplearn.components.primal.indep
----------------------------------
.. automodule:: miplearn.components.primal.indep
:members:
:undoc-members:
:show-inheritance:
miplearn.components.primal.joint
----------------------------------
.. automodule:: miplearn.components.primal.joint
:members:
:undoc-members:
:show-inheritance:
miplearn.components.primal.mem
----------------------------------
.. automodule:: miplearn.components.primal.mem
:members:
:undoc-members:
:show-inheritance:

docs/api/helpers.rst Normal file

@@ -0,0 +1,18 @@
Helpers
=======
miplearn.io
-----------
.. automodule:: miplearn.io
:members:
:undoc-members:
:show-inheritance:
miplearn.h5
-----------
.. automodule:: miplearn.h5
:members:
:undoc-members:
:show-inheritance:

docs/api/problems.rst Normal file

@@ -0,0 +1,63 @@
Benchmark Problems
==================
miplearn.problems.binpack
-------------------------
.. automodule:: miplearn.problems.binpack
:members:
miplearn.problems.multiknapsack
-------------------------------
.. automodule:: miplearn.problems.multiknapsack
:members:
miplearn.problems.pmedian
-------------------------
.. automodule:: miplearn.problems.pmedian
:members:
miplearn.problems.setcover
--------------------------
.. automodule:: miplearn.problems.setcover
:members:
miplearn.problems.setpack
-------------------------
.. automodule:: miplearn.problems.setpack
:members:
miplearn.problems.stab
----------------------
.. automodule:: miplearn.problems.stab
:members:
miplearn.problems.tsp
---------------------
.. automodule:: miplearn.problems.tsp
:members:
miplearn.problems.uc
--------------------
.. automodule:: miplearn.problems.uc
:members:
miplearn.problems.vertexcover
-----------------------------
.. automodule:: miplearn.problems.vertexcover
:members:
miplearn.problems.maxcut
------------------------
.. automodule:: miplearn.problems.maxcut
:members:

docs/api/solvers.rst Normal file

@@ -0,0 +1,26 @@
Solvers
=======
miplearn.solvers.abstract
-------------------------
.. automodule:: miplearn.solvers.abstract
:members:
:undoc-members:
:show-inheritance:
miplearn.solvers.gurobi
-------------------------
.. automodule:: miplearn.solvers.gurobi
:members:
:undoc-members:
:show-inheritance:
miplearn.solvers.learning
-------------------------
.. automodule:: miplearn.solvers.learning
:members:
:undoc-members:
:show-inheritance:


@@ -1,61 +0,0 @@
# Benchmarks Utilities
### Using `BenchmarkRunner`
MIPLearn provides the utility class `BenchmarkRunner`, which simplifies the task of comparing the performance of different solvers. The snippet below shows its basic usage:
```python
from miplearn import BenchmarkRunner, LearningSolver
# Create train and test instances
train_instances = [...]
test_instances = [...]
# Training phase...
training_solver = LearningSolver(...)
training_solver.parallel_solve(train_instances, n_jobs=10)
# Test phase...
test_solvers = {
    "Baseline": LearningSolver(...),  # each solver may have different parameters
    "Strategy A": LearningSolver(...),
    "Strategy B": LearningSolver(...),
    "Strategy C": LearningSolver(...),
}
benchmark = BenchmarkRunner(test_solvers)
benchmark.fit(train_instances)
benchmark.parallel_solve(test_instances, n_jobs=2)
print(benchmark.raw_results())
```
The method `fit` trains the ML models for each individual solver. The method `parallel_solve` solves the test instances in parallel, and collects solver statistics such as running time and optimal value. Finally, `raw_results` produces a table of results (Pandas DataFrame) with the following columns:
* **Solver,** the name of the solver;
* **Instance,** the sequence number identifying the instance;
* **Wallclock Time,** the wallclock running time (in seconds) spent by the solver;
* **Lower Bound,** the best lower bound obtained by the solver;
* **Upper Bound,** the best upper bound obtained by the solver;
* **Gap,** the relative MIP integrality gap at the end of the optimization;
* **Nodes,** the number of explored branch-and-bound nodes.

In addition to the above, there is also a "Relative" version of most columns, in which the raw number is compared against the best performance achieved by any solver on the same instance. The *Relative Wallclock Time*, for example, indicates how many times slower this run was when compared to the best time achieved by any solver on the same instance. For example, if this run took 10 seconds, but the fastest solver took only 5 seconds to solve the same instance, the relative wallclock time would be 2.
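To make the relative columns concrete, the sketch below recomputes *Relative Wallclock Time* by hand with pandas. The two-solver table is hypothetical and this is not part of the `BenchmarkRunner` API; it only illustrates the arithmetic described above.

```python
import pandas as pd

# Hypothetical raw results: two solvers on the same instance.
results = pd.DataFrame({
    "Solver": ["Baseline", "Strategy A"],
    "Instance": [0, 0],
    "Wallclock Time": [10.0, 5.0],
})

# Best (smallest) wallclock time achieved by any solver on each instance.
best = results.groupby("Instance")["Wallclock Time"].transform("min")

# How many times slower each run was compared to the best run.
results["Relative Wallclock Time"] = results["Wallclock Time"] / best
print(results["Relative Wallclock Time"].tolist())  # [2.0, 1.0]
```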
### Saving and loading benchmark results
When iteratively exploring new formulations, encodings, and solver parameters, it is often desirable to avoid repeating parts of the benchmark suite. For example, if the baseline solver has not been changed, there is no need to evaluate its performance again and again when making small changes to the remaining solvers. `BenchmarkRunner` provides the methods `save_results` and `load_results`, which can be used to avoid this repetition, as the next example shows:
```python
# Benchmark baseline solvers and save results to a file.
benchmark = BenchmarkRunner(baseline_solvers)
benchmark.parallel_solve(test_instances)
benchmark.save_results("baseline_results.csv")
# Benchmark remaining solvers, loading baseline results from file.
benchmark = BenchmarkRunner(alternative_solvers)
benchmark.load_results("baseline_results.csv")
benchmark.fit(training_instances)
benchmark.parallel_solve(test_instances)
```

docs/conf.py Normal file

@@ -0,0 +1,25 @@
project = "MIPLearn"
copyright = "2020-2023, UChicago Argonne, LLC"
author = ""
release = "0.4"
extensions = [
"myst_parser",
"nbsphinx",
"sphinx_multitoc_numbering",
"sphinx.ext.autodoc",
"sphinx.ext.napoleon",
]
templates_path = ["_templates"]
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
html_theme = "sphinx_book_theme"
html_static_path = ["_static"]
html_css_files = [
"custom.css",
]
html_theme_options = {
"repository_url": "https://github.com/ANL-CEEESA/MIPLearn/",
"use_repository_button": False,
"extra_navbar": "",
}
html_title = f"MIPLearn {release}"
nbsphinx_execute = "never"


@@ -1,28 +0,0 @@
.navbar-default {
border-bottom: 0px;
background-color: #fff;
box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.2);
}
a, .navbar-default a {
color: #06a !important;
font-weight: normal;
}
.disabled > a {
color: #999 !important;
}
.navbar-default a:hover,
.navbar-default .active,
.active > a {
background-color: #f0f0f0 !important;
}
.icon-bar {
background-color: #666 !important;
}
.navbar-collapse {
border-color: #fff !important;
}


@@ -1,164 +0,0 @@
# Customization
## Customizing solver parameters
### Selecting the internal MIP solver
By default, `LearningSolver` uses [Gurobi](https://www.gurobi.com/) as its internal MIP solver. Another supported solver is [IBM ILOG CPLEX](https://www.ibm.com/products/ilog-cplex-optimization-studio). To switch between solvers, use the `solver` constructor argument, as shown below. It is also possible to specify a time limit (in seconds) and a relative MIP gap tolerance.
```python
from miplearn import LearningSolver
solver = LearningSolver(
    solver="cplex",
    time_limit=300,
    gap_tolerance=1e-3,
)
```
## Customizing solver components
`LearningSolver` is composed of a number of individual machine-learning components, each targeting a different part of the solution process. Each component can be individually enabled, disabled or customized. The following components are enabled by default:
* `LazyConstraintComponent`: Predicts which lazy constraint to initially enforce.
* `ObjectiveValueComponent`: Predicts the optimal value of the optimization problem, given the optimal solution to the LP relaxation.
* `PrimalSolutionComponent`: Predicts optimal values for binary decision variables. In heuristic mode, this component fixes the variables to their predicted values. In exact mode, the predicted values are provided to the solver as a (partial) MIP start.
The following components are also available, but not enabled by default:
* `BranchPriorityComponent`: Predicts good branch priorities for decision variables.
### Selecting components
To create a `LearningSolver` with a specific set of components, the `components` constructor argument may be used, as the next example shows:
```python
# Create a solver without any components
solver1 = LearningSolver(components=[])
# Create a solver with only two components
solver2 = LearningSolver(components=[
    LazyConstraintComponent(...),
    PrimalSolutionComponent(...),
])
```
It is also possible to add components to an existing solver using the `solver.add` method, as shown below. If the solver already holds another component of that type, the new component will replace the previous one.
```python
# Create solver with default components
solver = LearningSolver()
# Replace the default LazyConstraintComponent by one with custom parameters
solver.add(LazyConstraintComponent(...))
```
### Adjusting component aggressiveness
The aggressiveness of classification components (such as `PrimalSolutionComponent` and `LazyConstraintComponent`) can
be adjusted through the `threshold` constructor argument. Internally, these components ask the ML models how confident
they are on each prediction (through the `predict_proba` method in the sklearn API), and only take into account
predictions which have probabilities above the threshold. Lowering a component's threshold increases its aggressiveness,
while raising a component's threshold makes it more conservative.
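The thresholding mechanism can be illustrated with plain numpy. The probability values below are made up for illustration; in MIPLearn they would come from the component's internal ML model via `predict_proba`.

```python
import numpy as np

# Hypothetical class probabilities returned by predict_proba for four variables.
proba = np.array([0.15, 0.55, 0.92, 0.98])

# Only predictions above the threshold are taken into account.
threshold = 0.9
confident = proba >= threshold
print(confident)  # [False False  True  True]

# Lowering the threshold makes the component more aggressive.
aggressive = proba >= 0.5
print(aggressive)  # [False  True  True  True]
```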
MIPLearn also includes `MinPrecisionThreshold`, a dynamic threshold which adjusts itself automatically during training
to achieve a minimum desired precision (the fraction of positive predictions that are actually correct). The example
below shows how to initialize a `PrimalSolutionComponent` which achieves 95% precision, possibly at the cost of a lower
recall. To make the component more aggressive, this precision may be lowered.
```python
PrimalSolutionComponent(threshold=MinPrecisionThreshold(0.95))
```
### Evaluating component performance
MIPLearn allows solver components to be modified, trained and evaluated in isolation. In the following example, we build and
fit `PrimalSolutionComponent` outside the solver, then evaluate its performance.
```python
from miplearn import PrimalSolutionComponent
# User-provided set of previously-solved instances
train_instances = [...]
# Construct and fit component on a subset of training instances
comp = PrimalSolutionComponent()
comp.fit(train_instances[:100])
# Evaluate performance on an additional set of training instances
ev = comp.evaluate(train_instances[100:150])
```
The method `evaluate` returns a dictionary with performance evaluation statistics for each training instance provided,
and for each type of prediction the component makes. To obtain a summary across all instances, pandas may be used, as below:
```python
import pandas as pd
pd.DataFrame(ev["Fix one"]).mean(axis=1)
```
```text
Predicted positive 3.120000
Predicted negative 196.880000
Condition positive 62.500000
Condition negative 137.500000
True positive 3.060000
True negative 137.440000
False positive 0.060000
False negative 59.440000
Accuracy 0.702500
F1 score 0.093050
Recall 0.048921
Precision 0.981667
Predicted positive (%) 1.560000
Predicted negative (%) 98.440000
Condition positive (%) 31.250000
Condition negative (%) 68.750000
True positive (%) 1.530000
True negative (%) 68.720000
False positive (%) 0.030000
False negative (%) 29.720000
dtype: float64
```
Regression components (such as `ObjectiveValueComponent`) can also be trained and evaluated similarly,
as the next example shows:
```python
from miplearn import ObjectiveValueComponent
comp = ObjectiveValueComponent()
comp.fit(train_instances[:100])
ev = comp.evaluate(train_instances[100:150])
import pandas as pd
pd.DataFrame(ev).mean(axis=1)
```
```text
Mean squared error 7001.977827
Explained variance 0.519790
Max error 242.375804
Mean absolute error 65.843924
R2 0.517612
Median absolute error 65.843924
dtype: float64
```
### Using customized ML classifiers and regressors
By default, given a training set of instances, MIPLearn trains a fixed set of ML classifiers and regressors, then
selects the best one based on cross-validation performance. Alternatively, the user may specify which ML model a component
should use through the `classifier` or `regressor` constructor parameters. The provided classifiers and regressors must
follow the sklearn API. In particular, classifiers must provide the methods `fit`, `predict_proba` and `predict`,
while regressors must provide the methods `fit` and `predict`.
!!! danger
MIPLearn must be able to generate a copy of any custom ML classifiers and regressors through
the standard `copy.deepcopy` method. This currently makes it incompatible with Keras and TensorFlow
predictors. This is a known limitation, which will be addressed in a future version.
The example below shows how to construct a `PrimalSolutionComponent` which internally uses
sklearn's `KNeighborsClassifier`. Any other sklearn classifier or pipeline can be used.
```python
from miplearn import PrimalSolutionComponent
from sklearn.neighbors import KNeighborsClassifier
comp = PrimalSolutionComponent(classifier=KNeighborsClassifier(n_neighbors=5))
comp.fit(train_instances)
```


@@ -1 +0,0 @@
../../benchmark/knapsack/ChallengeA/performance.png


@@ -1 +0,0 @@
../../benchmark/stab/ChallengeA/performance.png


@@ -1 +0,0 @@
../../benchmark/tsp/ChallengeA/performance.png

docs/guide/collectors.ipynb Normal file

@@ -0,0 +1,282 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "505cea0b-5f5d-478a-9107-42bb5515937d",
"metadata": {},
"source": [
"# Training Data Collectors\n",
"The first step in solving mixed-integer optimization problems with the assistance of supervised machine learning methods is solving a large set of training instances and collecting the raw training data. In this section, we describe the various training data collectors included in MIPLearn. The framework follows the convention of storing all training data in files with a specific format (namely, HDF5); we also briefly describe this format and the rationale for choosing it.\n",
"\n",
"## Overview\n",
"\n",
"In MIPLearn, a **collector** is a class that solves or analyzes the problem and collects raw data which may later be useful for machine learning methods. By convention, collectors take as input: (i) a list of problem data filenames, in gzipped pickle format, ending with `.pkl.gz`; (ii) a function that builds the optimization model, such as `build_tsp_model`. After processing is done, collectors store the training data in an HDF5 file located alongside the problem data. For example, if the problem data is stored in the file `problem.pkl.gz`, then the collector writes to `problem.h5`. Collectors are, in general, very time-consuming, as they may need to solve the problem to optimality, potentially multiple times.\n",
"\n",
"## HDF5 Format\n",
"\n",
"MIPLearn stores all training data in [HDF5][HDF5] (Hierarchical Data Format, Version 5) files. The HDF format was originally developed by the [National Center for Supercomputing Applications][NCSA] (NCSA) for storing and organizing large amounts of data, and supports a variety of data types, including integers, floating-point numbers, strings, and arrays. Compared to other formats, such as CSV, JSON or SQLite, the HDF5 format provides several advantages for MIPLearn, including:\n",
"\n",
"- *Storage of multiple scalars, vectors and matrices in a single file* --- This allows MIPLearn to store all training data related to a given problem instance in a single file, which makes training data easier to store, organize and transfer.\n",
"- *High-performance partial I/O* --- Partial I/O allows MIPLearn to read a single element from the training data (e.g. value of the optimal solution) without loading the entire file to memory or reading it from beginning to end, which dramatically improves performance and reduces memory requirements. This is especially important when processing a large number of training data files.\n",
"- *On-the-fly compression* --- HDF5 files can be transparently compressed, using the gzip method, which reduces storage requirements and accelerates network transfers.\n",
"- *Stable, portable and well-supported data format* --- Training data files are typically expensive to generate. Having a stable and well supported data format ensures that these files remain usable in the future, potentially even by other non-Python MIP/ML frameworks.\n",
"\n",
"MIPLearn currently uses HDF5 as a simple key-value store for numerical data; more advanced features of the format, such as metadata, are not currently used. Although files generated by MIPLearn can be read with any HDF5 library, such as [h5py][h5py], some convenience functions are provided to make access simpler and less error-prone. Specifically, the class [H5File][H5File], which is built on top of h5py, provides the methods [put_scalar][put_scalar], [put_array][put_array], [put_sparse][put_sparse] and [put_bytes][put_bytes] to store, respectively, scalar values, dense multi-dimensional arrays, sparse multi-dimensional arrays and arbitrary binary data. The corresponding *get* methods are also provided. Compared to pure h5py methods, these methods automatically perform type-checking and gzip compression. The example below shows their usage.\n",
"\n",
"[HDF5]: https://en.wikipedia.org/wiki/Hierarchical_Data_Format\n",
"[NCSA]: https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications\n",
"[h5py]: https://www.h5py.org/\n",
"[H5File]: ../../api/helpers/#miplearn.h5.H5File\n",
"[put_scalar]: ../../api/helpers/#miplearn.h5.H5File.put_scalar\n",
"[put_array]: ../../api/helpers/#miplearn.h5.H5File.put_array\n",
"[put_sparse]: ../../api/helpers/#miplearn.h5.H5File.put_sparse\n",
"[put_bytes]: ../../api/helpers/#miplearn.h5.H5File.put_bytes\n",
"\n",
"\n",
"### Example"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f906fe9c",
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-30T22:19:30.826123021Z",
"start_time": "2024-01-30T22:19:30.766066926Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"x1 = 1\n",
"x2 = hello world\n",
"x3 = [1 2 3]\n",
"x4 = [[0.37454012 0.95071431 0.73199394]\n",
" [0.59865848 0.15601864 0.15599452]\n",
" [0.05808361 0.86617615 0.60111501]]\n",
"x5 = (3, 2)\t0.6803075385877797\n",
" (2, 3)\t0.450499251969543\n",
" (0, 4)\t0.013264961159866528\n",
" (2, 0)\t0.9422017556848528\n",
" (2, 4)\t0.5632882178455393\n",
" (1, 2)\t0.3854165025399161\n",
" (1, 1)\t0.015966252220214194\n",
" (0, 3)\t0.230893825622149\n",
" (4, 4)\t0.24102546602601171\n",
" (3, 1)\t0.6832635188254582\n",
" (1, 3)\t0.6099966577826209\n",
" (3, 0)\t0.8331949117361643\n"
]
}
],
"source": [
"import numpy as np\n",
"import scipy.sparse\n",
"\n",
"from miplearn.h5 import H5File\n",
"\n",
"# Set random seed to make example reproducible\n",
"np.random.seed(42)\n",
"\n",
"# Create a new empty HDF5 file\n",
"with H5File(\"test.h5\", \"w\") as h5:\n",
" # Store a scalar\n",
" h5.put_scalar(\"x1\", 1)\n",
" h5.put_scalar(\"x2\", \"hello world\")\n",
"\n",
" # Store a dense array and a dense matrix\n",
" h5.put_array(\"x3\", np.array([1, 2, 3]))\n",
" h5.put_array(\"x4\", np.random.rand(3, 3))\n",
"\n",
" # Store a sparse matrix\n",
" h5.put_sparse(\"x5\", scipy.sparse.random(5, 5, 0.5))\n",
"\n",
"# Re-open the file we just created and print\n",
"# previously-stored data\n",
"with H5File(\"test.h5\", \"r\") as h5:\n",
" print(\"x1 =\", h5.get_scalar(\"x1\"))\n",
" print(\"x2 =\", h5.get_scalar(\"x2\"))\n",
" print(\"x3 =\", h5.get_array(\"x3\"))\n",
" print(\"x4 =\", h5.get_array(\"x4\"))\n",
" print(\"x5 =\", h5.get_sparse(\"x5\"))"
]
},
{
"cell_type": "markdown",
"id": "d0000c8d",
"metadata": {},
"source": [
"## Basic collector\n",
"\n",
"[BasicCollector][BasicCollector] is the most fundamental collector, and performs the following steps:\n",
"\n",
"1. Extracts all model data, such as objective function coefficients and constraint right-hand sides, into NumPy arrays, which can later be easily and efficiently accessed without rebuilding the model or invoking the solver;\n",
"2. Solves the linear relaxation of the problem and stores its optimal solution, basis status and sensitivity information, among other information;\n",
"3. Solves the original mixed-integer optimization problem to optimality and stores its optimal solution, along with solve statistics, such as number of explored nodes and wallclock time.\n",
"\n",
"Data extracted in Phases 1, 2 and 3 above are prefixed, respectively, with `static_`, `lp_` and `mip_`. The entire set of fields is shown in the table below.\n",
"\n",
"[BasicCollector]: ../../api/collectors/#miplearn.collectors.basic.BasicCollector\n"
]
},
{
"cell_type": "markdown",
"id": "6529f667",
"metadata": {},
"source": [
"### Data fields\n",
"\n",
"| Field | Type | Description |\n",
"|-----------------------------------|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------|\n",
"| `static_constr_lhs` | `(nconstrs, nvars)` | Constraint left-hand sides, in sparse matrix format |\n",
"| `static_constr_names` | `(nconstrs,)` | Constraint names |\n",
"| `static_constr_rhs` | `(nconstrs,)` | Constraint right-hand sides |\n",
"| `static_constr_sense` | `(nconstrs,)` | Constraint senses (`\"<\"`, `\">\"` or `\"=\"`) |\n",
"| `static_obj_offset` | `float` | Constant value added to the objective function |\n",
"| `static_sense` | `str` | `\"min\"` if minimization problem or `\"max\"` otherwise |\n",
"| `static_var_lower_bounds` | `(nvars,)` | Variable lower bounds |\n",
"| `static_var_names` | `(nvars,)` | Variable names |\n",
"| `static_var_obj_coeffs` | `(nvars,)` | Objective coefficients |\n",
"| `static_var_types` | `(nvars,)` | Types of the decision variables (`\"C\"`, `\"B\"` and `\"I\"` for continuous, binary and integer, respectively) |\n",
"| `static_var_upper_bounds` | `(nvars,)` | Variable upper bounds |\n",
"| `lp_constr_basis_status` | `(nconstr,)` | Constraint basis status (`0` for basic, `-1` for non-basic) |\n",
"| `lp_constr_dual_values` | `(nconstr,)` | Constraint dual value (or shadow price) |\n",
"| `lp_constr_sa_rhs_{up,down}` | `(nconstr,)` | Sensitivity information for the constraint RHS |\n",
"| `lp_constr_slacks` | `(nconstr,)` | Constraint slack in the solution to the LP relaxation |\n",
"| `lp_obj_value` | `float` | Optimal value of the LP relaxation |\n",
"| `lp_var_basis_status` | `(nvars,)` | Variable basis status (`0`, `-1`, `-2` or `-3` for basic, non-basic at lower bound, non-basic at upper bound, and superbasic, respectively) |\n",
"| `lp_var_reduced_costs` | `(nvars,)` | Variable reduced costs |\n",
"| `lp_var_sa_{obj,ub,lb}_{up,down}` | `(nvars,)` | Sensitivity information for the variable objective coefficient, lower and upper bound. |\n",
"| `lp_var_values` | `(nvars,)` | Optimal solution to the LP relaxation |\n",
"| `lp_wallclock_time` | `float` | Time taken to solve the LP relaxation (in seconds) |\n",
"| `mip_constr_slacks` | `(nconstrs,)` | Constraint slacks in the best MIP solution |\n",
"| `mip_gap` | `float` | Relative MIP optimality gap |\n",
"| `mip_node_count` | `float` | Number of explored branch-and-bound nodes |\n",
"| `mip_obj_bound` | `float` | Dual bound |\n",
"| `mip_obj_value` | `float` | Value of the best MIP solution |\n",
"| `mip_var_values` | `(nvars,)` | Best MIP solution |\n",
"| `mip_wallclock_time` | `float` | Time taken to solve the MIP (in seconds) |"
]
},
{
"cell_type": "markdown",
"id": "f2894594",
"metadata": {},
"source": [
"### Example\n",
"\n",
"The example below shows how to generate a few random instances of the traveling salesman problem, store its problem data, run the collector and print some of the training data to screen."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "ac6f8c6f",
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-30T22:19:30.826707866Z",
"start_time": "2024-01-30T22:19:30.825940503Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"lp_obj_value = 2909.0\n",
"mip_obj_value = 2921.0\n"
]
}
],
"source": [
"import random\n",
"import numpy as np\n",
"from scipy.stats import uniform, randint\n",
"from glob import glob\n",
"\n",
"from miplearn.problems.tsp import (\n",
" TravelingSalesmanGenerator,\n",
" build_tsp_model_gurobipy,\n",
")\n",
"from miplearn.io import write_pkl_gz\n",
"from miplearn.h5 import H5File\n",
"from miplearn.collectors.basic import BasicCollector\n",
"\n",
"# Set random seed to make example reproducible.\n",
"random.seed(42)\n",
"np.random.seed(42)\n",
"\n",
"# Generate a few instances of the traveling salesman problem.\n",
"data = TravelingSalesmanGenerator(\n",
" n=randint(low=10, high=11),\n",
" x=uniform(loc=0.0, scale=1000.0),\n",
" y=uniform(loc=0.0, scale=1000.0),\n",
" gamma=uniform(loc=0.90, scale=0.20),\n",
" fix_cities=True,\n",
" round=True,\n",
").generate(10)\n",
"\n",
"# Save instance data to data/tsp/00000.pkl.gz, data/tsp/00001.pkl.gz, ...\n",
"write_pkl_gz(data, \"data/tsp\")\n",
"\n",
"# Solve all instances and collect basic solution information.\n",
"# Process at most four instances in parallel.\n",
"bc = BasicCollector()\n",
"bc.collect(glob(\"data/tsp/*.pkl.gz\"), build_tsp_model_gurobipy, n_jobs=4)\n",
"\n",
"# Read and print some training data for the first instance.\n",
"with H5File(\"data/tsp/00000.h5\", \"r\") as h5:\n",
" print(\"lp_obj_value = \", h5.get_scalar(\"lp_obj_value\"))\n",
" print(\"mip_obj_value = \", h5.get_scalar(\"mip_obj_value\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "78f0b07a",
"metadata": {
"ExecuteTime": {
"start_time": "2024-01-30T22:19:30.826179789Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

docs/guide/features.ipynb Normal file

@@ -0,0 +1,334 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "cdc6ebe9-d1d4-4de1-9b5a-4fc8ef57b11b",
"metadata": {},
"source": [
"# Feature Extractors\n",
"\n",
"In the previous page, we introduced *training data collectors*, which solve the optimization problem and collect raw training data, such as the optimal solution. In this page, we introduce **feature extractors**, which take the raw training data, stored in HDF5 files, and extract relevant information in order to train a machine learning model."
]
},
{
"cell_type": "markdown",
"id": "b4026de5",
"metadata": {},
"source": [
"\n",
"## Overview\n",
"\n",
"Feature extraction is an important step of the process of building a machine learning model because it helps to reduce the complexity of the data and convert it into a format that is more easily processed. Previous research has proposed converting absolute variable coefficients, for example, into relative values which are invariant to various transformations, such as problem scaling, making them more amenable to learning. Various other transformations have also been described.\n",
"\n",
"In the framework, we treat data collection and feature extraction as two separate steps to accelerate the model development cycle. Specifically, collectors are typically time-consuming, as they often need to solve the problem to optimality, and therefore focus on collecting and storing all data that may or may not be relevant, in its raw format. Feature extractors, on the other hand, focus entirely on filtering the data and improving its representation, and are therefore much faster to run. Experimenting with new data representations, therefore, can be done without re-solving the instances.\n",
"\n",
"In MIPLearn, extractors implement the abstract class [FeatureExtractor][FeatureExtractor], which has methods that take as input an [H5File][H5File] and produce either: (i) instance features, which describe the entire instance; (ii) variable features, which describe a particular decision variable; or (iii) constraint features, which describe a particular constraint. The extractor is free to implement only a subset of these methods if it is known that it will not be used with a machine learning component that requires the other types of features.\n",
"\n",
"[FeatureExtractor]: ../../api/collectors/#miplearn.features.fields.FeaturesExtractor\n",
"[H5File]: ../../api/helpers/#miplearn.h5.H5File"
]
},
{
"cell_type": "markdown",
"id": "b2d9736c",
"metadata": {},
"source": [
"\n",
"## H5FieldsExtractor\n",
"\n",
"[H5FieldsExtractor][H5FieldsExtractor], the simplest extractor in MIPLearn, simply extracts data that is already available in the HDF5 file, assembles it into a matrix and returns it as-is. The fields used to build instance, variable and constraint features are user-specified. The class also performs checks to ensure that the shapes of the returned matrices make sense."
]
},
{
"cell_type": "markdown",
"id": "e8184dff",
"metadata": {},
"source": [
"### Example\n",
"\n",
"The example below demonstrates the usage of H5FieldsExtractor in a randomly generated instance of the multi-dimensional knapsack problem."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ed9a18c8",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"instance features (11,) \n",
" [-1531.24308771 -350. -692. -454.\n",
" -709. -605. -543. -321.\n",
" -674. -571. -341. ]\n",
"variable features (10, 4) \n",
" [[-1.53124309e+03 -3.50000000e+02 0.00000000e+00 9.43467993e+01]\n",
" [-1.53124309e+03 -6.92000000e+02 2.51703329e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -4.54000000e+02 0.00000000e+00 8.25504181e+01]\n",
" [-1.53124309e+03 -7.09000000e+02 1.11373019e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -6.05000000e+02 1.00000000e+00 -1.26055279e+02]\n",
" [-1.53124309e+03 -5.43000000e+02 0.00000000e+00 1.68693775e+02]\n",
" [-1.53124309e+03 -3.21000000e+02 1.07488781e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -6.74000000e+02 8.82293687e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -5.71000000e+02 0.00000000e+00 1.41129074e+02]\n",
" [-1.53124309e+03 -3.41000000e+02 1.28830116e-01 0.00000000e+00]]\n",
"constraint features (5, 3) \n",
" [[ 1.31000000e+03 -1.59783068e-01 0.00000000e+00]\n",
" [ 9.88000000e+02 -3.28816327e-01 0.00000000e+00]\n",
" [ 1.00400000e+03 -4.06013164e-01 0.00000000e+00]\n",
" [ 1.26900000e+03 -1.36597720e-01 0.00000000e+00]\n",
" [ 1.00700000e+03 -2.88005696e-01 0.00000000e+00]]\n"
]
}
],
"source": [
"from glob import glob\n",
"from shutil import rmtree\n",
"\n",
"import numpy as np\n",
"from scipy.stats import uniform, randint\n",
"\n",
"from miplearn.collectors.basic import BasicCollector\n",
"from miplearn.extractors.fields import H5FieldsExtractor\n",
"from miplearn.h5 import H5File\n",
"from miplearn.io import write_pkl_gz\n",
"from miplearn.problems.multiknapsack import (\n",
" MultiKnapsackGenerator,\n",
" build_multiknapsack_model_gurobipy,\n",
")\n",
"\n",
"# Set random seed to make example reproducible\n",
"np.random.seed(42)\n",
"\n",
"# Generate some random multiknapsack instances\n",
"rmtree(\"data/multiknapsack/\", ignore_errors=True)\n",
"write_pkl_gz(\n",
" MultiKnapsackGenerator(\n",
" n=randint(low=10, high=11),\n",
" m=randint(low=5, high=6),\n",
" w=uniform(loc=0, scale=1000),\n",
" K=uniform(loc=100, scale=0),\n",
" u=uniform(loc=1, scale=0),\n",
" alpha=uniform(loc=0.25, scale=0),\n",
" w_jitter=uniform(loc=0.95, scale=0.1),\n",
" p_jitter=uniform(loc=0.75, scale=0.5),\n",
" fix_w=True,\n",
" ).generate(10),\n",
" \"data/multiknapsack\",\n",
")\n",
"\n",
"# Run the basic collector\n",
"BasicCollector().collect(\n",
" glob(\"data/multiknapsack/*\"),\n",
" build_multiknapsack_model_gurobipy,\n",
" n_jobs=4,\n",
")\n",
"\n",
"ext = H5FieldsExtractor(\n",
" # Use as instance features the value of the LP relaxation and the\n",
" # vector of objective coefficients.\n",
" instance_fields=[\n",
" \"lp_obj_value\",\n",
" \"static_var_obj_coeffs\",\n",
" ],\n",
" # For each variable, use as features the optimal value of the LP\n",
"    # relaxation, the variable's objective coefficient, the variable's\n",
"    # value in the LP solution, and its reduced cost.\n",
" var_fields=[\n",
" \"lp_obj_value\",\n",
" \"static_var_obj_coeffs\",\n",
" \"lp_var_values\",\n",
" \"lp_var_reduced_costs\",\n",
" ],\n",
" # For each constraint, use as features the RHS, dual value and slack.\n",
" constr_fields=[\n",
" \"static_constr_rhs\",\n",
" \"lp_constr_dual_values\",\n",
" \"lp_constr_slacks\",\n",
" ],\n",
")\n",
"\n",
"with H5File(\"data/multiknapsack/00000.h5\") as h5:\n",
" # Extract and print instance features\n",
" x1 = ext.get_instance_features(h5)\n",
" print(\"instance features\", x1.shape, \"\\n\", x1)\n",
"\n",
" # Extract and print variable features\n",
" x2 = ext.get_var_features(h5)\n",
" print(\"variable features\", x2.shape, \"\\n\", x2)\n",
"\n",
" # Extract and print constraint features\n",
" x3 = ext.get_constr_features(h5)\n",
" print(\"constraint features\", x3.shape, \"\\n\", x3)"
]
},
{
"cell_type": "markdown",
"id": "2da2e74e",
"metadata": {},
"source": [
"\n",
"[H5FieldsExtractor]: ../../api/collectors/#miplearn.features.fields.H5FieldsExtractor"
]
},
{
"cell_type": "markdown",
"id": "d879c0d3",
"metadata": {},
"source": [
"<div class=\"alert alert-warning\">\n",
"Warning\n",
"\n",
"You should ensure that the number of features remains the same for all relevant HDF5 files. In the previous example, to illustrate this issue, we used variable objective coefficients as instance features. While this is allowed, note that this requires all problem instances to have the same number of variables; otherwise the number of features would vary from instance to instance and MIPLearn would be unable to concatenate the matrices.\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "cd0ba071",
"metadata": {},
"source": [
"## AlvLouWeh2017Extractor\n",
"\n",
"Alvarez, Louveaux, and Wehenkel (2017) proposed a set of features to describe a particular decision variable at a given node of the branch-and-bound tree, and applied them to the problem of mimicking strong branching decisions. The class [AlvLouWeh2017Extractor][] implements a subset of these features (40 out of 64), namely those available outside of the branch-and-bound tree. Some features are derived from the static definition of the problem (i.e., from the objective function and constraint data), while others are derived from the solution to the LP relaxation. The features are designed to be: (i) independent of the size of the problem; (ii) invariant with respect to irrelevant problem transformations, such as row and column permutations; and (iii) independent of the scale of the problem. We refer to the paper for a complete description.\n",
"\n",
"### Example"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a1bc38fe",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"x1 (10, 40) \n",
" [[-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 6.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 6.00e-01 1.00e+00 1.75e+01 1.00e+00 2.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 1.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 7.00e-01 1.00e+00 5.10e+00 1.00e+00 2.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 3.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 9.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 5.00e-01 1.00e+00 1.30e+01 1.00e+00 2.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 2.00e-01 1.00e+00 9.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 8.00e-01 1.00e+00 3.40e+00 1.00e+00 2.00e-01\n",
" 1.00e+00 1.00e-01 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 7.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 6.00e-01 1.00e+00 3.80e+00 1.00e+00 2.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 8.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 7.00e-01 1.00e+00 3.30e+00 1.00e+00 2.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 3.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 1.00e+00 1.00e+00 5.70e+00 1.00e+00 1.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 6.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 8.00e-01 1.00e+00 6.80e+00 1.00e+00 2.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 4.00e-01 1.00e+00 6.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 8.00e-01 1.00e+00 1.40e+00 1.00e+00 1.00e-01\n",
" 1.00e+00 1.00e-01 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 5.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 5.00e-01 1.00e+00 7.60e+00 1.00e+00 1.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]]\n"
]
}
],
"source": [
"from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n",
"from miplearn.h5 import H5File\n",
"\n",
"# Build the extractor\n",
"ext = AlvLouWeh2017Extractor()\n",
"\n",
"# Open previously-created multiknapsack training data\n",
"with H5File(\"data/multiknapsack/00000.h5\") as h5:\n",
" # Extract and print variable features\n",
" x1 = ext.get_var_features(h5)\n",
" print(\"x1\", x1.shape, \"\\n\", x1.round(1))"
]
},
{
"cell_type": "markdown",
"id": "286c9927",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
"References\n",
"\n",
"* **Alvarez, Alejandro Marcos.** *Computational and theoretical synergies between linear optimization and supervised machine learning.* (2016). University of Liège.\n",
"* **Alvarez, Alejandro Marcos, Quentin Louveaux, and Louis Wehenkel.** *A machine learning-based approximation of strong branching.* INFORMS Journal on Computing 29.1 (2017): 185-195.\n",
"\n",
"</div>"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

docs/guide/primal.ipynb Normal file

@@ -0,0 +1,291 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "880cf4c7-d3c4-4b92-85c7-04a32264cdae",
"metadata": {},
"source": [
"# Primal Components\n",
"\n",
"In MIPLearn, a **primal component** is a class that uses machine learning to predict a (potentially partial) assignment of values to the decision variables of the problem. Predicting high-quality primal solutions may be beneficial, as they allow the MIP solver to prune potentially large portions of the search space. Alternatively, if a proof of optimality is not required, the MIP solver can be used to complete the partial solution generated by the machine learning model and double-check its feasibility. MIPLearn supports both of these usage patterns.\n",
"\n",
"On this page, we describe the four primal components currently included in MIPLearn, which employ machine learning in different ways. Each component is highly configurable and accepts a user-provided machine learning model, which it uses for all predictions. Each component can also be configured to provide the solution to the solver in multiple ways, depending on whether a proof of optimality is required.\n",
"\n",
"## Primal component actions\n",
"\n",
"Before presenting the primal components themselves, we briefly discuss the three ways a solution may be provided to the solver. Each approach has benefits and limitations, which we also discuss in this section. All primal components can be configured to use any of the following approaches.\n",
"\n",
"The first approach is to provide the solution to the solver as a **warm start**. This is implemented by the class [SetWarmStart][SetWarmStart]. The main advantage is that this method maintains all optimality and feasibility guarantees of the MIP solver, while still providing significant performance benefits for various classes of problems. If the machine learning model is able to predict multiple solutions, it is also possible to set multiple warm starts. In this case, the solver evaluates each warm start, discards the infeasible ones, then proceeds with the one that has the best objective value. The main disadvantage of this approach, compared to the next two, is that it provides relatively modest speedups for most problem classes, and no speedup at all for many others, even when the machine learning predictions are 100% accurate.\n",
"\n",
"[SetWarmStart]: ../../api/components/#miplearn.components.primal.actions.SetWarmStart\n",
"\n",
"The second approach is to **fix the decision variables** to their predicted values, then solve a restricted optimization problem on the remaining variables. This approach is implemented by the class `FixVariables`. The main advantage is its potential speedup: if machine learning can accurately predict values for a significant portion of the decision variables, then the MIP solver can typically complete the solution in a small fraction of the time it would take to find the same solution from scratch. The main disadvantage of this approach is that it loses optimality guarantees; that is, the complete solution found by the MIP solver may no longer be globally optimal. Also, if the machine learning predictions are not sufficiently accurate, there might not even be a feasible assignment for the variables that were left free.\n",
"\n",
"Finally, the third approach, which tries to strike a balance between the two previous ones, is to **enforce proximity** to a given solution. This strategy is implemented by the class `EnforceProximity`. More precisely, given values $\\bar{x}_1,\\ldots,\\bar{x}_n$ for a subset of binary decision variables $x_1,\\ldots,x_n$, this approach adds the constraint\n",
"\n",
"$$\n",
"\\sum_{i : \\bar{x}_i=0} x_i + \\sum_{i : \\bar{x}_i=1} \\left(1 - x_i\\right) \\leq k,\n",
"$$\n",
"to the problem, where $k$ is a user-defined parameter, which indicates how many of the predicted variables are allowed to deviate from the machine learning suggestion. The main advantage of this approach, compared to fixing variables, is its tolerance to lower-quality machine learning predictions. Its main disadvantage is that it typically leads to smaller speedups, especially for larger values of $k$. This approach also loses optimality guarantees.\n",
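"\n",
"This proximity constraint could be added to a gurobipy model as sketched below (a hand-written illustration; the actual `EnforceProximity` implementation may differ):\n",
"\n",
"```python\n",
"import gurobipy as gp\n",
"\n",
"def add_proximity_constraint(model, x, x_bar, k):\n",
"    # x: list of binary gurobipy variables; x_bar: predicted 0/1 values.\n",
"    # At most k variables may deviate from the prediction.\n",
"    deviations = gp.quicksum(\n",
"        xi if vi == 0 else (1 - xi) for xi, vi in zip(x, x_bar)\n",
"    )\n",
"    model.addConstr(deviations <= k)\n",
"```\n",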
"\n",
"## Memorizing primal component\n",
"\n",
"A simple machine learning strategy for predicting primal solutions is to memorize all distinct solutions seen during training, then predict, at inference time, which of those memorized solutions are most likely to be feasible and to provide a good objective value for the current instance. The most promising solutions may alternatively be combined into a single partial solution, which is then provided to the MIP solver. Both variations of this strategy are implemented by the `MemorizingPrimalComponent` class. Note that this strategy is only applicable if the problem size, and indeed the meaning of each decision variable, remains the same across problem instances.\n",
"\n",
"More precisely, let $I_1,\\ldots,I_n$ be the training instances, and let $\\bar{x}^1,\\ldots,\\bar{x}^n$ be their respective optimal solutions. Given a new instance $I_{n+1}$, `MemorizingPrimalComponent` expects a user-provided binary classifier that assigns (through the `predict_proba` method, following scikit-learn's conventions) a score $\\delta_i$ to each solution $\\bar{x}^i$, such that solutions with higher scores are more likely to be good solutions for $I_{n+1}$. The features provided to the classifier are the instance features computed by a user-provided extractor. Given these scores, the component then performs one of the following two actions, as decided by the user:\n",
"\n",
"1. Selects the top $k$ solutions with the highest scores and provides them to the solver; this is implemented by `SelectTopSolutions`, and it is typically used with the `SetWarmStart` action.\n",
"\n",
"2. Merges the top $k$ solutions into a single partial solution, then provides it to the solver. This is implemented by `MergeTopSolutions`. More precisely, suppose that the machine learning classifier ordered the solutions in the sequence $\\bar{x}^{i_1},\\ldots,\\bar{x}^{i_n}$, with the most promising solutions appearing first, and with ties broken arbitrarily. The component starts by keeping only the $k$ most promising solutions $\\bar{x}^{i_1},\\ldots,\\bar{x}^{i_k}$. Then it computes, for each binary decision variable $x_l$, its average assigned value $\\tilde{x}_l$:\n",
"$$\n",
" \\tilde{x}_l = \\frac{1}{k} \\sum_{j=1}^k \\bar{x}^{i_j}_l.\n",
"$$\n",
" Finally, the component constructs a merged solution $y$, defined as:\n",
"$$\n",
"    y_l = \\begin{cases}\n",
" 0 & \\text{ if } \\tilde{x}_l \\le \\theta_0 \\\\\n",
" 1 & \\text{ if } \\tilde{x}_l \\ge \\theta_1 \\\\\n",
" \\square & \\text{otherwise,}\n",
" \\end{cases}\n",
"$$\n",
" where $\\theta_0$ and $\\theta_1$ are user-specified parameters, and where $\\square$ indicates that the variable is left undefined. The solution $y$ is then provided to the solver using any of the three approaches described in the previous section.\n",
"\n",
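"The merging rule above can be sketched in plain Python (an illustration only, not the actual `MergeTopSolutions` code):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def merge_solutions(solutions, k, theta0, theta1):\n",
"    # solutions: one 0/1 solution per row, ordered from most to\n",
"    # least promising; keep the top k rows and average them.\n",
"    x_tilde = np.asarray(solutions[:k], dtype=float).mean(axis=0)\n",
"    merged = np.full(x_tilde.shape, np.nan)  # nan = left undefined\n",
"    merged[x_tilde <= theta0] = 0.0\n",
"    merged[x_tilde >= theta1] = 1.0\n",
"    return merged\n",
"```\n",
"\n",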
"The above specification of `MemorizingPrimalComponent` is meant to be as general as possible. Simpler strategies can be implemented by configuring the component in specific ways. For example, a simple approach employed in the literature is to collect all optimal solutions, then provide the entire list to the solver as warm starts, without any filtering or post-processing. This strategy can be implemented with `MemorizingPrimalComponent` by using a model that returns a constant score for all solutions (e.g. [scikit-learn's DummyClassifier][DummyClassifier]), then selecting the top $n$ (instead of $k$) solutions; see the example below. Another simple approach is to take the solution of the most similar training instance and use it, by itself, as a warm start. This can be implemented with a model that computes distances between the current instance and the training ones (e.g. [scikit-learn's KNeighborsClassifier][KNeighborsClassifier]), then selecting the solution of the nearest one; see also the example below. More complex strategies can, of course, also be configured.\n",
"\n",
"[DummyClassifier]: https://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html\n",
"[KNeighborsClassifier]: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html\n",
"\n",
"### Examples"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "253adbf4",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from sklearn.dummy import DummyClassifier\n",
"from sklearn.neighbors import KNeighborsClassifier\n",
"\n",
"from miplearn.components.primal.actions import (\n",
" SetWarmStart,\n",
" FixVariables,\n",
" EnforceProximity,\n",
")\n",
"from miplearn.components.primal.mem import (\n",
" MemorizingPrimalComponent,\n",
" SelectTopSolutions,\n",
" MergeTopSolutions,\n",
")\n",
"from miplearn.extractors.dummy import DummyExtractor\n",
"from miplearn.extractors.fields import H5FieldsExtractor\n",
"\n",
"# Configures a memorizing primal component that collects\n",
"# all distinct solutions seen during training and provides\n",
"# them to the solver without any filtering or post-processing.\n",
"comp1 = MemorizingPrimalComponent(\n",
" clf=DummyClassifier(),\n",
" extractor=DummyExtractor(),\n",
" constructor=SelectTopSolutions(1_000_000),\n",
" action=SetWarmStart(),\n",
")\n",
"\n",
"# Configures a memorizing primal component that finds the\n",
"# training instance with the closest objective function, then\n",
"# fixes the decision variables to the values they assumed\n",
"# at the optimal solution for that instance.\n",
"comp2 = MemorizingPrimalComponent(\n",
" clf=KNeighborsClassifier(n_neighbors=1),\n",
" extractor=H5FieldsExtractor(\n",
" instance_fields=[\"static_var_obj_coeffs\"],\n",
" ),\n",
" constructor=SelectTopSolutions(1),\n",
" action=FixVariables(),\n",
")\n",
"\n",
"# Configures a memorizing primal component that finds the distinct\n",
"# solutions to the 10 most similar training problem instances,\n",
"# selects the 3 solutions that were most often optimal to these\n",
"# training instances, combines them into a single partial solution,\n",
"# then enforces proximity, allowing at most 3 variables to deviate\n",
"# from the machine learning suggestion.\n",
"comp3 = MemorizingPrimalComponent(\n",
" clf=KNeighborsClassifier(n_neighbors=10),\n",
" extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n",
" constructor=MergeTopSolutions(k=3, thresholds=[0.25, 0.75]),\n",
" action=EnforceProximity(3),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f194a793",
"metadata": {},
"source": [
"## Independent vars primal component\n",
"\n",
"Instead of memorizing previously-seen primal solutions, it is also natural to use machine learning models to directly predict the values of the decision variables, constructing a solution from scratch. This approach has the benefit of potentially constructing novel high-quality solutions, never observed in the training data. Two variations of this strategy are supported by MIPLearn: (i) predicting the values of the decision variables independently, using multiple ML models; or (ii) predicting the values jointly, with a single model. We describe the first variation in this section, and the second variation in the next section.\n",
"\n",
"Let $I_1,\\ldots,I_n$ be the training instances, and let $\\bar{x}^1,\\ldots,\\bar{x}^n$ be their respective optimal solutions. For each binary decision variable $x_j$, the component `IndependentVarsPrimalComponent` creates a copy of a user-provided binary classifier and trains it to predict the optimal value of $x_j$, using $\\bar{x}^1_j,\\ldots,\\bar{x}^n_j$ as training labels. The features provided to the model are the variable features computed by a user-provided extractor. At inference time, the component uses these trained classifiers to construct a solution, which it provides to the solver using one of the available actions.\n",
"\n",
"Three issues often arise in practice when using this approach:\n",
"\n",
"1. For certain binary variables $x_j$, it is frequently the case that the optimal value is either always zero or always one in the training dataset, which poses problems for some standard scikit-learn classifiers, since they do not expect training data containing a single class. The wrapper `SingleClassFix` can be used to fix this issue (see example below).\n",
"2. It is also frequently the case that a machine learning classifier can reliably predict the values of only some of the variables, not all of them. In this situation, instead of computing a complete primal solution, it may be more beneficial to construct a partial solution containing values only for the variables for which the ML model made a high-confidence prediction. The meta-classifier `MinProbabilityClassifier` can be used for this purpose. It asks the base classifier for the probability of each value being zero or one (using the `predict_proba` method) and erases from the primal solution all values whose probabilities fall below a given threshold.\n",
"3. To make multiple copies of the provided ML classifier, MIPLearn uses the standard `sklearn.base.clone` method, which may not be suitable for classifiers from other frameworks. To handle this, it is possible to override the clone function using the `clone_fn` constructor argument.\n",
"\n",
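"The probability-thresholding idea behind `MinProbabilityClassifier` can be sketched as follows (a simplified illustration, not the class itself):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def threshold_predictions(proba, thresholds):\n",
"    # proba: (n, 2) array of class probabilities, as returned by\n",
"    # predict_proba; keep a 0/1 prediction only when its probability\n",
"    # clears the threshold, otherwise leave the entry undefined (nan).\n",
"    pred = np.full(len(proba), np.nan)\n",
"    pred[proba[:, 0] >= thresholds[0]] = 0.0\n",
"    pred[proba[:, 1] >= thresholds[1]] = 1.0\n",
"    return pred\n",
"```\n",
"\n",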
"### Examples"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3fc0b5d1",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from sklearn.linear_model import LogisticRegression\n",
"from miplearn.classifiers.minprob import MinProbabilityClassifier\n",
"from miplearn.classifiers.singleclass import SingleClassFix\n",
"from miplearn.components.primal.indep import IndependentVarsPrimalComponent\n",
"from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n",
"from miplearn.components.primal.actions import SetWarmStart\n",
"\n",
"# Configures a primal component that independently predicts the value of each\n",
"# binary variable using logistic regression and provides it to the solver as\n",
"# warm start. Erases predictions with probability less than 99%; applies\n",
"# single-class fix; and uses AlvLouWeh2017 features.\n",
"comp = IndependentVarsPrimalComponent(\n",
" base_clf=SingleClassFix(\n",
" MinProbabilityClassifier(\n",
" base_clf=LogisticRegression(),\n",
" thresholds=[0.99, 0.99],\n",
" ),\n",
" ),\n",
" extractor=AlvLouWeh2017Extractor(),\n",
" action=SetWarmStart(),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "45107a0c",
"metadata": {},
"source": [
"## Joint vars primal component\n",
"In the previous subsection, we used multiple machine learning models to independently predict the values of the binary decision variables. When these values are correlated, an alternative approach is to jointly predict the values of all binary variables using a single machine learning model. This strategy is implemented by `JointVarsPrimalComponent`. Compared to the previous ones, this component is much more straightforward. It simply extracts instance features, using the user-provided feature extractor, then directly trains the user-provided binary classifier (using its `fit` method), without making any copies. The trained classifier is then used to predict entire solutions (using its `predict` method), which are provided to the solver using one of the previously discussed methods. In the example below, we illustrate the usage of this component with a simple feed-forward neural network.\n",
"\n",
"`JointVarsPrimalComponent` can also be used to implement strategies that employ multiple machine learning models, but not independently. For example, a common strategy in multioutput prediction is building a *classifier chain*. In this approach, the first decision variable is predicted using the instance features alone, while the $n$-th decision variable is predicted using the instance features plus the predicted values of the $n-1$ previous variables. This can be easily implemented using scikit-learn's `ClassifierChain` estimator, as shown in the example below.\n",
"\n",
"### Examples"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cf9b52dd",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from sklearn.multioutput import ClassifierChain\n",
"from sklearn.neural_network import MLPClassifier\n",
"from miplearn.components.primal.joint import JointVarsPrimalComponent\n",
"from miplearn.extractors.fields import H5FieldsExtractor\n",
"from miplearn.components.primal.actions import SetWarmStart\n",
"\n",
"# Configures a primal component that uses a feedforward neural network\n",
"# to jointly predict the values of the binary variables, based on the\n",
"# objective cost function, and provides the solution to the solver as\n",
"# a warm start.\n",
"comp = JointVarsPrimalComponent(\n",
" clf=MLPClassifier(),\n",
" extractor=H5FieldsExtractor(\n",
" instance_fields=[\"static_var_obj_coeffs\"],\n",
" ),\n",
" action=SetWarmStart(),\n",
")\n",
"\n",
"# Configures a primal component that uses a chain of logistic regression\n",
"# models to jointly predict the values of the binary variables, based on\n",
"# the objective function.\n",
"comp = JointVarsPrimalComponent(\n",
" clf=ClassifierChain(SingleClassFix(LogisticRegression())),\n",
" extractor=H5FieldsExtractor(\n",
" instance_fields=[\"static_var_obj_coeffs\"],\n",
" ),\n",
" action=SetWarmStart(),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "dddf7be4",
"metadata": {},
"source": [
"## Expert primal component\n",
"\n",
"Before spending time and effort choosing a machine learning strategy and tweaking its parameters, it is usually a good idea to evaluate what the performance impact of the model would be if its predictions were 100% accurate. This is especially important for the prediction of warm starts, since they are not always very beneficial. To simplify this task, MIPLearn provides `ExpertPrimalComponent`, a component that simply loads the optimal solution from the HDF5 file, assuming it has already been computed, then directly provides it to the solver using one of the available methods. This component is useful in benchmarks, to evaluate how close the machine learning components come to the best theoretical performance.\n",
"\n",
"### Example"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "9e2e81b9",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from miplearn.components.primal.expert import ExpertPrimalComponent\n",
"from miplearn.components.primal.actions import SetWarmStart\n",
"\n",
"# Configures an expert primal component, which reads a pre-computed\n",
"# optimal solution from the HDF5 file and provides it to the solver\n",
"# as warm start.\n",
"comp = ExpertPrimalComponent(action=SetWarmStart())"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

docs/guide/problems.ipynb Normal file

File diff suppressed because it is too large

docs/guide/solvers.ipynb Normal file

@@ -0,0 +1,244 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "9ec1907b-db93-4840-9439-c9005902b968",
"metadata": {},
"source": [
"# Learning Solver\n",
"\n",
"On previous pages, we discussed various components of the MIPLearn framework, including training data collectors, feature extractors, and individual machine learning components. On this page, we introduce **LearningSolver**, the main class of the framework, which integrates all the aforementioned components into a cohesive whole. Using **LearningSolver** involves three steps: (i) configuring the solver; (ii) training the ML components; and (iii) solving new MIP instances. In the following, we describe each of these steps, then conclude with a complete runnable example.\n",
"\n",
"### Configuring the solver\n",
"\n",
"**LearningSolver** is composed of multiple individual machine learning components, each targeting a different part of the solution process or implementing a different machine learning strategy. This architecture allows strategies to be easily enabled, disabled, or customized, making the framework flexible. By default, no components are provided and **LearningSolver** is equivalent to a traditional MIP solver. To specify additional components, the `components` constructor argument may be used:\n",
"\n",
"```python\n",
"solver = LearningSolver(\n",
" components=[\n",
" comp1,\n",
" comp2,\n",
" comp3,\n",
" ]\n",
")\n",
"```\n",
"\n",
"In this example, three components `comp1`, `comp2` and `comp3` are provided. The strategies implemented by these components are applied sequentially when solving the problem. For example, `comp1` and `comp2` could fix a subset of decision variables, while `comp3` constructs a warm start for the remaining problem.\n",
"\n",
"### Training and solving new instances\n",
"\n",
"Once a solver is configured, its ML components need to be trained. This can be achieved with the `solver.fit` method, as illustrated below. The method accepts a list of HDF5 files and trains each individual component sequentially. After training, new instances can be solved using `solver.optimize`, which returns a dictionary of statistics collected by each component, such as the number of variables fixed.\n",
"\n",
"```python\n",
"# Build instances\n",
"train_data = ...\n",
"test_data = ...\n",
"\n",
"# Collect training data\n",
"bc = BasicCollector()\n",
"bc.collect(train_data, build_model)\n",
"\n",
"# Build solver\n",
"solver = LearningSolver(...)\n",
"\n",
"# Train components\n",
"solver.fit(train_data)\n",
"\n",
"# Solve a new test instance\n",
"stats = solver.optimize(test_data[0], build_model)\n",
"\n",
"```\n",
"\n",
"### Complete example\n",
"\n",
"In the example below, we illustrate the usage of **LearningSolver** by building instances of the Traveling Salesman Problem, collecting training data, training the ML components, then solving a new instance."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "92b09b98",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Restricted license - for non-production use only - expires 2026-11-23\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
"\n",
"Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
"Model fingerprint: 0x6ddcd141\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [4e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Presolve time: 0.00s\n",
"Presolved: 10 rows, 45 columns, 90 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 6.3600000e+02 1.700000e+01 0.000000e+00 0s\n",
" 15 2.7610000e+03 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 15 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 2.761000000e+03\n",
"\n",
"User-callback calls 56, time in user-callback 0.00 sec\n",
"Set parameter PreCrush to value 1\n",
"Set parameter LazyConstraints to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 20 threads\n",
"\n",
"Non-default parameters:\n",
"PreCrush 1\n",
"LazyConstraints 1\n",
"\n",
"Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
"Model fingerprint: 0x74ca3d0a\n",
"Variable types: 0 continuous, 45 integer (45 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [4e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"\n",
"User MIP start produced solution with objective 2796 (0.00s)\n",
"Loaded user MIP start with objective 2796\n",
"\n",
"Presolve time: 0.00s\n",
"Presolved: 10 rows, 45 columns, 90 nonzeros\n",
"Variable types: 0 continuous, 45 integer (45 binary)\n",
"\n",
"Root relaxation: objective 2.761000e+03, 14 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 2761.00000 0 - 2796.00000 2761.00000 1.25% - 0s\n",
"\n",
"Cutting planes:\n",
" Lazy constraints: 3\n",
"\n",
"Explored 1 nodes (14 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 20 (of 20 available processors)\n",
"\n",
"Solution count 1: 2796 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 2.796000000000e+03, best bound 2.796000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 114, time in user-callback 0.00 sec\n"
]
}
],
"source": [
"import random\n",
"\n",
"import numpy as np\n",
"from scipy.stats import uniform, randint\n",
"from sklearn.linear_model import LogisticRegression\n",
"\n",
"from miplearn.classifiers.minprob import MinProbabilityClassifier\n",
"from miplearn.classifiers.singleclass import SingleClassFix\n",
"from miplearn.collectors.basic import BasicCollector\n",
"from miplearn.components.primal.actions import SetWarmStart\n",
"from miplearn.components.primal.indep import IndependentVarsPrimalComponent\n",
"from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n",
"from miplearn.io import write_pkl_gz\n",
"from miplearn.problems.tsp import (\n",
" TravelingSalesmanGenerator,\n",
" build_tsp_model_gurobipy,\n",
")\n",
"from miplearn.solvers.learning import LearningSolver\n",
"\n",
"# Set random seed to make example reproducible.\n",
"random.seed(42)\n",
"np.random.seed(42)\n",
"\n",
"# Generate a few instances of the traveling salesman problem.\n",
"data = TravelingSalesmanGenerator(\n",
" n=randint(low=10, high=11),\n",
" x=uniform(loc=0.0, scale=1000.0),\n",
" y=uniform(loc=0.0, scale=1000.0),\n",
" gamma=uniform(loc=0.90, scale=0.20),\n",
" fix_cities=True,\n",
" round=True,\n",
").generate(50)\n",
"\n",
"# Save instance data to data/tsp/00000.pkl.gz, data/tsp/00001.pkl.gz, ...\n",
"all_data = write_pkl_gz(data, \"data/tsp\")\n",
"\n",
"# Split train/test data\n",
"train_data = all_data[:40]\n",
"test_data = all_data[40:]\n",
"\n",
"# Collect training data\n",
"bc = BasicCollector()\n",
"bc.collect(train_data, build_tsp_model_gurobipy, n_jobs=4)\n",
"\n",
"# Build learning solver\n",
"solver = LearningSolver(\n",
" components=[\n",
" IndependentVarsPrimalComponent(\n",
" base_clf=SingleClassFix(\n",
" MinProbabilityClassifier(\n",
" base_clf=LogisticRegression(),\n",
" thresholds=[0.95, 0.95],\n",
" ),\n",
" ),\n",
" extractor=AlvLouWeh2017Extractor(),\n",
" action=SetWarmStart(),\n",
" )\n",
" ]\n",
")\n",
"\n",
"# Train ML models\n",
"solver.fit(train_data)\n",
"\n",
"# Solve a test instance\n",
"solver.optimize(test_data[0], build_tsp_model_gurobipy);"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e27d2cbd-5341-461d-bbc1-8131aee8d949",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,28 +0,0 @@
# MIPLearn
**MIPLearn** is an extensible framework for **Learning-Enhanced Mixed-Integer Optimization**, an approach targeted at discrete optimization problems that need to be repeatedly solved with only minor changes to input data.
The package uses Machine Learning (ML) to automatically identify patterns in previously solved instances of the problem, or in the solution process itself, and produces hints that can guide a conventional MIP solver towards the optimal solution faster. For particular classes of problems, this approach has been shown to provide significant performance benefits (see [benchmark results](problems.md) and [references](about.md#references) for more details).
### Features
* **MIPLearn proposes a flexible problem specification format,** which allows users to describe their particular optimization problems to a Learning-Enhanced MIP solver, both from the MIP perspective and from the ML perspective, without making any assumptions on the problem being modeled, the mathematical formulation of the problem, or ML encoding. While the format is very flexible, some constraints are enforced to ensure that it is usable by an actual solver.
* **MIPLearn provides a reference implementation of a *Learning-Enhanced Solver*,** which can use the above problem specification format to automatically predict, based on previously solved instances, a number of hints to accelerate MIP performance. Currently, the reference solver is able to predict: (i) partial solutions which are likely to work well as MIP starts; (ii) an initial set of lazy constraints to enforce; (iii) variable branching priorities to accelerate the exploration of the branch-and-bound tree; (iv) the optimal objective value based on the solution to the LP relaxation. The usage of the solver is very straightforward. The most suitable ML models are automatically selected, trained, cross-validated and applied to the problem with no user intervention.
* **MIPLearn provides a set of benchmark problems and random instance generators,** covering applications from different domains, which can be used to quickly evaluate new learning-enhanced MIP techniques in a measurable and reproducible way.
* **MIPLearn is customizable and extensible**. For MIP and ML researchers exploring new techniques to accelerate MIP performance based on historical data, each component of the reference solver can be individually replaced, extended or customized.
### Documentation
* [Installation and typical usage](usage.md)
* [Benchmark utilities](benchmark.md)
* [Benchmark problems, challenges and results](problems.md)
* [Customizing the solver](customization.md)
* [License, authors, references and acknowledgments](about.md)
### Source Code
* [https://github.com/ANL-CEEESA/MIPLearn](https://github.com/ANL-CEEESA/MIPLearn)

68
docs/index.rst Normal file

@@ -0,0 +1,68 @@
MIPLearn
========
**MIPLearn** is an extensible framework for solving discrete optimization problems using a combination of Mixed-Integer Programming (MIP) and Machine Learning (ML). MIPLearn uses ML methods to automatically identify patterns in previously solved instances of the problem, then uses these patterns to accelerate the performance of conventional state-of-the-art MIP solvers such as CPLEX, Gurobi or XPRESS.
Unlike pure ML methods, MIPLearn is not only able to find high-quality solutions to discrete optimization problems, but it can also prove the optimality and feasibility of these solutions. Unlike conventional MIP solvers, MIPLearn can take full advantage of very specific observations that happen to be true in a particular family of instances (such as the observation that a particular constraint is typically redundant, or that a particular variable typically assumes a certain value). For certain classes of problems, this approach may provide significant performance benefits.
Contents
--------
.. toctree::
   :maxdepth: 1
   :caption: Tutorials
   :numbered: 2

   tutorials/getting-started-pyomo
   tutorials/getting-started-gurobipy
   tutorials/getting-started-jump
   tutorials/cuts-gurobipy

.. toctree::
   :maxdepth: 2
   :caption: User Guide
   :numbered: 2

   guide/problems
   guide/collectors
   guide/features
   guide/primal
   guide/solvers

.. toctree::
   :maxdepth: 1
   :caption: Python API Reference
   :numbered: 2

   api/problems
   api/collectors
   api/components
   api/solvers
   api/helpers
Authors
-------
- **Alinson S. Xavier** (Argonne National Laboratory)
- **Feng Qiu** (Argonne National Laboratory)
- **Xiaoyi Gu** (Georgia Institute of Technology)
- **Berkay Becu** (Georgia Institute of Technology)
- **Santanu S. Dey** (Georgia Institute of Technology)
Acknowledgments
---------------
* Based upon work supported by **Laboratory Directed Research and Development** (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy.
* Based upon work supported by the **U.S. Department of Energy Advanced Grid Modeling Program**.
Citing MIPLearn
---------------
If you use MIPLearn in your research (either the solver or the included problem generators), we kindly request that you cite the package as follows:
* **Alinson S. Xavier, Feng Qiu, Xiaoyi Gu, Berkay Becu, Santanu S. Dey.** *MIPLearn: An Extensible Framework for Learning-Enhanced Optimization (Version 0.4)*. Zenodo (2024). DOI: https://doi.org/10.5281/zenodo.4287567
If you use MIPLearn in the field of power systems optimization, we kindly request that you cite the reference below, in which the main techniques implemented in MIPLearn were first developed:
* **Alinson S. Xavier, Feng Qiu, Shabbir Ahmed.** *Learning to Solve Large-Scale Unit Commitment Problems.* INFORMS Journal on Computing (2020). DOI: https://doi.org/10.1287/ijoc.2020.0976


@@ -1,8 +0,0 @@
MathJax.Hub.Config({
"tex2jax": { inlineMath: [ [ '$', '$' ] ] }
});
MathJax.Hub.Config({
config: ["MMLorHTML.js"],
jax: ["input/TeX", "output/HTML-CSS", "output/NativeMML"],
extensions: ["MathMenu.js", "MathZoom.js"]
});

35
docs/make.bat Normal file

@@ -0,0 +1,35 @@
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.https://www.sphinx-doc.org/
exit /b 1
)
if "%1" == "" goto help
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
:end
popd


@@ -1,167 +0,0 @@
# Benchmark Problems, Challenges and Results
MIPLearn provides a selection of benchmark problems and random instance generators, covering applications from different fields, that can be used to evaluate new learning-enhanced MIP techniques in a measurable and reproducible way. On this page, we describe these problems and the included instance generators, and present some benchmark results for `LearningSolver` with default parameters.
## Preliminaries
### Benchmark challenges
When evaluating the performance of a conventional MIP solver, *benchmark sets*, such as MIPLIB and TSPLIB, are typically used. The performance of newly proposed solvers or solution techniques is typically measured as the average (or total) running time the solver takes to solve the entire benchmark set. For Learning-Enhanced MIP solvers, it is also necessary to specify which instances the solver should be trained on (the *training instances*) before solving the actual set of instances we are interested in (the *test instances*). If the training instances are very similar to the test instances, we would expect a Learning-Enhanced Solver to present stronger performance benefits.
In MIPLearn, each optimization problem comes with a set of **benchmark challenges**, which specify how the training and test instances should be generated. The first challenges are typically easier, in the sense that training and test instances are very similar. Later challenges gradually make the sets more distinct, and therefore harder to learn from.
### Baseline results
To illustrate the performance of `LearningSolver`, and to set a baseline for newly proposed techniques, we present on this page, for each benchmark challenge, a small set of computational results measuring the solution speed and solution quality of the solver with default parameters. For more detailed computational studies, see [references](about.md#references). We compare three solvers:
* **baseline:** Gurobi 9.0 with default settings (a conventional state-of-the-art MIP solver)
* **ml-exact:** `LearningSolver` with default settings, using Gurobi 9.0 as internal MIP solver
* **ml-heuristic:** Same as above, but with `mode="heuristic"`
All experiments presented here were performed on a Linux server (Ubuntu Linux 18.04 LTS) with Intel Xeon Gold 6230s (2 processors, 40 cores, 80 threads) and 256 GB RAM (DDR4, 2933 MHz). All solvers were restricted to use 4 threads, with no time limits, and 10 instances were solved simultaneously.
## Maximum Weight Stable Set Problem
### Problem definition
Given a simple undirected graph $G=(V,E)$ and weights $w \in \mathbb{R}^V$, the problem is to find a stable set $S \subseteq V$ that maximizes $\sum_{v \in S} w_v$. We recall that a subset $S \subseteq V$ is a *stable set* if no two vertices of $S$ are adjacent. This is one of Karp's 21 NP-complete problems.
### Random instance generator
The class `MaxWeightStableSetGenerator` can be used to generate random instances of this problem, with user-specified probability distributions. When the constructor parameter `fix_graph=True` is provided, one random Erdős-Rényi graph $G_{n,p}$ is generated in the constructor, where $n$ and $p$ are sampled from the user-provided probability distributions `n` and `p`. To generate each instance, the generator independently samples each $w_v$ from the user-provided probability distribution `w`. When `fix_graph=False`, a new random graph is generated for each instance, while the remaining parameters are sampled in the same way.
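The `fix_graph=True` behavior described above can be sketched as follows (a pure-Python illustration, not the package's actual implementation; `sample_instance` is a hypothetical helper):

```python
import random

random.seed(42)

# One Erdos-Renyi graph G(n, p), generated once in the constructor.
n, p = 200, 0.05
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < p]

# Each call resamples only the vertex weights w_v ~ U(100, 150);
# the graph itself stays fixed across instances.
def sample_instance():
    return [random.uniform(100.0, 150.0) for _ in range(n)]

w1 = sample_instance()
w2 = sample_instance()
```

With `fix_graph=False`, the edge list above would instead be regenerated inside `sample_instance`.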
### Challenge A
* Fixed random Erdős-Rényi graph $G_{n,p}$ with $n=200$ and $p=5\%$
* Random vertex weights $w_v \sim U(100, 150)$
* 500 training instances, 50 test instances
```python
MaxWeightStableSetGenerator(w=uniform(loc=100., scale=50.),
n=randint(low=200, high=201),
p=uniform(loc=0.05, scale=0.0),
fix_graph=True)
```
![alt](figures/benchmark_stab_a.png)
## Traveling Salesman Problem
### Problem definition
Given a list of cities and the distance between each pair of cities, the problem asks for the
shortest route starting at the first city, visiting each other city exactly once, then returning
to the first city. This problem is a generalization of the Hamiltonian cycle problem, one of Karp's
21 NP-complete problems.
### Random problem generator
The class `TravelingSalesmanGenerator` can be used to generate random instances of this
problem. Initially, the generator creates $n$ cities $(x_1,y_1),\ldots,(x_n,y_n) \in \mathbb{R}^2$,
where $n, x_i$ and $y_i$ are sampled independently from the provided probability distributions `n`,
`x` and `y`. For each pair of cities $(i,j)$, the distance $d_{i,j}$ between them is set to:
$$
d_{i,j} = \gamma_{i,j} \sqrt{(x_i-x_j)^2 + (y_i - y_j)^2}
$$
where $\gamma_{i,j}$ is sampled from the distribution `gamma`.
If `fix_cities=True` is provided, the list of cities is kept the same for all generated instances.
The $\gamma$ values, and therefore also the distances, are still different.
By default, all distances $d_{i,j}$ are rounded to the nearest integer. If `round=False`
is provided, this rounding will be disabled.
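The distance computation described above can be sketched with numpy as follows (illustrative only, not the actual `TravelingSalesmanGenerator` code; the city count and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample n city coordinates, as the `x` and `y` distributions would.
n = 5
x = rng.uniform(0.0, 1000.0, size=n)
y = rng.uniform(0.0, 1000.0, size=n)

# Sample one jitter factor per city pair, as the `gamma` distribution would.
gamma = rng.uniform(0.95, 1.05, size=(n, n))

# d[i, j] = gamma[i, j] * Euclidean distance, rounded (round=True).
dx = x[:, None] - x[None, :]
dy = y[:, None] - y[None, :]
d = np.round(gamma * np.sqrt(dx**2 + dy**2))
```

Note that this naive sketch samples $\gamma_{i,j}$ and $\gamma_{j,i}$ independently, so `d` is not exactly symmetric; the actual generator may treat each unordered pair once.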
### Challenge A
* Fixed list of 350 cities in the $[0, 1000]^2$ square
* $\gamma_{i,j} \sim U(0.95, 1.05)$
* 500 training instances, 50 test instances
```python
TravelingSalesmanGenerator(x=uniform(loc=0.0, scale=1000.0),
y=uniform(loc=0.0, scale=1000.0),
n=randint(low=350, high=351),
gamma=uniform(loc=0.95, scale=0.1),
fix_cities=True,
round=True,
)
```
![alt](figures/benchmark_tsp_a.png)
## Multidimensional 0-1 Knapsack Problem
### Problem definition
Given a set of $n$ items and $m$ types of resources (also called *knapsacks*), the problem is to find a subset of items that maximizes profit without consuming more resources than are available. More precisely, the problem is:
\begin{align*}
\text{maximize}
& \sum_{j=1}^n p_j x_j
\\
\text{subject to}
& \sum_{j=1}^n w_{ij} x_j \leq b_i
& \forall i=1,\ldots,m \\
& x_j \in \{0,1\}
& \forall j=1,\ldots,n
\end{align*}
### Random instance generator
The class `MultiKnapsackGenerator` can be used to generate random instances of this problem. The number of items $n$ and knapsacks $m$ are sampled from the user-provided probability distributions `n` and `m`. The weights $w_{ij}$ are sampled independently from the provided distribution `w`. The capacity of knapsack $i$ is set to
$$
b_i = \alpha_i \sum_{j=1}^n w_{ij}
$$
where $\alpha_i$, the tightness ratio, is sampled from the provided probability
distribution `alpha`. To make the instances more challenging, the costs of the items
are linearly correlated to their average weights. More specifically, the price of each
item $j$ is set to:
$$
p_j = \sum_{i=1}^m \frac{w_{ij}}{m} + K u_j,
$$
where $K$, the correlation coefficient, and $u_j$, the correlation multiplier, are sampled
from the provided probability distributions `K` and `u`.
If `fix_w=True` is provided, then $w_{ij}$ are kept the same in all generated instances. This also implies that $n$ and $m$ are kept fixed. Although the prices and capacities are derived from $w_{ij}$, as long as `u` and `K` are not constants, the generated instances will still not be completely identical.
If a probability distribution `w_jitter` is provided, then item weights will be set to $w_{ij} \gamma_{ij}$ where $\gamma_{ij}$ is sampled from `w_jitter`. When combined with `fix_w=True`, this argument may be used to generate instances where the weight of each item is roughly the same, but not exactly identical, across all instances. The prices of the items and the capacities of the knapsacks will be calculated as above, but using these perturbed weights instead.
By default, all generated prices, weights and capacities are rounded to the nearest integer number. If `round=False` is provided, this rounding will be disabled.
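To make the construction concrete, the capacity and price formulas above can be sketched with numpy as follows (illustrative only; constant `alpha` and `K`, as in Challenge A below, are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 250, 10           # items, knapsacks
K, alpha = 500.0, 0.25   # correlation coefficient, tightness ratio (constants here)

# Weights w_ij, sampled from the `w` distribution.
w = rng.uniform(0.0, 1000.0, size=(m, n))

# Correlation multipliers u_j, sampled from the `u` distribution.
u = rng.uniform(0.0, 1.0, size=n)

# Capacities: b_i = alpha_i * sum_j w_ij
b = alpha * w.sum(axis=1)

# Prices: p_j = sum_i w_ij / m + K * u_j
p = w.mean(axis=0) + K * u
```

With `fix_w=True`, the matrix `w` would be sampled once and reused, while `u` (and hence `p`) would be resampled per instance.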
!!! note "References"
    * Fréville, Arnaud, and Gérard Plateau. *An efficient preprocessing procedure for the multidimensional 0-1 knapsack problem.* Discrete Applied Mathematics 49.1-3 (1994): 189-212.
    * Fréville, Arnaud. *The multidimensional 0-1 knapsack problem: An overview.* European Journal of Operational Research 155.1 (2004): 1-21.
### Challenge A
* 250 variables, 10 constraints, fixed weights
* $w \sim U(0, 1000), \gamma \sim U(0.95, 1.05)$
* $K = 500, u \sim U(0, 1), \alpha = 0.25$
* 500 training instances, 50 test instances
```python
MultiKnapsackGenerator(n=randint(low=250, high=251),
m=randint(low=10, high=11),
w=uniform(loc=0.0, scale=1000.0),
K=uniform(loc=500.0, scale=0.0),
u=uniform(loc=0.0, scale=1.0),
alpha=uniform(loc=0.25, scale=0.0),
fix_w=True,
w_jitter=uniform(loc=0.95, scale=0.1),
)
```
![alt](figures/benchmark_knapsack_a.png)


@@ -0,0 +1,637 @@
# This file is machine-generated - editing it directly is not advised
julia_version = "1.9.0"
manifest_format = "2.0"
project_hash = "acf9261f767ae18f2b4613fd5590ea6a33f31e10"
[[deps.ArgTools]]
uuid = "0dad84c5-d112-42e6-8d28-ef12dabb789f"
version = "1.1.1"
[[deps.Artifacts]]
uuid = "56f22d72-fd6d-98f1-02f0-08ddc0907c33"
[[deps.Base64]]
uuid = "2a0f44e3-6c83-55bd-87e4-b1978d98bd5f"
[[deps.BenchmarkTools]]
deps = ["JSON", "Logging", "Printf", "Profile", "Statistics", "UUIDs"]
git-tree-sha1 = "d9a9701b899b30332bbcb3e1679c41cce81fb0e8"
uuid = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
version = "1.3.2"
[[deps.Bzip2_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "19a35467a82e236ff51bc17a3a44b69ef35185a2"
uuid = "6e34b625-4abd-537c-b88f-471c36dfa7a0"
version = "1.0.8+0"
[[deps.Calculus]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "f641eb0a4f00c343bbc32346e1217b86f3ce9dad"
uuid = "49dc2e85-a5d0-5ad3-a950-438e2897f1b9"
version = "0.5.1"
[[deps.CodecBzip2]]
deps = ["Bzip2_jll", "Libdl", "TranscodingStreams"]
git-tree-sha1 = "2e62a725210ce3c3c2e1a3080190e7ca491f18d7"
uuid = "523fee87-0ab8-5b00-afb7-3ecf72e48cfd"
version = "0.7.2"
[[deps.CodecZlib]]
deps = ["TranscodingStreams", "Zlib_jll"]
git-tree-sha1 = "9c209fb7536406834aa938fb149964b985de6c83"
uuid = "944b1d66-785c-5afd-91f1-9de20f533193"
version = "0.7.1"
[[deps.CommonSubexpressions]]
deps = ["MacroTools", "Test"]
git-tree-sha1 = "7b8a93dba8af7e3b42fecabf646260105ac373f7"
uuid = "bbf7d656-a473-5ed7-a52c-81e309532950"
version = "0.3.0"
[[deps.Compat]]
deps = ["UUIDs"]
git-tree-sha1 = "7a60c856b9fa189eb34f5f8a6f6b5529b7942957"
uuid = "34da2185-b29b-5c13-b0c7-acf172513d20"
version = "4.6.1"
weakdeps = ["Dates", "LinearAlgebra"]
[deps.Compat.extensions]
CompatLinearAlgebraExt = "LinearAlgebra"
[[deps.CompilerSupportLibraries_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "e66e0078-7015-5450-92f7-15fbd957f2ae"
version = "1.0.2+0"
[[deps.Conda]]
deps = ["Downloads", "JSON", "VersionParsing"]
git-tree-sha1 = "e32a90da027ca45d84678b826fffd3110bb3fc90"
uuid = "8f4d0f93-b110-5947-807f-2305c1781a2d"
version = "1.8.0"
[[deps.DataAPI]]
git-tree-sha1 = "8da84edb865b0b5b0100c0666a9bc9a0b71c553c"
uuid = "9a962f9c-6df0-11e9-0e5d-c546b8b5ee8a"
version = "1.15.0"
[[deps.DataStructures]]
deps = ["Compat", "InteractiveUtils", "OrderedCollections"]
git-tree-sha1 = "d1fff3a548102f48987a52a2e0d114fa97d730f0"
uuid = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
version = "0.18.13"
[[deps.Dates]]
deps = ["Printf"]
uuid = "ade2ca70-3891-5945-98fb-dc099432e06a"
[[deps.DiffResults]]
deps = ["StaticArraysCore"]
git-tree-sha1 = "782dd5f4561f5d267313f23853baaaa4c52ea621"
uuid = "163ba53b-c6d8-5494-b064-1a9d43ac40c5"
version = "1.1.0"
[[deps.DiffRules]]
deps = ["IrrationalConstants", "LogExpFunctions", "NaNMath", "Random", "SpecialFunctions"]
git-tree-sha1 = "23163d55f885173722d1e4cf0f6110cdbaf7e272"
uuid = "b552c78f-8df3-52c6-915a-8e097449b14b"
version = "1.15.1"
[[deps.Distributions]]
deps = ["FillArrays", "LinearAlgebra", "PDMats", "Printf", "QuadGK", "Random", "SparseArrays", "SpecialFunctions", "Statistics", "StatsAPI", "StatsBase", "StatsFuns", "Test"]
git-tree-sha1 = "c72970914c8a21b36bbc244e9df0ed1834a0360b"
uuid = "31c24e10-a181-5473-b8eb-7969acd0382f"
version = "0.25.95"
[deps.Distributions.extensions]
DistributionsChainRulesCoreExt = "ChainRulesCore"
DistributionsDensityInterfaceExt = "DensityInterface"
[deps.Distributions.weakdeps]
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
DensityInterface = "b429d917-457f-4dbc-8f4c-0cc954292b1d"
[[deps.DocStringExtensions]]
deps = ["LibGit2"]
git-tree-sha1 = "2fb1e02f2b635d0845df5d7c167fec4dd739b00d"
uuid = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae"
version = "0.9.3"
[[deps.Downloads]]
deps = ["ArgTools", "FileWatching", "LibCURL", "NetworkOptions"]
uuid = "f43a241f-c20a-4ad4-852c-f6b1247861c6"
version = "1.6.0"
[[deps.DualNumbers]]
deps = ["Calculus", "NaNMath", "SpecialFunctions"]
git-tree-sha1 = "5837a837389fccf076445fce071c8ddaea35a566"
uuid = "fa6b7ba4-c1ee-5f82-b5fc-ecf0adba8f74"
version = "0.6.8"
[[deps.ExprTools]]
git-tree-sha1 = "c1d06d129da9f55715c6c212866f5b1bddc5fa00"
uuid = "e2ba6199-217a-4e67-a87a-7c52f15ade04"
version = "0.1.9"
[[deps.FileIO]]
deps = ["Pkg", "Requires", "UUIDs"]
git-tree-sha1 = "299dc33549f68299137e51e6d49a13b5b1da9673"
uuid = "5789e2e9-d7fb-5bc7-8068-2c6fae9b9549"
version = "1.16.1"
[[deps.FileWatching]]
uuid = "7b1f6079-737a-58dc-b8bc-7a2ca5c1b5ee"
[[deps.FillArrays]]
deps = ["LinearAlgebra", "Random", "SparseArrays", "Statistics"]
git-tree-sha1 = "589d3d3bff204bdd80ecc53293896b4f39175723"
uuid = "1a297f60-69ca-5386-bcde-b61e274b549b"
version = "1.1.1"
[[deps.ForwardDiff]]
deps = ["CommonSubexpressions", "DiffResults", "DiffRules", "LinearAlgebra", "LogExpFunctions", "NaNMath", "Preferences", "Printf", "Random", "SpecialFunctions"]
git-tree-sha1 = "00e252f4d706b3d55a8863432e742bf5717b498d"
uuid = "f6369f11-7733-5829-9624-2563aa707210"
version = "0.10.35"
[deps.ForwardDiff.extensions]
ForwardDiffStaticArraysExt = "StaticArrays"
[deps.ForwardDiff.weakdeps]
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
[[deps.Gurobi]]
deps = ["LazyArtifacts", "Libdl", "MathOptInterface"]
git-tree-sha1 = "22439b1c2bacb7d50ed0df7dbd10211e0b4cd379"
uuid = "2e9cd046-0924-5485-92f1-d5272153d98b"
version = "1.0.1"
[[deps.HDF5]]
deps = ["Compat", "HDF5_jll", "Libdl", "Mmap", "Random", "Requires", "UUIDs"]
git-tree-sha1 = "c73fdc3d9da7700691848b78c61841274076932a"
uuid = "f67ccb44-e63f-5c2f-98bd-6dc0ccc4ba2f"
version = "0.16.15"
[[deps.HDF5_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "LLVMOpenMP_jll", "LazyArtifacts", "LibCURL_jll", "Libdl", "MPICH_jll", "MPIPreferences", "MPItrampoline_jll", "MicrosoftMPI_jll", "OpenMPI_jll", "OpenSSL_jll", "TOML", "Zlib_jll", "libaec_jll"]
git-tree-sha1 = "3b20c3ce9c14aedd0adca2bc8c882927844bd53d"
uuid = "0234f1f7-429e-5d53-9886-15a909be8d59"
version = "1.14.0+0"
[[deps.HiGHS]]
deps = ["HiGHS_jll", "MathOptInterface", "PrecompileTools", "SparseArrays"]
git-tree-sha1 = "bbd4ab443dfac4c9d5c5b40dd45f598dfad2e26a"
uuid = "87dc4568-4c63-4d18-b0c0-bb2238e4078b"
version = "1.5.2"
[[deps.HiGHS_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl"]
git-tree-sha1 = "216e7198aeb256e7c7921ef2937d7e1e589ba6fd"
uuid = "8fd58aa0-07eb-5a78-9b36-339c94fd15ea"
version = "1.5.3+0"
[[deps.HypergeometricFunctions]]
deps = ["DualNumbers", "LinearAlgebra", "OpenLibm_jll", "SpecialFunctions"]
git-tree-sha1 = "84204eae2dd237500835990bcade263e27674a93"
uuid = "34004b35-14d8-5ef3-9330-4cdb6864b03a"
version = "0.3.16"
[[deps.InteractiveUtils]]
deps = ["Markdown"]
uuid = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
[[deps.IrrationalConstants]]
git-tree-sha1 = "630b497eafcc20001bba38a4651b327dcfc491d2"
uuid = "92d709cd-6900-40b7-9082-c6be49f344b6"
version = "0.2.2"
[[deps.JLD2]]
deps = ["FileIO", "MacroTools", "Mmap", "OrderedCollections", "Pkg", "Printf", "Reexport", "Requires", "TranscodingStreams", "UUIDs"]
git-tree-sha1 = "42c17b18ced77ff0be65957a591d34f4ed57c631"
uuid = "033835bb-8acc-5ee8-8aae-3f567f8a3819"
version = "0.4.31"
[[deps.JLLWrappers]]
deps = ["Preferences"]
git-tree-sha1 = "abc9885a7ca2052a736a600f7fa66209f96506e1"
uuid = "692b3bcd-3c85-4b1f-b108-f13ce0eb3210"
version = "1.4.1"
[[deps.JSON]]
deps = ["Dates", "Mmap", "Parsers", "Unicode"]
git-tree-sha1 = "31e996f0a15c7b280ba9f76636b3ff9e2ae58c9a"
uuid = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
version = "0.21.4"
[[deps.JuMP]]
deps = ["LinearAlgebra", "MathOptInterface", "MutableArithmetics", "OrderedCollections", "Printf", "SnoopPrecompile", "SparseArrays"]
git-tree-sha1 = "3e4a73edf2ca1bfe97f1fc86eb4364f95ef0fccd"
uuid = "4076af6c-e467-56ae-b986-b466b2749572"
version = "1.11.1"
[[deps.KLU]]
deps = ["LinearAlgebra", "SparseArrays", "SuiteSparse_jll"]
git-tree-sha1 = "764164ed65c30738750965d55652db9c94c59bfe"
uuid = "ef3ab10e-7fda-4108-b977-705223b18434"
version = "0.4.0"
[[deps.LLVMOpenMP_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "f689897ccbe049adb19a065c495e75f372ecd42b"
uuid = "1d63c593-3942-5779-bab2-d838dc0a180e"
version = "15.0.4+0"
[[deps.LazyArtifacts]]
deps = ["Artifacts", "Pkg"]
uuid = "4af54fe1-eca0-43a8-85a7-787d91b784e3"
[[deps.LibCURL]]
deps = ["LibCURL_jll", "MozillaCACerts_jll"]
uuid = "b27032c2-a3e7-50c8-80cd-2d36dbcbfd21"
version = "0.6.3"
[[deps.LibCURL_jll]]
deps = ["Artifacts", "LibSSH2_jll", "Libdl", "MbedTLS_jll", "Zlib_jll", "nghttp2_jll"]
uuid = "deac9b47-8bc7-5906-a0fe-35ac56dc84c0"
version = "7.84.0+0"
[[deps.LibGit2]]
deps = ["Base64", "NetworkOptions", "Printf", "SHA"]
uuid = "76f85450-5226-5b5a-8eaa-529ad045b433"
[[deps.LibSSH2_jll]]
deps = ["Artifacts", "Libdl", "MbedTLS_jll"]
uuid = "29816b5a-b9ab-546f-933c-edad1886dfa8"
version = "1.10.2+0"
[[deps.Libdl]]
uuid = "8f399da3-3557-5675-b5ff-fb832c97cbdb"
[[deps.LinearAlgebra]]
deps = ["Libdl", "OpenBLAS_jll", "libblastrampoline_jll"]
uuid = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
[[deps.LogExpFunctions]]
deps = ["DocStringExtensions", "IrrationalConstants", "LinearAlgebra"]
git-tree-sha1 = "c3ce8e7420b3a6e071e0fe4745f5d4300e37b13f"
uuid = "2ab3a3ac-af41-5b50-aa03-7779005ae688"
version = "0.3.24"
[deps.LogExpFunctions.extensions]
LogExpFunctionsChainRulesCoreExt = "ChainRulesCore"
LogExpFunctionsChangesOfVariablesExt = "ChangesOfVariables"
LogExpFunctionsInverseFunctionsExt = "InverseFunctions"
[deps.LogExpFunctions.weakdeps]
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
ChangesOfVariables = "9e997f8a-9a97-42d5-a9f1-ce6bfc15e2c0"
InverseFunctions = "3587e190-3f89-42d0-90ee-14403ec27112"
[[deps.Logging]]
uuid = "56ddb016-857b-54e1-b83d-db4d58db5568"
[[deps.MIPLearn]]
deps = ["Conda", "DataStructures", "HDF5", "HiGHS", "JLD2", "JuMP", "KLU", "LinearAlgebra", "MathOptInterface", "OrderedCollections", "Printf", "PyCall", "Random", "Requires", "SparseArrays", "Statistics", "TimerOutputs"]
path = "/home/axavier/Packages/MIPLearn.jl/dev/"
uuid = "2b1277c3-b477-4c49-a15e-7ba350325c68"
version = "0.3.0"
[[deps.MPICH_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "MPIPreferences", "TOML"]
git-tree-sha1 = "d790fbd913f85e8865c55bf4725aff197c5155c8"
uuid = "7cb0a576-ebde-5e09-9194-50597f1243b4"
version = "4.1.1+1"
[[deps.MPIPreferences]]
deps = ["Libdl", "Preferences"]
git-tree-sha1 = "d86a788b336e8ae96429c0c42740ccd60ac0dfcc"
uuid = "3da0fdf6-3ccc-4f1b-acd9-58baa6c99267"
version = "0.1.8"
[[deps.MPItrampoline_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "MPIPreferences", "TOML"]
git-tree-sha1 = "b3dcf8e1c610a10458df3c62038c8cc3a4d6291d"
uuid = "f1f71cc9-e9ae-5b93-9b94-4fe0e1ad3748"
version = "5.3.0+0"
[[deps.MacroTools]]
deps = ["Markdown", "Random"]
git-tree-sha1 = "42324d08725e200c23d4dfb549e0d5d89dede2d2"
uuid = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
version = "0.5.10"
[[deps.Markdown]]
deps = ["Base64"]
uuid = "d6f4376e-aef5-505a-96c1-9c027394607a"
[[deps.MathOptInterface]]
deps = ["BenchmarkTools", "CodecBzip2", "CodecZlib", "DataStructures", "ForwardDiff", "JSON", "LinearAlgebra", "MutableArithmetics", "NaNMath", "OrderedCollections", "PrecompileTools", "Printf", "SparseArrays", "SpecialFunctions", "Test", "Unicode"]
git-tree-sha1 = "19a3636968e802918f8891d729c74bd64dff6d00"
uuid = "b8f27783-ece8-5eb3-8dc8-9495eed66fee"
version = "1.17.1"
[[deps.MbedTLS_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "c8ffd9c3-330d-5841-b78e-0817d7145fa1"
version = "2.28.2+0"
[[deps.MicrosoftMPI_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "a8027af3d1743b3bfae34e54872359fdebb31422"
uuid = "9237b28f-5490-5468-be7b-bb81f5f5e6cf"
version = "10.1.3+4"
[[deps.Missings]]
deps = ["DataAPI"]
git-tree-sha1 = "f66bdc5de519e8f8ae43bdc598782d35a25b1272"
uuid = "e1d29d7a-bbdc-5cf2-9ac0-f12de2c33e28"
version = "1.1.0"
[[deps.Mmap]]
uuid = "a63ad114-7e13-5084-954f-fe012c677804"
[[deps.MozillaCACerts_jll]]
uuid = "14a3606d-f60d-562e-9121-12d972cd8159"
version = "2022.10.11"
[[deps.MutableArithmetics]]
deps = ["LinearAlgebra", "SparseArrays", "Test"]
git-tree-sha1 = "964cb1a7069723727025ae295408747a0b36a854"
uuid = "d8a4904e-b15c-11e9-3269-09a3773c0cb0"
version = "1.3.0"
[[deps.NaNMath]]
deps = ["OpenLibm_jll"]
git-tree-sha1 = "0877504529a3e5c3343c6f8b4c0381e57e4387e4"
uuid = "77ba4419-2d1f-58cd-9bb1-8ffee604a2e3"
version = "1.0.2"
[[deps.NetworkOptions]]
uuid = "ca575930-c2e3-43a9-ace4-1e988b2c1908"
version = "1.2.0"
[[deps.OpenBLAS_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "Libdl"]
uuid = "4536629a-c528-5b80-bd46-f80d51c5b363"
version = "0.3.21+4"
[[deps.OpenLibm_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "05823500-19ac-5b8b-9628-191a04bc5112"
version = "0.8.1+0"
[[deps.OpenMPI_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "LazyArtifacts", "Libdl", "MPIPreferences", "TOML"]
git-tree-sha1 = "f3080f4212a8ba2ceb10a34b938601b862094314"
uuid = "fe0851c0-eecd-5654-98d4-656369965a5c"
version = "4.1.5+0"
[[deps.OpenSSL_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "cae3153c7f6cf3f069a853883fd1919a6e5bab5b"
uuid = "458c3c95-2e84-50aa-8efc-19380b2a3a95"
version = "3.0.9+0"
[[deps.OpenSpecFun_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "13652491f6856acfd2db29360e1bbcd4565d04f1"
uuid = "efe28fd5-8261-553b-a9e1-b2916fc3738e"
version = "0.5.5+0"
[[deps.OrderedCollections]]
git-tree-sha1 = "d321bf2de576bf25ec4d3e4360faca399afca282"
uuid = "bac558e1-5e72-5ebc-8fee-abe8a469f55d"
version = "1.6.0"
[[deps.PDMats]]
deps = ["LinearAlgebra", "SparseArrays", "SuiteSparse"]
git-tree-sha1 = "67eae2738d63117a196f497d7db789821bce61d1"
uuid = "90014a1f-27ba-587c-ab20-58faa44d9150"
version = "0.11.17"
[[deps.Parsers]]
deps = ["Dates", "PrecompileTools", "UUIDs"]
git-tree-sha1 = "a5aef8d4a6e8d81f171b2bd4be5265b01384c74c"
uuid = "69de0a69-1ddd-5017-9359-2bf0b02dc9f0"
version = "2.5.10"
[[deps.Pkg]]
deps = ["Artifacts", "Dates", "Downloads", "FileWatching", "LibGit2", "Libdl", "Logging", "Markdown", "Printf", "REPL", "Random", "SHA", "Serialization", "TOML", "Tar", "UUIDs", "p7zip_jll"]
uuid = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
version = "1.9.0"
[[deps.PrecompileTools]]
deps = ["Preferences"]
git-tree-sha1 = "9673d39decc5feece56ef3940e5dafba15ba0f81"
uuid = "aea7be01-6a6a-4083-8856-8a6e6704d82a"
version = "1.1.2"
[[deps.Preferences]]
deps = ["TOML"]
git-tree-sha1 = "7eb1686b4f04b82f96ed7a4ea5890a4f0c7a09f1"
uuid = "21216c6a-2e73-6563-6e65-726566657250"
version = "1.4.0"
[[deps.Printf]]
deps = ["Unicode"]
uuid = "de0858da-6303-5e67-8744-51eddeeeb8d7"
[[deps.Profile]]
deps = ["Printf"]
uuid = "9abbd945-dff8-562f-b5e8-e1ebf5ef1b79"
[[deps.PyCall]]
deps = ["Conda", "Dates", "Libdl", "LinearAlgebra", "MacroTools", "Serialization", "VersionParsing"]
git-tree-sha1 = "62f417f6ad727987c755549e9cd88c46578da562"
uuid = "438e738f-606a-5dbb-bf0a-cddfbfd45ab0"
version = "1.95.1"
[[deps.QuadGK]]
deps = ["DataStructures", "LinearAlgebra"]
git-tree-sha1 = "6ec7ac8412e83d57e313393220879ede1740f9ee"
uuid = "1fd47b50-473d-5c70-9696-f719f8f3bcdc"
version = "2.8.2"
[[deps.REPL]]
deps = ["InteractiveUtils", "Markdown", "Sockets", "Unicode"]
uuid = "3fa0cd96-eef1-5676-8a61-b3b8758bbffb"
[[deps.Random]]
deps = ["SHA", "Serialization"]
uuid = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
[[deps.Reexport]]
git-tree-sha1 = "45e428421666073eab6f2da5c9d310d99bb12f9b"
uuid = "189a3867-3050-52da-a836-e630ba90ab69"
version = "1.2.2"
[[deps.Requires]]
deps = ["UUIDs"]
git-tree-sha1 = "838a3a4188e2ded87a4f9f184b4b0d78a1e91cb7"
uuid = "ae029012-a4dd-5104-9daa-d747884805df"
version = "1.3.0"
[[deps.Rmath]]
deps = ["Random", "Rmath_jll"]
git-tree-sha1 = "f65dcb5fa46aee0cf9ed6274ccbd597adc49aa7b"
uuid = "79098fc4-a85e-5d69-aa6a-4863f24498fa"
version = "0.7.1"
[[deps.Rmath_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
git-tree-sha1 = "6ed52fdd3382cf21947b15e8870ac0ddbff736da"
uuid = "f50d1b31-88e8-58de-be2c-1cc44531875f"
version = "0.4.0+0"
[[deps.SHA]]
uuid = "ea8e919c-243c-51af-8825-aaa63cd721ce"
version = "0.7.0"
[[deps.Serialization]]
uuid = "9e88b42a-f829-5b0c-bbe9-9e923198166b"
[[deps.SnoopPrecompile]]
deps = ["Preferences"]
git-tree-sha1 = "e760a70afdcd461cf01a575947738d359234665c"
uuid = "66db9d55-30c0-4569-8b51-7e840670fc0c"
version = "1.0.3"
[[deps.Sockets]]
uuid = "6462fe0b-24de-5631-8697-dd941f90decc"
[[deps.SortingAlgorithms]]
deps = ["DataStructures"]
git-tree-sha1 = "a4ada03f999bd01b3a25dcaa30b2d929fe537e00"
uuid = "a2af1166-a08f-5f64-846c-94a0d3cef48c"
version = "1.1.0"
[[deps.SparseArrays]]
deps = ["Libdl", "LinearAlgebra", "Random", "Serialization", "SuiteSparse_jll"]
uuid = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
[[deps.SpecialFunctions]]
deps = ["IrrationalConstants", "LogExpFunctions", "OpenLibm_jll", "OpenSpecFun_jll"]
git-tree-sha1 = "ef28127915f4229c971eb43f3fc075dd3fe91880"
uuid = "276daf66-3868-5448-9aa4-cd146d93841b"
version = "2.2.0"
[deps.SpecialFunctions.extensions]
SpecialFunctionsChainRulesCoreExt = "ChainRulesCore"
[deps.SpecialFunctions.weakdeps]
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
[[deps.StaticArraysCore]]
git-tree-sha1 = "6b7ba252635a5eff6a0b0664a41ee140a1c9e72a"
uuid = "1e83bf80-4336-4d27-bf5d-d5a4f845583c"
version = "1.4.0"
[[deps.Statistics]]
deps = ["LinearAlgebra", "SparseArrays"]
uuid = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
version = "1.9.0"
[[deps.StatsAPI]]
deps = ["LinearAlgebra"]
git-tree-sha1 = "45a7769a04a3cf80da1c1c7c60caf932e6f4c9f7"
uuid = "82ae8749-77ed-4fe6-ae5f-f523153014b0"
version = "1.6.0"
[[deps.StatsBase]]
deps = ["DataAPI", "DataStructures", "LinearAlgebra", "LogExpFunctions", "Missings", "Printf", "Random", "SortingAlgorithms", "SparseArrays", "Statistics", "StatsAPI"]
git-tree-sha1 = "75ebe04c5bed70b91614d684259b661c9e6274a4"
uuid = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
version = "0.34.0"
[[deps.StatsFuns]]
deps = ["HypergeometricFunctions", "IrrationalConstants", "LogExpFunctions", "Reexport", "Rmath", "SpecialFunctions"]
git-tree-sha1 = "f625d686d5a88bcd2b15cd81f18f98186fdc0c9a"
uuid = "4c63d2b9-4356-54db-8cca-17b64c39e42c"
version = "1.3.0"
[deps.StatsFuns.extensions]
StatsFunsChainRulesCoreExt = "ChainRulesCore"
StatsFunsInverseFunctionsExt = "InverseFunctions"
[deps.StatsFuns.weakdeps]
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
InverseFunctions = "3587e190-3f89-42d0-90ee-14403ec27112"
[[deps.SuiteSparse]]
deps = ["Libdl", "LinearAlgebra", "Serialization", "SparseArrays"]
uuid = "4607b0f0-06f3-5cda-b6b1-a6196a1729e9"
[[deps.SuiteSparse_jll]]
deps = ["Artifacts", "Libdl", "Pkg", "libblastrampoline_jll"]
uuid = "bea87d4a-7f5b-5778-9afe-8cc45184846c"
version = "5.10.1+6"
[[deps.Suppressor]]
git-tree-sha1 = "c6ed566db2fe3931292865b966d6d140b7ef32a9"
uuid = "fd094767-a336-5f1f-9728-57cf17d0bbfb"
version = "0.2.1"
[[deps.TOML]]
deps = ["Dates"]
uuid = "fa267f1f-6049-4f14-aa54-33bafae1ed76"
version = "1.0.3"
[[deps.Tar]]
deps = ["ArgTools", "SHA"]
uuid = "a4e569a6-e804-4fa4-b0f3-eef7a1d5b13e"
version = "1.10.0"
[[deps.Test]]
deps = ["InteractiveUtils", "Logging", "Random", "Serialization"]
uuid = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
[[deps.TimerOutputs]]
deps = ["ExprTools", "Printf"]
git-tree-sha1 = "f548a9e9c490030e545f72074a41edfd0e5bcdd7"
uuid = "a759f4b9-e2f1-59dc-863e-4aeb61b1ea8f"
version = "0.5.23"
[[deps.TranscodingStreams]]
deps = ["Random", "Test"]
git-tree-sha1 = "9a6ae7ed916312b41236fcef7e0af564ef934769"
uuid = "3bb67fe8-82b1-5028-8e26-92a6c54297fa"
version = "0.9.13"
[[deps.UUIDs]]
deps = ["Random", "SHA"]
uuid = "cf7118a7-6976-5b1a-9a39-7adc72f591a4"
[[deps.Unicode]]
uuid = "4ec0a83e-493e-50e2-b9ac-8f72acf5a8f5"
[[deps.VersionParsing]]
git-tree-sha1 = "58d6e80b4ee071f5efd07fda82cb9fbe17200868"
uuid = "81def892-9a0e-5fdd-b105-ffc91e053289"
version = "1.3.0"
[[deps.Zlib_jll]]
deps = ["Libdl"]
uuid = "83775a58-1f1d-513f-b197-d71354ab007a"
version = "1.2.13+0"
[[deps.libaec_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
git-tree-sha1 = "eddd19a8dea6b139ea97bdc8a0e2667d4b661720"
uuid = "477f73a3-ac25-53e9-8cc3-50b2fa2566f0"
version = "1.0.6+1"
[[deps.libblastrampoline_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "8e850b90-86db-534c-a0d3-1478176c7d93"
version = "5.7.0+0"
[[deps.nghttp2_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "8e850ede-7688-5339-a07c-302acd2aaf8d"
version = "1.48.0+0"
[[deps.p7zip_jll]]
deps = ["Artifacts", "Libdl"]
uuid = "3f19e933-33d8-53b3-aaab-bd5110c3b7a0"
version = "17.4.0+0"


@@ -0,0 +1,7 @@
[deps]
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
Gurobi = "2e9cd046-0924-5485-92f1-d5272153d98b"
JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
MIPLearn = "2b1277c3-b477-4c49-a15e-7ba350325c68"
PyCall = "438e738f-606a-5dbb-bf0a-cddfbfd45ab0"
Suppressor = "fd094767-a336-5f1f-9728-57cf17d0bbfb"


@@ -0,0 +1,571 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b4bd8bd6-3ce9-4932-852f-f98a44120a3e",
"metadata": {},
"source": [
"# User cuts and lazy constraints\n",
"\n",
"User cuts and lazy constraints are two advanced mixed-integer programming techniques that can accelerate solver performance. User cuts are additional constraints, derived from the constraints already in the model, that can tighten the feasible region and eliminate fractional solutions, thus reducing the size of the branch-and-bound tree. Lazy constraints, on the other hand, are constraints that are potentially part of the problem formulation but are omitted from the initial model to reduce its size; these constraints are added to the formulation only once the solver finds a solution that violates them. While both techniques have been successful, significant computational effort may still be required to generate strong user cuts and to identify violated lazy constraints, which can reduce their effectiveness.\n",
"\n",
"MIPLearn is able to predict which user cuts and which lazy constraints to enforce at the beginning of the optimization process, using machine learning. In this tutorial, we will use the framework to predict subtour elimination constraints for the **traveling salesman problem** using Gurobipy. We assume that MIPLearn has already been correctly installed.\n",
"\n",
"<div class=\"alert alert-info\">\n",
"\n",
"Solver Compatibility\n",
"\n",
"User cuts and lazy constraints are also supported in the Python/Pyomo and Julia/JuMP versions of the package. See the source code of <code>build_tsp_model_pyomo</code> and <code>build_tsp_model_jump</code> for more details. Note, however, the following limitations:\n",
"\n",
"- Python/Pyomo: Only `gurobi_persistent` is currently supported. PRs implementing callbacks for other persistent solvers are welcome.\n",
"- Julia/JuMP: Only solvers supporting solver-independent callbacks are supported. As of JuMP 1.19, this includes Gurobi, CPLEX, XPRESS, SCIP and GLPK. Note that HiGHS and Cbc are not supported. As newer versions of JuMP implement further callback support, MIPLearn should become automatically compatible with these solvers.\n",
"\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "72229e1f-cbd8-43f0-82ee-17d6ec9c3b7d",
"metadata": {},
"source": [
"## Modeling the traveling salesman problem\n",
"\n",
"Given a list of cities and the distances between them, the **traveling salesman problem (TSP)** asks for the shortest route starting at the first city, visiting each other city exactly once, then returning to the first city. This problem is a generalization of the Hamiltonian path problem, one of Karp's 21 NP-complete problems, and has many practical applications, including routing delivery trucks and scheduling airline routes.\n",
"\n",
"To describe an instance of TSP, we need to specify the number of cities $n$, and an $n \\times n$ matrix of distances. The class `TravelingSalesmanData`, in the `miplearn.problems.tsp` package, can hold this data:"
]
},
{
"cell_type": "markdown",
"id": "4598a1bc-55b6-48cc-a050-2262786c203a",
"metadata": {},
"source": [
"```python\n",
"@dataclass\n",
"class TravelingSalesmanData:\n",
"    n_cities: int\n",
"    distances: np.ndarray\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "3a43cc12-1207-4247-bdb2-69a6a2910738",
"metadata": {},
"source": [
"MIPLearn also provides `TravelingSalesmanGenerator`, a random generator for TSP instances, and `build_tsp_model_gurobipy`, a function that converts `TravelingSalesmanData` into an actual gurobipy optimization model and uses lazy constraints to enforce subtour elimination.\n",
"\n",
"The example below is a simplified and annotated version of `build_tsp_model_gurobipy`, illustrating the use of callbacks with MIPLearn. Compared to the previous tutorial examples, note that, in addition to defining the variables, objective function and constraints of our problem, we also define two callback functions: `lazy_separate` and `lazy_enforce`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e4712a85-0327-439c-8889-933e1ff714e7",
"metadata": {},
"outputs": [],
"source": [
"import gurobipy as gp\n",
"from gurobipy import quicksum, GRB, tuplelist\n",
"from miplearn.solvers.gurobi import GurobiModel\n",
"import networkx as nx\n",
"import numpy as np\n",
"from miplearn.problems.tsp import (\n",
" TravelingSalesmanData,\n",
" TravelingSalesmanGenerator,\n",
")\n",
"from scipy.stats import uniform, randint\n",
"from miplearn.io import write_pkl_gz, read_pkl_gz\n",
"from miplearn.collectors.basic import BasicCollector\n",
"from miplearn.solvers.learning import LearningSolver\n",
"from miplearn.components.lazy.mem import MemorizingLazyComponent\n",
"from miplearn.extractors.fields import H5FieldsExtractor\n",
"from sklearn.neighbors import KNeighborsClassifier\n",
"\n",
"# Set up random seed to make example more reproducible\n",
"np.random.seed(42)\n",
"\n",
"# Set up Python logging\n",
"import logging\n",
"\n",
"logging.basicConfig(level=logging.WARNING)\n",
"\n",
"\n",
"def build_tsp_model_gurobipy_simplified(data):\n",
" # Read data from file if a filename is provided\n",
" if isinstance(data, str):\n",
" data = read_pkl_gz(data)\n",
"\n",
" # Create empty gurobipy model\n",
" model = gp.Model()\n",
"\n",
" # Create set of edges between every pair of cities, for convenience\n",
" edges = tuplelist(\n",
" (i, j) for i in range(data.n_cities) for j in range(i + 1, data.n_cities)\n",
" )\n",
"\n",
" # Add binary variable x[e] for each edge e\n",
" x = model.addVars(edges, vtype=GRB.BINARY, name=\"x\")\n",
"\n",
" # Add objective function\n",
" model.setObjective(quicksum(x[(i, j)] * data.distances[i, j] for (i, j) in edges))\n",
"\n",
" # Add constraint: must choose two edges adjacent to each city\n",
" model.addConstrs(\n",
" (\n",
" quicksum(x[min(i, j), max(i, j)] for j in range(data.n_cities) if i != j)\n",
" == 2\n",
" for i in range(data.n_cities)\n",
" ),\n",
" name=\"eq_degree\",\n",
" )\n",
"\n",
" def lazy_separate(m: GurobiModel):\n",
" \"\"\"\n",
" Callback function that finds subtours in the current solution.\n",
" \"\"\"\n",
" # Query current value of the x variables\n",
" x_val = m.inner.cbGetSolution(x)\n",
"\n",
" # Initialize empty set of violations\n",
" violations = []\n",
"\n",
" # Build set of edges we have currently selected\n",
" selected_edges = [e for e in edges if x_val[e] > 0.5]\n",
"\n",
" # Build a graph containing the selected edges, using networkx\n",
" graph = nx.Graph()\n",
" graph.add_edges_from(selected_edges)\n",
"\n",
" # For each component of the graph\n",
" for component in list(nx.connected_components(graph)):\n",
"\n",
" # If the component is not the entire graph, we found a\n",
" # subtour. Add the edge cut to the list of violations.\n",
" if len(component) < data.n_cities:\n",
" cut_edges = [\n",
" [e[0], e[1]]\n",
" for e in edges\n",
" if (e[0] in component and e[1] not in component)\n",
" or (e[0] not in component and e[1] in component)\n",
" ]\n",
" violations.append(cut_edges)\n",
"\n",
" # Return the list of violations\n",
" return violations\n",
"\n",
" def lazy_enforce(m: GurobiModel, violations) -> None:\n",
" \"\"\"\n",
" Callback function that, given a list of subtours, adds lazy\n",
" constraints to remove them from the feasible region.\n",
" \"\"\"\n",
" print(f\"Enforcing {len(violations)} subtour elimination constraints\")\n",
" for violation in violations:\n",
" m.add_constr(quicksum(x[e[0], e[1]] for e in violation) >= 2)\n",
"\n",
" return GurobiModel(\n",
" model,\n",
" lazy_separate=lazy_separate,\n",
" lazy_enforce=lazy_enforce,\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "58875042-d6ac-4f93-b3cc-9a5822b11dad",
"metadata": {},
"source": [
"The `lazy_separate` function starts by querying the value of the current candidate solution through `m.inner.cbGetSolution` (recall that `m.inner` is a regular gurobipy model), then finds the set of violated lazy constraints. Unlike a regular lazy constraint solver callback, note that `lazy_separate` does not add the violated constraints to the model; it simply returns a list of objects that uniquely identifies the set of lazy constraints that should be generated. Enforcing the constraints is the responsibility of the second callback function, `lazy_enforce`. This function takes as input the model and the list of violations found by `lazy_separate`, converts them into actual constraints, and adds them to the model through `m.add_constr`.\n",
"\n",
"During training data generation, MIPLearn calls `lazy_separate` and `lazy_enforce` in sequence, inside a regular solver callback. However, once the machine learning models are trained, MIPLearn calls `lazy_enforce` directly, before the optimization process starts, with a list of **predicted** violations, as we will see in the example below."
]
},
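{
"cell_type": "markdown",
"id": "f0a1b2c3-5b1e-4c77-9d2a-0a1b2c3d4e5f",
"metadata": {},
"source": [
"The two-phase flow described above can be sketched as follows. This is an illustrative outline only, not part of the MIPLearn API:\n",
"\n",
"```python\n",
"# Training phase (inside a regular solver callback):\n",
"#   violations = lazy_separate(m)   # identify violated lazy constraints\n",
"#   lazy_enforce(m, violations)     # add them to the model\n",
"#   (the violations are also recorded in the HDF5 training files)\n",
"#\n",
"# Inference phase (before optimization starts):\n",
"#   predicted = trained_ml_model.predict(instance_features)\n",
"#   lazy_enforce(m, predicted)      # enforce predicted constraints up front\n",
"```"
]
},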
{
"cell_type": "markdown",
"id": "5839728e-406c-4be2-ba81-83f2b873d4b2",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
"\n",
"Constraint Representation\n",
"\n",
"How user cuts and lazy constraints are represented is left to the user; MIPLearn is representation-agnostic. The objects returned by `lazy_separate`, however, are serialized as JSON and stored in the HDF5 training data files. Therefore, it is recommended to use only simple objects, such as lists, tuples and dictionaries.\n",
"\n",
"</div>"
]
},
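{
"cell_type": "markdown",
"id": "c7d8e9fa-3a4b-4d6e-8f01-1b2c3d4e5f60",
"metadata": {},
"source": [
"As a hypothetical illustration, the edge-cut representation used in this tutorial serializes cleanly, since each violation is just a nested list of city index pairs:\n",
"\n",
"```python\n",
"import json\n",
"\n",
"# One violation = the list of edges crossing a subtour cut (city index pairs)\n",
"violations = [[[0, 3], [0, 4], [1, 3], [2, 4]]]\n",
"print(json.dumps(violations))  # plain nested lists serialize without any custom encoder\n",
"```"
]
},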
{
"cell_type": "markdown",
"id": "847ae32e-fad7-406a-8797-0d79065a07fd",
"metadata": {},
"source": [
"## Generating training data\n",
"\n",
"To test the callback defined above, we generate a small set of TSP instances, using the provided random instance generator. As in the previous tutorial, we generate some test instances and some training instances, then solve them using `BasicCollector`. Input problem data is stored in `tsp/train/00000.pkl.gz, ...`, whereas solver training data (including list of required lazy constraints) is stored in `tsp/train/00000.h5, ...`."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "eb63154a-1fa6-4eac-aa46-6838b9c201f6",
"metadata": {},
"outputs": [],
"source": [
"# Configure generator to produce instances with 50 cities located\n",
"# in the 1000 x 1000 square, and with slightly perturbed distances.\n",
"gen = TravelingSalesmanGenerator(\n",
" x=uniform(loc=0.0, scale=1000.0),\n",
" y=uniform(loc=0.0, scale=1000.0),\n",
" n=randint(low=50, high=51),\n",
" gamma=uniform(loc=1.0, scale=0.25),\n",
" fix_cities=True,\n",
" round=True,\n",
")\n",
"\n",
"# Generate 500 instances and store input data file to .pkl.gz files\n",
"data = gen.generate(500)\n",
"train_data = write_pkl_gz(data[0:450], \"tsp/train\")\n",
"test_data = write_pkl_gz(data[450:500], \"tsp/test\")\n",
"\n",
"# Solve the training instances in parallel, collecting the required lazy\n",
"# constraints, in addition to other information, such as optimal solution.\n",
"bc = BasicCollector()\n",
"bc.collect(train_data, build_tsp_model_gurobipy_simplified, n_jobs=10)"
]
},
{
"cell_type": "markdown",
"id": "6903c26c-dbe0-4a2e-bced-fdbf93513dde",
"metadata": {},
"source": [
"## Training and solving new instances"
]
},
{
"cell_type": "markdown",
"id": "57cd724a-2d27-4698-a1e6-9ab8345ef31f",
"metadata": {},
"source": [
"After producing the training dataset, we can train the machine learning models to predict which lazy constraints are necessary. In this tutorial, we use the following ML strategy: given a new instance, find the 100 most similar instances in the training dataset and check how often each lazy constraint was required. If a lazy constraint was required by the majority of these 100 most-similar instances, enforce it ahead of time for the current instance. To measure instance similarity, we use only the objective function coefficients. This ML strategy can be implemented using `MemorizingLazyComponent` with `H5FieldsExtractor` and `KNeighborsClassifier`, as shown below."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "43779e3d-4174-4189-bc75-9f564910e212",
"metadata": {},
"outputs": [],
"source": [
"solver = LearningSolver(\n",
" components=[\n",
" MemorizingLazyComponent(\n",
" extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n",
" clf=KNeighborsClassifier(n_neighbors=100),\n",
" ),\n",
" ],\n",
")\n",
"solver.fit(train_data)"
]
},
{
"cell_type": "markdown",
"id": "12480712-9d3d-4cbc-a6d7-d6c1e2f950f4",
"metadata": {},
"source": [
"Next, we solve one of the test instances using the trained solver. In the run below, we can see that MIPLearn adds many lazy constraints ahead-of-time, before the optimization starts. During the optimization process itself, some additional lazy constraints are required, but very few."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "23f904ad-f1a8-4b5a-81ae-c0b9e813a4b2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Set parameter Threads to value 1\n",
"Read parameters from file gurobi.env\n",
"Restricted license - for non-production use only - expires 2026-11-23\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n",
"Model fingerprint: 0x04d7bec1\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Presolve time: 0.00s\n",
"Presolved: 50 rows, 1225 columns, 2450 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 4.0600000e+02 9.700000e+01 0.000000e+00 0s\n",
" 66 5.5880000e+03 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 66 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 5.588000000e+03\n",
"\n",
"User-callback calls 110, time in user-callback 0.00 sec\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:miplearn.components.cuts.mem:Predicting violated lazy constraints...\n",
"INFO:miplearn.components.lazy.mem:Enforcing 19 constraints ahead-of-time...\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enforcing 19 subtour elimination constraints\n",
"Set parameter PreCrush to value 1\n",
"Set parameter LazyConstraints to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"PreCrush 1\n",
"Threads 1\n",
"LazyConstraints 1\n",
"\n",
"Optimize a model with 69 rows, 1225 columns and 6091 nonzeros\n",
"Model fingerprint: 0x09bd34d6\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Found heuristic solution: objective 29853.000000\n",
"Presolve time: 0.00s\n",
"Presolved: 69 rows, 1225 columns, 6091 nonzeros\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"\n",
"Root relaxation: objective 6.139000e+03, 93 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 6139.00000 0 6 29853.0000 6139.00000 79.4% - 0s\n",
"H 0 0 6390.0000000 6139.00000 3.93% - 0s\n",
" 0 0 6165.50000 0 10 6390.00000 6165.50000 3.51% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
" 0 0 6165.50000 0 6 6390.00000 6165.50000 3.51% - 0s\n",
" 0 0 6198.50000 0 16 6390.00000 6198.50000 3.00% - 0s\n",
" 0 0 6210.50000 0 6 6390.00000 6210.50000 2.81% - 0s\n",
" 0 0 6212.60000 0 31 6390.00000 6212.60000 2.78% - 0s\n",
"H 0 0 6241.0000000 6212.60000 0.46% - 0s\n",
"* 0 0 0 6219.0000000 6219.00000 0.00% - 0s\n",
"\n",
"Cutting planes:\n",
" Gomory: 6\n",
" Clique: 1\n",
" MIR: 1\n",
" StrongCG: 1\n",
" Zero half: 4\n",
" RLT: 1\n",
" Lazy constraints: 3\n",
"\n",
"Explored 1 nodes (219 simplex iterations) in 0.04 seconds (0.03 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 4: 6219 6241 6390 29853 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 163, time in user-callback 0.00 sec\n"
]
}
],
"source": [
"# Increase log verbosity, so that we can see what MIPLearn is doing\n",
"logging.getLogger(\"miplearn\").setLevel(logging.INFO)\n",
"\n",
"# Solve a new test instance\n",
"solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);"
]
},
{
"cell_type": "markdown",
"id": "79cc3e61-ee2b-4f18-82cb-373d55d67de6",
"metadata": {},
"source": [
"Finally, we solve the same instance with a regular solver, without ML prediction. We can see that a much larger number of lazy constraints is added during the optimization process itself, and that the solver requires more iterations to find the optimal solution. Because these instances are small, however, the difference in running time is not significant."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a015c51c-091a-43b6-b761-9f3577fc083e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n",
"Model fingerprint: 0x04d7bec1\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Presolve time: 0.00s\n",
"Presolved: 50 rows, 1225 columns, 2450 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 4.0600000e+02 9.700000e+01 0.000000e+00 0s\n",
" 66 5.5880000e+03 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 66 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 5.588000000e+03\n",
"\n",
"User-callback calls 110, time in user-callback 0.00 sec\n",
"Set parameter PreCrush to value 1\n",
"Set parameter LazyConstraints to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"PreCrush 1\n",
"Threads 1\n",
"LazyConstraints 1\n",
"\n",
"Optimize a model with 50 rows, 1225 columns and 2450 nonzeros\n",
"Model fingerprint: 0x77a94572\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [1e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Found heuristic solution: objective 29695.000000\n",
"Presolve time: 0.00s\n",
"Presolved: 50 rows, 1225 columns, 2450 nonzeros\n",
"Variable types: 0 continuous, 1225 integer (1225 binary)\n",
"\n",
"Root relaxation: objective 5.588000e+03, 68 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 5588.00000 0 12 29695.0000 5588.00000 81.2% - 0s\n",
"Enforcing 9 subtour elimination constraints\n",
"Enforcing 9 subtour elimination constraints\n",
"H 0 0 24919.000000 5588.00000 77.6% - 0s\n",
" 0 0 5847.50000 0 14 24919.0000 5847.50000 76.5% - 0s\n",
"Enforcing 5 subtour elimination constraints\n",
"Enforcing 5 subtour elimination constraints\n",
"Enforcing 3 subtour elimination constraints\n",
"Enforcing 2 subtour elimination constraints\n",
"H 0 0 7764.0000000 5847.50000 24.7% - 0s\n",
"H 0 0 6684.0000000 5847.50000 12.5% - 0s\n",
" 0 0 6013.75000 0 11 6684.00000 6013.75000 10.0% - 0s\n",
"H 0 0 6340.0000000 6013.75000 5.15% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6095.00000 0 10 6340.00000 6095.00000 3.86% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6128.00000 0 - 6340.00000 6128.00000 3.34% - 0s\n",
" 0 0 6139.00000 0 6 6340.00000 6139.00000 3.17% - 0s\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6187.25000 0 17 6340.00000 6187.25000 2.41% - 0s\n",
"Enforcing 2 subtour elimination constraints\n",
"Enforcing 2 subtour elimination constraints\n",
" 0 0 6201.00000 0 15 6340.00000 6201.00000 2.19% - 0s\n",
" 0 0 6201.00000 0 15 6340.00000 6201.00000 2.19% - 0s\n",
"H 0 0 6219.0000000 6201.00000 0.29% - 0s\n",
"Enforcing 3 subtour elimination constraints\n",
" 0 0 infeasible 0 6219.00000 6219.00000 0.00% - 0s\n",
"\n",
"Cutting planes:\n",
" Lazy constraints: 2\n",
"\n",
"Explored 1 nodes (217 simplex iterations) in 0.12 seconds (0.05 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 6: 6219 6340 6684 ... 29695\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 6.219000000000e+03, best bound 6.219000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 216, time in user-callback 0.06 sec\n"
]
}
],
"source": [
"solver = LearningSolver(components=[]) # empty set of ML components\n",
"solver.optimize(test_data[0], build_tsp_model_gurobipy_simplified);"
]
},
{
"cell_type": "markdown",
"id": "432c99b2-67fe-409b-8224-ccef91de96d1",
"metadata": {},
"source": [
"## Learning user cuts\n",
"\n",
"The example above focused on lazy constraints. To enforce user cuts instead, the procedure is very similar, with the following changes:\n",
"\n",
"- Instead of `lazy_separate` and `lazy_enforce`, use `cuts_separate` and `cuts_enforce`\n",
"- Instead of `m.inner.cbGetSolution`, use `m.inner.cbGetNodeRel`\n",
"\n",
"For a complete example, see `build_stab_model_gurobipy`, `build_stab_model_pyomo` and `build_stab_model_jump`, which solve the maximum-weight stable set problem using user cut callbacks."
]
},
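{
"cell_type": "markdown",
"id": "3f8a1c2e-9b4d-4e7a-a1c5-2d6f8e0b4a91",
"metadata": {},
"source": [
"As a rough sketch, the user-cut callbacks could look as follows. This is illustrative only: it assumes the same model structure and callback signatures as the lazy-constraint example above, and `find_violated_cuts` and `build_cut_expr` are hypothetical helpers, not part of MIPLearn.\n",
"\n",
"```python\n",
"def cuts_separate(m):\n",
"    # Query the fractional solution of the current node relaxation,\n",
"    # instead of an integer-feasible solution.\n",
"    x_val = m.inner.cbGetNodeRel(m.inner._x)\n",
"    # Return violated inequalities in some serializable form\n",
"    # (hypothetical helper).\n",
"    return find_violated_cuts(x_val)\n",
"\n",
"\n",
"def cuts_enforce(m, violations):\n",
"    # Add each violated inequality as a user cut; build_cut_expr is a\n",
"    # hypothetical helper that rebuilds the cut expression from its\n",
"    # serialized form.\n",
"    for v in violations:\n",
"        m.inner.cbCut(build_cut_expr(m, v))\n",
"```"
]
},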
{
"cell_type": "code",
"execution_count": null,
"id": "e6cb694d-8c43-410f-9a13-01bf9e0763b7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

{
"cells": [
{
"cell_type": "markdown",
"id": "6b8983b1",
"metadata": {
"tags": []
},
"source": [
"# Getting started (Gurobipy)\n",
"\n",
"## Introduction\n",
"\n",
"**MIPLearn** is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:\n",
"\n",
"1. Install the Python/Gurobipy version of MIPLearn\n",
"2. Model a simple optimization problem using Gurobipy\n",
"3. Generate training data and train the ML models\n",
"4. Use the ML models together with Gurobi to solve new instances\n",
"\n",
"<div class=\"alert alert-info\">\n",
"Note\n",
" \n",
"The Python/Gurobipy version of MIPLearn is only compatible with the Gurobi Optimizer. For broader solver compatibility, see the Python/Pyomo and Julia/JuMP versions of the package.\n",
"</div>\n",
"\n",
"<div class=\"alert alert-warning\">\n",
"Warning\n",
" \n",
"MIPLearn is still at an early stage of development. If you run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!\n",
" \n",
"</div>\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "02f0a927",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"MIPLearn is available in two versions:\n",
"\n",
"- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n",
"- Julia version, compatible with the JuMP modeling language.\n",
"\n",
"In this tutorial, we will demonstrate how to install and use the Python/Gurobipy version of the package. The first step is to install Python 3.9+ on your computer. See the [official Python website](https://www.python.org/downloads/) for instructions. After Python is installed, we proceed to install MIPLearn using `pip`:\n",
"\n",
"```\n",
"$ pip install MIPLearn~=0.4\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "a14e4550",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
" \n",
"Note\n",
" \n",
"In the code above, we pin the package to a specific version to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions are released. This is usually a recommended practice for all Python projects.\n",
" \n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "16b86823",
"metadata": {},
"source": [
"## Modeling a simple optimization problem\n",
"\n",
"To illustrate how MIPLearn can be used, we will model and solve a small optimization problem related to power systems optimization. The problem we discuss below is a simplification of the **unit commitment problem**, a practical optimization problem solved daily by electric grid operators around the world. \n",
"\n",
"Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power each generator should produce. More specifically, assume that the company owns $n$ generators, denoted by $g_1, \ldots, g_n$. Each generator can either be online or offline. An online generator $g_i$ can produce between $p^\text{min}_i$ and $p^\text{max}_i$ megawatts of power, and it costs the company $c^\text{fix}_i + c^\text{var}_i y_i$, where $y_i$ is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power produced needs to be exactly equal to the total demand $d$ (in megawatts).\n",
"\n",
"This simple problem can be modeled as a *mixed-integer linear optimization* problem as follows. For each generator $g_i$, let $x_i \in \{0,1\}$ be a decision variable indicating whether $g_i$ is online, and let $y_i \geq 0$ be a decision variable indicating how much power $g_i$ produces. The problem is then given by:"
]
},
{
"cell_type": "markdown",
"id": "f12c3702",
"metadata": {},
"source": [
"$$\n",
"\\begin{align}\n",
"\\text{minimize } \\quad & \\sum_{i=1}^n \\left( c^\\text{fix}_i x_i + c^\\text{var}_i y_i \\right) \\\\\n",
"\\text{subject to } \\quad & y_i \\leq p^\\text{max}_i x_i & i=1,\\ldots,n \\\\\n",
"& y_i \\geq p^\\text{min}_i x_i & i=1,\\ldots,n \\\\\n",
"& \\sum_{i=1}^n y_i = d \\\\\n",
"& x_i \\in \\{0,1\\} & i=1,\\ldots,n \\\\\n",
"& y_i \\geq 0 & i=1,\\ldots,n\n",
"\\end{align}\n",
"$$"
]
},
{
"cell_type": "markdown",
"id": "be3989ed",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
"\n",
"Note\n",
"\n",
"We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.\n",
"\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "a5fd33f6",
"metadata": {},
"source": [
"Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Python and Gurobipy. We start by defining a data class `UnitCommitmentData`, which holds all the input data."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "22a67170-10b4-43d3-8708-014d91141e73",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:18:25.442346786Z",
"start_time": "2023-06-06T20:18:25.329017476Z"
},
"tags": []
},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"from typing import List\n",
"\n",
"import numpy as np\n",
"\n",
"\n",
"@dataclass\n",
"class UnitCommitmentData:\n",
" demand: float\n",
" pmin: List[float]\n",
" pmax: List[float]\n",
" cfix: List[float]\n",
" cvar: List[float]"
]
},
{
"cell_type": "markdown",
"id": "29f55efa-0751-465a-9b0a-a821d46a3d40",
"metadata": {},
"source": [
"Next, we write a `build_uc_model` function, which converts the input data into a concrete Gurobipy model. The function accepts `UnitCommitmentData`, the data structure we previously defined, or the path to a compressed pickle file containing this data."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2f67032f-0d74-4317-b45c-19da0ec859e9",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:48:05.953902842Z",
"start_time": "2023-06-06T20:48:05.909747925Z"
}
},
"outputs": [],
"source": [
"import gurobipy as gp\n",
"from gurobipy import GRB, quicksum\n",
"from typing import Union\n",
"from miplearn.io import read_pkl_gz\n",
"from miplearn.solvers.gurobi import GurobiModel\n",
"\n",
"\n",
"def build_uc_model(data: Union[str, UnitCommitmentData]) -> GurobiModel:\n",
" if isinstance(data, str):\n",
" data = read_pkl_gz(data)\n",
"\n",
" model = gp.Model()\n",
" n = len(data.pmin)\n",
" x = model._x = model.addVars(n, vtype=GRB.BINARY, name=\"x\")\n",
" y = model._y = model.addVars(n, name=\"y\")\n",
" model.setObjective(\n",
" quicksum(data.cfix[i] * x[i] + data.cvar[i] * y[i] for i in range(n))\n",
" )\n",
" model.addConstrs(y[i] <= data.pmax[i] * x[i] for i in range(n))\n",
" model.addConstrs(y[i] >= data.pmin[i] * x[i] for i in range(n))\n",
" model.addConstr(quicksum(y[i] for i in range(n)) == data.demand)\n",
" return GurobiModel(model)"
]
},
{
"cell_type": "markdown",
"id": "c22714a3",
"metadata": {},
"source": [
"At this point, we can already use Gurobi to find optimal solutions to any instance of this problem. To illustrate this, let us solve a small instance with three generators:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2a896f47",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:49:14.266758244Z",
"start_time": "2023-06-06T20:49:14.223514806Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Set parameter Threads to value 1\n",
"Read parameters from file gurobi.env\n",
"Restricted license - for non-production use only - expires 2026-11-23\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 7 rows, 6 columns and 15 nonzeros\n",
"Model fingerprint: 0x58dfdd53\n",
"Variable types: 3 continuous, 3 integer (3 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 7e+01]\n",
" Objective range [2e+00, 7e+02]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [1e+02, 1e+02]\n",
"Presolve removed 6 rows and 3 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 3 columns, 3 nonzeros\n",
"Variable types: 0 continuous, 3 integer (1 binary)\n",
"Found heuristic solution: objective 1990.0000000\n",
"\n",
"Root relaxation: objective 1.320000e+03, 0 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
"* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n",
"\n",
"Explored 1 nodes (0 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 2: 1320 1990 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 541, time in user-callback 0.00 sec\n",
"obj = 1320.0\n",
"x = [-0.0, 1.0, 1.0]\n",
"y = [0.0, 60.0, 40.0]\n"
]
}
],
"source": [
"model = build_uc_model(\n",
" UnitCommitmentData(\n",
" demand=100.0,\n",
" pmin=[10, 20, 30],\n",
" pmax=[50, 60, 70],\n",
" cfix=[700, 600, 500],\n",
" cvar=[1.5, 2.0, 2.5],\n",
" )\n",
")\n",
"\n",
"model.optimize()\n",
"print(\"obj =\", model.inner.objVal)\n",
"print(\"x =\", [model.inner._x[i].x for i in range(3)])\n",
"print(\"y =\", [model.inner._y[i].x for i in range(3)])"
]
},
{
"cell_type": "markdown",
"id": "41b03bbc",
"metadata": {},
"source": [
"Running the code above, we found that the optimal solution for our small problem instance costs \\$1320. It is achieved by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power."
]
},
{
"cell_type": "markdown",
"id": "01f576e1-1790-425e-9e5c-9fa07b6f4c26",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
" \n",
"Note\n",
"\n",
"- In the example above, `GurobiModel` is just a thin wrapper around a standard Gurobi model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as `optimize`. For more control, and to query the solution, the original Gurobi model can be accessed through `model.inner`, as illustrated above.\n",
"- To ensure training data consistency, MIPLearn requires all decision variables to have names.\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "cf60c1dd",
"metadata": {},
"source": [
"## Generating training data\n",
"\n",
"Although Gurobi could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as is often the case in practice, it could make sense to spend some time upfront generating a **trained** solver, which can optimize new instances (similar to the ones it was trained on) faster.\n",
"\n",
"In the following, we will use MIPLearn to train machine learning models that are able to predict the optimal solution for instances that follow a given probability distribution; the predicted solution is then provided to Gurobi as a warm start. Before we can train the models, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "5eb09fab",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:49:22.758192368Z",
"start_time": "2023-06-06T20:49:22.724784572Z"
}
},
"outputs": [],
"source": [
"from scipy.stats import uniform\n",
"from typing import List\n",
"import random\n",
"\n",
"\n",
"def random_uc_data(samples: int, n: int, seed: int = 42) -> List[UnitCommitmentData]:\n",
" random.seed(seed)\n",
" np.random.seed(seed)\n",
" pmin = uniform(loc=100_000.0, scale=400_000.0).rvs(n)\n",
" pmax = pmin * uniform(loc=2.0, scale=2.5).rvs(n)\n",
" cfix = pmin * uniform(loc=100.0, scale=25.0).rvs(n)\n",
" cvar = uniform(loc=1.25, scale=0.25).rvs(n)\n",
" return [\n",
" UnitCommitmentData(\n",
" demand=pmax.sum() * uniform(loc=0.5, scale=0.25).rvs(),\n",
" pmin=pmin,\n",
" pmax=pmax,\n",
" cfix=cfix,\n",
" cvar=cvar,\n",
" )\n",
" for _ in range(samples)\n",
" ]"
]
},
{
"cell_type": "markdown",
"id": "3a03a7ac",
"metadata": {},
"source": [
"In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.\n",
"\n",
"Now we generate 500 instances of this problem, each one with 500 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold the entire training data, as well as the concrete Gurobipy models, in memory. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple machines. The code below generates the files `uc/train/00000.pkl.gz`, `uc/train/00001.pkl.gz`, etc., which contain the input data in compressed (gzipped) pickle format."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6156752c",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:49:24.811192929Z",
"start_time": "2023-06-06T20:49:24.575639142Z"
}
},
"outputs": [],
"source": [
"from miplearn.io import write_pkl_gz\n",
"\n",
"data = random_uc_data(samples=500, n=500)\n",
"train_data = write_pkl_gz(data[0:450], \"uc/train\")\n",
"test_data = write_pkl_gz(data[450:500], \"uc/test\")"
]
},
{
"cell_type": "markdown",
"id": "b17af877",
"metadata": {},
"source": [
"Finally, we use `BasicCollector` to collect the optimal solutions and other useful training data for all training instances. The data is stored in HDF5 files `uc/train/00000.h5`, `uc/train/00001.h5`, etc. The optimization models are also exported to compressed MPS files `uc/train/00000.mps.gz`, `uc/train/00001.mps.gz`, etc."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7623f002",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:49:34.936729253Z",
"start_time": "2023-06-06T20:49:25.936126612Z"
}
},
"outputs": [],
"source": [
"from miplearn.collectors.basic import BasicCollector\n",
"\n",
"bc = BasicCollector()\n",
"bc.collect(train_data, build_uc_model)"
]
},
{
"cell_type": "markdown",
"id": "c42b1be1-9723-4827-82d8-974afa51ef9f",
"metadata": {},
"source": [
"## Training and solving test instances"
]
},
{
"cell_type": "markdown",
"id": "a33c6aa4-f0b8-4ccb-9935-01f7d7de2a1c",
"metadata": {},
"source": [
"With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use ML to generate a good warm start using $k$-nearest neighbors. More specifically, the strategy is to:\n",
"\n",
"1. Memorize the optimal solutions of all training instances;\n",
"2. Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;\n",
"3. Merge their optimal solutions into a single partial solution; specifically, fix only the binary variables whose values all 25 solutions agree on;\n",
"4. Provide this partial solution to the solver as a warm start.\n",
"\n",
"This simple strategy can be implemented as shown below, using `MemorizingPrimalComponent`. For more advanced strategies, and for the usage of more advanced classifiers, see the user guide."
]
},
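{
"cell_type": "markdown",
"id": "5c7e9d1b-3a2f-4c8d-b6e0-9f4a2c1d7e83",
"metadata": {},
"source": [
"The merging rule in step 3 can be sketched in plain Python. This is a simplified illustration of the idea, not the actual `MergeTopSolutions` implementation:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Rows are the optimal solutions of the k nearest training instances;\n",
"# columns are binary decision variables.\n",
"neighbor_solutions = np.array(\n",
"    [\n",
"        [1, 0, 1],\n",
"        [1, 0, 0],\n",
"        [1, 0, 1],\n",
"    ]\n",
")\n",
"\n",
"# Fix a variable only when all neighbors agree on its value;\n",
"# leave it free (None) otherwise.\n",
"means = neighbor_solutions.mean(axis=0)\n",
"warm_start = [1.0 if m == 1.0 else 0.0 if m == 0.0 else None for m in means]\n",
"# warm_start == [1.0, 0.0, None]\n",
"```"
]
},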
{
"cell_type": "code",
"execution_count": 7,
"id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:49:38.997939600Z",
"start_time": "2023-06-06T20:49:38.968261432Z"
}
},
"outputs": [],
"source": [
"from sklearn.neighbors import KNeighborsClassifier\n",
"from miplearn.components.primal.actions import SetWarmStart\n",
"from miplearn.components.primal.mem import (\n",
" MemorizingPrimalComponent,\n",
" MergeTopSolutions,\n",
")\n",
"from miplearn.extractors.fields import H5FieldsExtractor\n",
"\n",
"comp = MemorizingPrimalComponent(\n",
" clf=KNeighborsClassifier(n_neighbors=25),\n",
" extractor=H5FieldsExtractor(\n",
" instance_fields=[\"static_constr_rhs\"],\n",
" ),\n",
" constructor=MergeTopSolutions(25, [0.0, 1.0]),\n",
" action=SetWarmStart(),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "9536e7e4-0b0d-49b0-bebd-4a848f839e94",
"metadata": {},
"source": [
"Having defined the ML strategy, we next construct `LearningSolver`, train the ML component and optimize one of the test instances."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:49:42.072345411Z",
"start_time": "2023-06-06T20:49:41.294040974Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xa8b70287\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n",
" 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 1 iterations and 0.02 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n",
"\n",
"User-callback calls 59, time in user-callback 0.00 sec\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x892e56b2\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"\n",
"User MIP start produced solution with objective 8.29824e+09 (0.00s)\n",
"User MIP start produced solution with objective 8.29398e+09 (0.00s)\n",
"User MIP start produced solution with objective 8.29153e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.29153e+09 (0.01s)\n",
"Loaded user MIP start with objective 8.29153e+09\n",
"\n",
"Presolve removed 500 rows and 0 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 501 rows, 1000 columns, 2000 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n",
"Root relaxation: objective 8.290622e+09, 501 iterations, 0.00 seconds (0.02 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 - 0 8.2915e+09 8.2907e+09 0.01% - 0s\n",
"\n",
"Cutting planes:\n",
" Gomory: 1\n",
" RLT: 2\n",
"\n",
"Explored 1 nodes (550 simplex iterations) in 0.04 seconds (0.04 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 4: 8.29153e+09 8.29398e+09 8.29695e+09 8.29824e+09 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291528276179e+09, best bound 8.290709658754e+09, gap 0.0099%\n",
"\n",
"User-callback calls 799, time in user-callback 0.00 sec\n"
]
},
{
"data": {
"text/plain": [
"(<miplearn.solvers.gurobi.GurobiModel at 0x7f2bcd72cfd0>,\n",
" {'WS: Count': 1, 'WS: Number of variables set': 477.0})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from miplearn.solvers.learning import LearningSolver\n",
"\n",
"solver_ml = LearningSolver(components=[comp])\n",
"solver_ml.fit(train_data)\n",
"solver_ml.optimize(test_data[0], build_uc_model)"
]
},
{
"cell_type": "markdown",
"id": "61da6dad-7f56-4edb-aa26-c00eb5f946c0",
"metadata": {},
"source": [
"By examining the solve log above, specifically the line `Loaded user MIP start with objective...`, we can see that MIPLearn was able to construct an initial solution which turned out to be very close to the optimal solution to the problem. Now let us repeat the code above, but with a solver that does not apply any ML strategies. Note that our previously-defined component is not provided."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "2ff391ed-e855-4228-aa09-a7641d8c2893",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:49:44.012782276Z",
"start_time": "2023-06-06T20:49:43.813974362Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xa8b70287\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n",
" 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n",
"\n",
"User-callback calls 59, time in user-callback 0.00 sec\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x4cbbf7c7\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"Presolve removed 500 rows and 0 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 501 rows, 1000 columns, 2000 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Found heuristic solution: objective 1.729688e+10\n",
"\n",
"Root relaxation: objective 8.290622e+09, 501 iterations, 0.00 seconds (0.02 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 8.2906e+09 0 1 1.7297e+10 8.2906e+09 52.1% - 0s\n",
"H 0 0 8.298243e+09 8.2906e+09 0.09% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2982e+09 8.2907e+09 0.09% - 0s\n",
"H 0 0 8.293980e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2940e+09 8.2907e+09 0.04% - 0s\n",
"H 0 0 8.291961e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2920e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2920e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2920e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 2 8.2920e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2908e+09 0 3 8.2920e+09 8.2908e+09 0.01% - 0s\n",
" 0 0 8.2908e+09 0 5 8.2920e+09 8.2908e+09 0.01% - 0s\n",
" 0 0 8.2908e+09 0 5 8.2920e+09 8.2908e+09 0.01% - 0s\n",
" 0 2 8.2908e+09 0 5 8.2920e+09 8.2908e+09 0.01% - 0s\n",
"H 9 9 8.291298e+09 8.2908e+09 0.01% 1.4 0s\n",
"\n",
"Cutting planes:\n",
" MIR: 2\n",
"\n",
"Explored 10 nodes (759 simplex iterations) in 0.09 seconds (0.11 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 6: 8.2913e+09 8.29196e+09 8.29398e+09 ... 1.72969e+10\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291298126440e+09, best bound 8.290812450252e+09, gap 0.0059%\n",
"\n",
"User-callback calls 910, time in user-callback 0.00 sec\n"
]
}
],
"source": [
"solver_baseline = LearningSolver(components=[])\n",
"solver_baseline.fit(train_data)\n",
"solver_baseline.optimize(test_data[0], build_uc_model);"
]
},
{
"cell_type": "markdown",
"id": "b6d37b88-9fcc-43ee-ac1e-2a7b1e51a266",
"metadata": {},
"source": [
"In the log above, the `MIP start` line is missing, and Gurobi had to start with a significantly inferior initial solution. The solver was still able to find the optimal solution at the end, but it required using its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in terms of running time, but the difference can be significant for larger problems."
]
},
{
"cell_type": "markdown",
"id": "eec97f06",
"metadata": {
"tags": []
},
"source": [
"## Accessing the solution\n",
"\n",
"In the example above, we used `LearningSolver.optimize` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a Gurobipy model entirely in memory, using our trained solver."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "67a6cd18",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:50:12.869892930Z",
"start_time": "2023-06-06T20:50:12.509410473Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x19042f12\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 6.5917580e+09 5.627453e+04 0.000000e+00 0s\n",
" 1 8.2535968e+09 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 1 iterations and 0.00 seconds (0.00 work units)\n",
"Optimal objective 8.253596777e+09\n",
"\n",
"User-callback calls 59, time in user-callback 0.00 sec\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x6926c32f\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"\n",
"User MIP start produced solution with objective 8.25989e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25699e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.25678e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.25668e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.2554e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.25448e+09 (0.05s)\n",
"User MIP start produced solution with objective 8.25448e+09 (0.05s)\n",
"Loaded user MIP start with objective 8.25448e+09\n",
"\n",
"Presolve removed 500 rows and 0 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 501 rows, 1000 columns, 2000 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n",
"Root relaxation: objective 8.253597e+09, 501 iterations, 0.00 seconds (0.02 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 8.2536e+09 0 1 8.2545e+09 8.2536e+09 0.01% - 0s\n",
"H 0 0 8.254435e+09 8.2536e+09 0.01% - 0s\n",
" 0 0 - 0 8.2544e+09 8.2537e+09 0.01% - 0s\n",
"\n",
"Cutting planes:\n",
" RLT: 2\n",
"\n",
"Explored 1 nodes (503 simplex iterations) in 0.07 seconds (0.03 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 7: 8.25443e+09 8.25448e+09 8.2554e+09 ... 8.25989e+09\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.254434593504e+09, best bound 8.253676932849e+09, gap 0.0092%\n",
"\n",
"User-callback calls 787, time in user-callback 0.00 sec\n",
"obj = 8254434593.503945\n",
"x = [1.0, 1.0, 0.0]\n",
"y = [935662.09492646, 1604270.0218116897, 0.0]\n"
]
}
],
"source": [
"data = random_uc_data(samples=1, n=500)[0]\n",
"model = build_uc_model(data)\n",
"solver_ml.optimize(model)\n",
"print(\"obj =\", model.inner.objVal)\n",
"print(\"x =\", [model.inner._x[i].x for i in range(3)])\n",
"print(\"y =\", [model.inner._y[i].x for i in range(3)])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5593d23a-83bd-4e16-8253-6300f5e3f63b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,680 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6b8983b1",
"metadata": {
"tags": []
},
"source": [
"# Getting started (JuMP)\n",
"\n",
"## Introduction\n",
"\n",
"**MIPLearn** is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:\n",
"\n",
"1. Install the Julia/JuMP version of MIPLearn\n",
"2. Model a simple optimization problem using JuMP\n",
"3. Generate training data and train the ML models\n",
"4. Use the ML models together with Gurobi to solve new instances\n",
"\n",
"<div class=\"alert alert-warning\">\n",
"Warning\n",
" \n",
"MIPLearn is still at an early stage of development. If you run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!\n",
" \n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"id": "02f0a927",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"MIPLearn is available in two versions:\n",
"\n",
"- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n",
"- Julia version, compatible with the JuMP modeling language.\n",
"\n",
"In this tutorial, we will demonstrate how to install and use the Julia/JuMP version of the package. The first step is to install Julia on your machine. See the [official Julia website for more instructions](https://julialang.org/downloads/). After Julia is installed, launch the Julia REPL, type `]` to enter package mode, then install MIPLearn:\n",
"\n",
"```\n",
"pkg> add MIPLearn@0.4\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "e8274543",
"metadata": {},
"source": [
"In addition to MIPLearn itself, we will also install:\n",
"\n",
"- the JuMP modeling language\n",
"- Gurobi, a state-of-the-art commercial MILP solver\n",
"- Distributions, to generate random data\n",
"- PyCall, to access ML models from Scikit-Learn\n",
"- Suppressor, to make the output cleaner\n",
"\n",
"```\n",
"pkg> add JuMP@1, Gurobi@1, Distributions@0.25, PyCall@1, Suppressor@0.2\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "a14e4550",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
" \n",
"Note\n",
"\n",
"- If you do not have a Gurobi license available, you can also follow the tutorial by installing an open-source solver, such as `HiGHS`, and replacing `Gurobi.Optimizer` with `HiGHS.Optimizer` in all the code examples.\n",
"- In the code above, we install specific versions of all packages to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions of the packages are released. This is usually a recommended practice for all Julia projects.\n",
" \n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "16b86823",
"metadata": {},
"source": [
"## Modeling a simple optimization problem\n",
"\n",
"To illustrate how MIPLearn can be used, we will model and solve a small optimization problem related to power systems. The problem we discuss below is a simplification of the **unit commitment problem,** a practical optimization problem solved daily by electric grid operators around the world.\n",
"\n",
"Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power each generator should produce. More specifically, assume that the company owns $n$ generators, denoted by $g_1, \\ldots, g_n$. Each generator can either be online or offline. An online generator $g_i$ can produce between $p^\\text{min}_i$ and $p^\\text{max}_i$ megawatts of power, and it costs the company $c^\\text{fix}_i + c^\\text{var}_i y_i$, where $y_i$ is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power produced needs to be exactly equal to the total demand $d$ (in megawatts).\n",
"\n",
"This simple problem can be modeled as a *mixed-integer linear optimization* problem as follows. For each generator $g_i$, let $x_i \\in \\{0,1\\}$ be a decision variable indicating whether $g_i$ is online, and let $y_i \\geq 0$ be a decision variable indicating how much power $g_i$ produces. The problem is then given by:"
]
},
{
"cell_type": "markdown",
"id": "f12c3702",
"metadata": {},
"source": [
"$$\n",
"\\begin{align}\n",
"\\text{minimize } \\quad & \\sum_{i=1}^n \\left( c^\\text{fix}_i x_i + c^\\text{var}_i y_i \\right) \\\\\n",
"\\text{subject to } \\quad & y_i \\leq p^\\text{max}_i x_i & i=1,\\ldots,n \\\\\n",
"& y_i \\geq p^\\text{min}_i x_i & i=1,\\ldots,n \\\\\n",
"& \\sum_{i=1}^n y_i = d \\\\\n",
"& x_i \\in \\{0,1\\} & i=1,\\ldots,n \\\\\n",
"& y_i \\geq 0 & i=1,\\ldots,n\n",
"\\end{align}\n",
"$$"
]
},
{
"cell_type": "markdown",
"id": "be3989ed",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
"\n",
"Note\n",
"\n",
"We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.\n",
"\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "a5fd33f6",
"metadata": {},
"source": [
"Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Julia and JuMP. We start by defining a data class `UnitCommitmentData`, which holds all the input data."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c62ebff1-db40-45a1-9997-d121837f067b",
"metadata": {},
"outputs": [],
"source": [
"struct UnitCommitmentData\n",
" demand::Float64\n",
" pmin::Vector{Float64}\n",
" pmax::Vector{Float64}\n",
" cfix::Vector{Float64}\n",
" cvar::Vector{Float64}\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "29f55efa-0751-465a-9b0a-a821d46a3d40",
"metadata": {},
"source": [
"Next, we write a `build_uc_model` function, which converts the input data into a concrete JuMP model. The function accepts `UnitCommitmentData`, the data structure we previously defined, or the path to a JLD2 file containing this data."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "79ef7775-18ca-4dfa-b438-49860f762ad0",
"metadata": {},
"outputs": [],
"source": [
"using MIPLearn\n",
"using JuMP\n",
"using Gurobi\n",
"\n",
"function build_uc_model(data)\n",
" if data isa String\n",
" data = read_jld2(data)\n",
" end\n",
" model = Model(Gurobi.Optimizer)\n",
" G = 1:length(data.pmin)\n",
" @variable(model, x[G], Bin)\n",
" @variable(model, y[G] >= 0)\n",
" @objective(model, Min, sum(data.cfix[g] * x[g] + data.cvar[g] * y[g] for g in G))\n",
" @constraint(model, eq_max_power[g in G], y[g] <= data.pmax[g] * x[g])\n",
" @constraint(model, eq_min_power[g in G], y[g] >= data.pmin[g] * x[g])\n",
" @constraint(model, eq_demand, sum(y[g] for g in G) == data.demand)\n",
" return JumpModel(model)\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "c22714a3",
"metadata": {},
"source": [
"At this point, we can already use Gurobi to find optimal solutions to any instance of this problem. To illustrate this, let us solve a small instance with three generators:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "dd828d68-fd43-4d2a-a058-3e2628d99d9e",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:01:10.993801745Z",
"start_time": "2023-06-06T20:01:10.887580927Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 7 rows, 6 columns and 15 nonzeros\n",
"Model fingerprint: 0x55e33a07\n",
"Variable types: 3 continuous, 3 integer (3 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 7e+01]\n",
" Objective range [2e+00, 7e+02]\n",
" Bounds range [0e+00, 0e+00]\n",
" RHS range [1e+02, 1e+02]\n",
"Presolve removed 2 rows and 1 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 5 rows, 5 columns, 13 nonzeros\n",
"Variable types: 0 continuous, 5 integer (3 binary)\n",
"Found heuristic solution: objective 1400.0000000\n",
"\n",
"Root relaxation: objective 1.035000e+03, 3 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 1035.00000 0 1 1400.00000 1035.00000 26.1% - 0s\n",
" 0 0 1105.71429 0 1 1400.00000 1105.71429 21.0% - 0s\n",
"* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n",
"\n",
"Explored 1 nodes (5 simplex iterations) in 0.00 seconds (0.00 work units)\n",
"Thread count was 32 (of 32 available processors)\n",
"\n",
"Solution count 2: 1320 1400 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 371, time in user-callback 0.00 sec\n",
"objective_value(model.inner) = 1320.0\n",
"Vector(value.(model.inner[:x])) = [-0.0, 1.0, 1.0]\n",
"Vector(value.(model.inner[:y])) = [0.0, 60.0, 40.0]\n"
]
}
],
"source": [
"model = build_uc_model(\n",
" UnitCommitmentData(\n",
" 100.0, # demand\n",
" [10, 20, 30], # pmin\n",
" [50, 60, 70], # pmax\n",
" [700, 600, 500], # cfix\n",
" [1.5, 2.0, 2.5], # cvar\n",
" )\n",
")\n",
"model.optimize()\n",
"@show objective_value(model.inner)\n",
"@show Vector(value.(model.inner[:x]))\n",
"@show Vector(value.(model.inner[:y]));"
]
},
{
"cell_type": "markdown",
"id": "41b03bbc",
"metadata": {},
"source": [
"Running the code above, we found that the optimal solution for our small problem instance costs \\$1320. It is achieved by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power."
]
},
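{
"cell_type": "markdown",
"id": "f3c9a7e2-1b64-4c1d-9f10-8d2a5e4b7c01",
"metadata": {},
"source": [
"As a sanity check, we can confirm this result by brute force. The sketch below (plain Python, independent of MIPLearn) enumerates all $2^3$ on/off combinations and dispatches the remaining demand greedily, cheapest variable cost first; this greedy rule is optimal here because the only coupling constraint is the single demand equation:\n",
"\n",
"```python\n",
"from itertools import product\n",
"\n",
"demand = 100.0\n",
"pmin, pmax = [10, 20, 30], [50, 60, 70]\n",
"cfix, cvar = [700, 600, 500], [1.5, 2.0, 2.5]\n",
"\n",
"def dispatch_cost(online):\n",
"    # Feasible only if the demand fits between total pmin and total pmax\n",
"    lo = sum(pmin[i] for i in online)\n",
"    hi = sum(pmax[i] for i in online)\n",
"    if not lo <= demand <= hi:\n",
"        return None\n",
"    # Start each online generator at pmin, then fill the remaining\n",
"    # demand using the cheapest variable costs first\n",
"    y = {i: pmin[i] for i in online}\n",
"    rest = demand - lo\n",
"    for i in sorted(online, key=lambda i: cvar[i]):\n",
"        extra = min(rest, pmax[i] - pmin[i])\n",
"        y[i] += extra\n",
"        rest -= extra\n",
"    return sum(cfix[i] + cvar[i] * y[i] for i in online)\n",
"\n",
"costs = {\n",
"    s: dispatch_cost(s)\n",
"    for s in (tuple(i for i in range(3) if b[i]) for b in product([0, 1], repeat=3))\n",
"    if dispatch_cost(s) is not None\n",
"}\n",
"print(min(costs.items(), key=lambda kv: kv[1]))  # ((1, 2), 1320.0)\n",
"```\n",
"\n",
"The cheapest feasible combination keeps generators 2 and 3 online (indices 1 and 2 above), at a total cost of \\$1320, matching the solver output."
]
},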
{
"cell_type": "markdown",
"id": "01f576e1-1790-425e-9e5c-9fa07b6f4c26",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
" \n",
"Notes\n",
" \n",
"- In the example above, `JumpModel` is just a thin wrapper around a standard JuMP model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as `optimize`. For more control, and to query the solution, the original JuMP model can be accessed through `model.inner`, as illustrated above.\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "cf60c1dd",
"metadata": {},
"source": [
"## Generating training data\n",
"\n",
"Although Gurobi could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as is often the case in practice, it could make sense to spend some time upfront generating a **trained** solver, which can optimize new instances (similar to the ones it was trained on) faster.\n",
"\n",
"In the following, we will use MIPLearn to train a machine learning model that predicts the optimal solution for instances that follow a given probability distribution, and then provides this predicted solution to Gurobi as a warm start. Before we can train the model, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we will construct them using a random instance generator:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1326efd7-3869-4137-ab6b-df9cb609a7e0",
"metadata": {},
"outputs": [],
"source": [
"using Distributions\n",
"using Random\n",
"\n",
"function random_uc_data(; samples::Int, n::Int, seed::Int=42)::Vector\n",
" Random.seed!(seed)\n",
" pmin = rand(Uniform(100_000, 500_000), n)\n",
" pmax = pmin .* rand(Uniform(2, 2.5), n)\n",
" cfix = pmin .* rand(Uniform(100, 125), n)\n",
" cvar = rand(Uniform(1.25, 1.50), n)\n",
" return [\n",
" UnitCommitmentData(\n",
" sum(pmax) * rand(Uniform(0.5, 0.75)),\n",
" pmin,\n",
" pmax,\n",
" cfix,\n",
" cvar,\n",
" )\n",
" for _ in 1:samples\n",
" ]\n",
"end;"
]
},
{
"cell_type": "markdown",
"id": "3a03a7ac",
"metadata": {},
"source": [
"In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.\n",
"\n",
"Now we generate 500 instances of this problem, each one with 500 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold in memory the entire training data, as well as the concrete JuMP models. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple machines. The code below generates the files `uc/train/00001.jld2`, `uc/train/00002.jld2`, etc., which contain the input data in JLD2 format."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6156752c",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:03:04.782830561Z",
"start_time": "2023-06-06T20:03:04.530421396Z"
}
},
"outputs": [],
"source": [
"data = random_uc_data(samples=500, n=500)\n",
"train_data = write_jld2(data[1:450], \"uc/train\")\n",
"test_data = write_jld2(data[451:500], \"uc/test\");"
]
},
{
"cell_type": "markdown",
"id": "b17af877",
"metadata": {},
"source": [
"Finally, we use `BasicCollector` to collect the optimal solutions and other useful training data for all training instances. The data is stored in HDF5 files `uc/train/00001.h5`, `uc/train/00002.h5`, etc. The optimization models are also exported to compressed MPS files `uc/train/00001.mps.gz`, `uc/train/00002.mps.gz`, etc."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7623f002",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:03:35.571497019Z",
"start_time": "2023-06-06T20:03:25.804104036Z"
}
},
"outputs": [],
"source": [
"using Suppressor\n",
"@suppress_out begin\n",
" bc = BasicCollector()\n",
" bc.collect(train_data, build_uc_model)\n",
"end"
]
},
{
"cell_type": "markdown",
"id": "c42b1be1-9723-4827-82d8-974afa51ef9f",
"metadata": {},
"source": [
"## Training and solving test instances"
]
},
{
"cell_type": "markdown",
"id": "a33c6aa4-f0b8-4ccb-9935-01f7d7de2a1c",
"metadata": {},
"source": [
"With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use $k$-nearest neighbors to generate a good warm start. More specifically, the strategy is to:\n",
"\n",
"1. Memorize the optimal solutions of all training instances;\n",
"2. Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;\n",
"3. Merge their optimal solutions into a single partial solution; specifically, only assign values to the binary variables that agree unanimously.\n",
"4. Provide this partial solution to the solver as a warm start.\n",
"\n",
"This simple strategy can be implemented as shown below, using `MemorizingPrimalComponent`. For more advanced strategies, and for the usage of more advanced classifiers, see the user guide."
]
},
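{
"cell_type": "markdown",
"id": "b7d5e8a1-2c3f-4e6a-8b90-1a2b3c4d5e02",
"metadata": {},
"source": [
"Before wiring these pieces together, it may help to see the merging idea from step 3 in isolation. The sketch below (plain Python, for illustration only; not MIPLearn's actual implementation of `MergeTopSolutions`) fixes only the binary variables on which all neighboring solutions agree:\n",
"\n",
"```python\n",
"def merge_solutions(solutions):\n",
"    # solutions: list of 0/1 vectors, one per neighboring training instance.\n",
"    # Returns a partial solution: the common value where all neighbors\n",
"    # agree, and None where they disagree.\n",
"    n = len(solutions[0])\n",
"    merged = []\n",
"    for j in range(n):\n",
"        values = {round(sol[j]) for sol in solutions}\n",
"        merged.append(values.pop() if len(values) == 1 else None)\n",
"    return merged\n",
"\n",
"print(merge_solutions([[1, 0, 1], [1, 1, 1], [1, 0, 1]]))  # [1, None, 1]\n",
"```\n",
"\n",
"Variables left as `None` are not included in the warm start and remain free for the solver to decide."
]
},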
{
"cell_type": "code",
"execution_count": 7,
"id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:05:20.497772794Z",
"start_time": "2023-06-06T20:05:20.484821405Z"
}
},
"outputs": [],
"source": [
"# Load kNN classifier from Scikit-Learn\n",
"using PyCall\n",
"KNeighborsClassifier = pyimport(\"sklearn.neighbors\").KNeighborsClassifier\n",
"\n",
"# Build the MIPLearn component\n",
"comp = MemorizingPrimalComponent(\n",
" clf=KNeighborsClassifier(n_neighbors=25),\n",
" extractor=H5FieldsExtractor(\n",
" instance_fields=[\"static_constr_rhs\"],\n",
" ),\n",
" constructor=MergeTopSolutions(25, [0.0, 1.0]),\n",
" action=SetWarmStart(),\n",
");"
]
},
{
"cell_type": "markdown",
"id": "9536e7e4-0b0d-49b0-bebd-4a848f839e94",
"metadata": {},
"source": [
"Having defined the ML strategy, we next construct `LearningSolver`, train the ML component and optimize one of the test instances."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:05:22.672002339Z",
"start_time": "2023-06-06T20:05:21.447466634Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xd2378195\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [0e+00, 0e+00]\n",
" RHS range [2e+08, 2e+08]\n",
"\n",
"User MIP start produced solution with objective 1.02165e+10 (0.00s)\n",
"Loaded user MIP start with objective 1.02165e+10\n",
"\n",
"Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n",
"Root relaxation: objective 1.021568e+10, 510 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 1.0216e+10 0 1 1.0217e+10 1.0216e+10 0.01% - 0s\n",
"\n",
"Explored 1 nodes (510 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 32 (of 32 available processors)\n",
"\n",
"Solution count 1: 1.02165e+10 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 1.021651058978e+10, best bound 1.021567971257e+10, gap 0.0081%\n",
"\n",
"User-callback calls 169, time in user-callback 0.00 sec\n"
]
}
],
"source": [
"solver_ml = LearningSolver(components=[comp])\n",
"solver_ml.fit(train_data)\n",
"solver_ml.optimize(test_data[1], build_uc_model);"
]
},
{
"cell_type": "markdown",
"id": "61da6dad-7f56-4edb-aa26-c00eb5f946c0",
"metadata": {},
"source": [
"By examining the solve log above, specifically the line `Loaded user MIP start with objective...`, we can see that MIPLearn was able to construct an initial solution that turned out to be very close to the optimal solution of the problem. Now let us repeat the code above, but with a solver that does not apply any ML strategies. Note that our previously-defined component is not provided."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "2ff391ed-e855-4228-aa09-a7641d8c2893",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:05:46.969575966Z",
"start_time": "2023-06-06T20:05:46.420803286Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xb45c0594\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [0e+00, 0e+00]\n",
" RHS range [2e+08, 2e+08]\n",
"Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Found heuristic solution: objective 1.071463e+10\n",
"\n",
"Root relaxation: objective 1.021568e+10, 510 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 1.0216e+10 0 1 1.0715e+10 1.0216e+10 4.66% - 0s\n",
"H 0 0 1.025162e+10 1.0216e+10 0.35% - 0s\n",
" 0 0 1.0216e+10 0 1 1.0252e+10 1.0216e+10 0.35% - 0s\n",
"H 0 0 1.023090e+10 1.0216e+10 0.15% - 0s\n",
"H 0 0 1.022335e+10 1.0216e+10 0.07% - 0s\n",
"H 0 0 1.022281e+10 1.0216e+10 0.07% - 0s\n",
"H 0 0 1.021753e+10 1.0216e+10 0.02% - 0s\n",
"H 0 0 1.021752e+10 1.0216e+10 0.02% - 0s\n",
" 0 0 1.0216e+10 0 3 1.0218e+10 1.0216e+10 0.02% - 0s\n",
" 0 0 1.0216e+10 0 1 1.0218e+10 1.0216e+10 0.02% - 0s\n",
"H 0 0 1.021651e+10 1.0216e+10 0.01% - 0s\n",
"\n",
"Explored 1 nodes (764 simplex iterations) in 0.03 seconds (0.02 work units)\n",
"Thread count was 32 (of 32 available processors)\n",
"\n",
"Solution count 7: 1.02165e+10 1.02175e+10 1.02228e+10 ... 1.07146e+10\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 1.021651058978e+10, best bound 1.021573363741e+10, gap 0.0076%\n",
"\n",
"User-callback calls 204, time in user-callback 0.00 sec\n"
]
}
],
"source": [
"solver_baseline = LearningSolver(components=[])\n",
"solver_baseline.fit(train_data)\n",
"solver_baseline.optimize(test_data[1], build_uc_model);"
]
},
{
"cell_type": "markdown",
"id": "b6d37b88-9fcc-43ee-ac1e-2a7b1e51a266",
"metadata": {},
"source": [
"In the log above, the `MIP start` line is missing, and Gurobi had to start with a significantly inferior initial solution. The solver was still able to find the optimal solution at the end, but it required using its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in terms of running time, but the difference can be significant for larger problems."
]
},
{
"cell_type": "markdown",
"id": "eec97f06",
"metadata": {
"tags": []
},
"source": [
"## Accessing the solution\n",
"\n",
"In the example above, we used `LearningSolver.optimize` together with data files to solve both the training and the test instances. In the following example, we show how to build and solve a JuMP model entirely in-memory, using our trained solver."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "67a6cd18",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:06:26.913448568Z",
"start_time": "2023-06-06T20:06:26.169047914Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x974a7fba\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [0e+00, 0e+00]\n",
" RHS range [2e+08, 2e+08]\n",
"\n",
"User MIP start produced solution with objective 9.86729e+09 (0.00s)\n",
"User MIP start produced solution with objective 9.86675e+09 (0.00s)\n",
"User MIP start produced solution with objective 9.86654e+09 (0.01s)\n",
"User MIP start produced solution with objective 9.8661e+09 (0.01s)\n",
"Loaded user MIP start with objective 9.8661e+09\n",
"\n",
"Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n",
"Root relaxation: objective 9.865344e+09, 510 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 9.8653e+09 0 1 9.8661e+09 9.8653e+09 0.01% - 0s\n",
"\n",
"Explored 1 nodes (510 simplex iterations) in 0.02 seconds (0.01 work units)\n",
"Thread count was 32 (of 32 available processors)\n",
"\n",
"Solution count 4: 9.8661e+09 9.86654e+09 9.86675e+09 9.86729e+09 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 9.866096485614e+09, best bound 9.865343669936e+09, gap 0.0076%\n",
"\n",
"User-callback calls 182, time in user-callback 0.00 sec\n",
"objective_value(model.inner) = 9.866096485613789e9\n"
]
}
],
"source": [
"data = random_uc_data(samples=1, n=500)[1]\n",
"model = build_uc_model(data)\n",
"solver_ml.optimize(model)\n",
"@show objective_value(model.inner);"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Julia 1.9.0",
"language": "julia",
"name": "julia-1.9"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.9.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,880 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6b8983b1",
"metadata": {
"tags": []
},
"source": [
"# Getting started (Pyomo)\n",
"\n",
"## Introduction\n",
"\n",
"**MIPLearn** is an open source framework that uses machine learning (ML) to accelerate the performance of mixed-integer programming solvers (e.g. Gurobi, CPLEX, XPRESS). In this tutorial, we will:\n",
"\n",
"1. Install the Python/Pyomo version of MIPLearn\n",
"2. Model a simple optimization problem using Pyomo\n",
"3. Generate training data and train the ML models\n",
"4. Use the ML models together with Gurobi to solve new instances\n",
"\n",
"<div class=\"alert alert-info\">\n",
"Note\n",
" \n",
"The Python/Pyomo version of MIPLearn is currently only compatible with Pyomo persistent solvers (Gurobi, CPLEX and XPRESS). For broader solver compatibility, see the Julia/JuMP version of the package.\n",
"</div>\n",
"\n",
"<div class=\"alert alert-warning\">\n",
"Warning\n",
" \n",
"MIPLearn is still at an early stage of development. If you run into any bugs or issues, please submit a bug report in our GitHub repository. Comments, suggestions and pull requests are also very welcome!\n",
" \n",
"</div>\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "02f0a927",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"MIPLearn is available in two versions:\n",
"\n",
"- Python version, compatible with the Pyomo and Gurobipy modeling languages,\n",
"- Julia version, compatible with the JuMP modeling language.\n",
"\n",
"In this tutorial, we will demonstrate how to install and use the Python/Pyomo version of the package. The first step is to install Python 3.9+ on your computer. See the [official Python website for more instructions](https://www.python.org/downloads/). After Python is installed, we proceed to install MIPLearn using `pip`:\n",
"\n",
"```\n",
"$ pip install MIPLearn~=0.4\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "a14e4550",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
" \n",
"Note\n",
" \n",
"In the code above, we install a specific version of the package to ensure that this tutorial keeps running in the future, even when newer (and possibly incompatible) versions are released. This is usually a recommended practice for all Python projects.\n",
" \n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "16b86823",
"metadata": {},
"source": [
"## Modeling a simple optimization problem\n",
"\n",
"To illustrate how MIPLearn can be used, we will model and solve a small optimization problem related to power systems. The problem we discuss below is a simplification of the **unit commitment problem,** a practical optimization problem solved daily by electric grid operators around the world.\n",
"\n",
"Suppose that a utility company needs to decide which electrical generators should be online at each hour of the day, as well as how much power each generator should produce. More specifically, assume that the company owns $n$ generators, denoted by $g_1, \\ldots, g_n$. Each generator can either be online or offline. An online generator $g_i$ can produce between $p^\\text{min}_i$ and $p^\\text{max}_i$ megawatts of power, and it costs the company $c^\\text{fix}_i + c^\\text{var}_i y_i$, where $y_i$ is the amount of power produced. An offline generator produces nothing and costs nothing. The total amount of power produced needs to be exactly equal to the total demand $d$ (in megawatts).\n",
"\n",
"This simple problem can be modeled as a *mixed-integer linear optimization* problem as follows. For each generator $g_i$, let $x_i \\in \\{0,1\\}$ be a decision variable indicating whether $g_i$ is online, and let $y_i \\geq 0$ be a decision variable indicating how much power $g_i$ produces. The problem is then given by:"
]
},
{
"cell_type": "markdown",
"id": "f12c3702",
"metadata": {},
"source": [
"$$\n",
"\\begin{align}\n",
"\\text{minimize } \\quad & \\sum_{i=1}^n \\left( c^\\text{fix}_i x_i + c^\\text{var}_i y_i \\right) \\\\\n",
"\\text{subject to } \\quad & y_i \\leq p^\\text{max}_i x_i & i=1,\\ldots,n \\\\\n",
"& y_i \\geq p^\\text{min}_i x_i & i=1,\\ldots,n \\\\\n",
"& \\sum_{i=1}^n y_i = d \\\\\n",
"& x_i \\in \\{0,1\\} & i=1,\\ldots,n \\\\\n",
"& y_i \\geq 0 & i=1,\\ldots,n\n",
"\\end{align}\n",
"$$"
]
},
{
"cell_type": "markdown",
"id": "be3989ed",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
"\n",
"Note\n",
"\n",
"We use a simplified version of the unit commitment problem in this tutorial just to make it easier to follow. MIPLearn can also handle realistic, large-scale versions of this problem.\n",
"\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "a5fd33f6",
"metadata": {},
"source": [
"Next, let us convert this abstract mathematical formulation into a concrete optimization model, using Python and Pyomo. We start by defining a data class `UnitCommitmentData`, which holds all the input data."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "22a67170-10b4-43d3-8708-014d91141e73",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:00:03.278853343Z",
"start_time": "2023-06-06T20:00:03.123324067Z"
},
"tags": []
},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"from typing import List\n",
"\n",
"import numpy as np\n",
"\n",
"\n",
"@dataclass\n",
"class UnitCommitmentData:\n",
" demand: float\n",
" pmin: List[float]\n",
" pmax: List[float]\n",
" cfix: List[float]\n",
" cvar: List[float]"
]
},
{
"cell_type": "markdown",
"id": "29f55efa-0751-465a-9b0a-a821d46a3d40",
"metadata": {},
"source": [
"Next, we write a `build_uc_model` function, which converts the input data into a concrete Pyomo model. The function accepts `UnitCommitmentData`, the data structure we previously defined, or the path to a compressed pickle file containing this data."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2f67032f-0d74-4317-b45c-19da0ec859e9",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:00:45.890126754Z",
"start_time": "2023-06-06T20:00:45.637044282Z"
}
},
"outputs": [],
"source": [
"import pyomo.environ as pe\n",
"from typing import Union\n",
"from miplearn.io import read_pkl_gz\n",
"from miplearn.solvers.pyomo import PyomoModel\n",
"\n",
"\n",
"def build_uc_model(data: Union[str, UnitCommitmentData]) -> PyomoModel:\n",
" if isinstance(data, str):\n",
" data = read_pkl_gz(data)\n",
"\n",
" model = pe.ConcreteModel()\n",
" n = len(data.pmin)\n",
" model.x = pe.Var(range(n), domain=pe.Binary)\n",
" model.y = pe.Var(range(n), domain=pe.NonNegativeReals)\n",
" model.obj = pe.Objective(\n",
" expr=sum(\n",
" data.cfix[i] * model.x[i] + data.cvar[i] * model.y[i] for i in range(n)\n",
" )\n",
" )\n",
" model.eq_max_power = pe.ConstraintList()\n",
" model.eq_min_power = pe.ConstraintList()\n",
" for i in range(n):\n",
" model.eq_max_power.add(model.y[i] <= data.pmax[i] * model.x[i])\n",
" model.eq_min_power.add(model.y[i] >= data.pmin[i] * model.x[i])\n",
" model.eq_demand = pe.Constraint(\n",
" expr=sum(model.y[i] for i in range(n)) == data.demand,\n",
" )\n",
" return PyomoModel(model, \"gurobi_persistent\")"
]
},
{
"cell_type": "markdown",
"id": "c22714a3",
"metadata": {},
"source": [
"At this point, we can already use Pyomo and any mixed-integer linear programming solver to find optimal solutions to any instance of this problem. To illustrate this, let us solve a small instance with three generators:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2a896f47",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:01:10.993801745Z",
"start_time": "2023-06-06T20:01:10.887580927Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Set parameter Threads to value 1\n",
"Read parameters from file gurobi.env\n",
"Restricted license - for non-production use only - expires 2026-11-23\n",
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 7 rows, 6 columns and 15 nonzeros\n",
"Model fingerprint: 0x15c7a953\n",
"Variable types: 3 continuous, 3 integer (3 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 7e+01]\n",
" Objective range [2e+00, 7e+02]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [1e+02, 1e+02]\n",
"Presolve removed 6 rows and 3 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 3 columns, 3 nonzeros\n",
"Variable types: 0 continuous, 3 integer (1 binary)\n",
"Found heuristic solution: objective 1990.0000000\n",
"\n",
"Root relaxation: objective 1.320000e+03, 0 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
"* 0 0 0 1320.0000000 1320.00000 0.00% - 0s\n",
"\n",
"Explored 1 nodes (0 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 2: 1320 1990 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 1.320000000000e+03, best bound 1.320000000000e+03, gap 0.0000%\n",
"WARNING: Cannot get reduced costs for MIP.\n",
"WARNING: Cannot get duals for MIP.\n",
"obj = 1320.0\n",
"x = [-0.0, 1.0, 1.0]\n",
"y = [0.0, 60.0, 40.0]\n"
]
}
],
"source": [
"model = build_uc_model(\n",
" UnitCommitmentData(\n",
" demand=100.0,\n",
" pmin=[10, 20, 30],\n",
" pmax=[50, 60, 70],\n",
" cfix=[700, 600, 500],\n",
" cvar=[1.5, 2.0, 2.5],\n",
" )\n",
")\n",
"\n",
"model.optimize()\n",
"print(\"obj =\", model.inner.obj())\n",
"print(\"x =\", [model.inner.x[i].value for i in range(3)])\n",
"print(\"y =\", [model.inner.y[i].value for i in range(3)])"
]
},
{
"cell_type": "markdown",
"id": "41b03bbc",
"metadata": {},
"source": [
"Running the code above, we found that the optimal solution for our small problem instance costs \\$1320. It is achieved by keeping generators 2 and 3 online and producing, respectively, 60 MW and 40 MW of power."
]
},
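{
"cell_type": "markdown",
"id": "a1b2c3d4-brute-force-check",
"metadata": {},
"source": [
"Since this instance has only three generators, the \\$1320 result can be double-checked by brute force: enumerate all $2^3$ commitment patterns and, for each feasible one, dispatch the demand to the online units greedily, in order of increasing variable cost. This is a standalone sketch, independent of Pyomo and MIPLearn:"
]
},
```python
from itertools import product

demand = 100.0
pmin = [10, 20, 30]
pmax = [50, 60, 70]
cfix = [700, 600, 500]
cvar = [1.5, 2.0, 2.5]

best_cost, best_plan = float("inf"), None
for x in product([0, 1], repeat=3):
    on = [i for i in range(3) if x[i]]
    lo = sum(pmin[i] for i in on)
    hi = sum(pmax[i] for i in on)
    if not (lo <= demand <= hi):
        continue  # no feasible dispatch for this commitment pattern
    # Greedy dispatch: start every online unit at pmin, then assign the
    # remaining demand to the cheapest (variable-cost) units first.
    y = {i: float(pmin[i]) for i in on}
    rest = demand - lo
    for i in sorted(on, key=lambda i: cvar[i]):
        add = min(rest, pmax[i] - pmin[i])
        y[i] += add
        rest -= add
    cost = sum(cfix[i] + cvar[i] * y[i] for i in on)
    if cost < best_cost:
        best_cost, best_plan = cost, (x, y)

print(best_cost)  # 1320.0, achieved by x = (0, 1, 1)
```
This confirms the solver's answer: generators 2 and 3 online, producing 60 MW and 40 MW.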
{
"cell_type": "markdown",
"id": "01f576e1-1790-425e-9e5c-9fa07b6f4c26",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
" \n",
"Notes\n",
" \n",
"- In the example above, `PyomoModel` is just a thin wrapper around a standard Pyomo model. This wrapper allows MIPLearn to be solver- and modeling-language-agnostic. The wrapper provides only a few basic methods, such as `optimize`. For more control, and to query the solution, the original Pyomo model can be accessed through `model.inner`, as illustrated above. \n",
"- To use CPLEX or XPRESS instead of Gurobi, replace `gurobi_persistent` with `cplex_persistent` or `xpress_persistent` in `build_uc_model`. Note that only persistent Pyomo solvers are currently supported. Pull requests adding support for other types of solvers are very welcome.\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "cf60c1dd",
"metadata": {},
"source": [
"## Generating training data\n",
"\n",
"Although Gurobi could solve the small example above in a fraction of a second, it gets slower for larger and more complex versions of the problem. If this is a problem that needs to be solved frequently, as is often the case in practice, it could make sense to spend some time upfront generating a **trained** solver, which can optimize new instances (similar to the ones it was trained on) faster.\n",
"\n",
"In the following, we will use MIPLearn to train a machine learning model that is able to predict the optimal solution for instances that follow a given probability distribution; this predicted solution is then provided to Gurobi as a warm start. Before we can train the model, we need to collect training data by solving a large number of instances. In real-world situations, we may construct these training instances based on historical data. In this tutorial, we construct them using a random instance generator:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "5eb09fab",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:02:27.324208900Z",
"start_time": "2023-06-06T20:02:26.990044230Z"
}
},
"outputs": [],
"source": [
"from scipy.stats import uniform\n",
"from typing import List\n",
"import random\n",
"\n",
"\n",
"def random_uc_data(samples: int, n: int, seed: int = 42) -> List[UnitCommitmentData]:\n",
" random.seed(seed)\n",
" np.random.seed(seed)\n",
" pmin = uniform(loc=100_000.0, scale=400_000.0).rvs(n)\n",
" pmax = pmin * uniform(loc=2.0, scale=2.5).rvs(n)\n",
" cfix = pmin * uniform(loc=100.0, scale=25.0).rvs(n)\n",
" cvar = uniform(loc=1.25, scale=0.25).rvs(n)\n",
" return [\n",
" UnitCommitmentData(\n",
" demand=pmax.sum() * uniform(loc=0.5, scale=0.25).rvs(),\n",
" pmin=pmin,\n",
" pmax=pmax,\n",
" cfix=cfix,\n",
" cvar=cvar,\n",
" )\n",
" for _ in range(samples)\n",
" ]"
]
},
{
"cell_type": "markdown",
"id": "3a03a7ac",
"metadata": {},
"source": [
"In this example, for simplicity, only the demands change from one instance to the next. We could also have randomized the costs, production limits or even the number of units. The more randomization we have in the training data, however, the more challenging it is for the machine learning models to learn solution patterns.\n",
"\n",
"Now we generate 500 instances of this problem, each one with 500 generators, and we use 450 of these instances for training. After generating the instances, we write them to individual files. MIPLearn uses files during the training process because, for large-scale optimization problems, it is often impractical to hold in memory the entire training data, as well as the concrete Pyomo models. Files also make it much easier to solve multiple instances simultaneously, potentially on multiple machines. The code below generates the files `uc/train/00000.pkl.gz`, `uc/train/00001.pkl.gz`, etc., which contain the input data in compressed (gzipped) pickle format."
]
},
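{
"cell_type": "markdown",
"id": "b2c3d4e5-demand-range-note",
"metadata": {},
"source": [
"Note that `scipy.stats.uniform(loc, scale)` samples from the interval $[loc, loc+scale]$, so each instance draws its demand between 50% and 75% of total installed capacity. Since $p^\\text{min}_i \\leq p^\\text{max}_i / 2$ by construction, committing every unit is always feasible. A quick check of this property (a sketch using `numpy.random` in place of SciPy, with the same ranges as above):"
]
},
```python
import numpy as np

rng = np.random.default_rng(42)
pmin = rng.uniform(100_000.0, 500_000.0, size=500)  # same ranges as random_uc_data
pmax = pmin * rng.uniform(2.0, 4.5, size=500)
demands = pmax.sum() * rng.uniform(0.5, 0.75, size=1000)

# Every demand lies within [50%, 75%] of total capacity, and committing
# all units is always feasible because pmin <= pmax / 2 elementwise.
assert (demands >= 0.5 * pmax.sum()).all()
assert (demands <= 0.75 * pmax.sum()).all()
assert pmin.sum() <= 0.5 * pmax.sum() <= demands.min()
```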
{
"cell_type": "code",
"execution_count": 5,
"id": "6156752c",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:03:04.782830561Z",
"start_time": "2023-06-06T20:03:04.530421396Z"
}
},
"outputs": [],
"source": [
"from miplearn.io import write_pkl_gz\n",
"\n",
"data = random_uc_data(samples=500, n=500)\n",
"train_data = write_pkl_gz(data[0:450], \"uc/train\")\n",
"test_data = write_pkl_gz(data[450:500], \"uc/test\")"
]
},
{
"cell_type": "markdown",
"id": "b17af877",
"metadata": {},
"source": [
"Finally, we use `BasicCollector` to collect the optimal solutions and other useful training data for all training instances. The data is stored in HDF5 files `uc/train/00000.h5`, `uc/train/00001.h5`, etc. The optimization models are also exported to compressed MPS files `uc/train/00000.mps.gz`, `uc/train/00001.mps.gz`, etc."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7623f002",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:03:35.571497019Z",
"start_time": "2023-06-06T20:03:25.804104036Z"
}
},
"outputs": [],
"source": [
"from miplearn.collectors.basic import BasicCollector\n",
"\n",
"bc = BasicCollector()\n",
"bc.collect(train_data, build_uc_model, n_jobs=4)"
]
},
{
"cell_type": "markdown",
"id": "c42b1be1-9723-4827-82d8-974afa51ef9f",
"metadata": {},
"source": [
"## Training and solving test instances"
]
},
{
"cell_type": "markdown",
"id": "a33c6aa4-f0b8-4ccb-9935-01f7d7de2a1c",
"metadata": {},
"source": [
"With training data in hand, we can now design and train a machine learning model to accelerate solver performance. In this tutorial, for illustration purposes, we will use ML to generate a good warm start using $k$-nearest neighbors. More specifically, the strategy is to:\n",
"\n",
"1. Memorize the optimal solutions of all training instances;\n",
"2. Given a test instance, find the 25 most similar training instances, based on constraint right-hand sides;\n",
"3. Merge their optimal solutions into a single partial solution; specifically, assign values only to the binary variables whose values agree unanimously across all 25 solutions;\n",
"4. Provide this partial solution to the solver as a warm start.\n",
"\n",
"This simple strategy can be implemented as shown below, using `MemorizingPrimalComponent`. For more advanced strategies, and for the usage of more advanced classifiers, see the user guide."
]
},
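{
"cell_type": "markdown",
"id": "c3d4e5f6-merge-sketch",
"metadata": {},
"source": [
"Step 3 of the strategy can be illustrated with plain NumPy: given the 0/1 solutions of the $k$ nearest training instances, fix only the variables on which all neighbors agree and leave the rest free. This is a simplified sketch of the idea behind `MergeTopSolutions`, not its actual implementation:"
]
},
```python
import numpy as np

def merge_unanimous(solutions: np.ndarray) -> np.ndarray:
    """Given a (k, n) 0/1 array of neighbor solutions, return an (n,) array
    with 0.0 or 1.0 where all neighbors agree, and NaN otherwise (NaN marks
    variables left free in the warm start)."""
    mean = solutions.mean(axis=0)
    merged = np.full(solutions.shape[1], np.nan)
    merged[mean == 0.0] = 0.0
    merged[mean == 1.0] = 1.0
    return merged

neighbors = np.array([
    [1, 0, 1, 0],
    [1, 0, 0, 0],
    [1, 0, 1, 1],
])
print(merge_unanimous(neighbors))  # x0 fixed to 1, x1 fixed to 0, x2 and x3 left free
```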
{
"cell_type": "code",
"execution_count": 7,
"id": "435f7bf8-4b09-4889-b1ec-b7b56e7d8ed2",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:05:20.497772794Z",
"start_time": "2023-06-06T20:05:20.484821405Z"
}
},
"outputs": [],
"source": [
"from sklearn.neighbors import KNeighborsClassifier\n",
"from miplearn.components.primal.actions import SetWarmStart\n",
"from miplearn.components.primal.mem import (\n",
" MemorizingPrimalComponent,\n",
" MergeTopSolutions,\n",
")\n",
"from miplearn.extractors.fields import H5FieldsExtractor\n",
"\n",
"comp = MemorizingPrimalComponent(\n",
" clf=KNeighborsClassifier(n_neighbors=25),\n",
" extractor=H5FieldsExtractor(\n",
" instance_fields=[\"static_constr_rhs\"],\n",
" ),\n",
" constructor=MergeTopSolutions(25, [0.0, 1.0]),\n",
" action=SetWarmStart(),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "9536e7e4-0b0d-49b0-bebd-4a848f839e94",
"metadata": {},
"source": [
"Having defined the ML strategy, we next construct `LearningSolver`, train the ML component and optimize one of the test instances."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9d13dd50-3dcf-4673-a757-6f44dcc0dedf",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:05:22.672002339Z",
"start_time": "2023-06-06T20:05:21.447466634Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x5e67c6ee\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n",
" 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n",
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xff6a55c5\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"\n",
"User MIP start produced solution with objective 8.29153e+09 (0.00s)\n",
"User MIP start produced solution with objective 8.29153e+09 (0.00s)\n",
"Loaded user MIP start with objective 8.29153e+09\n",
"\n",
"Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n",
"Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 8.2906e+09 0 1 8.2915e+09 8.2906e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 3 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2915e+09 8.2907e+09 0.01% - 0s\n",
" 0 0 - 0 8.2915e+09 8.2907e+09 0.01% - 0s\n",
"\n",
"Cutting planes:\n",
" Gomory: 1\n",
" Cover: 1\n",
" Flow cover: 2\n",
"\n",
"Explored 1 nodes (564 simplex iterations) in 0.03 seconds (0.01 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 1: 8.29153e+09 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291528276179e+09, best bound 8.290729173948e+09, gap 0.0096%\n",
"WARNING: Cannot get reduced costs for MIP.\n",
"WARNING: Cannot get duals for MIP.\n"
]
},
{
"data": {
"text/plain": [
"(<miplearn.solvers.pyomo.PyomoModel at 0x7fdb38952450>, {})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from miplearn.solvers.learning import LearningSolver\n",
"\n",
"solver_ml = LearningSolver(components=[comp])\n",
"solver_ml.fit(train_data)\n",
"solver_ml.optimize(test_data[0], build_uc_model)"
]
},
{
"cell_type": "markdown",
"id": "61da6dad-7f56-4edb-aa26-c00eb5f946c0",
"metadata": {},
"source": [
"By examining the solve log above, specifically the line `Loaded user MIP start with objective...`, we can see that MIPLearn was able to construct an initial solution which turned out to be very close to the optimal solution to the problem. Now let us repeat the code above, but with a solver that does not apply any ML strategies. Note that our previously-defined component is not provided."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "2ff391ed-e855-4228-aa09-a7641d8c2893",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:05:46.969575966Z",
"start_time": "2023-06-06T20:05:46.420803286Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x5e67c6ee\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 6.6166537e+09 5.648803e+04 0.000000e+00 0s\n",
" 1 8.2906219e+09 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.290621916e+09\n",
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x8a0f9587\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Found heuristic solution: objective 9.757128e+09\n",
"\n",
"Root relaxation: objective 8.290622e+09, 512 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 8.2906e+09 0 1 9.7571e+09 8.2906e+09 15.0% - 0s\n",
"H 0 0 8.298273e+09 8.2906e+09 0.09% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2983e+09 8.2907e+09 0.09% - 0s\n",
" 0 0 8.2907e+09 0 4 8.2983e+09 8.2907e+09 0.09% - 0s\n",
"H 0 0 8.293980e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 5 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 1 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2907e+09 0 2 8.2940e+09 8.2907e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 4 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 3 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 0 8.2908e+09 0 3 8.2940e+09 8.2908e+09 0.04% - 0s\n",
" 0 2 8.2908e+09 0 3 8.2940e+09 8.2908e+09 0.04% - 0s\n",
"H 9 9 8.292471e+09 8.2908e+09 0.02% 1.3 0s\n",
"* 90 41 44 8.291525e+09 8.2908e+09 0.01% 1.5 0s\n",
"\n",
"Cutting planes:\n",
" Gomory: 1\n",
" Cover: 1\n",
" MIR: 2\n",
"\n",
"Explored 91 nodes (1166 simplex iterations) in 0.06 seconds (0.05 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 7: 8.29152e+09 8.29247e+09 8.29398e+09 ... 1.0319e+10\n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.291524908632e+09, best bound 8.290823611882e+09, gap 0.0085%\n",
"WARNING: Cannot get reduced costs for MIP.\n",
"WARNING: Cannot get duals for MIP.\n"
]
},
{
"data": {
"text/plain": [
"(<miplearn.solvers.pyomo.PyomoModel at 0x7fdb2f563f50>, {})"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"solver_baseline = LearningSolver(components=[])\n",
"solver_baseline.fit(train_data)\n",
"solver_baseline.optimize(test_data[0], build_uc_model)"
]
},
{
"cell_type": "markdown",
"id": "b6d37b88-9fcc-43ee-ac1e-2a7b1e51a266",
"metadata": {},
"source": [
"In the log above, the `MIP start` line is missing, and Gurobi had to start with a significantly inferior initial solution. The solver was still able to find the optimal solution at the end, but it required using its own internal heuristic procedures. In this example, because we solve very small optimization problems, there was almost no difference in terms of running time, but the difference can be significant for larger problems."
]
},
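{
"cell_type": "markdown",
"id": "d4e5f6a7-benchmark-sketch",
"metadata": {},
"source": [
"To quantify the difference on larger problems, one can time both solvers over the entire test set. The helper below is generic; the commented-out calls assume `solver_ml`, `solver_baseline`, `test_data` and `build_uc_model` from the cells above (a sketch, not part of the MIPLearn API):"
]
},
```python
import time

def total_solve_time(solver, instances, build_model):
    """Return the wall-clock time (seconds) taken by `solver` to
    optimize all given instances."""
    start = time.perf_counter()
    for instance in instances:
        solver.optimize(instance, build_model)
    return time.perf_counter() - start

# Usage in this notebook (uncomment after running the cells above):
# t_ml = total_solve_time(solver_ml, test_data, build_uc_model)
# t_base = total_solve_time(solver_baseline, test_data, build_uc_model)
# print(f"ml: {t_ml:.2f}s  baseline: {t_base:.2f}s")
```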
{
"cell_type": "markdown",
"id": "eec97f06",
"metadata": {
"tags": []
},
"source": [
"## Accessing the solution\n",
"\n",
"In the examples above, we used data files to solve both the training and the test instances. In the following example, we show how to build and solve a Pyomo model entirely in-memory, using our trained solver."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "67a6cd18",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-06T20:06:26.913448568Z",
"start_time": "2023-06-06T20:06:26.169047914Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0x2dfe4e1c\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"Presolve removed 1000 rows and 500 columns\n",
"Presolve time: 0.00s\n",
"Presolved: 1 rows, 500 columns, 500 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 6.5917580e+09 5.627453e+04 0.000000e+00 0s\n",
" 1 8.2535968e+09 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 1 iterations and 0.01 seconds (0.00 work units)\n",
"Optimal objective 8.253596777e+09\n",
"Set parameter OutputFlag to value 1\n",
"Set parameter QCPDual to value 1\n",
"Gurobi Optimizer version 12.0.2 build v12.0.2rc0 (linux64 - \"Ubuntu 22.04.4 LTS\")\n",
"\n",
"CPU model: 13th Gen Intel(R) Core(TM) i7-13800H, instruction set [SSE2|AVX|AVX2]\n",
"Thread count: 10 physical cores, 20 logical processors, using up to 1 threads\n",
"\n",
"Non-default parameters:\n",
"QCPDual 1\n",
"Threads 1\n",
"\n",
"Optimize a model with 1001 rows, 1000 columns and 2500 nonzeros\n",
"Model fingerprint: 0xd941f1ed\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 2e+06]\n",
" Objective range [1e+00, 6e+07]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [3e+08, 3e+08]\n",
"\n",
"User MIP start produced solution with objective 8.25814e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25512e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25448e+09 (0.01s)\n",
"User MIP start produced solution with objective 8.25448e+09 (0.02s)\n",
"Loaded user MIP start with objective 8.25448e+09\n",
"\n",
"Presolve time: 0.00s\n",
"Presolved: 1001 rows, 1000 columns, 2500 nonzeros\n",
"Variable types: 500 continuous, 500 integer (500 binary)\n",
"\n",
"Root relaxation: objective 8.253597e+09, 512 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 8.2536e+09 0 1 8.2545e+09 8.2536e+09 0.01% - 0s\n",
" 0 0 - 0 8.2545e+09 8.2537e+09 0.01% - 0s\n",
"\n",
"Cutting planes:\n",
" Cover: 1\n",
" Flow cover: 2\n",
"\n",
"Explored 1 nodes (514 simplex iterations) in 0.03 seconds (0.01 work units)\n",
"Thread count was 1 (of 20 available processors)\n",
"\n",
"Solution count 3: 8.25448e+09 8.25512e+09 8.25814e+09 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 8.254479145594e+09, best bound 8.253676932849e+09, gap 0.0097%\n",
"WARNING: Cannot get reduced costs for MIP.\n",
"WARNING: Cannot get duals for MIP.\n",
"obj = 8254479145.594172\n",
" x = [1.0, 1.0, 0.0, 1.0, 1.0]\n",
" y = [935662.0949262811, 1604270.0218116897, 0.0, 1369560.835229226, 602828.5321028307]\n"
]
}
],
"source": [
"data = random_uc_data(samples=1, n=500)[0]\n",
"model = build_uc_model(data)\n",
"solver_ml.optimize(model)\n",
"print(\"obj =\", model.inner.obj())\n",
"print(\" x =\", [model.inner.x[i].value for i in range(5)])\n",
"print(\" y =\", [model.inner.y[i].value for i in range(5)])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5593d23a-83bd-4e16-8253-6300f5e3f63b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,171 +0,0 @@
# Usage
## 1. Installation
In these docs, we describe the Python/Pyomo version of the package, although a [Julia/JuMP version](https://github.com/ANL-CEEESA/MIPLearn.jl) is also available. A mixed-integer solver is also required and its Python bindings must be properly installed. Supported solvers are currently CPLEX and Gurobi.
To install MIPLearn, run:
```bash
pip3 install miplearn
```
After installation, the package `miplearn` should become available to Python. It can be imported
as follows:
```python
import miplearn
```
## 2. Using `LearningSolver`
The main class provided by this package is `LearningSolver`, a learning-enhanced MIP solver which uses information from previously solved instances to accelerate the solution of new instances. The following example shows its basic usage:
```python
from miplearn import LearningSolver
# List of user-provided instances
training_instances = [...]
test_instances = [...]
# Create solver
solver = LearningSolver()
# Solve all training instances
for instance in training_instances:
solver.solve(instance)
# Learn from training instances
solver.fit(training_instances)
# Solve all test instances
for instance in test_instances:
solver.solve(instance)
```
In this example, we have two lists of user-provided instances: `training_instances` and `test_instances`. We start by solving all training instances. Since there is no historical information available at this point, the instances will be processed from scratch, with no ML acceleration. After solving each instance, the solver stores within each `instance` object the optimal solution, the optimal objective value, and other information that can be used to accelerate future solves. After all training instances are solved, we call `solver.fit(training_instances)`. This instructs the solver to train all its internal machine-learning models based on the solutions of the (solved) trained instances. Subsequent calls to `solver.solve(instance)` will automatically use the trained Machine Learning models to accelerate the solution process.
## 3. Describing problem instances
Instances to be solved by `LearningSolver` must derive from the abstract class `miplearn.Instance`. The following three abstract methods must be implemented:
* `instance.to_model()`, which returns a concrete Pyomo model corresponding to the instance;
* `instance.get_instance_features()`, which returns a 1-dimensional Numpy array of (numerical) features describing the entire instance;
* `instance.get_variable_features(var_name, index)`, which returns a 1-dimensional array of (numerical) features describing a particular decision variable.
The first method is used by `LearningSolver` to construct a concrete Pyomo model, which will be provided to the internal MIP solver. The second and third methods provide an encoding of the instance, which can be used by the ML models to make predictions. In the knapsack problem, for example, an implementation may decide to provide as instance features the average weights, average prices, number of items and the size of the knapsack. The weight and the price of each individual item could be provided as variable features. See `src/python/miplearn/problems/knapsack.py` for a concrete example.
An optional method which can be implemented is `instance.get_variable_category(var_name, index)`, which returns a category (a string, an integer or any hashable type) for each decision variable. If two variables have the same category, `LearningSolver` will use the same internal ML model to predict the values of both variables. By default, all variables belong to the `"default"` category, and therefore only one ML model is used for all variables. If the returned category is `None`, ML predictors will ignore the variable.
It is not necessary to have a one-to-one correspondence between features and problem instances. One important (and deliberate) limitation of MIPLearn, however, is that `get_instance_features()` must always return arrays of same length for all relevant instances of the problem. Similarly, `get_variable_features(var_name, index)` must also always return arrays of same length for all variables in each category. It is up to the user to decide how to encode variable-length characteristics of the problem into fixed-length vectors. In graph problems, for example, graph embeddings can be used to reduce the (variable-length) lists of nodes and edges into a fixed-length structure that still preserves some properties of the graph. Different instance encodings may have significant impact on performance.
## 4. Describing lazy constraints
For many MIP formulations, it is not desirable to add all constraints up-front, either because the total number of constraints is very large, or because some of the constraints, even in relatively small numbers, can still cause significant performance impact when added to the formulation. In these situations, it may be desirable to generate and add constraints incrementaly, during the solution process itself. Conventional MIP solvers typically start by solving the problem without any lazy constraints. Whenever a candidate solution is found, the solver finds all violated lazy constraints and adds them to the formulation. MIPLearn significantly accelerates this process by using ML to predict which lazy constraints should be enforced from the very beginning of the optimization process, even before a candidate solution is available.
MIPLearn supports two types of lazy constraints: through constraint annotations and through callbacks.
### 4.1 Adding lazy constraints through annotations
The easiest way to create lazy constraints in MIPLearn is to add them to the model (just like any regular constraints), then annotate them as lazy, as described below. Just before the optimization starts, MIPLearn removes all lazy constraints from the model and places them in a lazy constraint pool. If any trained ML models are available, MIPLearn queries these models to decide which of these constraints should be moved back into the formulation. After this step, the optimization starts, and lazy constraints from the pool are added to the model in the conventional fashion.
To tag a constraint as lazy, the following methods must be implemented:
* `instance.has_static_lazy_constraints()`, which returns `True` if the model has any annotated lazy constraints. By default, this method returns `False`.
* `instance.is_constraint_lazy(cid)`, which returns `True` if the constraint with name `cid` should be treated as a lazy constraint, and `False` otherwise.
* `instance.get_constraint_features(cid)`, which returns a 1-dimensional Numpy array of (numerical) features describing the constraint.
For instances such that `has_static_lazy_constraints` returns `True`, MIPLearn calls `is_constraint_lazy` for each constraint in the formulation, providing the name of the constraint. For constraints such that `is_constraint_lazy` returns `True`, MIPLearn additionally calls `get_constraint_features` to gather an ML representation of each constraint. These features are used to predict which lazy constraints should be initially enforced.
An additional method that can be implemented is `get_lazy_constraint_category(cid)`, which returns a category (a string or any other hashable type) for each lazy constraint. Similarly to decision variable categories, if two lazy constraints have the same category, then MIPLearn will use the same internal ML model to decide whether to initially enforce them. By default, all lazy constraints belong to the `"default"` category, and therefore a single ML model is used.
!!! warning
If two lazy constraints belong to the same category, their feature vectors should have the same length.
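For concreteness, the sketch below shows how these methods might look for a hypothetical multi-knapsack instance in which capacity constraints are tagged as lazy. The class name, constraint naming scheme, and data layout are illustrative assumptions, not part of MIPLearn:

```python
import numpy as np

class MultiKnapsackInstance:
    """Illustrative sketch: capacity constraints are annotated as lazy."""

    def __init__(self, weights, capacities):
        self.weights = weights        # weights[i] = item weights of knapsack i
        self.capacities = capacities  # capacities[i] = capacity of knapsack i

    def has_static_lazy_constraints(self):
        return True

    def is_constraint_lazy(self, cid):
        # Only constraints named "eq_capacity[i]" are treated as lazy
        return cid.startswith("eq_capacity")

    def get_constraint_features(self, cid):
        # Fixed-length numeric vector describing lazy constraint `cid`
        i = int(cid[cid.index("[") + 1:cid.index("]")])
        return np.array([self.capacities[i], sum(self.weights[i])])

    def get_lazy_constraint_category(self, cid):
        # All capacity constraints share a single internal ML model
        return "capacity"
```

Since every constraint in the `"capacity"` category returns a length-2 feature vector, all feature vectors within the category have the same length, as required.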
### 4.2 Adding lazy constraints through callbacks
Although convenient, the method described in the previous subsection still requires the generation of all lazy constraints ahead of time, which can be prohibitively expensive. An alternative is to describe lazy constraints through callbacks, as detailed below. During the solution process, MIPLearn repeatedly calls a user-provided function to identify any violated lazy constraints. If violated constraints are identified, MIPLearn additionally calls another user-provided function to generate the constraints and add them to the formulation.
To describe lazy constraints through user callbacks, the following methods need to be implemented:
* `instance.has_dynamic_lazy_constraints()`, which returns `True` if the model has any lazy constraints generated by user callbacks. By default, this method returns `False`.
* `instance.find_violated_lazy_constraints(model)`, which returns a list of identifiers corresponding to the lazy constraints found to be violated by the current solution. These identifiers should be strings, tuples or any other hashable type.
* `instance.build_violated_lazy_constraints(model, cid)`, which returns either a list of Pyomo constraints, or a single Pyomo constraint, corresponding to the given lazy constraint identifier.
* `instance.get_constraint_features(cid)`, which returns a 1-dimensional Numpy array of (numerical) features describing the constraint. If this constraint is not valid, returns `None`.
* `instance.get_lazy_constraint_category(cid)`, which returns a category (a string or any other hashable type) for each lazy constraint, indicating which ML model to use. By default, returns `"default"`.
Assuming that trained ML models are available, immediately after calling `solver.solve`, MIPLearn will call `get_constraint_features` for each lazy constraint identifier found in the training set. For constraints such that `get_constraint_features` returns a vector (instead of `None`), MIPLearn will call `get_lazy_constraint_category` to decide which trained ML model to use. It will then query the ML model to decide whether the constraint should be initially enforced. If the ML model predicts that the constraint will be necessary, MIPLearn calls `build_violated_lazy_constraints` and adds the returned Pyomo constraints to the model. The optimization then starts. When no trained ML models are available, this entire initial process is skipped, and MIPLearn behaves like a conventional solver.
After the optimization process starts, MIPLearn will periodically call `find_violated_lazy_constraints` to verify if the current solution violates any lazy constraints. If any violated lazy constraints are found, MIPLearn will call the method `build_violated_lazy_constraints` and add the returned constraints to the formulation.
!!! note
When implementing `find_violated_lazy_constraints(self, model)`, the current solution may be accessed through `self.solution[var_name][index]`.
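As a concrete sketch, consider subtour elimination in the traveling salesman problem, a classical use case for lazy constraints. The class below is illustrative only: the helper `_find_subtours` is a placeholder (a real implementation would inspect `self.solution`), and the Pyomo constraint construction is elided:

```python
class TspInstance:
    """Illustrative sketch of callback-based lazy constraints."""

    def __init__(self, n_cities):
        self.n_cities = n_cities

    def has_dynamic_lazy_constraints(self):
        return True

    def find_violated_lazy_constraints(self, model):
        # Each subtour, encoded as a sorted tuple of city indices, is a
        # hashable identifier for the corresponding lazy constraint
        return [
            tuple(sorted(subtour))
            for subtour in self._find_subtours(model)
            if len(subtour) < self.n_cities
        ]

    def build_violated_lazy_constraints(self, model, cid):
        # A real implementation would return a Pyomo constraint forcing
        # at least two selected edges to leave the set of cities in `cid`
        raise NotImplementedError

    def _find_subtours(self, model):
        # Placeholder: extract connected components of the selected edges
        return [[0, 2, 1], [4, 3]]
```

Because the identifiers are plain tuples, MIPLearn can store them in the training data and match them across instances.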
## 5. Obtaining heuristic solutions
By default, `LearningSolver` uses Machine Learning to accelerate the MIP solution process, while maintaining all optimality guarantees provided by the MIP solver. In the default mode of operation, for example, predicted optimal solutions are used only as MIP starts.
For more significant performance benefits, `LearningSolver` can also be configured to place additional trust in the Machine Learning predictors, by using the `mode="heuristic"` constructor argument. When operating in this mode, if an ML model is statistically shown (through *stratified k-fold cross validation*) to have exceptionally high accuracy, the solver may restrict the search space based on its predictions. The parts of the solution that the ML models cannot predict accurately are still explored using traditional (branch-and-bound) methods. For particular applications, this mode has been shown to quickly produce optimal or near-optimal solutions (see [references](about.md#references) and [benchmark results](benchmark.md)).
!!! danger
The `heuristic` mode provides no optimality guarantees, and therefore should only be used if the solver is first trained on a large and representative set of training instances. Training on a small or non-representative set of instances may produce low-quality solutions, or make the solver incorrectly classify new instances as infeasible.
## 6. Saving and loading solver state
After solving a large number of training instances, it may be desirable to save the current state of `LearningSolver` to disk, so that the solver can still use the acquired knowledge after the application restarts. This can be accomplished by using the standard `pickle` module, as the following example illustrates:
```python
from miplearn import LearningSolver
import pickle

# Solve training instances
training_instances = [...]
solver = LearningSolver()
for instance in training_instances:
    solver.solve(instance)

# Train machine-learning models
solver.fit(training_instances)

# Save trained solver to disk
pickle.dump(solver, open("solver.pickle", "wb"))

# Application restarts...

# Load trained solver from disk
solver = pickle.load(open("solver.pickle", "rb"))

# Solve additional instances
test_instances = [...]
for instance in test_instances:
    solver.solve(instance)
```
## 7. Solving training instances in parallel
In many situations, training and test instances can be solved in parallel to accelerate the training process. `LearningSolver` provides the method `parallel_solve(instances)` to easily achieve this:
```python
from miplearn import LearningSolver
training_instances = [...]
solver = LearningSolver()
solver.parallel_solve(training_instances, n_jobs=4)
solver.fit(training_instances)
# Test phase...
test_instances = [...]
solver.parallel_solve(test_instances)
```
## 8. Current Limitations
* Only binary and continuous decision variables are currently supported.

View File

@@ -1,32 +1,3 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from .extractors import (SolutionExtractor,
InstanceFeaturesExtractor,
ObjectiveValueExtractor,
VariableFeaturesExtractor)
from .components.component import Component
from .components.objective import ObjectiveValueComponent
from .components.lazy_dynamic import DynamicLazyConstraintsComponent
from .components.lazy_static import StaticLazyConstraintsComponent
from .components.cuts import UserCutsComponent
from .components.primal import PrimalSolutionComponent
from .components.relaxation import RelaxationComponent
from .classifiers.adaptive import AdaptiveClassifier
from .classifiers.threshold import MinPrecisionThreshold
from .benchmark import BenchmarkRunner
from .instance import Instance
from .solvers.pyomo.base import BasePyomoSolver
from .solvers.pyomo.cplex import CplexPyomoSolver
from .solvers.pyomo.gurobi import GurobiPyomoSolver
from .solvers.gurobi import GurobiSolver
from .solvers.internal import InternalSolver
from .solvers.learning import LearningSolver
from .log import setup_logger

View File

@@ -1,203 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from copy import deepcopy
import pandas as pd
import numpy as np
import logging
from tqdm.auto import tqdm
from .solvers.learning import LearningSolver
class BenchmarkRunner:
def __init__(self, solvers):
assert isinstance(solvers, dict)
for solver in solvers.values():
assert isinstance(solver, LearningSolver)
self.solvers = solvers
self.results = None
def solve(self, instances, tee=False):
for (solver_name, solver) in self.solvers.items():
for i in tqdm(range(len((instances)))):
results = solver.solve(deepcopy(instances[i]), tee=tee)
self._push_result(results, solver=solver, solver_name=solver_name, instance=i)
def parallel_solve(self,
instances,
n_jobs=1,
n_trials=1,
index_offset=0,
):
self._silence_miplearn_logger()
trials = instances * n_trials
for (solver_name, solver) in self.solvers.items():
results = solver.parallel_solve(trials,
n_jobs=n_jobs,
label="Solve (%s)" % solver_name)
for i in range(len(trials)):
idx = (i % len(instances)) + index_offset
self._push_result(results[i],
solver=solver,
solver_name=solver_name,
instance=idx)
self._restore_miplearn_logger()
def raw_results(self):
return self.results
def save_results(self, filename):
self.results.to_csv(filename)
def load_results(self, filename):
self.results = pd.read_csv(filename, index_col=0)
def load_state(self, filename):
for (solver_name, solver) in self.solvers.items():
solver.load_state(filename)
def fit(self, training_instances):
for (solver_name, solver) in self.solvers.items():
solver.fit(training_instances)
def _push_result(self, result, solver, solver_name, instance):
if self.results is None:
self.results = pd.DataFrame(columns=["Solver",
"Instance",
"Wallclock Time",
"Lower Bound",
"Upper Bound",
"Gap",
"Nodes",
"Mode",
"Sense",
"Predicted LB",
"Predicted UB",
])
lb = result["Lower bound"]
ub = result["Upper bound"]
gap = (ub - lb) / lb
if "Predicted LB" not in result:
result["Predicted LB"] = float("nan")
result["Predicted UB"] = float("nan")
self.results = self.results.append({
"Solver": solver_name,
"Instance": instance,
"Wallclock Time": result["Wallclock time"],
"Lower Bound": lb,
"Upper Bound": ub,
"Gap": gap,
"Nodes": result["Nodes"],
"Mode": solver.mode,
"Sense": result["Sense"],
"Predicted LB": result["Predicted LB"],
"Predicted UB": result["Predicted UB"],
}, ignore_index=True)
groups = self.results.groupby("Instance")
best_lower_bound = groups["Lower Bound"].transform("max")
best_upper_bound = groups["Upper Bound"].transform("min")
best_gap = groups["Gap"].transform("min")
best_nodes = np.maximum(1, groups["Nodes"].transform("min"))
best_wallclock_time = groups["Wallclock Time"].transform("min")
self.results["Relative Lower Bound"] = \
self.results["Lower Bound"] / best_lower_bound
self.results["Relative Upper Bound"] = \
self.results["Upper Bound"] / best_upper_bound
self.results["Relative Wallclock Time"] = \
self.results["Wallclock Time"] / best_wallclock_time
self.results["Relative Gap"] = \
self.results["Gap"] / best_gap
self.results["Relative Nodes"] = \
self.results["Nodes"] / best_nodes
def save_chart(self, filename):
import matplotlib.pyplot as plt
import seaborn as sns
from numpy import median
sns.set_style("whitegrid")
sns.set_palette("Blues_r")
results = self.raw_results()
results["Gap (%)"] = results["Gap"] * 100.0
sense = results.loc[0, "Sense"]
if sense == "min":
primal_column = "Relative Upper Bound"
obj_column = "Upper Bound"
predicted_obj_column = "Predicted UB"
else:
primal_column = "Relative Lower Bound"
obj_column = "Lower Bound"
predicted_obj_column = "Predicted LB"
fig, (ax1, ax2, ax3, ax4) = plt.subplots(nrows=1,
ncols=4,
figsize=(12,4),
gridspec_kw={'width_ratios': [2, 1, 1, 2]})
# Figure 1: Solver x Wallclock Time
sns.stripplot(x="Solver",
y="Wallclock Time",
data=results,
ax=ax1,
jitter=0.25,
size=4.0,
)
sns.barplot(x="Solver",
y="Wallclock Time",
data=results,
ax=ax1,
errwidth=0.,
alpha=0.4,
estimator=median,
)
ax1.set(ylabel='Wallclock Time (s)')
# Figure 2: Solver x Gap (%)
ax2.set_ylim(-0.5, 5.5)
sns.stripplot(x="Solver",
y="Gap (%)",
jitter=0.25,
data=results[results["Mode"] != "heuristic"],
ax=ax2,
size=4.0,
)
# Figure 3: Solver x Primal Value
ax3.set_ylim(0.95,1.05)
sns.stripplot(x="Solver",
y=primal_column,
jitter=0.25,
data=results[results["Mode"] == "heuristic"],
ax=ax3,
)
# Figure 4: Predicted vs Actual Objective Value
sns.scatterplot(x=obj_column,
y=predicted_obj_column,
hue="Solver",
data=results[results["Mode"] != "heuristic"],
ax=ax4,
)
xlim, ylim = ax4.get_xlim(), ax4.get_ylim()
ax4.plot([-1e10, 1e10], [-1e10, 1e10], ls='-', color="#cccccc")
ax4.set_xlim(xlim)
ax4.set_ylim(ylim)
ax4.get_legend().remove()
fig.tight_layout()
plt.savefig(filename, bbox_inches='tight', dpi=150)
def _silence_miplearn_logger(self):
miplearn_logger = logging.getLogger("miplearn")
self.prev_log_level = miplearn_logger.getEffectiveLevel()
miplearn_logger.setLevel(logging.WARNING)
def _restore_miplearn_logger(self):
miplearn_logger = logging.getLogger("miplearn")
miplearn_logger.setLevel(self.prev_log_level)

View File

@@ -1,33 +1,3 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from abc import ABC, abstractmethod
import numpy as np
class Classifier(ABC):
    @abstractmethod
    def fit(self, x_train, y_train):
        pass

    @abstractmethod
    def predict_proba(self, x_test):
        pass

    def predict(self, x_test):
        proba = self.predict_proba(x_test)
        assert isinstance(proba, np.ndarray)
        assert proba.shape == (x_test.shape[0], 2)
        return (proba[:, 1] > 0.5).astype(float)


class Regressor(ABC):
    @abstractmethod
    def fit(self, x_train, y_train):
        pass

    @abstractmethod
    def predict(self, x_test):
        pass

View File

@@ -1,66 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
from copy import deepcopy
from miplearn.classifiers import Classifier
from miplearn.classifiers.counting import CountingClassifier
from miplearn.classifiers.evaluator import ClassifierEvaluator
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
logger = logging.getLogger(__name__)
class AdaptiveClassifier(Classifier):
"""
A meta-classifier which dynamically selects what actual classifier to use
based on its cross-validation score on a particular training data set.
"""
def __init__(self,
candidates=None,
evaluator=ClassifierEvaluator()):
"""
Initializes the meta-classifier.
"""
if candidates is None:
candidates = {
"knn(100)": {
"classifier": KNeighborsClassifier(n_neighbors=100),
"min samples": 100,
},
"logistic": {
"classifier": make_pipeline(StandardScaler(),
LogisticRegression()),
"min samples": 30,
},
"counting": {
"classifier": CountingClassifier(),
"min samples": 0,
}
}
self.candidates = candidates
self.evaluator = evaluator
self.classifier = None
def fit(self, x_train, y_train):
best_name, best_clf, best_score = None, None, -float("inf")
n_samples = x_train.shape[0]
for (name, clf_dict) in self.candidates.items():
if n_samples < clf_dict["min samples"]:
continue
clf = deepcopy(clf_dict["classifier"])
clf.fit(x_train, y_train)
score = self.evaluator.evaluate(clf, x_train, y_train)
if score > best_score:
best_name, best_clf, best_score = name, clf, score
logger.debug("Best classifier: %s (score=%.3f)" % (best_name, best_score))
self.classifier = best_clf
def predict_proba(self, x_test):
return self.classifier.predict_proba(x_test)

View File

@@ -1,28 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from miplearn.classifiers import Classifier
import numpy as np
class CountingClassifier(Classifier):
    """
    A classifier that generates constant predictions, based only on the
    frequency of the training labels. For example, if y_train is
    [1.0, 0.0, 0.0], this classifier always returns [0.67, 0.33] for any
    x_test. It essentially counts how many times each label appeared,
    hence the name.
    """

    def __init__(self):
        self.mean = None

    def fit(self, x_train, y_train):
        self.mean = np.mean(y_train)

    def predict_proba(self, x_test):
        return np.array([[1 - self.mean, self.mean]
                         for _ in range(x_test.shape[0])])

    def __repr__(self):
        return "CountingClassifier(mean=%s)" % self.mean

View File

@@ -1,71 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from copy import deepcopy
import numpy as np
from miplearn.classifiers import Classifier
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
import logging
logger = logging.getLogger(__name__)
class CrossValidatedClassifier(Classifier):
"""
A meta-classifier that, upon training, evaluates the performance of another
classifier on the training data set using k-fold cross validation, then
either adopts the other classifier if the cv score is high enough, or
returns a constant label for every x_test otherwise.
The threshold is specified relative to a dummy classifier trained
on the same dataset. For example, a threshold of 0.0 indicates that any
classifier as good as the dummy predictor is acceptable. A threshold of 1.0
indicates that only classifiers with a perfect cross-validation score are
acceptable. Other values are a linear interpolation of these two extremes.
"""
def __init__(self,
classifier=LogisticRegression(),
threshold=0.75,
constant=0.0,
cv=5,
scoring='accuracy'):
self.classifier = None
self.classifier_prototype = classifier
self.constant = constant
self.threshold = threshold
self.cv = cv
self.scoring = scoring
def fit(self, x_train, y_train):
# Calculate dummy score and absolute score threshold
y_train_avg = np.average(y_train)
dummy_score = max(y_train_avg, 1 - y_train_avg)
absolute_threshold = 1. * self.threshold + dummy_score * (1 - self.threshold)
# Calculate cross validation score and decide which classifier to use
clf = deepcopy(self.classifier_prototype)
cv_score = float(np.mean(cross_val_score(clf,
x_train,
y_train,
cv=self.cv,
scoring=self.scoring)))
if cv_score >= absolute_threshold:
logger.debug("cv_score is above threshold (%.2f >= %.2f); keeping" %
(cv_score, absolute_threshold))
self.classifier = clf
else:
logger.debug("cv_score is below threshold (%.2f < %.2f); discarding" %
(cv_score, absolute_threshold))
self.classifier = DummyClassifier(strategy="constant",
constant=self.constant)
# Train chosen classifier
self.classifier.fit(x_train, y_train)
def predict_proba(self, x_test):
return self.classifier.predict_proba(x_test)

View File

@@ -1,15 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from sklearn.metrics import roc_auc_score
class ClassifierEvaluator:
def __init__(self):
pass
def evaluate(self, clf, x_train, y_train):
# FIXME: use cross-validation
proba = clf.predict_proba(x_train)
return roc_auc_score(y_train, proba[:, 1])

View File

@@ -0,0 +1,61 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from typing import List, Any, Callable, Optional
import numpy as np
import sklearn
from sklearn.base import BaseEstimator
from sklearn.utils.multiclass import unique_labels
class MinProbabilityClassifier(BaseEstimator):
"""
Meta-classifier that returns NaN for predictions made by a base classifier that
have probability below a given threshold. More specifically, this meta-classifier
calls base_clf.predict_proba and compares the result against the provided
thresholds. If the probability for one of the classes is above its threshold,
the meta-classifier returns that prediction. Otherwise, it returns NaN.
"""
def __init__(
self,
base_clf: Any,
thresholds: List[float],
clone_fn: Callable[[Any], Any] = sklearn.base.clone,
) -> None:
assert len(thresholds) == 2
self.base_clf = base_clf
self.thresholds = thresholds
self.clone_fn = clone_fn
self.clf_: Optional[Any] = None
self.classes_: Optional[List[Any]] = None
def fit(self, x: np.ndarray, y: np.ndarray) -> None:
assert len(y.shape) == 1
assert len(x.shape) == 2
classes = unique_labels(y)
assert len(classes) == len(self.thresholds)
self.clf_ = self.clone_fn(self.base_clf)
self.clf_.fit(x, y)
self.classes_ = self.clf_.classes_
def predict(self, x: np.ndarray) -> np.ndarray:
assert self.clf_ is not None
assert self.classes_ is not None
y_proba = self.clf_.predict_proba(x)
assert len(y_proba.shape) == 2
assert y_proba.shape[0] == x.shape[0]
assert y_proba.shape[1] == 2
n_samples = x.shape[0]
y_pred = []
for sample_idx in range(n_samples):
yi = float("nan")
for class_idx, class_val in enumerate(self.classes_):
if y_proba[sample_idx, class_idx] >= self.thresholds[class_idx]:
yi = class_val
y_pred.append(yi)
return np.array(y_pred)

View File

@@ -0,0 +1,51 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from typing import Callable, Optional
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator
from sklearn.utils.multiclass import unique_labels
class SingleClassFix(BaseEstimator):
"""
Some sklearn classifiers, such as logistic regression, have issues with datasets
that contain a single class. This meta-classifier fixes the issue. If the
training data contains a single class, this meta-classifier always returns that
class as a prediction. Otherwise, it fits the provided base classifier,
and returns its predictions instead.
"""
def __init__(
self,
base_clf: BaseEstimator,
clone_fn: Callable = sklearn.base.clone,
):
self.base_clf = base_clf
self.clf_: Optional[BaseEstimator] = None
self.constant_ = None
self.classes_ = None
self.clone_fn = clone_fn
def fit(self, x: np.ndarray, y: np.ndarray) -> None:
classes = unique_labels(y)
if len(classes) == 1:
assert classes[0] is not None
self.clf_ = None
self.constant_ = classes[0]
self.classes_ = classes
else:
self.clf_ = self.clone_fn(self.base_clf)
assert self.clf_ is not None
self.clf_.fit(x, y)
self.constant_ = None
self.classes_ = self.clf_.classes_
def predict(self, x: np.ndarray) -> np.ndarray:
if self.constant_ is not None:
return np.full(x.shape[0], self.constant_)
else:
assert self.clf_ is not None
return self.clf_.predict(x)

View File

@@ -1,18 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from miplearn.classifiers.counting import CountingClassifier
import numpy as np
from numpy.linalg import norm
E = 0.1
def test_counting():
clf = CountingClassifier()
clf.fit(np.zeros((8, 25)), [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0])
expected_proba = np.array([[0.375, 0.625],
[0.375, 0.625]])
actual_proba = clf.predict_proba(np.zeros((2, 25)))
assert norm(actual_proba - expected_proba) < E

View File

@@ -1,46 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import numpy as np
from miplearn.classifiers.cv import CrossValidatedClassifier
from numpy.linalg import norm
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
E = 0.1
def test_cv():
# Training set: label is true if point is inside a 2D circle
x_train = np.array([[x1, x2]
for x1 in range(-10, 11)
for x2 in range(-10, 11)])
x_train = StandardScaler().fit_transform(x_train)
n_samples = x_train.shape[0]
y_train = np.array([1.0 if x1*x1 + x2*x2 <= 100 else 0.0
for x1 in range(-10, 11)
for x2 in range(-10, 11)])
# Support vector machines with linear kernels do not perform well on this
# data set, so predictor should return the given constant.
clf = CrossValidatedClassifier(classifier=SVC(probability=True,
random_state=42),
threshold=0.90,
constant=0.0,
cv=30)
clf.fit(x_train, y_train)
assert norm(np.zeros(n_samples) - clf.predict(x_train)) < E
# Support vector machines with quadratic kernels perform almost perfectly
# on this data set, so predictor should return their prediction.
clf = CrossValidatedClassifier(classifier=SVC(probability=True,
kernel='poly',
degree=2,
random_state=42),
threshold=0.90,
cv=30)
clf.fit(x_train, y_train)
print(y_train - clf.predict(x_train))
assert norm(y_train - clf.predict(x_train)) < E

View File

@@ -1,20 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import numpy as np
from miplearn.classifiers.evaluator import ClassifierEvaluator
from sklearn.neighbors import KNeighborsClassifier
def test_evaluator():
clf_a = KNeighborsClassifier(n_neighbors=1)
clf_b = KNeighborsClassifier(n_neighbors=2)
x_train = np.array([[0, 0], [1, 0]])
y_train = np.array([0, 1])
clf_a.fit(x_train, y_train)
clf_b.fit(x_train, y_train)
ev = ClassifierEvaluator()
assert ev.evaluate(clf_a, x_train, y_train) == 1.0
assert ev.evaluate(clf_b, x_train, y_train) == 0.5

View File

@@ -1,34 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from unittest.mock import Mock
import numpy as np
from miplearn.classifiers import Classifier
from miplearn.classifiers.threshold import MinPrecisionThreshold
def test_threshold_dynamic():
clf = Mock(spec=Classifier)
clf.predict_proba = Mock(return_value=np.array([
[0.10, 0.90],
[0.10, 0.90],
[0.20, 0.80],
[0.30, 0.70],
]))
x_train = np.array([0, 1, 2, 3])
y_train = np.array([1, 1, 0, 0])
threshold = MinPrecisionThreshold(min_precision=1.0)
assert threshold.find(clf, x_train, y_train) == 0.90
threshold = MinPrecisionThreshold(min_precision=0.65)
assert threshold.find(clf, x_train, y_train) == 0.80
threshold = MinPrecisionThreshold(min_precision=0.50)
assert threshold.find(clf, x_train, y_train) == 0.70
threshold = MinPrecisionThreshold(min_precision=0.00)
assert threshold.find(clf, x_train, y_train) == 0.70

View File

@@ -1,45 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from abc import abstractmethod, ABC
import numpy as np
from sklearn.metrics._ranking import _binary_clf_curve
class DynamicThreshold(ABC):
    @abstractmethod
    def find(self, clf, x_train, y_train):
        """
        Given a trained binary classifier `clf` and a training data set,
        returns the numerical threshold (float) satisfying some criteria.
        """
        pass


class MinPrecisionThreshold(DynamicThreshold):
    """
    The smallest possible threshold satisfying a minimum acceptable
    precision, i.e., the fraction of predicted positives that are
    actually positive.
    """

    def __init__(self, min_precision):
        self.min_precision = min_precision

    def find(self, clf, x_train, y_train):
        proba = clf.predict_proba(x_train)
        assert isinstance(proba, np.ndarray), \
            "classifier should return numpy array"
        assert proba.shape == (x_train.shape[0], 2), \
            "classifier should return (%d,%d)-shaped array, not %s" % (
                x_train.shape[0], 2, str(proba.shape))
        fps, tps, thresholds = _binary_clf_curve(y_train, proba[:, 1])
        precision = tps / (tps + fps)
        for k in reversed(range(len(precision))):
            if precision[k] >= self.min_precision:
                return thresholds[k]
        return 2.0

View File

@@ -0,0 +1,107 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import json
import os
import sys
from io import StringIO
from os.path import exists
from typing import Callable, List, Any
import traceback
from ..h5 import H5File
from ..io import _RedirectOutput, gzip, _to_h5_filename
from ..parallel import p_umap
class BasicCollector:
def __init__(
self,
skip_lp: bool = False,
write_mps: bool = True,
write_log: bool = True,
) -> None:
self.skip_lp = skip_lp
self.write_mps = write_mps
self.write_log = write_log
def collect(
self,
filenames: List[str],
build_model: Callable,
n_jobs: int = 1,
progress: bool = False,
verbose: bool = False,
) -> None:
def _collect(data_filename: str) -> None:
try:
h5_filename = _to_h5_filename(data_filename)
mps_filename = h5_filename.replace(".h5", ".mps")
log_filename = h5_filename.replace(".h5", ".h5.log")
if exists(h5_filename):
# Try to read optimal solution
mip_var_values = None
try:
with H5File(h5_filename, "r") as h5:
mip_var_values = h5.get_array("mip_var_values")
except:
pass
if mip_var_values is None:
print(f"Removing empty/corrupted h5 file: {h5_filename}")
os.remove(h5_filename)
else:
return
with H5File(h5_filename, "w") as h5:
h5.put_scalar("data_filename", data_filename)
streams: List[Any] = [StringIO()]
if verbose:
streams += [sys.stdout]
with _RedirectOutput(streams):
# Load and extract static features
model = build_model(data_filename)
model.extract_after_load(h5)
if not self.skip_lp:
# Solve LP relaxation
relaxed = model.relax()
relaxed.optimize()
relaxed.extract_after_lp(h5)
# Solve MIP
model.optimize()
model.extract_after_mip(h5)
if self.write_mps:
# Add lazy constraints to model
model._lazy_enforce_collected()
# Save MPS file
model.write(mps_filename)
gzip(mps_filename)
log = streams[0].getvalue()
h5.put_scalar("mip_log", log)
if self.write_log:
with open(log_filename, "w") as log_file:
log_file.write(log)
except:
print(f"Error processing: {data_filename}")
traceback.print_exc()
if n_jobs > 1:
p_umap(
_collect,
filenames,
num_cpus=n_jobs,
desc="collect",
smoothing=0,
disable=not progress,
)
else:
for filename in filenames:
_collect(filename)

View File

@@ -0,0 +1,49 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import os
import subprocess
from typing import Callable
from ..h5 import H5File
class BranchPriorityCollector:
def __init__(
self,
time_limit: float = 900.0,
print_interval: int = 1,
node_limit: int = 500,
) -> None:
self.time_limit = time_limit
self.print_interval = print_interval
self.node_limit = node_limit
def collect(self, data_filename: str, _: Callable) -> None:
basename = data_filename.replace(".pkl.gz", "")
env = os.environ.copy()
env["JULIA_NUM_THREADS"] = "1"
ret = subprocess.run(
[
"julia",
"--project=.",
"-e",
(
f"using CPLEX, JuMP, MIPLearn.BB; "
f"BB.solve!("
f' optimizer_with_attributes(CPLEX.Optimizer, "CPXPARAM_Threads" => 1),'
f' "{basename}",'
f" print_interval={self.print_interval},"
f" time_limit={self.time_limit:.2f},"
f" node_limit={self.node_limit},"
f")"
),
],
check=True,
capture_output=True,
env=env,
)
h5_filename = f"{basename}.h5"
with H5File(h5_filename, "r+") as h5:
h5.put_scalar("bb_log", ret.stdout)

View File

@@ -1,41 +1,3 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
def classifier_evaluation_dict(tp, tn, fp, fn):
p = tp + fn
n = fp + tn
d = {
"Predicted positive": fp + tp,
"Predicted negative": fn + tn,
"Condition positive": p,
"Condition negative": n,
"True positive": tp,
"True negative": tn,
"False positive": fp,
"False negative": fn,
"Accuracy": (tp + tn) / (p + n),
"F1 score": (2 * tp) / (2 * tp + fp + fn),
}
if p > 0:
d["Recall"] = tp / p
else:
d["Recall"] = 1.0
if tp + fp > 0:
d["Precision"] = tp / (tp + fp)
else:
d["Precision"] = 1.0
t = (p + n) / 100.0
d["Predicted positive (%)"] = d["Predicted positive"] / t
d["Predicted negative (%)"] = d["Predicted negative"] / t
d["Condition positive (%)"] = d["Condition positive"] / t
d["Condition negative (%)"] = d["Condition negative"] / t
d["True positive (%)"] = d["True positive"] / t
d["True negative (%)"] = d["True negative"] / t
d["False positive (%)"] = d["False positive"] / t
d["False negative (%)"] = d["False negative"] / t
return d
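The formulas above guard against division by zero in recall and precision. A quick standalone check of the arithmetic, using arbitrary confusion-matrix counts (not taken from any real run):

```python
# Worked example of the metrics computed by classifier_evaluation_dict,
# with arbitrary counts chosen for illustration.
tp, tn, fp, fn = 8, 5, 2, 1
p, n = tp + fn, fp + tn                       # condition positive / negative
accuracy = (tp + tn) / (p + n)                # 13 / 16 = 0.8125
f1 = (2 * tp) / (2 * tp + fp + fn)            # 16 / 19
recall = tp / p if p > 0 else 1.0             # 8 / 9
precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0  # 8 / 10 = 0.8
```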

View File

@@ -1,29 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from abc import ABC, abstractmethod
class Component(ABC):
"""
A Component is an object which adds functionality to a LearningSolver.
"""
@abstractmethod
def before_solve(self, solver, instance, model):
pass
@abstractmethod
def after_solve(self, solver, instance, model, results):
pass
@abstractmethod
def fit(self, training_instances):
pass
def after_iteration(self, solver, instance, model):
return False
def on_lazy_callback(self, solver, instance, model):
return

View File

@@ -1,96 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import sys
from copy import deepcopy
from miplearn.classifiers.counting import CountingClassifier
from miplearn.components import classifier_evaluation_dict
from .component import Component
from ..extractors import *
logger = logging.getLogger(__name__)
class UserCutsComponent(Component):
"""
A component that predicts which user cuts to enforce.
"""
def __init__(self,
classifier=CountingClassifier(),
threshold=0.05):
self.violations = set()
self.count = {}
self.n_samples = 0
self.threshold = threshold
self.classifier_prototype = classifier
self.classifiers = {}
def before_solve(self, solver, instance, model):
instance.found_violated_user_cuts = []
logger.info("Predicting violated user cuts...")
violations = self.predict(instance)
logger.info("Enforcing %d user cuts..." % len(violations))
for v in violations:
cut = instance.build_user_cut(model, v)
solver.internal_solver.add_constraint(cut)
def after_solve(self, solver, instance, model, results):
pass
def fit(self, training_instances):
logger.debug("Fitting...")
features = InstanceFeaturesExtractor().extract(training_instances)
self.classifiers = {}
violation_to_instance_idx = {}
for (idx, instance) in enumerate(training_instances):
if not hasattr(instance, "found_violated_user_cuts"):
continue
for v in instance.found_violated_user_cuts:
if v not in self.classifiers:
self.classifiers[v] = deepcopy(self.classifier_prototype)
violation_to_instance_idx[v] = []
violation_to_instance_idx[v] += [idx]
for (v, classifier) in tqdm(self.classifiers.items(),
desc="Fit (user cuts)",
disable=not sys.stdout.isatty(),
):
logger.debug("Training: %s" % (str(v)))
label = np.zeros(len(training_instances))
label[violation_to_instance_idx[v]] = 1.0
classifier.fit(features, label)
def predict(self, instance):
violations = []
features = InstanceFeaturesExtractor().extract([instance])
for (v, classifier) in self.classifiers.items():
proba = classifier.predict_proba(features)
if proba[0][1] > self.threshold:
violations += [v]
return violations
def evaluate(self, instances):
results = {}
all_violations = set()
for instance in instances:
all_violations |= set(instance.found_violated_user_cuts)
for idx in tqdm(range(len(instances)),
desc="Evaluate (user cuts)",
disable=not sys.stdout.isatty(),
):
instance = instances[idx]
condition_positive = set(instance.found_violated_user_cuts)
condition_negative = all_violations - condition_positive
pred_positive = set(self.predict(instance)) & all_violations
pred_negative = all_violations - pred_positive
tp = len(pred_positive & condition_positive)
tn = len(pred_negative & condition_negative)
fp = len(pred_positive & condition_negative)
fn = len(pred_negative & condition_positive)
results[idx] = classifier_evaluation_dict(tp, tn, fp, fn)
return results

View File

@@ -0,0 +1,35 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import json
import logging
from typing import Dict, Any, List
from miplearn.components.cuts.mem import convert_lists_to_tuples
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
class ExpertCutsComponent:
def fit(
self,
_: List[str],
) -> None:
pass
def before_mip(
self,
test_h5: str,
model: AbstractModel,
stats: Dict[str, Any],
) -> None:
with H5File(test_h5, "r") as h5:
cuts_str = h5.get_scalar("mip_cuts")
assert cuts_str is not None
assert isinstance(cuts_str, str)
cuts = list(set(convert_lists_to_tuples(json.loads(cuts_str))))
model.set_cuts(cuts)
stats["Cuts: AOT"] = len(cuts)

View File

@@ -0,0 +1,113 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import json
import logging
from typing import List, Dict, Any, Hashable
import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
def convert_lists_to_tuples(obj: Any) -> Any:
if isinstance(obj, list):
return tuple(convert_lists_to_tuples(item) for item in obj)
elif isinstance(obj, dict):
return {key: convert_lists_to_tuples(value) for key, value in obj.items()}
else:
return obj
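Since JSON has no tuple type, `json.loads` yields nested lists, which are unhashable; `convert_lists_to_tuples` restores hashability so cuts can be deduplicated in sets and used as dict keys. A minimal standalone sketch of the same recursion:

```python
import json

def convert_lists_to_tuples(obj):
    # Same recursion as above: lists become tuples (hashable),
    # dict values are converted in place, scalars pass through.
    if isinstance(obj, list):
        return tuple(convert_lists_to_tuples(item) for item in obj)
    elif isinstance(obj, dict):
        return {k: convert_lists_to_tuples(v) for k, v in obj.items()}
    return obj

# Hypothetical serialized cuts, as they might come out of an H5 scalar:
cuts = json.loads("[[1, 2], [3, [4, 5]]]")
hashable = convert_lists_to_tuples(cuts)  # ((1, 2), (3, (4, 5)))
```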
class _BaseMemorizingConstrComponent:
def __init__(self, clf: Any, extractor: FeaturesExtractor, field: str) -> None:
self.clf = clf
self.extractor = extractor
self.constrs_: List[Hashable] = []
self.n_features_: int = 0
self.n_targets_: int = 0
self.field = field
def fit(
self,
train_h5: List[str],
) -> None:
logger.info("Reading training data...")
n_samples = len(train_h5)
x, y, constrs, n_features = [], [], [], None
constr_to_idx: Dict[Hashable, int] = {}
for h5_filename in train_h5:
with H5File(h5_filename, "r") as h5:
# Store constraints
sample_constrs_str = h5.get_scalar(self.field)
assert sample_constrs_str is not None
assert isinstance(sample_constrs_str, str)
sample_constrs = convert_lists_to_tuples(json.loads(sample_constrs_str))
y_sample = []
for c in sample_constrs:
if c not in constr_to_idx:
constr_to_idx[c] = len(constr_to_idx)
constrs.append(c)
y_sample.append(constr_to_idx[c])
y.append(y_sample)
# Extract features
x_sample = self.extractor.get_instance_features(h5)
assert len(x_sample.shape) == 1
if n_features is None:
n_features = len(x_sample)
else:
assert len(x_sample) == n_features
x.append(x_sample)
logger.info("Constructing matrices...")
assert n_features is not None
self.n_features_ = n_features
self.constrs_ = constrs
self.n_targets_ = len(constr_to_idx)
x_np = np.vstack(x)
assert x_np.shape == (n_samples, n_features)
y_np = MultiLabelBinarizer().fit_transform(y)
assert y_np.shape == (n_samples, self.n_targets_)
logger.info(
f"Dataset has {n_samples:,d} samples, "
f"{n_features:,d} features and {self.n_targets_:,d} targets"
)
logger.info("Training classifier...")
self.clf.fit(x_np, y_np)
def predict(
self,
msg: str,
test_h5: str,
) -> List[Hashable]:
with H5File(test_h5, "r") as h5:
x_sample = self.extractor.get_instance_features(h5)
assert x_sample.shape == (self.n_features_,)
x_sample = x_sample.reshape(1, -1)
logger.info(msg)
y = self.clf.predict(x_sample)
assert y.shape == (1, self.n_targets_)
y = y.reshape(-1)
return [self.constrs_[i] for (i, yi) in enumerate(y) if yi > 0.5]
class MemorizingCutsComponent(_BaseMemorizingConstrComponent):
def __init__(self, clf: Any, extractor: FeaturesExtractor) -> None:
super().__init__(clf, extractor, "mip_cuts")
def before_mip(
self,
test_h5: str,
model: AbstractModel,
stats: Dict[str, Any],
) -> None:
assert self.constrs_ is not None
cuts = self.predict("Predicting cutting planes...", test_h5)
model.set_cuts(cuts)
stats["Cuts: AOT"] = len(cuts)
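Internally, `fit` maps each sample's set of constraint indices to one row of a 0/1 target matrix via `MultiLabelBinarizer`. The equivalent mapping can be sketched without sklearn (the indices below are made up for illustration):

```python
import numpy as np

# Each sample lists the indices of the cuts it received, as assigned by
# constr_to_idx during fit; values here are hypothetical.
y = [[0, 1], [1], [0, 2]]
n_targets = 3

# Equivalent of MultiLabelBinarizer().fit_transform(y) when the target
# set is {0, 1, 2}: one indicator row per sample.
y_np = np.zeros((len(y), n_targets), dtype=int)
for row, labels in enumerate(y):
    y_np[row, labels] = 1
```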

View File

@@ -0,0 +1,36 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import json
import logging
from typing import Dict, Any, List
from miplearn.components.cuts.mem import convert_lists_to_tuples
from miplearn.h5 import H5File
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
class ExpertLazyComponent:
def fit(
self,
_: List[str],
) -> None:
pass
def before_mip(
self,
test_h5: str,
model: AbstractModel,
stats: Dict[str, Any],
) -> None:
with H5File(test_h5, "r") as h5:
violations_str = h5.get_scalar("mip_lazy")
assert violations_str is not None
assert isinstance(violations_str, str)
violations = list(set(convert_lists_to_tuples(json.loads(violations_str))))
logger.info(f"Enforcing {len(violations)} constraints ahead-of-time...")
model.lazy_enforce(violations)
stats["Lazy Constraints: AOT"] = len(violations)

View File

@@ -0,0 +1,31 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
from typing import List, Dict, Any, Hashable
from miplearn.components.cuts.mem import (
_BaseMemorizingConstrComponent,
)
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
class MemorizingLazyComponent(_BaseMemorizingConstrComponent):
def __init__(self, clf: Any, extractor: FeaturesExtractor) -> None:
super().__init__(clf, extractor, "mip_lazy")
def before_mip(
self,
test_h5: str,
model: AbstractModel,
stats: Dict[str, Any],
) -> None:
assert self.constrs_ is not None
violations = self.predict("Predicting violated lazy constraints...", test_h5)
logger.info(f"Enforcing {len(violations)} constraints ahead-of-time...")
model.lazy_enforce(violations)
stats["Lazy Constraints: AOT"] = len(violations)

View File

@@ -1,108 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import sys
from copy import deepcopy
from miplearn.classifiers.counting import CountingClassifier
from miplearn.components import classifier_evaluation_dict
from .component import Component
from ..extractors import *
logger = logging.getLogger(__name__)
class DynamicLazyConstraintsComponent(Component):
"""
A component that predicts which lazy constraints to enforce.
"""
def __init__(self,
classifier=CountingClassifier(),
threshold=0.05):
self.violations = set()
self.count = {}
self.n_samples = 0
self.threshold = threshold
self.classifier_prototype = classifier
self.classifiers = {}
def before_solve(self, solver, instance, model):
instance.found_violated_lazy_constraints = []
logger.info("Predicting violated lazy constraints...")
violations = self.predict(instance)
logger.info("Enforcing %d lazy constraints..." % len(violations))
for v in violations:
cut = instance.build_lazy_constraint(model, v)
solver.internal_solver.add_constraint(cut)
def after_iteration(self, solver, instance, model):
logger.debug("Finding violated (dynamic) lazy constraints...")
violations = instance.find_violated_lazy_constraints(model)
if len(violations) == 0:
return False
instance.found_violated_lazy_constraints += violations
logger.debug(" %d violations found" % len(violations))
for v in violations:
cut = instance.build_lazy_constraint(model, v)
solver.internal_solver.add_constraint(cut)
return True
def after_solve(self, solver, instance, model, results):
pass
def fit(self, training_instances):
logger.debug("Fitting...")
features = InstanceFeaturesExtractor().extract(training_instances)
self.classifiers = {}
violation_to_instance_idx = {}
for (idx, instance) in enumerate(training_instances):
if not hasattr(instance, "found_violated_lazy_constraints"):
continue
for v in instance.found_violated_lazy_constraints:
if isinstance(v, list):
v = tuple(v)
if v not in self.classifiers:
self.classifiers[v] = deepcopy(self.classifier_prototype)
violation_to_instance_idx[v] = []
violation_to_instance_idx[v] += [idx]
for (v, classifier) in tqdm(self.classifiers.items(),
desc="Fit (lazy)",
disable=not sys.stdout.isatty(),
):
logger.debug("Training: %s" % (str(v)))
label = np.zeros(len(training_instances))
label[violation_to_instance_idx[v]] = 1.0
classifier.fit(features, label)
def predict(self, instance):
violations = []
features = InstanceFeaturesExtractor().extract([instance])
for (v, classifier) in self.classifiers.items():
proba = classifier.predict_proba(features)
if proba[0][1] > self.threshold:
violations += [v]
return violations
def evaluate(self, instances):
results = {}
all_violations = set()
for instance in instances:
all_violations |= set(instance.found_violated_lazy_constraints)
for idx in tqdm(range(len(instances)),
desc="Evaluate (lazy)",
disable=not sys.stdout.isatty(),
):
instance = instances[idx]
condition_positive = set(instance.found_violated_lazy_constraints)
condition_negative = all_violations - condition_positive
pred_positive = set(self.predict(instance)) & all_violations
pred_negative = all_violations - pred_positive
tp = len(pred_positive & condition_positive)
tn = len(pred_negative & condition_negative)
fp = len(pred_positive & condition_negative)
fn = len(pred_negative & condition_positive)
results[idx] = classifier_evaluation_dict(tp, tn, fp, fn)
return results

View File

@@ -1,179 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import sys
from copy import deepcopy
from miplearn.classifiers.counting import CountingClassifier
from .component import Component
from ..extractors import *
logger = logging.getLogger(__name__)
class LazyConstraint:
def __init__(self, cid, obj):
self.cid = cid
self.obj = obj
class StaticLazyConstraintsComponent(Component):
def __init__(self,
classifier=CountingClassifier(),
threshold=0.05,
use_two_phase_gap=True,
large_gap=1e-2,
violation_tolerance=-0.5,
):
self.threshold = threshold
self.classifier_prototype = classifier
self.classifiers = {}
self.pool = []
self.original_gap = None
self.large_gap = large_gap
self.is_gap_large = False
self.use_two_phase_gap = use_two_phase_gap
self.violation_tolerance = violation_tolerance
def before_solve(self, solver, instance, model):
self.pool = []
if not solver.use_lazy_cb and self.use_two_phase_gap:
logger.info("Increasing gap tolerance to %f", self.large_gap)
self.original_gap = solver.gap_tolerance
self.is_gap_large = True
solver.internal_solver.set_gap_tolerance(self.large_gap)
instance.found_violated_lazy_constraints = []
if instance.has_static_lazy_constraints():
self._extract_and_predict_static(solver, instance)
def after_solve(self, solver, instance, model, results):
pass
def after_iteration(self, solver, instance, model):
if solver.use_lazy_cb:
return False
else:
should_repeat = self._check_and_add(instance, solver)
if should_repeat:
return True
else:
if self.is_gap_large:
logger.info("Restoring gap tolerance to %f", self.original_gap)
solver.internal_solver.set_gap_tolerance(self.original_gap)
self.is_gap_large = False
return True
else:
return False
def on_lazy_callback(self, solver, instance, model):
self._check_and_add(instance, solver)
def _check_and_add(self, instance, solver):
logger.debug("Finding violated lazy constraints...")
constraints_to_add = []
for c in self.pool:
if not solver.internal_solver.is_constraint_satisfied(c.obj,
tol=self.violation_tolerance):
constraints_to_add.append(c)
for c in constraints_to_add:
self.pool.remove(c)
solver.internal_solver.add_constraint(c.obj)
instance.found_violated_lazy_constraints += [c.cid]
if len(constraints_to_add) > 0:
logger.info("%8d lazy constraints added %8d in the pool" % (len(constraints_to_add), len(self.pool)))
return True
else:
return False
def fit(self, training_instances):
training_instances = [t
for t in training_instances
if hasattr(t, "found_violated_lazy_constraints")]
logger.debug("Extracting x and y...")
x = self.x(training_instances)
y = self.y(training_instances)
logger.debug("Fitting...")
for category in tqdm(x.keys(),
desc="Fit (lazy)",
disable=not sys.stdout.isatty()):
if category not in self.classifiers:
self.classifiers[category] = deepcopy(self.classifier_prototype)
self.classifiers[category].fit(x[category], y[category])
def predict(self, instance):
pass
def evaluate(self, instances):
pass
def _extract_and_predict_static(self, solver, instance):
x = {}
constraints = {}
logger.info("Extracting lazy constraints...")
for cid in solver.internal_solver.get_constraint_ids():
if instance.is_constraint_lazy(cid):
category = instance.get_constraint_category(cid)
if category not in x:
x[category] = []
constraints[category] = []
x[category] += [instance.get_constraint_features(cid)]
c = LazyConstraint(cid=cid,
obj=solver.internal_solver.extract_constraint(cid))
constraints[category] += [c]
self.pool.append(c)
logger.info("%8d lazy constraints extracted" % len(self.pool))
logger.info("Predicting required lazy constraints...")
n_added = 0
for (category, x_values) in x.items():
if category not in self.classifiers:
continue
if isinstance(x_values[0], np.ndarray):
x[category] = np.array(x_values)
proba = self.classifiers[category].predict_proba(x[category])
for i in range(len(proba)):
if proba[i][1] > self.threshold:
n_added += 1
c = constraints[category][i]
self.pool.remove(c)
solver.internal_solver.add_constraint(c.obj)
instance.found_violated_lazy_constraints += [c.cid]
logger.info("%8d lazy constraints added %8d in the pool" % (n_added, len(self.pool)))
def _collect_constraints(self, train_instances):
constraints = {}
for instance in train_instances:
for cid in instance.found_violated_lazy_constraints:
category = instance.get_constraint_category(cid)
if category not in constraints:
constraints[category] = set()
constraints[category].add(cid)
for (category, cids) in constraints.items():
constraints[category] = sorted(list(cids))
return constraints
def x(self, train_instances):
result = {}
constraints = self._collect_constraints(train_instances)
for (category, cids) in constraints.items():
result[category] = []
for instance in train_instances:
for cid in cids:
result[category].append(instance.get_constraint_features(cid))
return result
def y(self, train_instances):
result = {}
constraints = self._collect_constraints(train_instances)
for (category, cids) in constraints.items():
result[category] = []
for instance in train_instances:
for cid in cids:
if cid in instance.found_violated_lazy_constraints:
result[category].append([0, 1])
else:
result[category].append([1, 0])
return result

View File

@@ -1,85 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from sklearn.metrics import mean_squared_error, explained_variance_score, max_error, mean_absolute_error, r2_score
from .. import Component, InstanceFeaturesExtractor, ObjectiveValueExtractor
from sklearn.linear_model import LinearRegression
from copy import deepcopy
import numpy as np
import logging
logger = logging.getLogger(__name__)
class ObjectiveValueComponent(Component):
"""
A Component which predicts the optimal objective value of the problem.
"""
def __init__(self,
regressor=LinearRegression()):
self.ub_regressor = None
self.lb_regressor = None
self.regressor_prototype = regressor
def before_solve(self, solver, instance, model):
if self.ub_regressor is not None:
logger.info("Predicting optimal value...")
lb, ub = self.predict([instance])[0]
instance.predicted_ub = ub
instance.predicted_lb = lb
logger.info("Predicted values: lb=%.2f, ub=%.2f" % (lb, ub))
def after_solve(self, solver, instance, model, results):
if self.ub_regressor is not None:
results["Predicted UB"] = instance.predicted_ub
results["Predicted LB"] = instance.predicted_lb
else:
results["Predicted UB"] = None
results["Predicted LB"] = None
def fit(self, training_instances):
logger.debug("Extracting features...")
features = InstanceFeaturesExtractor().extract(training_instances)
ub = ObjectiveValueExtractor(kind="upper bound").extract(training_instances)
lb = ObjectiveValueExtractor(kind="lower bound").extract(training_instances)
assert ub.shape == (len(training_instances), 1)
assert lb.shape == (len(training_instances), 1)
self.ub_regressor = deepcopy(self.regressor_prototype)
self.lb_regressor = deepcopy(self.regressor_prototype)
logger.debug("Fitting ub_regressor...")
self.ub_regressor.fit(features, ub.ravel())
logger.debug("Fitting lb_regressor...")
self.lb_regressor.fit(features, lb.ravel())
def predict(self, instances):
features = InstanceFeaturesExtractor().extract(instances)
lb = self.lb_regressor.predict(features)
ub = self.ub_regressor.predict(features)
assert lb.shape == (len(instances),)
assert ub.shape == (len(instances),)
return np.array([lb, ub]).T
def evaluate(self, instances):
y_pred = self.predict(instances)
y_true = np.array([[inst.lower_bound, inst.upper_bound] for inst in instances])
y_true_lb, y_true_ub = y_true[:, 0], y_true[:, 1]
y_pred_lb, y_pred_ub = y_pred[:, 0], y_pred[:, 1]
ev = {
"Lower bound": {
"Mean squared error": mean_squared_error(y_true_lb, y_pred_lb),
"Explained variance": explained_variance_score(y_true_lb, y_pred_lb),
"Max error": max_error(y_true_lb, y_pred_lb),
"Mean absolute error": mean_absolute_error(y_true_lb, y_pred_lb),
"R2": r2_score(y_true_lb, y_pred_lb),
"Median absolute error": float(np.median(np.abs(y_true_lb - y_pred_lb))),
},
"Upper bound": {
"Mean squared error": mean_squared_error(y_true_ub, y_pred_ub),
"Explained variance": explained_variance_score(y_true_ub, y_pred_ub),
"Max error": max_error(y_true_ub, y_pred_ub),
"Mean absolute error": mean_absolute_error(y_true_ub, y_pred_ub),
"R2": r2_score(y_true_ub, y_pred_ub),
"Median absolute error": float(np.median(np.abs(y_true_ub - y_pred_ub))),
},
}
return ev

View File

@@ -1,150 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from copy import deepcopy
import sys
from .component import Component
from ..classifiers.adaptive import AdaptiveClassifier
from ..classifiers.threshold import MinPrecisionThreshold, DynamicThreshold
from ..components import classifier_evaluation_dict
from ..extractors import *
logger = logging.getLogger(__name__)
class PrimalSolutionComponent(Component):
"""
A component that predicts primal solutions.
"""
def __init__(self,
classifier=AdaptiveClassifier(),
mode="exact",
threshold=MinPrecisionThreshold(0.98)):
self.mode = mode
self.classifiers = {}
self.thresholds = {}
self.threshold_prototype = threshold
self.classifier_prototype = classifier
def before_solve(self, solver, instance, model):
logger.info("Predicting primal solution...")
solution = self.predict(instance)
if self.mode == "heuristic":
solver.internal_solver.fix(solution)
else:
solver.internal_solver.set_warm_start(solution)
def after_solve(self, solver, instance, model, results):
pass
def x(self, training_instances):
return VariableFeaturesExtractor().extract(training_instances)
def y(self, training_instances):
return SolutionExtractor().extract(training_instances)
def fit(self, training_instances, n_jobs=1):
logger.debug("Extracting features...")
features = VariableFeaturesExtractor().extract(training_instances)
solutions = SolutionExtractor().extract(training_instances)
for category in tqdm(features.keys(),
desc="Fit (primal)",
disable=not sys.stdout.isatty(),
):
x_train = features[category]
for label in [0, 1]:
y_train = solutions[category][:, label].astype(int)
# If all samples are either positive or negative, make constant predictions
y_avg = np.average(y_train)
if y_avg < 0.001 or y_avg >= 0.999:
self.classifiers[category, label] = round(y_avg)
self.thresholds[category, label] = 0.50
continue
# Create a copy of classifier prototype and train it
if isinstance(self.classifier_prototype, list):
clf = deepcopy(self.classifier_prototype[label])
else:
clf = deepcopy(self.classifier_prototype)
clf.fit(x_train, y_train)
# Find threshold (dynamic or static)
if isinstance(self.threshold_prototype, DynamicThreshold):
self.thresholds[category, label] = self.threshold_prototype.find(clf, x_train, y_train)
else:
self.thresholds[category, label] = deepcopy(self.threshold_prototype)
self.classifiers[category, label] = clf
def predict(self, instance):
solution = {}
x_test = VariableFeaturesExtractor().extract([instance])
var_split = Extractor.split_variables(instance)
for category in var_split.keys():
n = len(var_split[category])
for (i, (var, index)) in enumerate(var_split[category]):
if var not in solution.keys():
solution[var] = {}
solution[var][index] = None
for label in [0, 1]:
if (category, label) not in self.classifiers.keys():
continue
clf = self.classifiers[category, label]
if isinstance(clf, float) or isinstance(clf, int):
ws = np.array([[1 - clf, clf] for _ in range(n)])
else:
ws = clf.predict_proba(x_test[category])
assert ws.shape == (n, 2), "ws.shape should be (%d, 2) not %s" % (n, ws.shape)
for (i, (var, index)) in enumerate(var_split[category]):
if ws[i, 1] >= self.thresholds[category, label]:
solution[var][index] = label
return solution
def evaluate(self, instances):
ev = {"Fix zero": {},
"Fix one": {}}
for instance_idx in tqdm(range(len(instances)),
desc="Evaluate (primal)",
disable=not sys.stdout.isatty(),
):
instance = instances[instance_idx]
solution_actual = instance.solution
solution_pred = self.predict(instance)
vars_all, vars_one, vars_zero = set(), set(), set()
pred_one_positive, pred_zero_positive = set(), set()
for (varname, var_dict) in solution_actual.items():
if varname not in solution_pred.keys():
continue
for (idx, value) in var_dict.items():
vars_all.add((varname, idx))
if value > 0.5:
vars_one.add((varname, idx))
else:
vars_zero.add((varname, idx))
if solution_pred[varname][idx] is not None:
if solution_pred[varname][idx] > 0.5:
pred_one_positive.add((varname, idx))
else:
pred_zero_positive.add((varname, idx))
pred_one_negative = vars_all - pred_one_positive
pred_zero_negative = vars_all - pred_zero_positive
tp_zero = len(pred_zero_positive & vars_zero)
fp_zero = len(pred_zero_positive & vars_one)
tn_zero = len(pred_zero_negative & vars_one)
fn_zero = len(pred_zero_negative & vars_zero)
tp_one = len(pred_one_positive & vars_one)
fp_one = len(pred_one_positive & vars_zero)
tn_one = len(pred_one_negative & vars_zero)
fn_one = len(pred_one_negative & vars_one)
ev["Fix zero"][instance_idx] = classifier_evaluation_dict(tp_zero, tn_zero, fp_zero, fn_zero)
ev["Fix one"][instance_idx] = classifier_evaluation_dict(tp_one, tn_one, fp_one, fn_one)
return ev

View File

@@ -0,0 +1,53 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from typing import Tuple, List
import numpy as np
from miplearn.h5 import H5File
def _extract_var_names_values(
h5: H5File,
selected_var_types: List[bytes],
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
bin_var_names, bin_var_indices = _extract_var_names(h5, selected_var_types)
var_values = h5.get_array("mip_var_values")
assert var_values is not None
bin_var_values = var_values[bin_var_indices].astype(int)
return bin_var_names, bin_var_values, bin_var_indices
def _extract_var_names(
h5: H5File,
selected_var_types: List[bytes],
) -> Tuple[np.ndarray, np.ndarray]:
var_types = h5.get_array("static_var_types")
var_names = h5.get_array("static_var_names")
assert var_types is not None
assert var_names is not None
bin_var_indices = np.where(np.isin(var_types, selected_var_types))[0]
bin_var_names = var_names[bin_var_indices]
assert len(bin_var_names.shape) == 1
return bin_var_names, bin_var_indices
def _extract_bin_var_names_values(
h5: H5File,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
return _extract_var_names_values(h5, [b"B"])
def _extract_bin_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]:
return _extract_var_names(h5, [b"B"])
def _extract_int_var_names_values(
h5: H5File,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
return _extract_var_names_values(h5, [b"B", b"I"])
def _extract_int_var_names(h5: H5File) -> Tuple[np.ndarray, np.ndarray]:
return _extract_var_names(h5, [b"B", b"I"])
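The helpers above select variables by their type codes with `np.isin`. A standalone sketch of the filtering step, with made-up variable names and types in place of a real HDF5 file:

```python
import numpy as np

# Hypothetical static data: B = binary, C = continuous, I = integer.
var_types = np.array([b"B", b"C", b"I", b"B"])
var_names = np.array([b"x0", b"y0", b"z0", b"x1"])

# Same selection as _extract_var_names with [b"B", b"I"]:
idx = np.where(np.isin(var_types, [b"B", b"I"]))[0]
selected = var_names[idx]  # binary and integer variables only
```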

View File

@@ -0,0 +1,93 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
from abc import ABC, abstractmethod
from typing import Optional, Dict
import numpy as np
from miplearn.solvers.abstract import AbstractModel
logger = logging.getLogger(__name__)
class PrimalComponentAction(ABC):
@abstractmethod
def perform(
self,
model: AbstractModel,
var_names: np.ndarray,
var_values: np.ndarray,
stats: Optional[Dict],
) -> None:
pass
class SetWarmStart(PrimalComponentAction):
def perform(
self,
model: AbstractModel,
var_names: np.ndarray,
var_values: np.ndarray,
stats: Optional[Dict],
) -> None:
logger.info("Setting warm starts...")
model.set_warm_starts(var_names, var_values, stats)
class FixVariables(PrimalComponentAction):
def perform(
self,
model: AbstractModel,
var_names: np.ndarray,
var_values: np.ndarray,
stats: Optional[Dict],
) -> None:
logger.info("Fixing variables...")
assert len(var_values.shape) == 2
assert var_values.shape[0] == 1
var_values = var_values.reshape(-1)
model.fix_variables(var_names, var_values, stats)
if stats is not None:
stats["Heuristic"] = True
class EnforceProximity(PrimalComponentAction):
def __init__(self, tol: float) -> None:
self.tol = tol
def perform(
self,
model: AbstractModel,
var_names: np.ndarray,
var_values: np.ndarray,
stats: Optional[Dict],
) -> None:
assert len(var_values.shape) == 2
assert var_values.shape[0] == 1
var_values = var_values.reshape(-1)
constr_lhs = []
constr_vars = []
constr_rhs = 0.0
for i, var_name in enumerate(var_names):
if np.isnan(var_values[i]):
continue
constr_lhs.append(1.0 if var_values[i] < 0.5 else -1.0)
constr_rhs -= var_values[i]
constr_vars.append(var_name)
constr_rhs += len(constr_vars) * self.tol
logger.info(
f"Adding proximity constraint (tol={self.tol}, nz={len(constr_vars)})..."
)
model.add_constrs(
np.array(constr_vars),
np.array([constr_lhs]),
np.array(["<"], dtype="S"),
np.array([constr_rhs]),
)
if stats is not None:
stats["Heuristic"] = True
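The proximity cut built in `EnforceProximity.perform` limits the Hamming distance between the MIP solution and the predicted point: each variable predicted at one contributes coefficient `-1`, each predicted at zero contributes `+1`, NaN predictions are skipped, and the right-hand side allows roughly `tol` flips per predicted variable. A standalone sketch of that arithmetic, using hypothetical predicted values:

```python
import numpy as np

# Hypothetical predicted values for five binary variables (NaN = no prediction)
pred = np.array([1.0, 0.0, 1.0, float("nan"), 0.0])
tol = 0.2

coeffs, kept = [], []
rhs = 0.0
for i, v in enumerate(pred):
    if np.isnan(v):
        continue
    coeffs.append(1.0 if v < 0.5 else -1.0)
    rhs -= v
    kept.append(i)
rhs += len(kept) * tol  # rhs = -(sum of predicted ones) + n * tol

# The predicted point itself is at Hamming distance zero, so it satisfies the cut
lhs_at_pred = sum(c * pred[i] for c, i in zip(coeffs, kept))
```

With these numbers, `rhs = -2 + 4 * 0.2 = -1.2` while the predicted point evaluates to `-2`, so the prediction remains feasible; flipping more than `n * tol` variables pushes the left-hand side above `rhs` and violates the cut.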


@@ -0,0 +1,32 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
from typing import Any, Dict, List
from . import _extract_int_var_names_values
from .actions import PrimalComponentAction
from ...solvers.abstract import AbstractModel
from ...h5 import H5File
logger = logging.getLogger(__name__)
class ExpertPrimalComponent:
"""
Component that predicts warm starts by peeking at the optimal solution.
"""
def __init__(self, action: PrimalComponentAction):
self.action = action
def fit(self, train_h5: List[str]) -> None:
pass
def before_mip(
self, test_h5: str, model: AbstractModel, stats: Dict[str, Any]
) -> None:
with H5File(test_h5, "r") as h5:
names, values, _ = _extract_int_var_names_values(h5)
self.action.perform(model, names, values.reshape(1, -1), stats)


@@ -0,0 +1,129 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
from typing import Any, Dict, List, Callable, Optional
import numpy as np
import sklearn
from miplearn.components.primal import (
_extract_bin_var_names_values,
_extract_bin_var_names,
)
from miplearn.components.primal.actions import PrimalComponentAction
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.solvers.abstract import AbstractModel
from miplearn.h5 import H5File
logger = logging.getLogger(__name__)
class IndependentVarsPrimalComponent:
def __init__(
self,
base_clf: Any,
extractor: FeaturesExtractor,
action: PrimalComponentAction,
clone_fn: Callable[[Any], Any] = sklearn.clone,
):
self.base_clf = base_clf
self.extractor = extractor
self.clf_: Dict[bytes, Any] = {}
self.bin_var_names_: Optional[np.ndarray] = None
self.n_features_: Optional[int] = None
self.clone_fn = clone_fn
self.action = action
def fit(self, train_h5: List[str]) -> None:
logger.info("Reading training data...")
self.bin_var_names_ = None
n_bin_vars: Optional[int] = None
n_vars: Optional[int] = None
x, y = [], []
for h5_filename in train_h5:
with H5File(h5_filename, "r") as h5:
# Get number of variables
var_types = h5.get_array("static_var_types")
assert var_types is not None
n_vars = len(var_types)
# Extract features
(
bin_var_names,
bin_var_values,
bin_var_indices,
) = _extract_bin_var_names_values(h5)
# Store/check variable names
if self.bin_var_names_ is None:
self.bin_var_names_ = bin_var_names
n_bin_vars = len(self.bin_var_names_)
else:
assert np.all(bin_var_names == self.bin_var_names_)
# Build x and y vectors
x_sample = self.extractor.get_var_features(h5)
assert len(x_sample.shape) == 2
assert x_sample.shape[0] == n_vars
x_sample = x_sample[bin_var_indices]
if self.n_features_ is None:
self.n_features_ = x_sample.shape[1]
else:
assert x_sample.shape[1] == self.n_features_
x.append(x_sample)
y.append(bin_var_values)
assert n_bin_vars is not None
assert self.bin_var_names_ is not None
logger.info("Constructing matrices...")
x_np = np.vstack(x)
y_np = np.hstack(y)
n_samples = len(train_h5) * n_bin_vars
assert x_np.shape == (n_samples, self.n_features_)
assert y_np.shape == (n_samples,)
logger.info(
f"Dataset has {n_bin_vars} binary variables, "
f"{len(train_h5):,d} samples per variable, "
f"{self.n_features_:,d} features, 1 target and 2 classes"
)
logger.info(f"Training {n_bin_vars} classifiers...")
self.clf_ = {}
for var_idx, var_name in enumerate(self.bin_var_names_):
self.clf_[var_name] = self.clone_fn(self.base_clf)
self.clf_[var_name].fit(
x_np[var_idx::n_bin_vars, :], y_np[var_idx::n_bin_vars]
)
logger.info("Done fitting.")
def before_mip(
self, test_h5: str, model: AbstractModel, stats: Dict[str, Any]
) -> None:
assert self.bin_var_names_ is not None
assert self.n_features_ is not None
# Read features
with H5File(test_h5, "r") as h5:
x_sample = self.extractor.get_var_features(h5)
bin_var_names, bin_var_indices = _extract_bin_var_names(h5)
assert np.all(bin_var_names == self.bin_var_names_)
x_sample = x_sample[bin_var_indices]
assert x_sample.shape == (len(self.bin_var_names_), self.n_features_)
# Predict optimal solution
logger.info("Predicting warm starts...")
y_pred = []
for var_idx, var_name in enumerate(self.bin_var_names_):
x_var = x_sample[var_idx, :].reshape(1, -1)
y_var = self.clf_[var_name].predict(x_var)
assert y_var.shape == (1,)
y_pred.append(y_var[0])
# Construct warm starts, based on prediction
y_pred_np = np.array(y_pred).reshape(1, -1)
assert y_pred_np.shape == (1, len(self.bin_var_names_))
self.action.perform(model, self.bin_var_names_, y_pred_np, stats)
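`fit` stacks per-instance feature matrices with `np.vstack`, so rows appear in (instance, variable) order; the strided slice `x_np[var_idx::n_bin_vars]` then gathers the rows belonging to a single variable across all training instances. A small sketch of that indexing with made-up numbers:

```python
import numpy as np

# Two hypothetical training instances, three binary variables, two features each
n_bin_vars = 3
x_np = np.vstack([
    np.array([[10, 0], [11, 0], [12, 0]]),  # instance 0, variables 0..2
    np.array([[20, 1], [21, 1], [22, 1]]),  # instance 1, variables 0..2
])

# Rows for variable 1 across both instances, as in clf_[var_name].fit(...)
rows_var1 = x_np[1::n_bin_vars, :]
```

Each per-variable classifier therefore trains on exactly one row per instance, which is why the dataset log line reports "samples per variable".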


@@ -0,0 +1,88 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
from typing import List, Dict, Any, Optional
import numpy as np
from miplearn.components.primal import _extract_bin_var_names_values
from miplearn.components.primal.actions import PrimalComponentAction
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.solvers.abstract import AbstractModel
from miplearn.h5 import H5File
logger = logging.getLogger(__name__)
class JointVarsPrimalComponent:
def __init__(
self, clf: Any, extractor: FeaturesExtractor, action: PrimalComponentAction
):
self.clf = clf
self.extractor = extractor
self.bin_var_names_: Optional[np.ndarray] = None
self.action = action
def fit(self, train_h5: List[str]) -> None:
logger.info("Reading training data...")
self.bin_var_names_ = None
x, y, n_samples, n_features = [], [], len(train_h5), None
for h5_filename in train_h5:
with H5File(h5_filename, "r") as h5:
bin_var_names, bin_var_values, _ = _extract_bin_var_names_values(h5)
# Store/check variable names
if self.bin_var_names_ is None:
self.bin_var_names_ = bin_var_names
else:
assert np.all(bin_var_names == self.bin_var_names_)
# Build x and y vectors
x_sample = self.extractor.get_instance_features(h5)
assert len(x_sample.shape) == 1
if n_features is None:
n_features = len(x_sample)
else:
assert len(x_sample) == n_features
x.append(x_sample)
y.append(bin_var_values)
assert self.bin_var_names_ is not None
logger.info("Constructing matrices...")
x_np = np.vstack(x)
y_np = np.array(y)
assert len(x_np.shape) == 2
assert x_np.shape[0] == n_samples
assert x_np.shape[1] == n_features
assert y_np.shape == (n_samples, len(self.bin_var_names_))
logger.info(
f"Dataset has {n_samples:,d} samples, "
f"{n_features:,d} features and {y_np.shape[1]:,d} targets"
)
logger.info("Training classifier...")
self.clf.fit(x_np, y_np)
logger.info("Done fitting.")
def before_mip(
self, test_h5: str, model: AbstractModel, stats: Dict[str, Any]
) -> None:
assert self.bin_var_names_ is not None
# Read features
with H5File(test_h5, "r") as h5:
x_sample = self.extractor.get_instance_features(h5)
assert len(x_sample.shape) == 1
x_sample = x_sample.reshape(1, -1)
# Predict optimal solution
logger.info("Predicting warm starts...")
y_pred = self.clf.predict(x_sample)
assert len(y_pred.shape) == 2
assert y_pred.shape[0] == 1
assert y_pred.shape[1] == len(self.bin_var_names_)
# Construct warm starts, based on prediction
self.action.perform(model, self.bin_var_names_, y_pred, stats)


@@ -0,0 +1,167 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
from abc import ABC, abstractmethod
from typing import List, Dict, Any, Optional, Tuple
import numpy as np
from . import _extract_bin_var_names_values
from .actions import PrimalComponentAction
from ...extractors.abstract import FeaturesExtractor
from ...solvers.abstract import AbstractModel
from ...h5 import H5File
logger = logging.getLogger(__name__)
class SolutionConstructor(ABC):
@abstractmethod
def construct(self, y_proba: np.ndarray, solutions: np.ndarray) -> np.ndarray:
pass
class MemorizingPrimalComponent:
"""
Component that memorizes all solutions seen during training, then fits a
single classifier to predict which of the memorized solutions should be
provided to the solver. Optionally combines multiple memorized solutions
into a single, partial one.
"""
def __init__(
self,
clf: Any,
extractor: FeaturesExtractor,
constructor: SolutionConstructor,
action: PrimalComponentAction,
) -> None:
assert clf is not None
self.clf = clf
self.extractor = extractor
self.constructor = constructor
self.solutions_: Optional[np.ndarray] = None
self.bin_var_names_: Optional[np.ndarray] = None
self.action = action
def fit(self, train_h5: List[str]) -> None:
logger.info("Reading training data...")
n_samples = len(train_h5)
solutions_ = []
self.bin_var_names_ = None
x, y, n_features = [], [], None
solution_to_idx: Dict[Tuple, int] = {}
for h5_filename in train_h5:
with H5File(h5_filename, "r") as h5:
bin_var_names, bin_var_values, _ = _extract_bin_var_names_values(h5)
# Store/check variable names
if self.bin_var_names_ is None:
self.bin_var_names_ = bin_var_names
else:
assert np.all(bin_var_names == self.bin_var_names_)
# Store solution
sol = tuple(np.where(bin_var_values)[0])
if sol not in solution_to_idx:
solutions_.append(bin_var_values)
solution_to_idx[sol] = len(solution_to_idx)
y.append(solution_to_idx[sol])
# Extract features
x_sample = self.extractor.get_instance_features(h5)
assert len(x_sample.shape) == 1
if n_features is None:
n_features = len(x_sample)
else:
assert len(x_sample) == n_features
x.append(x_sample)
logger.info("Constructing matrices...")
x_np = np.vstack(x)
y_np = np.array(y)
assert len(x_np.shape) == 2
assert x_np.shape[0] == n_samples
assert x_np.shape[1] == n_features
assert y_np.shape == (n_samples,)
self.solutions_ = np.array(solutions_)
n_classes = len(solution_to_idx)
logger.info(
f"Dataset has {n_samples:,d} samples, "
f"{n_features:,d} features and {n_classes:,d} classes"
)
logger.info("Training classifier...")
self.clf.fit(x_np, y_np)
logger.info("Done fitting.")
def before_mip(
self, test_h5: str, model: AbstractModel, stats: Dict[str, Any]
) -> None:
assert self.solutions_ is not None
assert self.bin_var_names_ is not None
# Read features
with H5File(test_h5, "r") as h5:
x_sample = self.extractor.get_instance_features(h5)
assert len(x_sample.shape) == 1
x_sample = x_sample.reshape(1, -1)
# Predict optimal solution
logger.info("Predicting primal solution...")
y_proba = self.clf.predict_proba(x_sample)
assert len(y_proba.shape) == 2
assert y_proba.shape[0] == 1
assert y_proba.shape[1] == len(self.solutions_)
# Construct warm starts, based on prediction
starts = self.constructor.construct(y_proba[0, :], self.solutions_)
self.action.perform(model, self.bin_var_names_, starts, stats)
class SelectTopSolutions(SolutionConstructor):
"""
Warm start construction strategy that selects and returns the top k solutions.
"""
def __init__(self, k: int) -> None:
self.k = k
def construct(self, y_proba: np.ndarray, solutions: np.ndarray) -> np.ndarray:
# Check arguments
assert len(y_proba.shape) == 1
assert len(solutions.shape) == 2
assert len(y_proba) == solutions.shape[0]
# Select top k solutions
ind = np.argsort(-y_proba, kind="stable")
selected = ind[: min(self.k, len(ind))]
return solutions[selected, :]
class MergeTopSolutions(SolutionConstructor):
"""
Warm start construction strategy that first selects the top k solutions,
then merges them into a single solution.
To merge the solutions, the strategy first computes the mean optimal value of each
decision variable, then: (i) sets the variable to zero if the mean is below
thresholds[0]; (ii) sets the variable to one if the mean is above thresholds[1];
(iii) leaves the variable free otherwise.
"""
def __init__(self, k: int, thresholds: List[float]):
assert len(thresholds) == 2
self.k = k
self.thresholds = thresholds
def construct(self, y_proba: np.ndarray, solutions: np.ndarray) -> np.ndarray:
filtered = SelectTopSolutions(self.k).construct(y_proba, solutions)
mean = filtered.mean(axis=0)
start = np.full((1, solutions.shape[1]), float("nan"))
start[0, mean <= self.thresholds[0]] = 0
start[0, mean >= self.thresholds[1]] = 1
return start
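A standalone sketch of the select-then-merge logic above, with made-up probabilities and thresholds `[0.25, 0.75]`: the two most likely memorized solutions are averaged, and each variable is fixed only where they agree:

```python
import numpy as np

# Hypothetical predicted class probabilities and memorized solutions
y_proba = np.array([0.6, 0.3, 0.1])
solutions = np.array([
    [1.0, 0.0, 1.0, 0.0],
    [1.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
])
k, thresholds = 2, [0.25, 0.75]

# Select the top-k solutions by predicted probability (stable sort)
ind = np.argsort(-y_proba, kind="stable")[:k]
filtered = solutions[ind, :]

# Merge: fix to 0/1 where the mean is extreme, leave free (NaN) otherwise
mean = filtered.mean(axis=0)
start = np.full((1, solutions.shape[1]), float("nan"))
start[0, mean <= thresholds[0]] = 0
start[0, mean >= thresholds[1]] = 1
```

Here the top two solutions agree on variables 0, 2 and 3 (mean 1, 1 and 0), so those are fixed, while variable 1 (mean 0.5) stays free in the partial warm start.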


@@ -0,0 +1,32 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from math import log
from typing import List, Dict, Any
import numpy as np
import gurobipy as gp
from ..h5 import H5File
class ExpertBranchPriorityComponent:
def __init__(self) -> None:
pass
def fit(self, train_h5: List[str]) -> None:
pass
def before_mip(self, test_h5: str, model: gp.Model, _: Dict[str, Any]) -> None:
with H5File(test_h5, "r") as h5:
var_names = h5.get_array("static_var_names")
var_priority = h5.get_array("bb_var_priority")
assert var_priority is not None
assert var_names is not None
for var_idx, var_name in enumerate(var_names):
if np.isfinite(var_priority[var_idx]):
var = model.getVarByName(var_name.decode())
assert var is not None, f"unknown var: {var_name}"
var.BranchPriority = int(log(1 + var_priority[var_idx]))
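The `int(log(1 + p))` transform above compresses raw priority values from the HDF5 file into a small integer scale suitable for `BranchPriority`, skipping non-finite entries. A sketch of the mapping with hypothetical raw values:

```python
from math import log

import numpy as np

# Hypothetical raw priorities; non-finite entries are skipped, as in before_mip
raw = np.array([0.0, 9.0, float("inf"), 99.0])
priorities = [int(log(1 + p)) for p in raw if np.isfinite(p)]
```

The logarithm keeps the ordering of priorities while preventing a few very large raw values from dominating the branching decisions.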


@@ -1,151 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
import sys
from copy import deepcopy
import numpy as np
from miplearn.components import classifier_evaluation_dict
from tqdm import tqdm
from miplearn import Component
from miplearn.classifiers.counting import CountingClassifier
logger = logging.getLogger(__name__)
class RelaxationComponent(Component):
"""
A Component which builds a relaxation of the problem by dropping constraints.
Currently, this component drops all integrality constraints, as well as
all inequality constraints that are unlikely to be binding in the LP relaxation.
In a future version of MIPLearn, this component may decide to keep some
integrality constraints if it determines that they have a small impact on
running time but a large impact on the dual bound.
"""
def __init__(self,
classifier=CountingClassifier(),
threshold=0.95,
slack_tolerance=1e-5,
):
self.classifiers = {}
self.classifier_prototype = classifier
self.threshold = threshold
self.slack_tolerance = slack_tolerance
def before_solve(self, solver, instance, _):
logger.info("Relaxing integrality...")
solver.internal_solver.relax()
logger.info("Predicting redundant LP constraints...")
cids = solver.internal_solver.get_constraint_ids()
x, constraints = self.x([instance],
constraint_ids=cids,
return_constraints=True)
y = self.predict(x)
n_removed = 0
for category in y.keys():
for i in range(len(y[category])):
if y[category][i][0] == 1:
cid = constraints[category][i]
solver.internal_solver.extract_constraint(cid)
n_removed += 1
logger.info("Removed %d predicted redundant LP constraints" % n_removed)
def after_solve(self, solver, instance, model, results):
instance.slacks = solver.internal_solver.get_constraint_slacks()
def fit(self, training_instances):
training_instances = [instance
for instance in training_instances
if hasattr(instance, "slacks")]
logger.debug("Extracting x and y...")
x = self.x(training_instances)
y = self.y(training_instances)
logger.debug("Fitting...")
for category in tqdm(x.keys(),
desc="Fit (relaxation)",
disable=not sys.stdout.isatty()):
if category not in self.classifiers:
self.classifiers[category] = deepcopy(self.classifier_prototype)
self.classifiers[category].fit(x[category], y[category])
def x(self,
instances,
constraint_ids=None,
return_constraints=False):
x = {}
constraints = {}
for instance in instances:
if constraint_ids is not None:
cids = constraint_ids
else:
cids = instance.slacks.keys()
for cid in cids:
category = instance.get_constraint_category(cid)
if category is None:
continue
if category not in x:
x[category] = []
constraints[category] = []
x[category] += [instance.get_constraint_features(cid)]
constraints[category] += [cid]
if return_constraints:
return x, constraints
else:
return x
def y(self, instances):
y = {}
for instance in instances:
for (cid, slack) in instance.slacks.items():
category = instance.get_constraint_category(cid)
if category is None:
continue
if category not in y:
y[category] = []
if slack > self.slack_tolerance:
y[category] += [[1]]
else:
y[category] += [[0]]
return y
def predict(self, x):
y = {}
for (category, x_cat) in x.items():
if category not in self.classifiers:
continue
y[category] = []
proba = self.classifiers[category].predict_proba(x_cat)
for i in range(len(proba)):
if proba[i][1] >= self.threshold:
y[category] += [[1]]
else:
y[category] += [[0]]
return y
def evaluate(self, instance):
x = self.x([instance])
y_true = self.y([instance])
y_pred = self.predict(x)
tp, tn, fp, fn = 0, 0, 0, 0
for category in y_true.keys():
for i in range(len(y_true[category])):
if y_pred[category][i][0] == 1:
if y_true[category][i][0] == 1:
tp += 1
else:
fp += 1
else:
if y_true[category][i][0] == 1:
fn += 1
else:
tn += 1
return classifier_evaluation_dict(tp, tn, fp, fn)


@@ -1,140 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from unittest.mock import Mock
import numpy as np
from miplearn import DynamicLazyConstraintsComponent, LearningSolver, InternalSolver
from miplearn.classifiers import Classifier
from miplearn.tests import get_test_pyomo_instances
from numpy.linalg import norm
E = 0.1
def test_lazy_fit():
instances, models = get_test_pyomo_instances()
instances[0].found_violated_lazy_constraints = ["a", "b"]
instances[1].found_violated_lazy_constraints = ["b", "c"]
classifier = Mock(spec=Classifier)
component = DynamicLazyConstraintsComponent(classifier=classifier)
component.fit(instances)
# Should create one classifier for each violation
assert "a" in component.classifiers
assert "b" in component.classifiers
assert "c" in component.classifiers
# Should provide correct x_train to each classifier
expected_x_train_a = np.array([[67., 21.75, 1287.92], [70., 23.75, 1199.83]])
expected_x_train_b = np.array([[67., 21.75, 1287.92], [70., 23.75, 1199.83]])
expected_x_train_c = np.array([[67., 21.75, 1287.92], [70., 23.75, 1199.83]])
actual_x_train_a = component.classifiers["a"].fit.call_args[0][0]
actual_x_train_b = component.classifiers["b"].fit.call_args[0][0]
actual_x_train_c = component.classifiers["c"].fit.call_args[0][0]
assert norm(expected_x_train_a - actual_x_train_a) < E
assert norm(expected_x_train_b - actual_x_train_b) < E
assert norm(expected_x_train_c - actual_x_train_c) < E
# Should provide correct y_train to each classifier
expected_y_train_a = np.array([1.0, 0.0])
expected_y_train_b = np.array([1.0, 1.0])
expected_y_train_c = np.array([0.0, 1.0])
actual_y_train_a = component.classifiers["a"].fit.call_args[0][1]
actual_y_train_b = component.classifiers["b"].fit.call_args[0][1]
actual_y_train_c = component.classifiers["c"].fit.call_args[0][1]
assert norm(expected_y_train_a - actual_y_train_a) < E
assert norm(expected_y_train_b - actual_y_train_b) < E
assert norm(expected_y_train_c - actual_y_train_c) < E
def test_lazy_before():
instances, models = get_test_pyomo_instances()
instances[0].build_lazy_constraint = Mock(return_value="c1")
solver = LearningSolver()
solver.internal_solver = Mock(spec=InternalSolver)
component = DynamicLazyConstraintsComponent(threshold=0.10)
component.classifiers = {"a": Mock(spec=Classifier),
"b": Mock(spec=Classifier)}
component.classifiers["a"].predict_proba = Mock(return_value=[[0.95, 0.05]])
component.classifiers["b"].predict_proba = Mock(return_value=[[0.02, 0.80]])
component.before_solve(solver, instances[0], models[0])
# Should ask classifier likelihood of each constraint being violated
expected_x_test_a = np.array([[67., 21.75, 1287.92]])
expected_x_test_b = np.array([[67., 21.75, 1287.92]])
actual_x_test_a = component.classifiers["a"].predict_proba.call_args[0][0]
actual_x_test_b = component.classifiers["b"].predict_proba.call_args[0][0]
assert norm(expected_x_test_a - actual_x_test_a) < E
assert norm(expected_x_test_b - actual_x_test_b) < E
# Should ask instance to generate cut for constraints whose likelihood
# of being violated exceeds the threshold
instances[0].build_lazy_constraint.assert_called_once_with(models[0], "b")
# Should ask internal solver to add generated constraint
solver.internal_solver.add_constraint.assert_called_once_with("c1")
def test_lazy_evaluate():
instances, models = get_test_pyomo_instances()
component = DynamicLazyConstraintsComponent()
component.classifiers = {"a": Mock(spec=Classifier),
"b": Mock(spec=Classifier),
"c": Mock(spec=Classifier)}
component.classifiers["a"].predict_proba = Mock(return_value=[[1.0, 0.0]])
component.classifiers["b"].predict_proba = Mock(return_value=[[0.0, 1.0]])
component.classifiers["c"].predict_proba = Mock(return_value=[[0.0, 1.0]])
instances[0].found_violated_lazy_constraints = ["a", "b", "c"]
instances[1].found_violated_lazy_constraints = ["b", "d"]
assert component.evaluate(instances) == {
0: {
"Accuracy": 0.75,
"F1 score": 0.8,
"Precision": 1.0,
"Recall": 2/3.,
"Predicted positive": 2,
"Predicted negative": 2,
"Condition positive": 3,
"Condition negative": 1,
"False negative": 1,
"False positive": 0,
"True negative": 1,
"True positive": 2,
"Predicted positive (%)": 50.0,
"Predicted negative (%)": 50.0,
"Condition positive (%)": 75.0,
"Condition negative (%)": 25.0,
"False negative (%)": 25.0,
"False positive (%)": 0,
"True negative (%)": 25.0,
"True positive (%)": 50.0,
},
1: {
"Accuracy": 0.5,
"F1 score": 0.5,
"Precision": 0.5,
"Recall": 0.5,
"Predicted positive": 2,
"Predicted negative": 2,
"Condition positive": 2,
"Condition negative": 2,
"False negative": 1,
"False positive": 1,
"True negative": 1,
"True positive": 1,
"Predicted positive (%)": 50.0,
"Predicted negative (%)": 50.0,
"Condition positive (%)": 50.0,
"Condition negative (%)": 50.0,
"False negative (%)": 25.0,
"False positive (%)": 25.0,
"True negative (%)": 25.0,
"True positive (%)": 25.0,
}
}


@@ -1,188 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from unittest.mock import Mock, call
from miplearn import (StaticLazyConstraintsComponent,
LearningSolver,
Instance,
InternalSolver)
from miplearn.classifiers import Classifier
def test_usage_with_solver():
solver = Mock(spec=LearningSolver)
solver.use_lazy_cb = False
solver.gap_tolerance = 1e-4
internal = solver.internal_solver = Mock(spec=InternalSolver)
internal.get_constraint_ids = Mock(return_value=["c1", "c2", "c3", "c4"])
internal.extract_constraint = Mock(side_effect=lambda cid: "<%s>" % cid)
internal.is_constraint_satisfied = Mock(return_value=False)
instance = Mock(spec=Instance)
instance.has_static_lazy_constraints = Mock(return_value=True)
instance.is_constraint_lazy = Mock(side_effect=lambda cid: {
"c1": False,
"c2": True,
"c3": True,
"c4": True,
}[cid])
instance.get_constraint_features = Mock(side_effect=lambda cid: {
"c2": [1.0, 0.0],
"c3": [0.5, 0.5],
"c4": [1.0],
}[cid])
instance.get_constraint_category = Mock(side_effect=lambda cid: {
"c2": "type-a",
"c3": "type-a",
"c4": "type-b",
}[cid])
component = StaticLazyConstraintsComponent(threshold=0.90,
use_two_phase_gap=False,
violation_tolerance=1.0)
component.classifiers = {
"type-a": Mock(spec=Classifier),
"type-b": Mock(spec=Classifier),
}
component.classifiers["type-a"].predict_proba = \
Mock(return_value=[
[0.20, 0.80],
[0.05, 0.95],
])
component.classifiers["type-b"].predict_proba = \
Mock(return_value=[
[0.02, 0.98],
])
# LearningSolver calls before_solve
component.before_solve(solver, instance, None)
# Should ask if instance has static lazy constraints
instance.has_static_lazy_constraints.assert_called_once()
# Should ask internal solver for a list of constraints in the model
internal.get_constraint_ids.assert_called_once()
# Should ask if each constraint in the model is lazy
instance.is_constraint_lazy.assert_has_calls([
call("c1"), call("c2"), call("c3"), call("c4"),
])
# For the lazy ones, should ask for features
instance.get_constraint_features.assert_has_calls([
call("c2"), call("c3"), call("c4"),
])
# Should also ask for categories
assert instance.get_constraint_category.call_count == 3
instance.get_constraint_category.assert_has_calls([
call("c2"), call("c3"), call("c4"),
])
# Should ask internal solver to remove constraints identified as lazy
assert internal.extract_constraint.call_count == 3
internal.extract_constraint.assert_has_calls([
call("c2"), call("c3"), call("c4"),
])
# Should ask ML to predict whether each lazy constraint should be enforced
component.classifiers["type-a"].predict_proba.assert_called_once_with([[1.0, 0.0], [0.5, 0.5]])
component.classifiers["type-b"].predict_proba.assert_called_once_with([[1.0]])
# For the ones that should be enforced, should ask solver to re-add them
# to the formulation. The remaining ones should remain in the pool.
assert internal.add_constraint.call_count == 2
internal.add_constraint.assert_has_calls([
call("<c3>"), call("<c4>"),
])
internal.add_constraint.reset_mock()
# LearningSolver calls after_iteration (first time)
should_repeat = component.after_iteration(solver, instance, None)
assert should_repeat
# Should ask internal solver to verify if constraints in the pool are
# satisfied and add the ones that are not
internal.is_constraint_satisfied.assert_called_once_with("<c2>", tol=1.0)
internal.is_constraint_satisfied.reset_mock()
internal.add_constraint.assert_called_once_with("<c2>")
internal.add_constraint.reset_mock()
# LearningSolver calls after_iteration (second time)
should_repeat = component.after_iteration(solver, instance, None)
assert not should_repeat
# The lazy constraint pool should be empty by now, so no calls should be made
internal.is_constraint_satisfied.assert_not_called()
internal.add_constraint.assert_not_called()
# Should update instance object
assert instance.found_violated_lazy_constraints == ["c3", "c4", "c2"]
def test_fit():
instance_1 = Mock(spec=Instance)
instance_1.found_violated_lazy_constraints = ["c1", "c2", "c4", "c5"]
instance_1.get_constraint_category = Mock(side_effect=lambda cid: {
"c1": "type-a",
"c2": "type-a",
"c3": "type-a",
"c4": "type-b",
"c5": "type-b",
}[cid])
instance_1.get_constraint_features = Mock(side_effect=lambda cid: {
"c1": [1, 1],
"c2": [1, 2],
"c3": [1, 3],
"c4": [1, 4, 0],
"c5": [1, 5, 0],
}[cid])
instance_2 = Mock(spec=Instance)
instance_2.found_violated_lazy_constraints = ["c2", "c3", "c4"]
instance_2.get_constraint_category = Mock(side_effect=lambda cid: {
"c1": "type-a",
"c2": "type-a",
"c3": "type-a",
"c4": "type-b",
"c5": "type-b",
}[cid])
instance_2.get_constraint_features = Mock(side_effect=lambda cid: {
"c1": [2, 1],
"c2": [2, 2],
"c3": [2, 3],
"c4": [2, 4, 0],
"c5": [2, 5, 0],
}[cid])
instances = [instance_1, instance_2]
component = StaticLazyConstraintsComponent()
component.classifiers = {
"type-a": Mock(spec=Classifier),
"type-b": Mock(spec=Classifier),
}
expected_constraints = {
"type-a": ["c1", "c2", "c3"],
"type-b": ["c4", "c5"],
}
expected_x = {
"type-a": [[1, 1], [1, 2], [1, 3], [2, 1], [2, 2], [2, 3]],
"type-b": [[1, 4, 0], [1, 5, 0], [2, 4, 0], [2, 5, 0]]
}
expected_y = {
"type-a": [[0, 1], [0, 1], [1, 0], [1, 0], [0, 1], [0, 1]],
"type-b": [[0, 1], [0, 1], [0, 1], [1, 0]]
}
assert component._collect_constraints(instances) == expected_constraints
assert component.x(instances) == expected_x
assert component.y(instances) == expected_y
component.fit(instances)
component.classifiers["type-a"].fit.assert_called_once_with(expected_x["type-a"],
expected_y["type-a"])
component.classifiers["type-b"].fit.assert_called_once_with(expected_x["type-b"],
expected_y["type-b"])


@@ -1,47 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from unittest.mock import Mock
import numpy as np
from miplearn import ObjectiveValueComponent
from miplearn.classifiers import Regressor
from miplearn.tests import get_test_pyomo_instances
def test_usage():
instances, models = get_test_pyomo_instances()
comp = ObjectiveValueComponent()
comp.fit(instances)
assert instances[0].lower_bound == 1183.0
assert instances[0].upper_bound == 1183.0
assert np.round(comp.predict(instances), 2).tolist() == [[1183.0, 1183.0],
[1070.0, 1070.0]]
def test_obj_evaluate():
instances, models = get_test_pyomo_instances()
reg = Mock(spec=Regressor)
reg.predict = Mock(return_value=np.array([1000.0, 1000.0]))
comp = ObjectiveValueComponent(regressor=reg)
comp.fit(instances)
ev = comp.evaluate(instances)
assert ev == {
'Lower bound': {
'Explained variance': 0.0,
'Max error': 183.0,
'Mean absolute error': 126.5,
'Mean squared error': 19194.5,
'Median absolute error': 126.5,
'R2': -5.012843605607331,
},
'Upper bound': {
'Explained variance': 0.0,
'Max error': 183.0,
'Mean absolute error': 126.5,
'Mean squared error': 19194.5,
'Median absolute error': 126.5,
'R2': -5.012843605607331,
}
}


@@ -1,99 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from unittest.mock import Mock
import numpy as np
from miplearn import PrimalSolutionComponent
from miplearn.classifiers import Classifier
from miplearn.tests import get_test_pyomo_instances
def test_predict():
instances, models = get_test_pyomo_instances()
comp = PrimalSolutionComponent()
comp.fit(instances)
solution = comp.predict(instances[0])
assert "x" in solution
assert 0 in solution["x"]
assert 1 in solution["x"]
assert 2 in solution["x"]
assert 3 in solution["x"]
def test_evaluate():
instances, models = get_test_pyomo_instances()
clf_zero = Mock(spec=Classifier)
clf_zero.predict_proba = Mock(return_value=np.array([
[0., 1.], # x[0]
[0., 1.], # x[1]
[1., 0.], # x[2]
[1., 0.], # x[3]
]))
clf_one = Mock(spec=Classifier)
clf_one.predict_proba = Mock(return_value=np.array([
[1., 0.], # x[0] instances[0]
[1., 0.], # x[1] instances[0]
[0., 1.], # x[2] instances[0]
[1., 0.], # x[3] instances[0]
]))
comp = PrimalSolutionComponent(classifier=[clf_zero, clf_one],
threshold=0.50)
comp.fit(instances[:1])
assert comp.predict(instances[0]) == {"x": {0: 0,
1: 0,
2: 1,
3: None}}
assert instances[0].solution == {"x": {0: 1,
1: 0,
2: 1,
3: 1}}
ev = comp.evaluate(instances[:1])
assert ev == {'Fix one': {0: {'Accuracy': 0.5,
'Condition negative': 1,
'Condition negative (%)': 25.0,
'Condition positive': 3,
'Condition positive (%)': 75.0,
'F1 score': 0.5,
'False negative': 2,
'False negative (%)': 50.0,
'False positive': 0,
'False positive (%)': 0.0,
'Precision': 1.0,
'Predicted negative': 3,
'Predicted negative (%)': 75.0,
'Predicted positive': 1,
'Predicted positive (%)': 25.0,
'Recall': 0.3333333333333333,
'True negative': 1,
'True negative (%)': 25.0,
'True positive': 1,
'True positive (%)': 25.0}},
'Fix zero': {0: {'Accuracy': 0.75,
'Condition negative': 3,
'Condition negative (%)': 75.0,
'Condition positive': 1,
'Condition positive (%)': 25.0,
'F1 score': 0.6666666666666666,
'False negative': 0,
'False negative (%)': 0.0,
'False positive': 1,
'False positive (%)': 25.0,
'Precision': 0.5,
'Predicted negative': 2,
'Predicted negative (%)': 50.0,
'Predicted positive': 2,
'Predicted positive (%)': 50.0,
'Recall': 1.0,
'True negative': 2,
'True negative (%)': 50.0,
'True positive': 1,
'True positive (%)': 25.0}}}
def test_primal_parallel_fit():
instances, models = get_test_pyomo_instances()
comp = PrimalSolutionComponent()
comp.fit(instances, n_jobs=2)
assert len(comp.classifiers) == 2
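The "Fix one" metrics asserted in test_evaluate follow from a standard confusion matrix. A minimal sketch using the arrays from the test above (actual solution x = [1, 0, 1, 1]; with the 0.50 threshold, only x[2] is predicted as one):

```python
import numpy as np

actual = np.array([1, 0, 1, 1])  # instances[0].solution for x
pred = np.array([0, 0, 1, 0])    # "fix one": only x[2] crosses the threshold

tp = int(((pred == 1) & (actual == 1)).sum())  # 1
fp = int(((pred == 1) & (actual == 0)).sum())  # 0
fn = int(((pred == 0) & (actual == 1)).sum())  # 2
tn = int(((pred == 0) & (actual == 0)).sum())  # 1
precision = tp / (tp + fp)                     # 1.0
recall = tp / (tp + fn)                        # 0.333...
f1 = 2 * precision * recall / (precision + recall)  # 0.5
accuracy = (tp + tn) / len(actual)             # 0.5
```

These match the 'Accuracy', 'Precision', 'Recall', and 'F1 score' entries under 'Fix one' in the expected evaluation dictionary.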


@@ -1,188 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from unittest.mock import Mock, call
from miplearn import (RelaxationComponent,
LearningSolver,
Instance,
InternalSolver)
from miplearn.classifiers import Classifier
def test_usage_with_solver():
solver = Mock(spec=LearningSolver)
internal = solver.internal_solver = Mock(spec=InternalSolver)
internal.get_constraint_ids = Mock(return_value=["c1", "c2", "c3", "c4"])
internal.get_constraint_slacks = Mock(side_effect=lambda: {
"c1": 0.5,
"c2": 0.0,
"c3": 0.0,
"c4": 1.4,
})
instance = Mock(spec=Instance)
instance.get_constraint_features = Mock(side_effect=lambda cid: {
"c2": [1.0, 0.0],
"c3": [0.5, 0.5],
"c4": [1.0],
}[cid])
instance.get_constraint_category = Mock(side_effect=lambda cid: {
"c1": None,
"c2": "type-a",
"c3": "type-a",
"c4": "type-b",
}[cid])
component = RelaxationComponent()
component.classifiers = {
"type-a": Mock(spec=Classifier),
"type-b": Mock(spec=Classifier),
}
component.classifiers["type-a"].predict_proba = \
Mock(return_value=[
[0.20, 0.80],
[0.05, 0.95],
])
component.classifiers["type-b"].predict_proba = \
Mock(return_value=[
[0.02, 0.98],
])
# LearningSolver calls before_solve
component.before_solve(solver, instance, None)
# Should relax integrality of the problem
internal.relax.assert_called_once()
# Should query list of constraints
internal.get_constraint_ids.assert_called_once()
# Should query category and features for each constraint in the model
assert instance.get_constraint_category.call_count == 4
instance.get_constraint_category.assert_has_calls([
call("c1"), call("c2"), call("c3"), call("c4"),
])
# For constraints with non-null categories, should ask for features
assert instance.get_constraint_features.call_count == 3
instance.get_constraint_features.assert_has_calls([
call("c2"), call("c3"), call("c4"),
])
# Should ask ML to predict whether constraint should be removed
component.classifiers["type-a"].predict_proba.assert_called_once_with([[1.0, 0.0], [0.5, 0.5]])
component.classifiers["type-b"].predict_proba.assert_called_once_with([[1.0]])
# Should ask internal solver to remove constraints predicted as redundant
assert internal.extract_constraint.call_count == 2
internal.extract_constraint.assert_has_calls([
call("c3"), call("c4"),
])
# LearningSolver calls after_solve
component.after_solve(solver, instance, None, None)
# Should query slack for all constraints
internal.get_constraint_slacks.assert_called_once()
# Should store constraint slacks in instance object
assert hasattr(instance, "slacks")
assert instance.slacks == {
"c1": 0.5,
"c2": 0.0,
"c3": 0.0,
"c4": 1.4,
}
def test_x_y_fit_predict_evaluate():
instances = [Mock(spec=Instance), Mock(spec=Instance)]
component = RelaxationComponent(slack_tolerance=0.05,
threshold=0.80)
component.classifiers = {
"type-a": Mock(spec=Classifier),
"type-b": Mock(spec=Classifier),
}
component.classifiers["type-a"].predict_proba = \
Mock(return_value=[
[0.20, 0.80],
])
component.classifiers["type-b"].predict_proba = \
Mock(return_value=[
[0.50, 0.50],
[0.05, 0.95],
])
# First mock instance
instances[0].slacks = {
"c1": 0.00,
"c2": 0.05,
"c3": 0.00,
"c4": 30.0,
}
instances[0].get_constraint_category = Mock(side_effect=lambda cid: {
"c1": None,
"c2": "type-a",
"c3": "type-a",
"c4": "type-b",
}[cid])
instances[0].get_constraint_features = Mock(side_effect=lambda cid: {
"c2": [1.0, 0.0],
"c3": [0.5, 0.5],
"c4": [1.0],
}[cid])
# Second mock instance
instances[1].slacks = {
"c1": 0.00,
"c3": 0.30,
"c4": 0.00,
"c5": 0.00,
}
instances[1].get_constraint_category = Mock(side_effect=lambda cid: {
"c1": None,
"c3": "type-a",
"c4": "type-b",
"c5": "type-b",
}[cid])
instances[1].get_constraint_features = Mock(side_effect=lambda cid: {
"c3": [0.3, 0.4],
"c4": [0.7],
"c5": [0.8],
}[cid])
expected_x = {
"type-a": [[1.0, 0.0], [0.5, 0.5], [0.3, 0.4]],
"type-b": [[1.0], [0.7], [0.8]],
}
expected_y = {
"type-a": [[0], [0], [1]],
"type-b": [[1], [0], [0]]
}
# Should build X and Y matrices correctly
assert component.x(instances) == expected_x
assert component.y(instances) == expected_y
# Should pass along X and Y matrices to classifiers
component.fit(instances)
component.classifiers["type-a"].fit.assert_called_with(expected_x["type-a"], expected_y["type-a"])
component.classifiers["type-b"].fit.assert_called_with(expected_x["type-b"], expected_y["type-b"])
assert component.predict(expected_x) == {
"type-a": [[1]],
"type-b": [[0], [1]]
}
ev = component.evaluate(instances[1])
assert ev["True positive"] == 1
assert ev["True negative"] == 1
assert ev["False positive"] == 1
assert ev["False negative"] == 0
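The expected_y labels above follow a simple rule implied by the test: a constraint whose slack is strictly greater than slack_tolerance is labeled redundant (1), otherwise not (0). A minimal sketch for instances[0], assuming exactly that rule:

```python
# Slacks of instances[0], as in the test; slack strictly greater than
# the tolerance marks the constraint as redundant (label 1).
slack_tolerance = 0.05
slacks = {"c2": 0.05, "c3": 0.00, "c4": 30.0}
labels = {cid: [int(s > slack_tolerance)] for cid, s in slacks.items()}
# c2 sits exactly at the tolerance, so it is labeled 0, not 1
```

This reproduces the first two rows of expected_y["type-a"] ([0], [0]) and the first row of expected_y["type-b"] ([1]); c1 is excluded because its category is None.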


@@ -1,105 +0,0 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import logging
from abc import ABC, abstractmethod
import numpy as np
from tqdm import tqdm
logger = logging.getLogger(__name__)
class Extractor(ABC):
@abstractmethod
def extract(self, instances):
pass
@staticmethod
def split_variables(instance):
assert hasattr(instance, "lp_solution")
result = {}
for var_name in instance.lp_solution:
for index in instance.lp_solution[var_name]:
category = instance.get_variable_category(var_name, index)
if category is None:
continue
if category not in result:
result[category] = []
result[category] += [(var_name, index)]
return result
class VariableFeaturesExtractor(Extractor):
def extract(self, instances):
result = {}
for instance in tqdm(instances,
desc="Extract (vars)",
disable=len(instances) < 5):
instance_features = instance.get_instance_features()
var_split = self.split_variables(instance)
for (category, var_index_pairs) in var_split.items():
if category not in result:
result[category] = []
for (var_name, index) in var_index_pairs:
result[category] += [
instance_features.tolist() + \
instance.get_variable_features(var_name, index).tolist() + \
[instance.lp_solution[var_name][index]]
]
for category in result:
result[category] = np.array(result[category])
return result
class SolutionExtractor(Extractor):
def __init__(self, relaxation=False):
self.relaxation = relaxation
def extract(self, instances):
result = {}
for instance in tqdm(instances,
desc="Extract (solution)",
disable=len(instances) < 5):
var_split = self.split_variables(instance)
for (category, var_index_pairs) in var_split.items():
if category not in result:
result[category] = []
for (var_name, index) in var_index_pairs:
if self.relaxation:
v = instance.lp_solution[var_name][index]
else:
v = instance.solution[var_name][index]
if v is None:
result[category] += [[0, 0]]
else:
result[category] += [[1 - v, v]]
for category in result:
result[category] = np.array(result[category])
return result
class InstanceFeaturesExtractor(Extractor):
def extract(self, instances):
return np.vstack([
np.hstack([
instance.get_instance_features(),
instance.lp_value,
])
for instance in instances
])
class ObjectiveValueExtractor(Extractor):
def __init__(self, kind="lp"):
assert kind in ["lower bound", "upper bound", "lp"]
self.kind = kind
def extract(self, instances):
if self.kind == "lower bound":
return np.array([[instance.lower_bound] for instance in instances])
if self.kind == "upper bound":
return np.array([[instance.upper_bound] for instance in instances])
if self.kind == "lp":
return np.array([[instance.lp_value] for instance in instances])
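SolutionExtractor's target encoding above maps each variable value to a two-element row: a missing value becomes [0, 0], and a known value v becomes [1 - v, v]. A minimal standalone sketch of that encoding:

```python
def encode(v):
    """Target encoding used by SolutionExtractor: a missing value maps
    to [0, 0]; a known value v maps to the pair [1 - v, v]."""
    if v is None:
        return [0, 0]
    return [1 - v, v]
```

For binary variables this is a one-hot encoding of the value; for fractional LP values (relaxation=True) the two entries sum to one.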


@@ -0,0 +1,210 @@
# MIPLearn: Extensible Framework for Learning-Enhanced Mixed-Integer Optimization
# Copyright (C) 2020-2022, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
from typing import Tuple, Optional
import numpy as np
from miplearn.extractors.abstract import FeaturesExtractor
from miplearn.h5 import H5File
class AlvLouWeh2017Extractor(FeaturesExtractor):
def __init__(
self,
with_m1: bool = True,
with_m2: bool = True,
with_m3: bool = True,
):
self.with_m1 = with_m1
self.with_m2 = with_m2
self.with_m3 = with_m3
def get_instance_features(self, h5: H5File) -> np.ndarray:
raise NotImplementedError()
def get_var_features(self, h5: H5File) -> np.ndarray:
"""
Computes static variable features described in:
Alvarez, A. M., Louveaux, Q., & Wehenkel, L. (2017). A machine learning-based
approximation of strong branching. INFORMS Journal on Computing, 29(1),
185-195.
"""
A = h5.get_sparse("static_constr_lhs")
b = h5.get_array("static_constr_rhs")
c = h5.get_array("static_var_obj_coeffs")
c_sa_up = h5.get_array("lp_var_sa_obj_up")
c_sa_down = h5.get_array("lp_var_sa_obj_down")
values = h5.get_array("lp_var_values")
assert A is not None
assert b is not None
assert c is not None
nvars = len(c)
curr = 0
max_n_features = 40
features = np.zeros((nvars, max_n_features))
def push(v: np.ndarray) -> None:
nonlocal curr
assert v.shape == (nvars,), f"{v.shape} != ({nvars},)"
features[:, curr] = v
curr += 1
def push_sign_abs(v: np.ndarray) -> None:
assert v.shape == (nvars,), f"{v.shape} != ({nvars},)"
push(np.sign(v))
push(np.abs(v))
def maxmin(M: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
M_max = np.ravel(M.max(axis=0).todense())
M_min = np.ravel(M.min(axis=0).todense())
return M_max, M_min
with np.errstate(divide="ignore", invalid="ignore"):
# Feature 1
push(np.sign(c))
# Feature 2
c_pos_sum = c[c > 0].sum()
push(np.abs(c) / c_pos_sum)
# Feature 3
c_neg_sum = -c[c < 0].sum()
push(np.abs(c) / c_neg_sum)
if A is not None and self.with_m1:
# Compute A_ji / |b_j|
M1 = A.T.multiply(1.0 / np.abs(b)).T.tocsr()
# Select rows with positive b_j and compute max/min
M1_pos = M1[b > 0, :]
if M1_pos.shape[0] > 0:
M1_pos_max = np.asarray(M1_pos.max(axis=0).todense()).flatten()
M1_pos_min = np.asarray(M1_pos.min(axis=0).todense()).flatten()
else:
M1_pos_max = np.zeros(nvars)
M1_pos_min = np.zeros(nvars)
# Select rows with negative b_j and compute max/min
M1_neg = M1[b < 0, :]
if M1_neg.shape[0] > 0:
M1_neg_max = np.asarray(M1_neg.max(axis=0).todense()).flatten()
M1_neg_min = np.asarray(M1_neg.min(axis=0).todense()).flatten()
else:
M1_neg_max = np.zeros(nvars)
M1_neg_min = np.zeros(nvars)
# Features 4-11
push_sign_abs(M1_pos_min)
push_sign_abs(M1_pos_max)
push_sign_abs(M1_neg_min)
push_sign_abs(M1_neg_max)
if A is not None and self.with_m2:
# Compute |c_i| / A_ij
M2 = A.power(-1).multiply(np.abs(c)).tocsc()
# Compute max/min
M2_max, M2_min = maxmin(M2)
# Make copies of M2 and erase elements based on sign(c)
M2_pos_max = M2_max.copy()
M2_neg_max = M2_max.copy()
M2_pos_min = M2_min.copy()
M2_neg_min = M2_min.copy()
M2_pos_max[c <= 0] = 0
M2_pos_min[c <= 0] = 0
M2_neg_max[c >= 0] = 0
M2_neg_min[c >= 0] = 0
# Features 12-19
push_sign_abs(M2_pos_min)
push_sign_abs(M2_pos_max)
push_sign_abs(M2_neg_min)
push_sign_abs(M2_neg_max)
if A is not None and self.with_m3:
# Compute row sums
S_pos = A.maximum(0).sum(axis=1)
S_neg = np.abs(A.minimum(0).sum(axis=1))
# Divide A by positive and negative row sums
M3_pos = A.multiply(1 / S_pos).tocsr()
M3_neg = A.multiply(1 / S_neg).tocsr()
# Remove +inf and -inf generated by division by zero
M3_pos.data[~np.isfinite(M3_pos.data)] = 0.0
M3_neg.data[~np.isfinite(M3_neg.data)] = 0.0
M3_pos.eliminate_zeros()
M3_neg.eliminate_zeros()
# Split each matrix into positive and negative parts
M3_pos_pos = M3_pos.maximum(0)
M3_pos_neg = -(M3_pos.minimum(0))
M3_neg_pos = M3_neg.maximum(0)
M3_neg_neg = -(M3_neg.minimum(0))
# Calculate max/min
M3_pos_pos_max, M3_pos_pos_min = maxmin(M3_pos_pos)
M3_pos_neg_max, M3_pos_neg_min = maxmin(M3_pos_neg)
M3_neg_pos_max, M3_neg_pos_min = maxmin(M3_neg_pos)
M3_neg_neg_max, M3_neg_neg_min = maxmin(M3_neg_neg)
# Features 20-35
push_sign_abs(M3_pos_pos_max)
push_sign_abs(M3_pos_pos_min)
push_sign_abs(M3_pos_neg_max)
push_sign_abs(M3_pos_neg_min)
push_sign_abs(M3_neg_pos_max)
push_sign_abs(M3_neg_pos_min)
push_sign_abs(M3_neg_neg_max)
push_sign_abs(M3_neg_neg_min)
# Feature 36: only available during B&B
# Feature 37
if values is not None:
push(
np.minimum(
values - np.floor(values),
np.ceil(values) - values,
)
)
# Features 38-43: only available during B&B
# Feature 44
if c_sa_up is not None:
assert c_sa_down is not None
# Features 44 and 46
push(np.sign(c_sa_up))
push(np.sign(c_sa_down))
# Feature 45 is duplicated
# Feature 47-48
push(np.log(c - c_sa_down / np.sign(c)))
push(np.log(c - c_sa_up / np.sign(c)))
# Features 49-64: only available during B&B
features = features[:, 0:curr]
_fix_infinity(features)
return features
def get_constr_features(self, h5: H5File) -> np.ndarray:
raise NotImplementedError()
def _fix_infinity(m: Optional[np.ndarray]) -> None:
if m is None:
return
masked = np.ma.masked_invalid(m) # type: ignore
max_values = np.max(masked, axis=0)
min_values = np.min(masked, axis=0)
m[:] = np.maximum(np.minimum(m, max_values), min_values)
m[~np.isfinite(m)] = 0.0
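The clipping performed by _fix_infinity can be seen on a toy matrix. This sketch repeats the same steps outside the module, on illustrative values: infinities are clipped column-wise to the finite max/min, and NaN (which survives clipping) is zeroed out:

```python
import numpy as np

# Toy matrix with one +inf, one -inf and one NaN (illustrative values).
m = np.array([[1.0, np.inf],
              [2.0, 5.0],
              [-np.inf, np.nan]])

masked = np.ma.masked_invalid(m)
max_values = np.max(masked, axis=0)  # column-wise max over finite entries
min_values = np.min(masked, axis=0)  # column-wise min over finite entries
m[:] = np.maximum(np.minimum(m, max_values), min_values)  # clip to finite range
m[~np.isfinite(m)] = 0.0             # NaN propagates through clipping; zero it
```

After these steps every entry is finite: +inf and -inf land on the finite column extremes, while NaN becomes 0.0.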


@@ -0,0 +1,19 @@
from abc import ABC, abstractmethod
import numpy as np
from miplearn.h5 import H5File
class FeaturesExtractor(ABC):
@abstractmethod
def get_instance_features(self, h5: H5File) -> np.ndarray:
pass
@abstractmethod
def get_var_features(self, h5: H5File) -> np.ndarray:
pass
@abstractmethod
def get_constr_features(self, h5: H5File) -> np.ndarray:
pass
