mirror of
https://github.com/ANL-CEEESA/MIPLearn.git
synced 2025-12-06 01:18:52 -06:00
MIPLearn v0.3
docs/guide/collectors.ipynb
{
"cells": [
{
"cell_type": "markdown",
"id": "505cea0b-5f5d-478a-9107-42bb5515937d",
"metadata": {},
"source": [
"# Training Data Collectors\n",
"The first step in solving mixed-integer optimization problems with the assistance of supervised machine learning is solving a large set of training instances and collecting raw training data. In this section, we describe the various training data collectors included in MIPLearn. By convention, the framework stores all training data in files with a specific format (namely, HDF5); we also briefly describe this format and the rationale for choosing it.\n",
"\n",
"## Overview\n",
"\n",
"In MIPLearn, a **collector** is a class that solves or analyzes the problem and collects raw data which may later be useful for machine learning methods. Collectors, by convention, take as input: (i) a list of problem data filenames, in gzipped pickle format, ending with `.pkl.gz`; and (ii) a function that builds the optimization model, such as `build_tsp_model`. After processing is done, collectors store the training data in an HDF5 file located alongside the problem data. For example, if the problem data is stored in the file `problem.pkl.gz`, then the collector writes to `problem.h5`. Collectors are, in general, very time-consuming, as they may need to solve the problem to optimality, potentially multiple times.\n",
"\n",
"## HDF5 Format\n",
"\n",
"MIPLearn stores all training data in [HDF5][HDF5] (Hierarchical Data Format, Version 5) files. The HDF format was originally developed by the [National Center for Supercomputing Applications][NCSA] (NCSA) for storing and organizing large amounts of data, and supports a variety of data types, including integers, floating-point numbers, strings, and arrays. Compared to other formats, such as CSV, JSON or SQLite, the HDF5 format provides several advantages for MIPLearn, including:\n",
"\n",
"- *Storage of multiple scalars, vectors and matrices in a single file* --- This allows MIPLearn to store all training data related to a given problem instance in a single file, which makes training data easier to store, organize and transfer.\n",
"- *High-performance partial I/O* --- Partial I/O allows MIPLearn to read a single element from the training data (e.g. the value of the optimal solution) without loading the entire file into memory or reading it from beginning to end, which dramatically improves performance and reduces memory requirements. This is especially important when processing a large number of training data files.\n",
"- *On-the-fly compression* --- HDF5 files can be transparently compressed, using the gzip method, which reduces storage requirements and accelerates network transfers.\n",
"- *Stable, portable and well-supported data format* --- Training data files are typically expensive to generate. Having a stable and well-supported data format ensures that these files remain usable in the future, potentially even by other non-Python MIP/ML frameworks.\n",
"\n",
"MIPLearn currently uses HDF5 as a simple key-value store for numerical data; more advanced features of the format, such as metadata, are not currently used. Although files generated by MIPLearn can be read with any HDF5 library, such as [h5py][h5py], some convenience functions are provided to make access simpler and less error-prone. Specifically, the class [H5File][H5File], which is built on top of h5py, provides the methods [put_scalar][put_scalar], [put_array][put_array], [put_sparse][put_sparse] and [put_bytes][put_bytes] to store, respectively, scalar values, dense multi-dimensional arrays, sparse multi-dimensional arrays and arbitrary binary data. The corresponding *get* methods are also provided. Compared to pure h5py methods, these methods automatically perform type-checking and gzip compression. The example below shows their usage.\n",
"\n",
"[HDF5]: https://en.wikipedia.org/wiki/Hierarchical_Data_Format\n",
"[NCSA]: https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications\n",
"[h5py]: https://www.h5py.org/\n",
"[H5File]: ../../api/helpers/#miplearn.h5.H5File\n",
"[put_scalar]: ../../api/helpers/#miplearn.h5.H5File.put_scalar\n",
"[put_array]: ../../api/helpers/#miplearn.h5.H5File.put_array\n",
"[put_sparse]: ../../api/helpers/#miplearn.h5.H5File.put_sparse\n",
"[put_bytes]: ../../api/helpers/#miplearn.h5.H5File.put_bytes\n",
"\n",
"\n",
"### Example"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f906fe9c",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"x1 = 1\n",
"x2 = hello world\n",
"x3 = [1 2 3]\n",
"x4 = [[0.37454012 0.9507143 0.7319939 ]\n",
" [0.5986585 0.15601864 0.15599452]\n",
" [0.05808361 0.8661761 0.601115 ]]\n",
"x5 = (2, 3)\t0.68030757\n",
" (3, 2)\t0.45049927\n",
" (4, 0)\t0.013264962\n",
" (0, 2)\t0.94220173\n",
" (4, 2)\t0.5632882\n",
" (2, 1)\t0.3854165\n",
" (1, 1)\t0.015966251\n",
" (3, 0)\t0.23089382\n",
" (4, 4)\t0.24102546\n",
" (1, 3)\t0.68326354\n",
" (3, 1)\t0.6099967\n",
" (0, 3)\t0.8331949\n"
]
}
],
"source": [
"import numpy as np\n",
"import scipy.sparse\n",
"\n",
"from miplearn.h5 import H5File\n",
"\n",
"# Set random seed to make example reproducible\n",
"np.random.seed(42)\n",
"\n",
"# Create a new empty HDF5 file\n",
"with H5File(\"test.h5\", \"w\") as h5:\n",
"    # Store a scalar\n",
"    h5.put_scalar(\"x1\", 1)\n",
"    h5.put_scalar(\"x2\", \"hello world\")\n",
"\n",
"    # Store a dense array and a dense matrix\n",
"    h5.put_array(\"x3\", np.array([1, 2, 3]))\n",
"    h5.put_array(\"x4\", np.random.rand(3, 3))\n",
"\n",
"    # Store a sparse matrix\n",
"    h5.put_sparse(\"x5\", scipy.sparse.random(5, 5, 0.5))\n",
"\n",
"# Re-open the file we just created and print\n",
"# previously-stored data\n",
"with H5File(\"test.h5\", \"r\") as h5:\n",
"    print(\"x1 =\", h5.get_scalar(\"x1\"))\n",
"    print(\"x2 =\", h5.get_scalar(\"x2\"))\n",
"    print(\"x3 =\", h5.get_array(\"x3\"))\n",
"    print(\"x4 =\", h5.get_array(\"x4\"))\n",
"    print(\"x5 =\", h5.get_sparse(\"x5\"))"
]
},
{
"cell_type": "markdown",
"id": "50441907",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"id": "d0000c8d",
"metadata": {},
"source": [
"## Basic collector\n",
"\n",
"[BasicCollector][BasicCollector] is the most fundamental collector, and performs the following steps:\n",
"\n",
"1. Extracts all model data, such as the objective function and constraint right-hand sides, into NumPy arrays, which can later be easily and efficiently accessed without rebuilding the model or invoking the solver;\n",
"2. Solves the linear relaxation of the problem and stores its optimal solution, basis status and sensitivity information, among other data;\n",
"3. Solves the original mixed-integer optimization problem to optimality and stores its optimal solution, along with solve statistics, such as the number of explored nodes and the wallclock time.\n",
"\n",
"Data extracted in Steps 1, 2 and 3 above are prefixed, respectively, with `static_`, `lp_` and `mip_`. The entire set of fields is shown in the table below.\n",
"\n",
"[BasicCollector]: ../../api/collectors/#miplearn.collectors.basic.BasicCollector\n"
]
},
{
"cell_type": "markdown",
"id": "6529f667",
"metadata": {},
"source": [
"### Data fields\n",
"\n",
"| Field | Type | Description |\n",
"|-------|------|-------------|\n",
"| `static_constr_lhs` | `(nconstrs, nvars)` | Constraint left-hand sides, in sparse matrix format |\n",
"| `static_constr_names` | `(nconstrs,)` | Constraint names |\n",
"| `static_constr_rhs` | `(nconstrs,)` | Constraint right-hand sides |\n",
"| `static_constr_sense` | `(nconstrs,)` | Constraint senses (`\"<\"`, `\">\"` or `\"=\"`) |\n",
"| `static_obj_offset` | `float` | Constant value added to the objective function |\n",
"| `static_sense` | `str` | `\"min\"` for minimization problems, `\"max\"` otherwise |\n",
"| `static_var_lower_bounds` | `(nvars,)` | Variable lower bounds |\n",
"| `static_var_names` | `(nvars,)` | Variable names |\n",
"| `static_var_obj_coeffs` | `(nvars,)` | Objective coefficients |\n",
"| `static_var_types` | `(nvars,)` | Types of the decision variables (`\"C\"`, `\"B\"` and `\"I\"` for continuous, binary and integer, respectively) |\n",
"| `static_var_upper_bounds` | `(nvars,)` | Variable upper bounds |\n",
"| `lp_constr_basis_status` | `(nconstrs,)` | Constraint basis status (`0` for basic, `-1` for non-basic) |\n",
"| `lp_constr_dual_values` | `(nconstrs,)` | Constraint dual values (or shadow prices) |\n",
"| `lp_constr_sa_rhs_{up,down}` | `(nconstrs,)` | Sensitivity information for the constraint RHS |\n",
"| `lp_constr_slacks` | `(nconstrs,)` | Constraint slacks in the solution to the LP relaxation |\n",
"| `lp_obj_value` | `float` | Optimal value of the LP relaxation |\n",
"| `lp_var_basis_status` | `(nvars,)` | Variable basis status (`0`, `-1`, `-2` or `-3` for basic, non-basic at lower bound, non-basic at upper bound, and superbasic, respectively) |\n",
"| `lp_var_reduced_costs` | `(nvars,)` | Variable reduced costs |\n",
"| `lp_var_sa_{obj,ub,lb}_{up,down}` | `(nvars,)` | Sensitivity information for the variable objective coefficients, lower and upper bounds |\n",
"| `lp_var_values` | `(nvars,)` | Optimal solution to the LP relaxation |\n",
"| `lp_wallclock_time` | `float` | Time taken to solve the LP relaxation (in seconds) |\n",
"| `mip_constr_slacks` | `(nconstrs,)` | Constraint slacks in the best MIP solution |\n",
"| `mip_gap` | `float` | Relative MIP optimality gap |\n",
"| `mip_node_count` | `float` | Number of explored branch-and-bound nodes |\n",
"| `mip_obj_bound` | `float` | Dual bound |\n",
"| `mip_obj_value` | `float` | Value of the best MIP solution |\n",
"| `mip_var_values` | `(nvars,)` | Best MIP solution |\n",
"| `mip_wallclock_time` | `float` | Time taken to solve the MIP (in seconds) |"
]
},
{
"cell_type": "markdown",
"id": "f2894594",
"metadata": {},
"source": [
"### Example\n",
"\n",
"The example below shows how to generate a few random instances of the traveling salesman problem, store their problem data, run the collector, and print some of the training data to the screen."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ac6f8c6f",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"lp_obj_value = 2909.0\n",
"mip_obj_value = 2921.0\n"
]
}
],
"source": [
"import random\n",
"import numpy as np\n",
"from scipy.stats import uniform, randint\n",
"from glob import glob\n",
"\n",
"from miplearn.problems.tsp import (\n",
"    TravelingSalesmanGenerator,\n",
"    build_tsp_model,\n",
")\n",
"from miplearn.io import write_pkl_gz\n",
"from miplearn.h5 import H5File\n",
"from miplearn.collectors.basic import BasicCollector\n",
"\n",
"# Set random seed to make example reproducible.\n",
"random.seed(42)\n",
"np.random.seed(42)\n",
"\n",
"# Generate a few instances of the traveling salesman problem.\n",
"data = TravelingSalesmanGenerator(\n",
"    n=randint(low=10, high=11),\n",
"    x=uniform(loc=0.0, scale=1000.0),\n",
"    y=uniform(loc=0.0, scale=1000.0),\n",
"    gamma=uniform(loc=0.90, scale=0.20),\n",
"    fix_cities=True,\n",
"    round=True,\n",
").generate(10)\n",
"\n",
"# Save instance data to data/tsp/00000.pkl.gz, data/tsp/00001.pkl.gz, ...\n",
"write_pkl_gz(data, \"data/tsp\")\n",
"\n",
"# Solve all instances and collect basic solution information.\n",
"# Process at most four instances in parallel.\n",
"bc = BasicCollector()\n",
"bc.collect(glob(\"data/tsp/*.pkl.gz\"), build_tsp_model, n_jobs=4)\n",
"\n",
"# Read and print some training data for the first instance.\n",
"with H5File(\"data/tsp/00000.h5\", \"r\") as h5:\n",
"    print(\"lp_obj_value = \", h5.get_scalar(\"lp_obj_value\"))\n",
"    print(\"mip_obj_value = \", h5.get_scalar(\"mip_obj_value\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "78f0b07a",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
docs/guide/features.ipynb
{
"cells": [
{
"cell_type": "markdown",
"id": "cdc6ebe9-d1d4-4de1-9b5a-4fc8ef57b11b",
"metadata": {},
"source": [
"# Feature Extractors\n",
"\n",
"On the previous page, we introduced *training data collectors*, which solve the optimization problem and collect raw training data, such as the optimal solution. On this page, we introduce **feature extractors**, which take the raw training data, stored in HDF5 files, and extract relevant information in order to train a machine learning model."
]
},
{
"cell_type": "markdown",
"id": "b4026de5",
"metadata": {},
"source": [
"\n",
"## Overview\n",
"\n",
"Feature extraction is an important step in building a machine learning model because it reduces the complexity of the data and converts it into a format that is more easily processed. For example, previous research has proposed converting absolute variable coefficients into relative values, which are invariant to various transformations, such as problem scaling, making them more amenable to learning. Various other transformations have also been described.\n",
"\n",
"In the framework, we treat data collection and feature extraction as two separate steps to accelerate the model development cycle. Specifically, collectors are typically time-consuming, as they often need to solve the problem to optimality, and therefore focus on collecting and storing all data that may or may not be relevant, in its raw format. Feature extractors, on the other hand, focus entirely on filtering the data and improving its representation, and are therefore much faster to run. Experimenting with new data representations, therefore, can be done without re-solving the instances.\n",
"\n",
"In MIPLearn, extractors implement the abstract class [FeatureExtractor][FeatureExtractor], which has methods that take as input an [H5File][H5File] and produce either: (i) instance features, which describe the entire instance; (ii) variable features, which describe a particular decision variable; or (iii) constraint features, which describe a particular constraint. An extractor is free to implement only a subset of these methods if it is known that it will not be used with a machine learning component that requires the other types of features.\n",
"\n",
"[FeatureExtractor]: ../../api/collectors/#miplearn.features.fields.FeaturesExtractor\n",
"[H5File]: ../../api/helpers/#miplearn.h5.H5File"
]
},
{
"cell_type": "markdown",
"id": "b2d9736c",
"metadata": {},
"source": [
"\n",
"## H5FieldsExtractor\n",
"\n",
"[H5FieldsExtractor][H5FieldsExtractor], the simplest extractor in MIPLearn, simply extracts data that is already available in the HDF5 file, assembles it into a matrix and returns it as-is. The fields used to build instance, variable and constraint features are user-specified. The class also performs checks to ensure that the shapes of the returned matrices make sense."
]
},
{
"cell_type": "markdown",
"id": "e8184dff",
"metadata": {},
"source": [
"### Example\n",
"\n",
"The example below demonstrates the usage of H5FieldsExtractor on a randomly generated instance of the multi-dimensional knapsack problem."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ed9a18c8",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"instance features (11,) \n",
" [-1531.24308771 -350. -692. -454.\n",
" -709. -605. -543. -321.\n",
" -674. -571. -341. ]\n",
"variable features (10, 4) \n",
" [[-1.53124309e+03 -3.50000000e+02 0.00000000e+00 9.43468018e+01]\n",
" [-1.53124309e+03 -6.92000000e+02 2.51703322e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -4.54000000e+02 0.00000000e+00 8.25504150e+01]\n",
" [-1.53124309e+03 -7.09000000e+02 1.11373022e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -6.05000000e+02 1.00000000e+00 -1.26055283e+02]\n",
" [-1.53124309e+03 -5.43000000e+02 0.00000000e+00 1.68693771e+02]\n",
" [-1.53124309e+03 -3.21000000e+02 1.07488781e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -6.74000000e+02 8.82293701e-01 0.00000000e+00]\n",
" [-1.53124309e+03 -5.71000000e+02 0.00000000e+00 1.41129074e+02]\n",
" [-1.53124309e+03 -3.41000000e+02 1.28830120e-01 0.00000000e+00]]\n",
"constraint features (5, 3) \n",
" [[ 1.3100000e+03 -1.5978307e-01 0.0000000e+00]\n",
" [ 9.8800000e+02 -3.2881632e-01 0.0000000e+00]\n",
" [ 1.0040000e+03 -4.0601316e-01 0.0000000e+00]\n",
" [ 1.2690000e+03 -1.3659772e-01 0.0000000e+00]\n",
" [ 1.0070000e+03 -2.8800571e-01 0.0000000e+00]]\n"
]
}
],
"source": [
"from glob import glob\n",
"from shutil import rmtree\n",
"\n",
"import numpy as np\n",
"from scipy.stats import uniform, randint\n",
"\n",
"from miplearn.collectors.basic import BasicCollector\n",
"from miplearn.extractors.fields import H5FieldsExtractor\n",
"from miplearn.h5 import H5File\n",
"from miplearn.io import write_pkl_gz\n",
"from miplearn.problems.multiknapsack import (\n",
"    MultiKnapsackGenerator,\n",
"    build_multiknapsack_model,\n",
")\n",
"\n",
"# Set random seed to make example reproducible\n",
"np.random.seed(42)\n",
"\n",
"# Generate some random multiknapsack instances\n",
"rmtree(\"data/multiknapsack/\", ignore_errors=True)\n",
"write_pkl_gz(\n",
"    MultiKnapsackGenerator(\n",
"        n=randint(low=10, high=11),\n",
"        m=randint(low=5, high=6),\n",
"        w=uniform(loc=0, scale=1000),\n",
"        K=uniform(loc=100, scale=0),\n",
"        u=uniform(loc=1, scale=0),\n",
"        alpha=uniform(loc=0.25, scale=0),\n",
"        w_jitter=uniform(loc=0.95, scale=0.1),\n",
"        p_jitter=uniform(loc=0.75, scale=0.5),\n",
"        fix_w=True,\n",
"    ).generate(10),\n",
"    \"data/multiknapsack\",\n",
")\n",
"\n",
"# Run the basic collector\n",
"BasicCollector().collect(\n",
"    glob(\"data/multiknapsack/*\"),\n",
"    build_multiknapsack_model,\n",
"    n_jobs=4,\n",
")\n",
"\n",
"ext = H5FieldsExtractor(\n",
"    # Use as instance features the value of the LP relaxation and the\n",
"    # vector of objective coefficients.\n",
"    instance_fields=[\n",
"        \"lp_obj_value\",\n",
"        \"static_var_obj_coeffs\",\n",
"    ],\n",
"    # For each variable, use as features the optimal value of the LP\n",
"    # relaxation, the variable objective coefficient, the variable's\n",
"    # value in the LP solution, and its reduced cost.\n",
"    var_fields=[\n",
"        \"lp_obj_value\",\n",
"        \"static_var_obj_coeffs\",\n",
"        \"lp_var_values\",\n",
"        \"lp_var_reduced_costs\",\n",
"    ],\n",
"    # For each constraint, use as features the RHS, dual value and slack.\n",
"    constr_fields=[\n",
"        \"static_constr_rhs\",\n",
"        \"lp_constr_dual_values\",\n",
"        \"lp_constr_slacks\",\n",
"    ],\n",
")\n",
"\n",
"with H5File(\"data/multiknapsack/00000.h5\") as h5:\n",
"    # Extract and print instance features\n",
"    x1 = ext.get_instance_features(h5)\n",
"    print(\"instance features\", x1.shape, \"\\n\", x1)\n",
"\n",
"    # Extract and print variable features\n",
"    x2 = ext.get_var_features(h5)\n",
"    print(\"variable features\", x2.shape, \"\\n\", x2)\n",
"\n",
"    # Extract and print constraint features\n",
"    x3 = ext.get_constr_features(h5)\n",
"    print(\"constraint features\", x3.shape, \"\\n\", x3)\n"
]
},
{
"cell_type": "markdown",
"id": "2da2e74e",
"metadata": {},
"source": [
"\n",
"[H5FieldsExtractor]: ../../api/collectors/#miplearn.extractors.fields.H5FieldsExtractor"
]
},
{
"cell_type": "markdown",
"id": "d879c0d3",
"metadata": {},
"source": [
"<div class=\"alert alert-warning\">\n",
"Warning\n",
"\n",
"You should ensure that the number of features remains the same across all relevant HDF5 files. In the previous example, to illustrate this issue, we used variable objective coefficients as instance features. While this is allowed, note that it requires all problem instances to have the same number of variables; otherwise the number of features would vary from instance to instance and MIPLearn would be unable to concatenate the matrices.\n",
"</div>"
]
},
{
"cell_type": "markdown",
"id": "cd0ba071",
"metadata": {},
"source": [
"## AlvLouWeh2017Extractor\n",
"\n",
"Alvarez, Louveaux and Wehenkel (2017) proposed a set of features to describe a particular decision variable at a given node of the branch-and-bound tree, and applied it to the problem of mimicking strong branching decisions. The class `AlvLouWeh2017Extractor` implements a subset of these features (40 out of 64), namely those that are available outside of the branch-and-bound tree. Some features are derived from the static definition of the problem (i.e. from the objective function and constraint data), while others are derived from the solution to the LP relaxation. The features have been designed to be: (i) independent of the size of the problem; (ii) invariant with respect to irrelevant problem transformations, such as row and column permutations; and (iii) independent of the scale of the problem. We refer to the paper for a more complete description.\n",
"\n",
"### Example"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a1bc38fe",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"x1 (10, 40) \n",
" [[-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 6.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 6.00e-01 1.00e+00 1.75e+01 1.00e+00 2.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 1.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 7.00e-01 1.00e+00 5.10e+00 1.00e+00 2.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 3.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 9.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 5.00e-01 1.00e+00 1.30e+01 1.00e+00 2.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 2.00e-01 1.00e+00 9.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 8.00e-01 1.00e+00 3.40e+00 1.00e+00 2.00e-01\n",
" 1.00e+00 1.00e-01 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 7.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 6.00e-01 1.00e+00 3.80e+00 1.00e+00 2.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 8.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 7.00e-01 1.00e+00 3.30e+00 1.00e+00 2.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 3.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 1.00e+00 1.00e+00 5.70e+00 1.00e+00 1.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 1.00e-01 1.00e+00 6.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 8.00e-01 1.00e+00 6.80e+00 1.00e+00 2.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 4.00e-01 1.00e+00 6.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 8.00e-01 1.00e+00 1.40e+00 1.00e+00 1.00e-01\n",
" 1.00e+00 1.00e-01 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 -1.00e+00 0.00e+00 1.00e+20]\n",
" [-1.00e+00 1.00e+20 1.00e-01 1.00e+00 0.00e+00 1.00e+00 5.00e-01\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 1.00e+00 5.00e-01 1.00e+00 7.60e+00 1.00e+00 1.00e-01\n",
" 1.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00\n",
" 1.00e-01 -1.00e+00 -1.00e+00 0.00e+00 0.00e+00]]\n"
]
}
],
"source": [
"from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n",
"from miplearn.h5 import H5File\n",
"\n",
"# Build the extractor\n",
"ext = AlvLouWeh2017Extractor()\n",
"\n",
"# Open previously-created multiknapsack training data\n",
"with H5File(\"data/multiknapsack/00000.h5\") as h5:\n",
"    # Extract and print variable features\n",
"    x1 = ext.get_var_features(h5)\n",
"    print(\"x1\", x1.shape, \"\\n\", x1.round(1))"
]
},
{
"cell_type": "markdown",
"id": "286c9927",
"metadata": {},
"source": [
"<div class=\"alert alert-info\">\n",
"References\n",
"\n",
"* **Alvarez, Alejandro Marcos.** *Computational and theoretical synergies between linear optimization and supervised machine learning.* (2016). University of Liège.\n",
"* **Alvarez, Alejandro Marcos, Quentin Louveaux, and Louis Wehenkel.** *A machine learning-based approximation of strong branching.* INFORMS Journal on Computing 29.1 (2017): 185-195.\n",
"\n",
"</div>"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
docs/guide/primal.ipynb
|
||||
{
"cells": [
{
"cell_type": "markdown",
"id": "880cf4c7-d3c4-4b92-85c7-04a32264cdae",
"metadata": {},
"source": [
"# Primal Components\n",
"\n",
"In MIPLearn, a **primal component** is a class that uses machine learning to predict a (potentially partial) assignment of values to the decision variables of the problem. Predicting high-quality primal solutions may be beneficial, as they allow the MIP solver to prune potentially large portions of the search space. Alternatively, if proof of optimality is not required, the MIP solver can be used to complete the partial solution generated by the machine learning model and double-check its feasibility. MIPLearn allows both of these usage patterns.\n",
"\n",
"On this page, we describe the four primal components currently included in MIPLearn, which employ machine learning in different ways. Each component is highly configurable, and accepts a user-provided machine learning model, which it uses for all predictions. Each component can also be configured to provide the solution to the solver in multiple ways, depending on whether proof of optimality is required.\n",
"\n",
"## Primal component actions\n",
"\n",
"Before presenting the primal components themselves, we briefly discuss the three ways a solution may be provided to the solver. Each approach has benefits and limitations, which we also discuss in this section. All primal components can be configured to use any of the following approaches.\n",
"\n",
"The first approach is to provide the solution to the solver as a **warm start**. This is implemented by the class [SetWarmStart](SetWarmStart). The main advantage is that this method maintains all optimality and feasibility guarantees of the MIP solver, while still providing significant performance benefits for various classes of problems. If the machine learning model is able to predict multiple solutions, it is also possible to set multiple warm starts. In this case, the solver evaluates each warm start, discards the infeasible ones, then proceeds with the one that has the best objective value. The main disadvantage of this approach, compared to the next two, is that it provides relatively modest speedups for most problem classes, and no speedup at all for many others, even when the machine learning predictions are 100% accurate.\n",
"\n",
"[SetWarmStart]: ../../api/components/#miplearn.components.primal.actions.SetWarmStart\n",
"\n",
"The second approach is to **fix the decision variables** to their predicted values, then solve a restricted optimization problem on the remaining variables. This approach is implemented by the class `FixVariables`. The main advantage is its potential speedup: if machine learning can accurately predict values for a significant portion of the decision variables, then the MIP solver can typically complete the solution in a small fraction of the time it would take to find the same solution from scratch. The main disadvantage of this approach is that it loses optimality guarantees; that is, the complete solution found by the MIP solver may no longer be globally optimal. Also, if the machine learning predictions are not sufficiently accurate, there might not even be a feasible assignment for the variables that were left free.\n",
"\n",
"Finally, the third approach, which tries to strike a balance between the two previous ones, is to **enforce proximity** to a given solution. This strategy is implemented by the class `EnforceProximity`. More precisely, given values $\\bar{x}_1,\\ldots,\\bar{x}_n$ for a subset of binary decision variables $x_1,\\ldots,x_n$, this approach adds the constraint\n",
"\n",
"$$\n",
"\\sum_{i : \\bar{x}_i=0} x_i + \\sum_{i : \\bar{x}_i=1} \\left(1 - x_i\\right) \\leq k,\n",
"$$\n",
"to the problem, where $k$ is a user-defined parameter indicating how many of the predicted variables are allowed to deviate from the machine learning suggestion. The main advantage of this approach, compared to fixing variables, is its tolerance to lower-quality machine learning predictions. Its main disadvantage is that it typically leads to smaller speedups, especially for larger values of $k$. This approach also loses optimality guarantees.\n",
"\n",
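"For illustration, suppose $k=3$ and the predicted values for five binary variables are $\\bar{x}_1 = \\bar{x}_2 = \\bar{x}_5 = 1$ and $\\bar{x}_3 = \\bar{x}_4 = 0$. The constraint above then reads\n",
"\n",
"$$\n",
"x_3 + x_4 + (1 - x_1) + (1 - x_2) + (1 - x_5) \\leq 3,\n",
"$$\n",
"\n",
"so any solution accepted by the solver may flip at most three of the five predicted values.\n",
"\n",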
"## Memorizing primal component\n",
"\n",
"A simple machine learning strategy for the prediction of primal solutions is to memorize all distinct solutions seen during training, then try to predict, during inference time, which of those memorized solutions are most likely to be feasible and to provide a good objective value for the current instance. The most promising solutions may alternatively be combined into a single partial solution, which is then provided to the MIP solver. Both variations of this strategy are implemented by the `MemorizingPrimalComponent` class. Note that this component is only applicable if the problem size, and indeed the meaning of each decision variable, remain the same across problem instances.\n",
"\n",
"More precisely, let $I_1,\\ldots,I_n$ be the training instances, and let $\\bar{x}^1,\\ldots,\\bar{x}^n$ be their respective optimal solutions. Given a new instance $I_{n+1}$, `MemorizingPrimalComponent` expects a user-provided binary classifier that assigns (through the `predict_proba` method, following scikit-learn's conventions) a score $\\delta_i$ to each solution $\\bar{x}^i$, such that solutions with higher scores are more likely to be good solutions for $I_{n+1}$. The features provided to the classifier are the instance features computed by a user-provided extractor. Given these scores, the component then performs one of the following two actions, as decided by the user:\n",
"\n",
"1. Selects the top $k$ solutions with the highest scores and provides them to the solver; this is implemented by `SelectTopSolutions`, and it is typically used with the `SetWarmStart` action.\n",
"\n",
"2. Merges the top $k$ solutions into a single partial solution, then provides it to the solver. This is implemented by `MergeTopSolutions`. More precisely, suppose that the machine learning classifier ordered the solutions in the sequence $\\bar{x}^{i_1},\\ldots,\\bar{x}^{i_n}$, with the most promising solutions appearing first, and with ties being broken arbitrarily. The component starts by keeping only the $k$ most promising solutions $\\bar{x}^{i_1},\\ldots,\\bar{x}^{i_k}$. Then it computes, for each binary decision variable $x_l$, its average assigned value $\\tilde{x}_l$:\n",
"$$\n",
"   \\tilde{x}_l = \\frac{1}{k} \\sum_{j=1}^k \\bar{x}^{i_j}_l.\n",
"$$\n",
"   Finally, the component constructs a merged solution $y$, defined as:\n",
"$$\n",
"   y_l = \\begin{cases}\n",
"       0 & \\text{ if } \\tilde{x}_l \\le \\theta_0 \\\\\n",
"       1 & \\text{ if } \\tilde{x}_l \\ge \\theta_1 \\\\\n",
"       \\square & \\text{otherwise,}\n",
"   \\end{cases}\n",
"$$\n",
"   where $\\theta_0$ and $\\theta_1$ are user-specified parameters, and where $\\square$ indicates that the variable is left undefined. The solution $y$ is then provided to the solver using any of the three approaches defined in the previous section.\n",
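"\n",
"For example, with $k = 4$ and thresholds $\\theta_0 = 0.25$ and $\\theta_1 = 0.75$, a variable set to one in all four selected solutions has $\\tilde{x}_l = 1 \\geq \\theta_1$ and is fixed to one in the merged solution; a variable set to one in exactly one of the four solutions has $\\tilde{x}_l = 0.25 \\leq \\theta_0$ and is fixed to zero; and a variable set to one in two of them has $\\tilde{x}_l = 0.5$ and is left undefined.\n",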
"\n",
"The above specification of `MemorizingPrimalComponent` is meant to be as general as possible. Simpler strategies can be implemented by configuring this component in specific ways. For example, a simpler approach employed in the literature is to collect all optimal solutions, then provide the entire list of solutions to the solver as warm starts, without any filtering or post-processing. This strategy can be implemented with `MemorizingPrimalComponent` by using a model that returns a constant value for all solutions (e.g. [scikit-learn's DummyClassifier][DummyClassifier]), then selecting the top $n$ (instead of $k$) solutions. See example below. Another simple approach is taking the solution of the most similar training instance and using it, by itself, as a warm start. This can be implemented by using a model that computes distances between the current instance and the training ones (e.g. [scikit-learn's KNeighborsClassifier][KNeighborsClassifier]), then selecting the solution of the nearest one. See also example below. More complex strategies, of course, can also be configured.\n",
"\n",
"[DummyClassifier]: https://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html\n",
"[KNeighborsClassifier]: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html\n",
"\n",
"### Examples"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "253adbf4",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from sklearn.dummy import DummyClassifier\n",
"from sklearn.neighbors import KNeighborsClassifier\n",
"\n",
"from miplearn.components.primal.actions import (\n",
"    SetWarmStart,\n",
"    FixVariables,\n",
"    EnforceProximity,\n",
")\n",
"from miplearn.components.primal.mem import (\n",
"    MemorizingPrimalComponent,\n",
"    SelectTopSolutions,\n",
"    MergeTopSolutions,\n",
")\n",
"from miplearn.extractors.dummy import DummyExtractor\n",
"from miplearn.extractors.fields import H5FieldsExtractor\n",
"\n",
"# Configures a memorizing primal component that collects\n",
"# all distinct solutions seen during training and provides\n",
"# them to the solver without any filtering or post-processing.\n",
"comp1 = MemorizingPrimalComponent(\n",
"    clf=DummyClassifier(),\n",
"    extractor=DummyExtractor(),\n",
"    constructor=SelectTopSolutions(1_000_000),\n",
"    action=SetWarmStart(),\n",
")\n",
"\n",
"# Configures a memorizing primal component that finds the\n",
"# training instance with the closest objective function, then\n",
"# fixes the decision variables to the values they assumed\n",
"# at the optimal solution for that instance.\n",
"comp2 = MemorizingPrimalComponent(\n",
"    clf=KNeighborsClassifier(n_neighbors=1),\n",
"    extractor=H5FieldsExtractor(\n",
"        instance_fields=[\"static_var_obj_coeffs\"],\n",
"    ),\n",
"    constructor=SelectTopSolutions(1),\n",
"    action=FixVariables(),\n",
")\n",
"\n",
"# Configures a memorizing primal component that finds the distinct\n",
"# solutions to the 10 most similar training problem instances,\n",
"# selects the 3 solutions that were most often optimal to these\n",
"# training instances, combines them into a single partial solution,\n",
"# then enforces proximity, allowing at most 3 variables to deviate\n",
"# from the machine learning suggestion.\n",
"comp3 = MemorizingPrimalComponent(\n",
"    clf=KNeighborsClassifier(n_neighbors=10),\n",
"    extractor=H5FieldsExtractor(instance_fields=[\"static_var_obj_coeffs\"]),\n",
"    constructor=MergeTopSolutions(k=3, thresholds=[0.25, 0.75]),\n",
"    action=EnforceProximity(3),\n",
")\n"
]
},
{
"cell_type": "markdown",
"id": "f194a793",
"metadata": {},
"source": [
"## Independent vars primal component\n",
"\n",
"Instead of memorizing previously-seen primal solutions, it is also natural to use machine learning models to directly predict the values of the decision variables, constructing a solution from scratch. This approach has the benefit of potentially constructing novel high-quality solutions, never observed in the training data. Two variations of this strategy are supported by MIPLearn: (i) predicting the values of the decision variables independently, using multiple ML models; or (ii) predicting the values jointly, with a single model. We describe the first variation in this section, and the second variation in the next section.\n",
"\n",
"Let $I_1,\\ldots,I_n$ be the training instances, and let $\\bar{x}^1,\\ldots,\\bar{x}^n$ be their respective optimal solutions. For each binary decision variable $x_j$, the component `IndependentVarsPrimalComponent` creates a copy of a user-provided binary classifier and trains it to predict the optimal value of $x_j$, given $\\bar{x}^1_j,\\ldots,\\bar{x}^n_j$ as training labels. The features provided to the model are the variable features computed by a user-provided extractor. During inference time, the component uses these trained classifiers, one per binary variable, to construct a solution and provides it to the solver using one of the available actions.\n",
"\n",
"Three issues often arise in practice when using this approach:\n",
"\n",
"1. For certain binary variables $x_j$, it is frequently the case that the optimal value is either always zero or always one in the training dataset, which poses problems for some standard scikit-learn classifiers, since they do not expect training data with a single class. The wrapper `SingleClassFix` can be used to fix this issue (see example below).\n",
"2. It is also frequently the case that the machine learning classifiers can reliably predict the values of only some of the variables, not all of them. In this situation, instead of computing a complete primal solution, it may be more beneficial to construct a partial solution containing values only for the variables for which the ML model made a high-confidence prediction. The meta-classifier `MinProbabilityClassifier` can be used for this purpose. It asks the base classifier for the probability of the value being zero or one (using the `predict_proba` method) and erases from the primal solution all values whose probabilities are below a given threshold.\n",
"3. To make multiple copies of the provided ML classifier, MIPLearn uses the standard `sklearn.base.clone` method, which may not be suitable for classifiers from other frameworks. To handle this, it is possible to override the clone function using the `clone_fn` constructor argument.\n",
"\n",
"### Examples"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3fc0b5d1",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from sklearn.linear_model import LogisticRegression\n",
"from miplearn.classifiers.minprob import MinProbabilityClassifier\n",
"from miplearn.classifiers.singleclass import SingleClassFix\n",
"from miplearn.components.primal.indep import IndependentVarsPrimalComponent\n",
"from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n",
"from miplearn.components.primal.actions import SetWarmStart\n",
"\n",
"# Configures a primal component that independently predicts the value of each\n",
"# binary variable using logistic regression and provides it to the solver as\n",
"# warm start. Erases predictions with probability less than 99%; applies\n",
"# single-class fix; and uses AlvLouWeh2017 features.\n",
"comp = IndependentVarsPrimalComponent(\n",
"    base_clf=SingleClassFix(\n",
"        MinProbabilityClassifier(\n",
"            base_clf=LogisticRegression(),\n",
"            thresholds=[0.99, 0.99],\n",
"        ),\n",
"    ),\n",
"    extractor=AlvLouWeh2017Extractor(),\n",
"    action=SetWarmStart(),\n",
")\n"
]
},
{
"cell_type": "markdown",
"id": "45107a0c",
"metadata": {},
"source": [
"## Joint vars primal component\n",
"In the previous subsection, we used multiple machine learning models to independently predict the values of the binary decision variables. When these values are correlated, an alternative approach is to jointly predict the values of all binary variables using a single machine learning model. This strategy is implemented by `JointVarsPrimalComponent`. Compared to the previous ones, this component is much more straightforward. It simply extracts instance features, using the user-provided feature extractor, then directly trains the user-provided binary classifier (using the `fit` method), without making any copies. The trained classifier is then used to predict entire solutions (using the `predict` method), which are given to the solver using one of the previously discussed methods. In the example below, we illustrate the usage of this component with a simple feed-forward neural network.\n",
"\n",
"`JointVarsPrimalComponent` can also be used to implement strategies that use multiple machine learning models, but not independently. For example, a common strategy in multioutput prediction is building a *classifier chain*. In this approach, the first decision variable is predicted using the instance features alone; but the $n$-th decision variable is predicted using the instance features plus the predicted values of the $n-1$ previous variables. This can be easily implemented using scikit-learn's `ClassifierChain` estimator, as shown in the example below.\n",
"\n",
"### Examples"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cf9b52dd",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.multioutput import ClassifierChain\n",
"from sklearn.neural_network import MLPClassifier\n",
"from miplearn.classifiers.singleclass import SingleClassFix\n",
"from miplearn.components.primal.joint import JointVarsPrimalComponent\n",
"from miplearn.extractors.fields import H5FieldsExtractor\n",
"from miplearn.components.primal.actions import SetWarmStart\n",
"\n",
"# Configures a primal component that uses a feedforward neural network\n",
"# to jointly predict the values of the binary variables, based on the\n",
"# objective function, and provides the solution to the solver as\n",
"# a warm start.\n",
"comp = JointVarsPrimalComponent(\n",
"    clf=MLPClassifier(),\n",
"    extractor=H5FieldsExtractor(\n",
"        instance_fields=[\"static_var_obj_coeffs\"],\n",
"    ),\n",
"    action=SetWarmStart(),\n",
")\n",
"\n",
"# Configures a primal component that uses a chain of logistic regression\n",
"# models to jointly predict the values of the binary variables, based on\n",
"# the objective function.\n",
"comp = JointVarsPrimalComponent(\n",
"    clf=ClassifierChain(SingleClassFix(LogisticRegression())),\n",
"    extractor=H5FieldsExtractor(\n",
"        instance_fields=[\"static_var_obj_coeffs\"],\n",
"    ),\n",
"    action=SetWarmStart(),\n",
")\n"
]
},
{
"cell_type": "markdown",
"id": "dddf7be4",
"metadata": {},
"source": [
"## Expert primal component\n",
"\n",
"Before spending time and effort choosing a machine learning strategy and tweaking its parameters, it is usually a good idea to evaluate what the performance impact of the model would be if its predictions were 100% accurate. This is especially important for the prediction of warm starts, since they are not always very beneficial. To simplify this task, MIPLearn provides `ExpertPrimalComponent`, a component that simply loads the optimal solution from the HDF5 file, assuming it has already been computed, then directly provides it to the solver using one of the available methods. This component is useful in benchmarks, to evaluate how close the machine learning components come to the best theoretical performance.\n",
"\n",
"### Example"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "9e2e81b9",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from miplearn.components.primal.expert import ExpertPrimalComponent\n",
"from miplearn.components.primal.actions import SetWarmStart\n",
"\n",
"# Configures an expert primal component, which reads a pre-computed\n",
"# optimal solution from the HDF5 file and provides it to the solver\n",
"# as warm start.\n",
"comp = ExpertPrimalComponent(action=SetWarmStart())\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
1567
docs/guide/problems.ipynb
Normal file
File diff suppressed because it is too large
247
docs/guide/solvers.ipynb
Normal file
@@ -0,0 +1,247 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "9ec1907b-db93-4840-9439-c9005902b968",
"metadata": {},
"source": [
"# Learning Solver\n",
"\n",
"On previous pages, we discussed various components of the MIPLearn framework, including training data collectors, feature extractors, and individual machine learning components. On this page, we introduce **LearningSolver**, the main class of the framework, which integrates all the aforementioned components into a cohesive whole. Using **LearningSolver** involves three steps: (i) configuring the solver; (ii) training the ML components; and (iii) solving new MIP instances. In the following, we describe each of these steps, then conclude with a complete runnable example.\n",
"\n",
"## Configuring the solver\n",
"\n",
"**LearningSolver** is composed of multiple individual machine learning components, each targeting a different part of the solution process, or implementing a different machine learning strategy. This architecture allows strategies to be easily enabled, disabled or customized, making the framework flexible. By default, no components are provided and **LearningSolver** is equivalent to a traditional MIP solver. To specify additional components, the `components` constructor argument may be used:\n",
"\n",
"```python\n",
"solver = LearningSolver(\n",
"    components=[\n",
"        comp1,\n",
"        comp2,\n",
"        comp3,\n",
"    ]\n",
")\n",
"```\n",
"\n",
"In this example, three components `comp1`, `comp2` and `comp3` are provided. The strategies implemented by these components are applied sequentially when solving the problem. For example, `comp1` and `comp2` could fix a subset of decision variables, while `comp3` constructs a warm start for the remaining problem.\n",
"\n",
"## Training and solving new instances\n",
"\n",
"Once a solver is configured, its ML components need to be trained. This can be achieved with the `solver.fit` method, as illustrated below. The method accepts a list of HDF5 files and trains each individual component sequentially. Once the solver is trained, new instances can be solved using `solver.optimize`. The method returns a dictionary of statistics collected by each component, such as the number of variables fixed.\n",
"\n",
"```python\n",
"# Build instances\n",
"train_data = ...\n",
"test_data = ...\n",
"\n",
"# Collect training data\n",
"bc = BasicCollector()\n",
"bc.collect(train_data, build_model)\n",
"\n",
"# Build solver\n",
"solver = LearningSolver(...)\n",
"\n",
"# Train components\n",
"solver.fit(train_data)\n",
"\n",
"# Solve a new test instance\n",
"stats = solver.optimize(test_data[0], build_model)\n",
"```\n",
"\n",
"## Complete example\n",
"\n",
"In the example below, we illustrate the usage of **LearningSolver** by building instances of the Traveling Salesman Problem, collecting training data, training the ML components, then solving a new instance."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "92b09b98",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
"Model fingerprint: 0x6ddcd141\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [4e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"Presolve time: 0.00s\n",
"Presolved: 10 rows, 45 columns, 90 nonzeros\n",
"\n",
"Iteration Objective Primal Inf. Dual Inf. Time\n",
" 0 6.3600000e+02 1.700000e+01 0.000000e+00 0s\n",
" 15 2.7610000e+03 0.000000e+00 0.000000e+00 0s\n",
"\n",
"Solved in 15 iterations and 0.00 seconds (0.00 work units)\n",
"Optimal objective 2.761000000e+03\n",
"Set parameter LazyConstraints to value 1\n",
"Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (linux64)\n",
"\n",
"CPU model: AMD Ryzen 9 7950X 16-Core Processor, instruction set [SSE2|AVX|AVX2|AVX512]\n",
"Thread count: 16 physical cores, 32 logical processors, using up to 32 threads\n",
"\n",
"Optimize a model with 10 rows, 45 columns and 90 nonzeros\n",
"Model fingerprint: 0x74ca3d0a\n",
"Variable types: 0 continuous, 45 integer (45 binary)\n",
"Coefficient statistics:\n",
" Matrix range [1e+00, 1e+00]\n",
" Objective range [4e+01, 1e+03]\n",
" Bounds range [1e+00, 1e+00]\n",
" RHS range [2e+00, 2e+00]\n",
"\n",
"User MIP start produced solution with objective 2796 (0.00s)\n",
"Loaded user MIP start with objective 2796\n",
"\n",
"Presolve time: 0.00s\n",
"Presolved: 10 rows, 45 columns, 90 nonzeros\n",
"Variable types: 0 continuous, 45 integer (45 binary)\n",
"\n",
"Root relaxation: objective 2.761000e+03, 14 iterations, 0.00 seconds (0.00 work units)\n",
"\n",
" Nodes | Current Node | Objective Bounds | Work\n",
" Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n",
"\n",
" 0 0 2761.00000 0 - 2796.00000 2761.00000 1.25% - 0s\n",
" 0 0 cutoff 0 2796.00000 2796.00000 0.00% - 0s\n",
"\n",
"Cutting planes:\n",
" Lazy constraints: 3\n",
"\n",
"Explored 1 nodes (16 simplex iterations) in 0.01 seconds (0.00 work units)\n",
"Thread count was 32 (of 32 available processors)\n",
"\n",
"Solution count 1: 2796 \n",
"\n",
"Optimal solution found (tolerance 1.00e-04)\n",
"Best objective 2.796000000000e+03, best bound 2.796000000000e+03, gap 0.0000%\n",
"\n",
"User-callback calls 110, time in user-callback 0.00 sec\n"
]
},
{
"data": {
"text/plain": [
"{'WS: Count': 1, 'WS: Number of variables set': 41.0}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import random\n",
"\n",
"import numpy as np\n",
"from scipy.stats import uniform, randint\n",
"from sklearn.linear_model import LogisticRegression\n",
"\n",
"from miplearn.classifiers.minprob import MinProbabilityClassifier\n",
"from miplearn.classifiers.singleclass import SingleClassFix\n",
"from miplearn.collectors.basic import BasicCollector\n",
"from miplearn.components.primal.actions import SetWarmStart\n",
"from miplearn.components.primal.indep import IndependentVarsPrimalComponent\n",
"from miplearn.extractors.AlvLouWeh2017 import AlvLouWeh2017Extractor\n",
"from miplearn.io import write_pkl_gz\n",
"from miplearn.problems.tsp import (\n",
"    TravelingSalesmanGenerator,\n",
"    build_tsp_model,\n",
")\n",
"from miplearn.solvers.learning import LearningSolver\n",
"\n",
"# Set random seed to make example reproducible.\n",
"random.seed(42)\n",
"np.random.seed(42)\n",
"\n",
"# Generate a few instances of the traveling salesman problem.\n",
"data = TravelingSalesmanGenerator(\n",
"    n=randint(low=10, high=11),\n",
"    x=uniform(loc=0.0, scale=1000.0),\n",
"    y=uniform(loc=0.0, scale=1000.0),\n",
"    gamma=uniform(loc=0.90, scale=0.20),\n",
"    fix_cities=True,\n",
"    round=True,\n",
").generate(50)\n",
"\n",
"# Save instance data to data/tsp/00000.pkl.gz, data/tsp/00001.pkl.gz, ...\n",
"all_data = write_pkl_gz(data, \"data/tsp\")\n",
"\n",
"# Split train/test data\n",
"train_data = all_data[:40]\n",
"test_data = all_data[40:]\n",
"\n",
"# Collect training data\n",
"bc = BasicCollector()\n",
"bc.collect(train_data, build_tsp_model, n_jobs=4)\n",
"\n",
"# Build learning solver\n",
"solver = LearningSolver(\n",
"    components=[\n",
"        IndependentVarsPrimalComponent(\n",
"            base_clf=SingleClassFix(\n",
"                MinProbabilityClassifier(\n",
"                    base_clf=LogisticRegression(),\n",
"                    thresholds=[0.95, 0.95],\n",
"                ),\n",
"            ),\n",
"            extractor=AlvLouWeh2017Extractor(),\n",
"            action=SetWarmStart(),\n",
"        )\n",
"    ]\n",
")\n",
"\n",
"# Train ML models\n",
"solver.fit(train_data)\n",
"\n",
"# Solve a test instance\n",
"solver.optimize(test_data[0], build_tsp_model)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e27d2cbd-5341-461d-bbc1-8131aee8d949",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}