Update docs

2020-12-03 12:22:10 -06:00
parent 070c4adc73
commit ed15ffe119
12 changed files with 179 additions and 142 deletions


@@ -59,7 +59,9 @@
<!-- Main title -->
<a class="navbar-brand" href="..">MIPLearn</a>
<a class="navbar-brand" href="..">MIPLearn</a>
</div>
<!-- Expanded navigation -->
@@ -146,7 +148,7 @@
<h1 id="benchmarks-utilities">Benchmarks Utilities</h1>
<h3 id="using-benchmarkrunner">Using <code>BenchmarkRunner</code></h3>
<p>MIPLearn provides the utility class <code>BenchmarkRunner</code>, which simplifies the task of comparing the performance of different solvers. The snippet below shows its basic usage:</p>
<pre><code class="python">from miplearn import BenchmarkRunner, LearningSolver
<pre><code class="language-python">from miplearn import BenchmarkRunner, LearningSolver
# Create train and test instances
train_instances = [...]
@@ -168,7 +170,6 @@ benchmark.fit(train_instances)
benchmark.parallel_solve(test_instances, n_jobs=2)
print(benchmark.raw_results())
</code></pre>
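<p>For orientation, the construction step that precedes <code>fit</code> is sketched below; the constructor signature (a dict mapping solver names to <code>LearningSolver</code> instances) is an assumption for illustration, not a verbatim excerpt from the documentation.</p>
<pre><code class="language-python"># Hypothetical construction sketch: the names and constructor signature are assumptions.
# Each entry pairs a label for the solver (presumably what appears in the "Solver"
# column of the results table) with the LearningSolver configuration to benchmark.
solvers = {
    "baseline": LearningSolver(),
    "ml-exact": LearningSolver(),
}
benchmark = BenchmarkRunner(solvers)
</code></pre>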
<p>The method <code>fit</code> trains the ML models for each individual solver. The method <code>parallel_solve</code> solves the test instances in parallel and collects solver statistics such as running time and optimal value. Finally, <code>raw_results</code> produces a table of results (a pandas DataFrame) with the following columns:</p>
<ul>
<li><strong>Solver,</strong> the name of the solver.</li>
@@ -182,7 +183,7 @@ print(benchmark.raw_results())
<p>In addition to the above, there is also a "Relative" version of most columns, in which the raw number is compared against the best performance achieved by any solver on the same instance. The <em>Relative Wallclock Time</em>, for example, indicates how many times slower this run was than the fastest run on the instance: if this run took 10 seconds, but the fastest solver solved the same instance in only 5 seconds, the relative wallclock time would be 2.</p>
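<p>To make the computation concrete, the sketch below derives such a relative column from a raw results table using pandas; this is only an illustration of the idea, not the library's internal implementation, and the column names used here are assumptions.</p>
<pre><code class="language-python">import pandas as pd

# Hypothetical raw results for a single instance; column names are assumptions.
raw = pd.DataFrame({
    "Solver":         ["baseline", "ml-exact"],
    "Instance":       ["inst-0",   "inst-0"],
    "Wallclock Time": [10.0,       5.0],
})

# Relative Wallclock Time: each run's time divided by the best (smallest)
# time achieved by any solver on the same instance.
best = raw.groupby("Instance")["Wallclock Time"].transform("min")
raw["Relative Wallclock Time"] = raw["Wallclock Time"] / best
print(raw)  # baseline -> 2.0, ml-exact -> 1.0
</code></pre>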
<h3 id="saving-and-loading-benchmark-results">Saving and loading benchmark results</h3>
<p>When iteratively exploring new formulations, encodings, and solver parameters, it is often desirable to avoid repeating parts of the benchmark suite. For example, if the baseline solver has not changed, there is no need to re-evaluate its performance every time a small change is made to the remaining solvers. <code>BenchmarkRunner</code> provides the methods <code>save_results</code> and <code>load_results</code>, which can be used to avoid this repetition, as the next example shows:</p>
<pre><code class="python"># Benchmark baseline solvers and save results to a file.
<pre><code class="language-python"># Benchmark baseline solvers and save results to a file.
benchmark = BenchmarkRunner(baseline_solvers)
benchmark.parallel_solve(test_instances)
benchmark.save_results(&quot;baseline_results.csv&quot;)
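# (Hedged continuation: the loading side of this example is not shown above.
#  Based on the surrounding text, previously saved results can presumably be
#  loaded into a new runner so the baseline is not re-solved; the solver list
#  name and exact call sequence below are assumptions.)
benchmark = BenchmarkRunner(remaining_solvers)
benchmark.load_results(&quot;baseline_results.csv&quot;)
benchmark.parallel_solve(test_instances)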
@@ -197,18 +198,22 @@ benchmark.parallel_solve(test_instances)
</div>
<footer class="col-md-12 text-center">
<hr>
<p>
<small>Copyright © 2020, UChicago Argonne, LLC. All Rights Reserved.</small><br>
<small>Documentation built with <a href="http://www.mkdocs.org/">MkDocs</a>.</small>
</p>
<footer class="col-md-12 text-center">
<hr>
<p>
<small>Copyright © 2020, UChicago Argonne, LLC. All Rights Reserved.</small><br>
<small>Documentation built with <a href="http://www.mkdocs.org/">MkDocs</a>.</small>
</p>
</footer>
</footer>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script src="../js/bootstrap-3.0.3.min.js"></script>