diff --git a/0.4/.documenter-siteinfo.json b/0.4/.documenter-siteinfo.json index 56442e3..022fc15 100644 --- a/0.4/.documenter-siteinfo.json +++ b/0.4/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.3","generation_timestamp":"2024-05-21T10:56:56","documenter_version":"1.2.1"}} \ No newline at end of file +{"documenter":{"julia_version":"1.12.1","generation_timestamp":"2025-11-05T09:58:10","documenter_version":"1.15.0"}} \ No newline at end of file diff --git a/0.4/api/index.html b/0.4/api/index.html index 95d2cae..5286408 100644 --- a/0.4/api/index.html +++ b/0.4/api/index.html @@ -1,5 +1,5 @@ -API Reference · UnitCommitment.jl

API Reference

Read data, build model & optimize

UnitCommitment.readFunction
read(path::AbstractString)::UnitCommitmentInstance

Read a deterministic test case from the given file. The file may be gzipped.

Example

instance = UnitCommitment.read("s1.json.gz")
source
read(path::Vector{String})::UnitCommitmentInstance

Read a stochastic unit commitment instance from the given files. Each file describes a scenario. The files may be gzipped.

Example

instance = UnitCommitment.read(["s1.json.gz", "s2.json.gz"])
source
UnitCommitment.read_benchmarkFunction
read_benchmark(name::AbstractString)::UnitCommitmentInstance

Read one of the benchmark instances included in the package. See Instances for the entire list of benchmark instances available.

Example

instance = UnitCommitment.read_benchmark("matpower/case3375wp/2017-02-01")
source
UnitCommitment.build_modelFunction
function build_model(;
+API Reference · UnitCommitment.jl

API Reference

Read data, build model & optimize

UnitCommitment.readFunction
read(path::AbstractString)::UnitCommitmentInstance

Read a deterministic test case from the given file. The file may be gzipped.

Example

instance = UnitCommitment.read("s1.json.gz")
source
read(path::Vector{String})::UnitCommitmentInstance

Read a stochastic unit commitment instance from the given files. Each file describes a scenario. The files may be gzipped.

Example

instance = UnitCommitment.read(["s1.json.gz", "s2.json.gz"])
source
UnitCommitment.read_benchmarkFunction
read_benchmark(name::AbstractString)::UnitCommitmentInstance

Read one of the benchmark instances included in the package. See Instances for the entire list of benchmark instances available.

Example

instance = UnitCommitment.read_benchmark("matpower/case3375wp/2017-02-01")
source
UnitCommitment.build_modelFunction
function build_model(;
     instance::UnitCommitmentInstance,
     optimizer = nothing,
     formulation = Formulation(),
@@ -26,9 +26,9 @@ model = UnitCommitment.build_model(
             lodf_cutoff = 0.001,
         ),
     ),
-)
source
UnitCommitment.optimize!Function
optimize!(model::JuMP.Model)::Nothing

Solve the given unit commitment model. Unlike JuMP.optimize!, this uses more advanced methods to accelerate the solution process and to enforce transmission and N-1 security constraints.

source
UnitCommitment.solutionFunction
solution(model::JuMP.Model)::OrderedDict

Extracts the optimal solution from the UC.jl model. The model must be solved beforehand.

Example

UnitCommitment.optimize!(model)
-solution = UnitCommitment.solution(model)
source
UnitCommitment.validateFunction
validate(instance, solution)::Bool

Verifies that the given solution is feasible for the problem. If feasible, silently returns true. If infeasible, returns false and prints the validation errors to the screen.

This function is implemented independently from the optimization model in model.jl, and therefore can be used to verify that the model is indeed producing valid solutions. It can also be used to verify the solutions produced by other optimization packages.

source
UnitCommitment.writeFunction
write(filename::AbstractString, solution::AbstractDict)::Nothing

Write the given solution to a JSON file.

Example

solution = UnitCommitment.solution(model)
-UnitCommitment.write("/tmp/output.json", solution)
source

Locational Marginal Prices

Conventional LMPs

UnitCommitment.optimize!Function
optimize!(model::JuMP.Model)::Nothing

Solve the given unit commitment model. Unlike JuMP.optimize!, this uses more advanced methods to accelerate the solution process and to enforce transmission and N-1 security constraints.

source
UnitCommitment.solutionFunction
solution(model::JuMP.Model)::OrderedDict

Extracts the optimal solution from the UC.jl model. The model must be solved beforehand.

Example

UnitCommitment.optimize!(model)
+solution = UnitCommitment.solution(model)
source
UnitCommitment.validateFunction
validate(instance, solution)::Bool

Verifies that the given solution is feasible for the problem. If feasible, silently returns true. If infeasible, returns false and prints the validation errors to the screen.

This function is implemented independently from the optimization model in model.jl, and therefore can be used to verify that the model is indeed producing valid solutions. It can also be used to verify the solutions produced by other optimization packages.

source
UnitCommitment.writeFunction
write(filename::AbstractString, solution::AbstractDict)::Nothing

Write the given solution to a JSON file.

Example

solution = UnitCommitment.solution(model)
+UnitCommitment.write("/tmp/output.json", solution)
source
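Taken together, the functions above form a complete workflow. The following sketch reads an instance, builds and solves the model, then saves and independently validates the solution. The instance file name and the choice of HiGHS as solver are illustrative assumptions, not requirements.

```julia
using UnitCommitment
using HiGHS

# Read a (possibly gzipped) deterministic instance
instance = UnitCommitment.read("s1.json.gz")

# Build the optimization model and attach a solver
model = UnitCommitment.build_model(
    instance = instance,
    optimizer = HiGHS.Optimizer,
)

# Solve, then extract and persist the solution
UnitCommitment.optimize!(model)
solution = UnitCommitment.solution(model)
UnitCommitment.write("/tmp/output.json", solution)

# Verify feasibility with the model-independent validator
UnitCommitment.validate(instance, solution)
```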

Locational Marginal Prices

Conventional LMPs

UnitCommitment.compute_lmpMethod
function compute_lmp(
     model::JuMP.Model,
     method::ConventionalLMP;
     optimizer,
@@ -58,14 +58,14 @@ lmp = UnitCommitment.compute_lmp(
 
 # Access the LMPs
 # Example: "s1" is the scenario name, "b1" is the bus name, 1 is the first time slot
-@show lmp["s1", "b1", 1]
source
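A fuller sketch of the conventional LMP computation, following the signature above (the benchmark instance and HiGHS solver are illustrative assumptions):

```julia
using UnitCommitment
using HiGHS

instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
model = UnitCommitment.build_model(
    instance = instance,
    optimizer = HiGHS.Optimizer,
)
UnitCommitment.optimize!(model)

# Compute conventional LMPs; an LP optimizer must be provided
lmp = UnitCommitment.compute_lmp(
    model,
    UnitCommitment.ConventionalLMP(),
    optimizer = HiGHS.Optimizer,
)

# "s1" is the scenario name, "b1" is the bus name, 1 is the first time slot
@show lmp["s1", "b1", 1]
```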

Approximated Extended LMPs

Approximated Extended LMPs

UnitCommitment.AELMPType
struct AELMP <: PricingMethod 
     allow_offline_participation::Bool = true
     consider_startup_costs::Bool = true
-end

Approximate Extended LMPs.

Arguments

  • allow_offline_participation: If true, offline assets are allowed to participate in pricing.
  • consider_startup_costs: If true, the start-up costs are averaged over each unit production; otherwise the production costs stay the same.
source
UnitCommitment.compute_lmpMethod
function compute_lmp(
+end

Approximate Extended LMPs.

Arguments

  • allow_offline_participation: If true, offline assets are allowed to participate in pricing.
  • consider_startup_costs: If true, the start-up costs are averaged over each unit production; otherwise the production costs stay the same.
source
UnitCommitment.compute_lmpMethod
function compute_lmp(
     model::JuMP.Model,
     method::AELMP;
     optimizer,
-)::OrderedDict{Tuple{String,Int},Float64}

Calculates the approximate extended locational marginal prices of the given unit commitment instance.

The AELMP does the following three things:

1. It sets the minimum power output of each generator to zero
+)::OrderedDict{Tuple{String,Int},Float64}

Calculates the approximate extended locational marginal prices of the given unit commitment instance.

The AELMP does the following three things:

1. It sets the minimum power output of each generator to zero
 2. It averages the start-up cost over the offer blocks for each generator
 3. It relaxes all integrality constraints

Returns a dictionary mapping (bus_name, time) to the marginal price.

WARNING: This approximation method is not fully developed. The implementation is based on MISO Phase I only.

  1. It only supports Fast Start resources. More specifically, the minimum up/down time has to be zero.
  2. The method does NOT support time-varying start-up costs.
  3. An asset is considered offline if it is never on throughout all time periods.
  4. The method does NOT support multiple scenarios.

Arguments

  • model: the UnitCommitment model, must be solved before calling this function if offline participation is not allowed.

  • method: the AELMP method.

  • optimizer: the optimizer for solving the LP problem.

Examples

using UnitCommitment
 using HiGHS
@@ -97,54 +97,54 @@ aelmp = UnitCommitment.compute_lmp(
 # Access the AELMPs
 # Example: "s1" is the scenario name, "b1" is the bus name, 1 is the first time slot
 # Note: although only a single scenario is supported, the query still keeps the scenario key for consistency.
-@show aelmp["s1", "b1", 1]
source

Modify instance

UnitCommitment.sliceFunction
slice(instance, range)

Creates a new instance, with only a subset of the time periods. This function does not modify the provided instance. The initial conditions are also not modified.

Example

# Build a 2-hour UC instance
+@show aelmp["s1", "b1", 1]
source
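A sketch of the AELMP computation using the struct fields documented above. Note that the model must be solved before this call when offline participation is not allowed; the instance and solver choices are illustrative assumptions.

```julia
using UnitCommitment
using HiGHS

instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
model = UnitCommitment.build_model(
    instance = instance,
    optimizer = HiGHS.Optimizer,
)

# Required first, since offline participation is disallowed below
UnitCommitment.optimize!(model)

aelmp = UnitCommitment.compute_lmp(
    model,
    UnitCommitment.AELMP(
        allow_offline_participation = false,
        consider_startup_costs = true,
    ),
    optimizer = HiGHS.Optimizer,
)

# The query keeps the scenario key ("s1") for consistency
@show aelmp["s1", "b1", 1]
```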

Modify instance

UnitCommitment.sliceFunction
slice(instance, range)

Creates a new instance, with only a subset of the time periods. This function does not modify the provided instance. The initial conditions are also not modified.

Example

# Build a 2-hour UC instance
 instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
-modified = UnitCommitment.slice(instance, 1:2)
source
UnitCommitment.randomize!Method
function randomize!(
     instance::UnitCommitmentInstance;
     method = UnitCommitment.XavQiuAhm2021.Randomization();
     rng = MersenneTwister(),
 )::Nothing

Randomizes instance parameters according to the provided randomization method.

Example

instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
 UnitCommitment.randomize!(instance)
-model = UnitCommitment.build_model(; instance)
source
UnitCommitment.generate_initial_conditions!Function
generate_initial_conditions!(instance, optimizer)

Generates feasible initial conditions for the given instance, by constructing and solving a single-period mixed-integer optimization problem, using the given optimizer. The instance is modified in-place.

source

Formulations

UnitCommitment.generate_initial_conditions!Function
generate_initial_conditions!(instance, optimizer)

Generates feasible initial conditions for the given instance, by constructing and solving a single-period mixed-integer optimization problem, using the given optimizer. The instance is modified in-place.

source
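Example

A minimal sketch, assuming the HiGHS solver; the function modifies the instance in-place before the full model is built:

```julia
using UnitCommitment
using HiGHS

instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")

# Solve a single-period MIP to produce feasible initial conditions (in-place)
UnitCommitment.generate_initial_conditions!(instance, HiGHS.Optimizer)

model = UnitCommitment.build_model(
    instance = instance,
    optimizer = HiGHS.Optimizer,
)
```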

Formulations

UnitCommitment.FormulationType
struct Formulation
     prod_vars::ProductionVarsFormulation
     pwl_costs::PiecewiseLinearCostsFormulation
     ramping::RampingFormulation
     startup_costs::StartupCostsFormulation
     status_vars::StatusVarsFormulation
     transmission::TransmissionFormulation
-end

Struct provided to build_model that holds various formulation components.

Fields

  • prod_vars: Formulation for the production decision variables
  • pwl_costs: Formulation for the piecewise linear costs
  • ramping: Formulation for ramping constraints
  • startup_costs: Formulation for time-dependent start-up costs
  • status_vars: Formulation for the status variables (e.g. is_on, is_off)
  • transmission: Formulation for transmission and N-1 security constraints
source
UnitCommitment.ShiftFactorsFormulationType
struct ShiftFactorsFormulation <: TransmissionFormulation
+end

Struct provided to build_model that holds various formulation components.

Fields

  • prod_vars: Formulation for the production decision variables
  • pwl_costs: Formulation for the piecewise linear costs
  • ramping: Formulation for ramping constraints
  • startup_costs: Formulation for time-dependent start-up costs
  • status_vars: Formulation for the status variables (e.g. is_on, is_off)
  • transmission: Formulation for transmission and N-1 security constraints
source
UnitCommitment.ShiftFactorsFormulationType
struct ShiftFactorsFormulation <: TransmissionFormulation
     isf_cutoff::Float64 = 0.005
     lodf_cutoff::Float64 = 0.001
     precomputed_isf=nothing
     precomputed_lodf=nothing
-end

Transmission formulation based on Injection Shift Factors (ISF) and Line Outage Distribution Factors (LODF). Constraints are enforced in a lazy way.

Arguments

  • precomputed_isf: the injection shift factors matrix. If not provided, it will be computed.
  • precomputed_lodf: the line outage distribution factors matrix. If not provided, it will be computed.
  • isf_cutoff: the cutoff that should be applied to the ISF matrix. Entries with magnitude smaller than this value will be set to zero.
  • lodf_cutoff: the cutoff that should be applied to the LODF matrix. Entries with magnitude smaller than this value will be set to zero.
source
UnitCommitment.ArrCon2000Module

Formulation described in:

Arroyo, J. M., & Conejo, A. J. (2000). Optimal response of a thermal unit
+end

Transmission formulation based on Injection Shift Factors (ISF) and Line Outage Distribution Factors (LODF). Constraints are enforced in a lazy way.

Arguments

  • precomputed_isf: the injection shift factors matrix. If not provided, it will be computed.
  • precomputed_lodf: the line outage distribution factors matrix. If not provided, it will be computed.
  • isf_cutoff: the cutoff that should be applied to the ISF matrix. Entries with magnitude smaller than this value will be set to zero.
  • lodf_cutoff: the cutoff that should be applied to the LODF matrix. Entries with magnitude smaller than this value will be set to zero.
source
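The two structs above combine as follows: a Formulation carrying a customized ShiftFactorsFormulation is passed to build_model, while the remaining formulation components keep their defaults. The instance and solver are illustrative assumptions.

```julia
using UnitCommitment
using HiGHS

instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")

# Customize only the transmission component; other fields keep their defaults
model = UnitCommitment.build_model(
    instance = instance,
    optimizer = HiGHS.Optimizer,
    formulation = UnitCommitment.Formulation(
        transmission = UnitCommitment.ShiftFactorsFormulation(
            isf_cutoff = 0.005,
            lodf_cutoff = 0.001,
        ),
    ),
)
```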
UnitCommitment.ArrCon2000Module

Formulation described in:

Arroyo, J. M., & Conejo, A. J. (2000). Optimal response of a thermal unit
 to an electricity spot market. IEEE Transactions on power systems, 15(3), 
-1098-1104. DOI: https://doi.org/10.1109/59.871739
source
UnitCommitment.CarArr2006Module

Formulation described in:

Carrión, M., & Arroyo, J. M. (2006). A computationally efficient
+1098-1104. DOI: https://doi.org/10.1109/59.871739
source
UnitCommitment.CarArr2006Module

Formulation described in:

Carrión, M., & Arroyo, J. M. (2006). A computationally efficient
 mixed-integer linear formulation for the thermal unit commitment problem.
 IEEE Transactions on power systems, 21(3), 1371-1378.
-DOI: https://doi.org/10.1109/TPWRS.2006.876672
source
UnitCommitment.DamKucRajAta2016Module

Formulation described in:

Damcı-Kurt, P., Küçükyavuz, S., Rajan, D., & Atamtürk, A. (2016). A polyhedral
+DOI: https://doi.org/10.1109/TPWRS.2006.876672
source
UnitCommitment.DamKucRajAta2016Module

Formulation described in:

Damcı-Kurt, P., Küçükyavuz, S., Rajan, D., & Atamtürk, A. (2016). A polyhedral
 study of production ramping. Mathematical Programming, 158(1), 175-205.
-DOI: https://doi.org/10.1007/s10107-015-0919-9
source
UnitCommitment.Gar1962Module

Formulation described in:

Garver, L. L. (1962). Power generation scheduling by integer
+DOI: https://doi.org/10.1007/s10107-015-0919-9
source
UnitCommitment.Gar1962Module

Formulation described in:

Garver, L. L. (1962). Power generation scheduling by integer
 programming-development of theory. Transactions of the American Institute
 of Electrical Engineers. Part III: Power Apparatus and Systems, 81(3), 730-734.
-DOI: https://doi.org/10.1109/AIEEPAS.1962.4501405
source
UnitCommitment.KnuOstWat2018Module

Formulation described in:

Knueven, B., Ostrowski, J., & Watson, J. P. (2018). Exploiting identical
+DOI: https://doi.org/10.1109/AIEEPAS.1962.4501405
source
UnitCommitment.KnuOstWat2018Module

Formulation described in:

Knueven, B., Ostrowski, J., & Watson, J. P. (2018). Exploiting identical
 generators in unit commitment. IEEE Transactions on Power Systems, 33(4),
-4496-4507. DOI: https://doi.org/10.1109/TPWRS.2017.2783850
source
UnitCommitment.MorLatRam2013Module

Formulation described in:

Morales-España, G., Latorre, J. M., & Ramos, A. (2013). Tight and compact
+4496-4507. DOI: https://doi.org/10.1109/TPWRS.2017.2783850
source
UnitCommitment.MorLatRam2013Module

Formulation described in:

Morales-España, G., Latorre, J. M., & Ramos, A. (2013). Tight and compact
 MILP formulation for the thermal unit commitment problem. IEEE Transactions
-on Power Systems, 28(4), 4897-4908. DOI: https://doi.org/10.1109/TPWRS.2013.2251373
source
UnitCommitment.PanGua2016Module

Formulation described in:

Pan, K., & Guan, Y. (2016). Strong formulations for multistage stochastic
+on Power Systems, 28(4), 4897-4908. DOI: https://doi.org/10.1109/TPWRS.2013.2251373
source
UnitCommitment.PanGua2016Module

Formulation described in:

Pan, K., & Guan, Y. (2016). Strong formulations for multistage stochastic
 self-scheduling unit commitment. Operations Research, 64(6), 1482-1498.
-DOI: https://doi.org/10.1287/opre.2016.1520
source
UnitCommitment.WanHob2016Module

Formulation described in:

B. Wang and B. F. Hobbs, "Real-Time Markets for Flexiramp: A Stochastic 
+DOI: https://doi.org/10.1287/opre.2016.1520
source
UnitCommitment.WanHob2016Module

Formulation described in:

B. Wang and B. F. Hobbs, "Real-Time Markets for Flexiramp: A Stochastic 
 Unit Commitment-Based Analysis," in IEEE Transactions on Power Systems, 
-vol. 31, no. 2, pp. 846-860, March 2016, doi: 10.1109/TPWRS.2015.2411268.
source

Solution Methods

Solution Methods

UnitCommitment.XavQiuWanThi2019.MethodType
mutable struct Method
     time_limit::Float64
     gap_limit::Float64
     two_phase_gap::Bool
     max_violations_per_line::Int
     max_violations_per_period::Int
-end

Lazy constraint solution method described in:

Xavier, A. S., Qiu, F., Wang, F., & Thimmapuram, P. R. (2019). Transmission
+end

Lazy constraint solution method described in:

Xavier, A. S., Qiu, F., Wang, F., & Thimmapuram, P. R. (2019). Transmission
 constraint filtering in large-scale security-constrained unit commitment. 
 IEEE Transactions on Power Systems, 34(3), 2457-2460.
-DOI: https://doi.org/10.1109/TPWRS.2019.2892620

Fields

  • time_limit: the time limit over the entire optimization procedure.
  • gap_limit: the desired relative optimality gap. Only used when two_phase_gap=true.
  • two_phase_gap: if true, solve the problem with large gap tolerance first, then reduce the gap tolerance when no further violated constraints are found.
  • max_violations_per_line: maximum number of violated transmission constraints to add to the formulation per transmission line.
  • max_violations_per_period: maximum number of violated transmission constraints to add to the formulation per time period.
source

Randomization Methods

UnitCommitment.XavQiuAhm2021.RandomizationType
struct Randomization
+DOI: https://doi.org/10.1109/TPWRS.2019.2892620

Fields

  • time_limit: the time limit over the entire optimization procedure.
  • gap_limit: the desired relative optimality gap. Only used when two_phase_gap=true.
  • two_phase_gap: if true, solve the problem with large gap tolerance first, then reduce the gap tolerance when no further violated constraints are found.
  • max_violations_per_line: maximum number of violated transmission constraints to add to the formulation per transmission line.
  • max_violations_per_period: maximum number of violated transmission constraints to add to the formulation per time period.
source
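A minimal sketch of constructing this method with a custom time limit, assuming (as in the package's usage guide) that it can be passed as a second argument to UnitCommitment.optimize!; the instance, solver, and limit value are illustrative.

```julia
using UnitCommitment
using HiGHS

instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
model = UnitCommitment.build_model(
    instance = instance,
    optimizer = HiGHS.Optimizer,
)

# Stop after one hour; remaining fields keep their defaults
method = UnitCommitment.XavQiuWanThi2019.Method(time_limit = 3600.0)
UnitCommitment.optimize!(model, method)
```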

Randomization Methods

UnitCommitment.XavQiuAhm2021.RandomizationType
struct Randomization
     cost = Uniform(0.95, 1.05)
     load_profile_mu = [...]
     load_profile_sigma = [...]
@@ -153,4 +153,4 @@ DOI: https://doi.org/10.1109/TPWRS.2019.2892620

Fields

Randomization method that changes: (1) production and startup costs, (2) share of load coming from each bus, (3) peak system load, and (4) temporal load profile, as follows:

  1. Production and startup costs: For each unit u, the vectors u.min_power_cost and u.cost_segments are multiplied by a constant α[u] sampled from the provided cost distribution. If randomize_costs is false, skips this step.

  2. Load share: For each bus b and time t, the value b.load[t] is multiplied by (β[b] * b.load[t]) / sum(β[b2] * b2.load[t] for b2 in buses), where β[b] is sampled from the provided load_share distribution. If randomize_load_share is false, skips this step.

  3. Peak system load and temporal load profile: Sets the peak load to ρ * C, where ρ is sampled from peak_load and C is the maximum system capacity, at any time. Also scales the loads of all buses, so that system_load[t+1] becomes equal to system_load[t] * γ[t], where γ[t] is sampled from Normal(load_profile_mu[t], load_profile_sigma[t]).

    The system load for the first time period is set so that the peak load matches ρ * C. If load_profile_sigma and load_profile_mu have fewer elements than instance.time, wraps around. If randomize_load_profile is false, skips this step.

The default parameters were obtained based on an analysis of publicly available bid and hourly data from PJM, corresponding to the month of January, 2017. For more details, see Section 4.2 of the paper.

References

  • Xavier, Álinson S., Feng Qiu, and Shabbir Ahmed. "Learning to solve large-scale security-constrained unit commitment problems." INFORMS Journal on Computing 33.2 (2021): 739-756. DOI: 10.1287/ijoc.2020.0976
source
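A sketch of applying this randomization method reproducibly, using the randomize! signature shown earlier; the seed value is illustrative.

```julia
using UnitCommitment
using Random

instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")

# Use an explicit RNG so the perturbed instance is reproducible
UnitCommitment.randomize!(
    instance;
    method = UnitCommitment.XavQiuAhm2021.Randomization(),
    rng = MersenneTwister(42),
)

model = UnitCommitment.build_model(; instance)
```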
+end

Randomization method that changes: (1) production and startup costs, (2) share of load coming from each bus, (3) peak system load, and (4) temporal load profile, as follows:

  1. Production and startup costs: For each unit u, the vectors u.min_power_cost and u.cost_segments are multiplied by a constant α[u] sampled from the provided cost distribution. If randomize_costs is false, skips this step.

  2. Load share: For each bus b and time t, the value b.load[t] is multiplied by (β[b] * b.load[t]) / sum(β[b2] * b2.load[t] for b2 in buses), where β[b] is sampled from the provided load_share distribution. If randomize_load_share is false, skips this step.

  3. Peak system load and temporal load profile: Sets the peak load to ρ * C, where ρ is sampled from peak_load and C is the maximum system capacity, at any time. Also scales the loads of all buses, so that system_load[t+1] becomes equal to system_load[t] * γ[t], where γ[t] is sampled from Normal(load_profile_mu[t], load_profile_sigma[t]).

    The system load for the first time period is set so that the peak load matches ρ * C. If load_profile_sigma and load_profile_mu have fewer elements than instance.time, wraps around. If randomize_load_profile is false, skips this step.

The default parameters were obtained based on an analysis of publicly available bid and hourly data from PJM, corresponding to the month of January, 2017. For more details, see Section 4.2 of the paper.

References

source diff --git a/0.4/assets/documenter.js b/0.4/assets/documenter.js index f531160..99d5f7f 100644 --- a/0.4/assets/documenter.js +++ b/0.4/assets/documenter.js @@ -4,7 +4,6 @@ requirejs.config({ 'highlight-julia': 'https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.8.0/languages/julia.min', 'headroom': 'https://cdnjs.cloudflare.com/ajax/libs/headroom/0.12.0/headroom.min', 'jqueryui': 'https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.13.2/jquery-ui.min', - 'minisearch': 'https://cdn.jsdelivr.net/npm/minisearch@6.1.0/dist/umd/index.min', 'katex-auto-render': 'https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.16.8/contrib/auto-render.min', 'jquery': 'https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.0/jquery.min', 'headroom-jquery': 'https://cdnjs.cloudflare.com/ajax/libs/headroom/0.12.0/jQuery.headroom.min', @@ -78,48 +77,54 @@ require(['jquery'], function($) { let timer = 0; var isExpanded = true; -$(document).on("click", ".docstring header", function () { - let articleToggleTitle = "Expand docstring"; +$(document).on( + "click", + ".docstring .docstring-article-toggle-button", + function () { + let articleToggleTitle = "Expand docstring"; + const parent = $(this).parent(); - debounce(() => { - if ($(this).siblings("section").is(":visible")) { - $(this) - .find(".docstring-article-toggle-button") - .removeClass("fa-chevron-down") - .addClass("fa-chevron-right"); - } else { - $(this) - .find(".docstring-article-toggle-button") - .removeClass("fa-chevron-right") - .addClass("fa-chevron-down"); + debounce(() => { + if (parent.siblings("section").is(":visible")) { + parent + .find("a.docstring-article-toggle-button") + .removeClass("fa-chevron-down") + .addClass("fa-chevron-right"); + } else { + parent + .find("a.docstring-article-toggle-button") + .removeClass("fa-chevron-right") + .addClass("fa-chevron-down"); - articleToggleTitle = "Collapse docstring"; - } + articleToggleTitle = "Collapse docstring"; + } - $(this) - .find(".docstring-article-toggle-button") - 
.prop("title", articleToggleTitle); - $(this).siblings("section").slideToggle(); - }); -}); + parent + .children(".docstring-article-toggle-button") + .prop("title", articleToggleTitle); + parent.siblings("section").slideToggle(); + }); + }, +); -$(document).on("click", ".docs-article-toggle-button", function () { +$(document).on("click", ".docs-article-toggle-button", function (event) { let articleToggleTitle = "Expand docstring"; let navArticleToggleTitle = "Expand all docstrings"; + let animationSpeed = event.noToggleAnimation ? 0 : 400; debounce(() => { if (isExpanded) { $(this).removeClass("fa-chevron-up").addClass("fa-chevron-down"); - $(".docstring-article-toggle-button") + $("a.docstring-article-toggle-button") .removeClass("fa-chevron-down") .addClass("fa-chevron-right"); isExpanded = false; - $(".docstring section").slideUp(); + $(".docstring section").slideUp(animationSpeed); } else { $(this).removeClass("fa-chevron-down").addClass("fa-chevron-up"); - $(".docstring-article-toggle-button") + $("a.docstring-article-toggle-button") .removeClass("fa-chevron-right") .addClass("fa-chevron-down"); @@ -127,7 +132,7 @@ $(document).on("click", ".docs-article-toggle-button", function () { articleToggleTitle = "Collapse docstring"; navArticleToggleTitle = "Collapse all docstrings"; - $(".docstring section").slideDown(); + $(".docstring section").slideDown(animationSpeed); } $(this).prop("title", navArticleToggleTitle); @@ -224,393 +229,656 @@ $(document).ready(function () { }) //////////////////////////////////////////////////////////////////////////////// -require(['jquery', 'minisearch'], function($, minisearch) { +require(['jquery'], function($) { -// In general, most search related things will have "search" as a prefix. 
-// To get an in-depth about the thought process you can refer: https://hetarth02.hashnode.dev/series/gsoc +$(document).ready(function () { + let meta = $("div[data-docstringscollapsed]").data(); -let results = []; -let timer = undefined; + if (meta?.docstringscollapsed) { + $("#documenter-article-toggle-button").trigger({ + type: "click", + noToggleAnimation: true, + }); -let data = documenterSearchIndex["docs"].map((x, key) => { - x["id"] = key; // minisearch requires a unique for each object - return x; + setTimeout(function () { + if (window.location.hash) { + const targetId = window.location.hash.substring(1); + const targetElement = document.getElementById(targetId); + + if (targetElement) { + targetElement.scrollIntoView({ + behavior: "smooth", + block: "center", + }); + } + } + }, 100); + } }); -// list below is the lunr 2.1.3 list minus the intersect with names(Base) -// (all, any, get, in, is, only, which) and (do, else, for, let, where, while, with) -// ideally we'd just filter the original list but it's not available as a variable -const stopWords = new Set([ - "a", - "able", - "about", - "across", - "after", - "almost", - "also", - "am", - "among", - "an", - "and", - "are", - "as", - "at", - "be", - "because", - "been", - "but", - "by", - "can", - "cannot", - "could", - "dear", - "did", - "does", - "either", - "ever", - "every", - "from", - "got", - "had", - "has", - "have", - "he", - "her", - "hers", - "him", - "his", - "how", - "however", - "i", - "if", - "into", - "it", - "its", - "just", - "least", - "like", - "likely", - "may", - "me", - "might", - "most", - "must", - "my", - "neither", - "no", - "nor", - "not", - "of", - "off", - "often", - "on", - "or", - "other", - "our", - "own", - "rather", - "said", - "say", - "says", - "she", - "should", - "since", - "so", - "some", - "than", - "that", - "the", - "their", - "them", - "then", - "there", - "these", - "they", - "this", - "tis", - "to", - "too", - "twas", - "us", - "wants", - "was", - "we", - 
"were", - "what", - "when", - "who", - "whom", - "why", - "will", - "would", - "yet", - "you", - "your", -]); +}) +//////////////////////////////////////////////////////////////////////////////// +require(['jquery'], function($) { -let index = new minisearch({ - fields: ["title", "text"], // fields to index for full-text search - storeFields: ["location", "title", "text", "category", "page"], // fields to return with search results - processTerm: (term) => { - let word = stopWords.has(term) ? null : term; - if (word) { - // custom trimmer that doesn't strip @ and !, which are used in julia macro and function names - word = word - .replace(/^[^a-zA-Z0-9@!]+/, "") - .replace(/[^a-zA-Z0-9@!]+$/, ""); - } +/* +To get an in-depth about the thought process you can refer: https://hetarth02.hashnode.dev/series/gsoc - return word ?? null; - }, - // add . as a separator, because otherwise "title": "Documenter.Anchors.add!", would not find anything if searching for "add!", only for the entire qualification - tokenize: (string) => string.split(/[\s\-\.]+/), - // options which will be applied during the search - searchOptions: { - boost: { title: 100 }, - fuzzy: 2, +PSEUDOCODE: + +Searching happens automatically as the user types or adjusts the selected filters. +To preserve responsiveness, as much as possible of the slow parts of the search are done +in a web worker. Searching and result generation are done in the worker, and filtering and +DOM updates are done in the main thread. The filters are in the main thread as they should +be very quick to apply. 
This lets filters be changed without re-searching with minisearch +(which is possible even if filtering is on the worker thread) and also lets filters be +changed _while_ the worker is searching and without message passing (neither of which are +possible if filtering is on the worker thread) + +SEARCH WORKER: + +Import minisearch + +Build index + +On message from main thread + run search + find the first 200 unique results from each category, and compute their divs for display + note that this is necessary and sufficient information for the main thread to find the + first 200 unique results from any given filter set + post results to main thread + +MAIN: + +Launch worker + +Declare nonconstant globals (worker_is_running, last_search_text, unfiltered_results) + +On text update + if worker is not running, launch_search() + +launch_search + set worker_is_running to true, set last_search_text to the search text + post the search query to worker + +on message from worker + if last_search_text is not the same as the text in the search field, + the latest search result is not reflective of the latest search query, so update again + launch_search() + otherwise + set worker_is_running to false + + regardless, display the new search results to the user + save the unfiltered_results as a global + update_search() + +on filter click + adjust the filter selection + update_search() + +update_search + apply search filters by looping through the unfiltered_results and finding the first 200 + unique results that match the filters + + Update the DOM +*/ + +/////// SEARCH WORKER /////// + +function worker_function(documenterSearchIndex, documenterBaseURL, filters) { + importScripts( + "https://cdn.jsdelivr.net/npm/minisearch@6.1.0/dist/umd/index.min.js", + ); + + let data = documenterSearchIndex.map((x, key) => { + x["id"] = key; // minisearch requires a unique for each object + return x; + }); + + // list below is the lunr 2.1.3 list minus the intersect with names(Base) + // (all, 
any, get, in, is, only, which) and (do, else, for, let, where, while, with) + // ideally we'd just filter the original list but it's not available as a variable + const stopWords = new Set([ + "a", + "able", + "about", + "across", + "after", + "almost", + "also", + "am", + "among", + "an", + "and", + "are", + "as", + "at", + "be", + "because", + "been", + "but", + "by", + "can", + "cannot", + "could", + "dear", + "did", + "does", + "either", + "ever", + "every", + "from", + "got", + "had", + "has", + "have", + "he", + "her", + "hers", + "him", + "his", + "how", + "however", + "i", + "if", + "into", + "it", + "its", + "just", + "least", + "like", + "likely", + "may", + "me", + "might", + "most", + "must", + "my", + "neither", + "no", + "nor", + "not", + "of", + "off", + "often", + "on", + "or", + "other", + "our", + "own", + "rather", + "said", + "say", + "says", + "she", + "should", + "since", + "so", + "some", + "than", + "that", + "the", + "their", + "them", + "then", + "there", + "these", + "they", + "this", + "tis", + "to", + "too", + "twas", + "us", + "wants", + "was", + "we", + "were", + "what", + "when", + "who", + "whom", + "why", + "will", + "would", + "yet", + "you", + "your", + ]); + + let index = new MiniSearch({ + fields: ["title", "text"], // fields to index for full-text search + storeFields: ["location", "title", "text", "category", "page"], // fields to return with results processTerm: (term) => { let word = stopWords.has(term) ? null : term; if (word) { + // custom trimmer that doesn't strip @ and !, which are used in julia macro and function names word = word .replace(/^[^a-zA-Z0-9@!]+/, "") .replace(/[^a-zA-Z0-9@!]+$/, ""); + + word = word.toLowerCase(); } return word ?? null; }, + // add . 
as a separator, because otherwise "title": "Documenter.Anchors.add!", would not + // find anything if searching for "add!", only for the entire qualification tokenize: (string) => string.split(/[\s\-\.]+/), - }, -}); + // options which will be applied during the search + searchOptions: { + prefix: true, + boost: { title: 100 }, + fuzzy: 2, + }, + }); -index.addAll(data); + index.addAll(data); -let filters = [...new Set(data.map((x) => x.category))]; -var modal_filters = make_modal_body_filters(filters); -var filter_results = []; + /** + * Used to map characters to HTML entities. + * Refer: https://github.com/lodash/lodash/blob/main/src/escape.ts + */ + const htmlEscapes = { + "&": "&", + "<": "<", + ">": ">", + '"': """, + "'": "'", + }; -$(document).on("keyup", ".documenter-search-input", function (event) { - // Adding a debounce to prevent disruptions from super-speed typing! - debounce(() => update_search(filter_results), 300); -}); + /** + * Used to match HTML entities and HTML characters. + * Refer: https://github.com/lodash/lodash/blob/main/src/escape.ts + */ + const reUnescapedHtml = /[&<>"']/g; + const reHasUnescapedHtml = RegExp(reUnescapedHtml.source); -$(document).on("click", ".search-filter", function () { - if ($(this).hasClass("search-filter-selected")) { - $(this).removeClass("search-filter-selected"); - } else { - $(this).addClass("search-filter-selected"); + /** + * Escape function from lodash + * Refer: https://github.com/lodash/lodash/blob/main/src/escape.ts + */ + function escape(string) { + return string && reHasUnescapedHtml.test(string) + ? string.replace(reUnescapedHtml, (chr) => htmlEscapes[chr]) + : string || ""; } - // Adding a debounce to prevent disruptions from crazy clicking! 
- debounce(() => get_filters(), 300); -}); + /** + * RegX escape function from MDN + * Refer: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#escaping + */ + function escapeRegExp(string) { + return string.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string + } -/** - * A debounce function, takes a function and an optional timeout in milliseconds - * - * @function callback - * @param {number} timeout - */ -function debounce(callback, timeout = 300) { - clearTimeout(timer); - timer = setTimeout(callback, timeout); -} + /** + * Make the result component given a minisearch result data object and the value + * of the search input as queryString. To view the result object structure, refer: + * https://lucaong.github.io/minisearch/modules/_minisearch_.html#searchresult + * + * @param {object} result + * @param {string} querystring + * @returns string + */ + function make_search_result(result, querystring) { + let search_divider = `
`; + let display_link = + result.location.slice(Math.max(0), Math.min(50, result.location.length)) + + (result.location.length > 30 ? "..." : ""); // To cut-off the link because it messes with the overflow of the whole div -/** - * Make/Update the search component - * - * @param {string[]} selected_filters - */ -function update_search(selected_filters = []) { - let initial_search_body = ` -
Type something to get started!
- `; + if (result.page !== "") { + display_link += ` (${result.page})`; + } + searchstring = escapeRegExp(querystring); + let textindex = new RegExp(`${searchstring}`, "i").exec(result.text); + let text = + textindex !== null + ? result.text.slice( + Math.max(textindex.index - 100, 0), + Math.min( + textindex.index + querystring.length + 100, + result.text.length, + ), + ) + : ""; // cut-off text before and after from the match - let querystring = $(".documenter-search-input").val(); + text = text.length ? escape(text) : ""; - if (querystring.trim()) { - results = index.search(querystring, { + let display_result = text.length + ? "..." + + text.replace( + new RegExp(`${escape(searchstring)}`, "i"), // For first occurrence + '$&', + ) + + "..." + : ""; // highlights the match + + let in_code = false; + if (!["page", "section"].includes(result.category.toLowerCase())) { + in_code = true; + } + + // We encode the full url to escape some special characters which can lead to broken links + let result_div = ` + +
+
${escape(result.title)}
+
${result.category}
+
+

+ ${display_result} +

+
+ ${display_link} +
+
+ ${search_divider} + `; + + return result_div; + } + + self.onmessage = function (e) { + let query = e.data; + let results = index.search(query, { filter: (result) => { - // Filtering results - if (selected_filters.length === 0) { - return result.score >= 1; - } else { - return ( - result.score >= 1 && selected_filters.includes(result.category) - ); - } + // Only return relevant results + return result.score >= 1; }, + combineWith: "AND", }); - let search_result_container = ``; - let search_divider = `
`; + // Pre-filter to deduplicate and limit to 200 per category to the extent + // possible without knowing what the filters are. + let filtered_results = []; + let counts = {}; + for (let filter of filters) { + counts[filter] = 0; + } + let present = {}; - if (results.length) { - let links = []; - let count = 0; - let search_results = ""; - - results.forEach(function (result) { - if (result.location) { - // Checking for duplication of results for the same page - if (!links.includes(result.location)) { - search_results += make_search_result(result, querystring); - count++; - } - - links.push(result.location); + for (let result of results) { + cat = result.category; + cnt = counts[cat]; + if (cnt < 200) { + id = cat + "---" + result.location; + if (present[id]) { + continue; } - }); - - let result_count = `
${count} result(s)
`; - - search_result_container = ` -
- ${modal_filters} - ${search_divider} - ${result_count} -
- ${search_results} -
-
- `; - } else { - search_result_container = ` -
- ${modal_filters} - ${search_divider} -
0 result(s)
-
-
No result found!
- `; + present[id] = true; + filtered_results.push({ + location: result.location, + category: cat, + div: make_search_result(result, query), + }); + } } - if ($(".search-modal-card-body").hasClass("is-justify-content-center")) { - $(".search-modal-card-body").removeClass("is-justify-content-center"); - } - - $(".search-modal-card-body").html(search_result_container); - } else { - filter_results = []; - modal_filters = make_modal_body_filters(filters, filter_results); - - if (!$(".search-modal-card-body").hasClass("is-justify-content-center")) { - $(".search-modal-card-body").addClass("is-justify-content-center"); - } - - $(".search-modal-card-body").html(initial_search_body); - } + postMessage(filtered_results); + }; } -/** - * Make the modal filter html - * - * @param {string[]} filters - * @param {string[]} selected_filters - * @returns string - */ -function make_modal_body_filters(filters, selected_filters = []) { - let str = ``; +/////// SEARCH MAIN /////// - filters.forEach((val) => { - if (selected_filters.includes(val)) { - str += `${val}`; +function runSearchMainCode() { + // `worker = Threads.@spawn worker_function(documenterSearchIndex)`, but in JavaScript! + const filters = [ + ...new Set(documenterSearchIndex["docs"].map((x) => x.category)), + ]; + const worker_str = + "(" + + worker_function.toString() + + ")(" + + JSON.stringify(documenterSearchIndex["docs"]) + + "," + + JSON.stringify(documenterBaseURL) + + "," + + JSON.stringify(filters) + + ")"; + const worker_blob = new Blob([worker_str], { type: "text/javascript" }); + const worker = new Worker(URL.createObjectURL(worker_blob)); + + // Whether the worker is currently handling a search. This is a boolean + // as the worker only ever handles 1 or 0 searches at a time. + var worker_is_running = false; + + // The last search text that was sent to the worker. This is used to determine + // if the worker should be launched again when it reports back results. 
+ var last_search_text = ""; + + // The results of the last search. This, in combination with the state of the filters + // in the DOM, is used compute the results to display on calls to update_search. + var unfiltered_results = []; + + // Which filter is currently selected + var selected_filter = ""; + + document.addEventListener("reset-filter", function () { + selected_filter = ""; + update_search(); + }); + + //update the url with search query + function updateSearchURL(query) { + const url = new URL(window.location); + + if (query && query.trim() !== "") { + url.searchParams.set("q", query); } else { - str += `${val}`; + // remove the 'q' param if it exists + if (url.searchParams.has("q")) { + url.searchParams.delete("q"); + } + } + + // Add or remove the filter parameter based on selected_filter + if (selected_filter && selected_filter.trim() !== "") { + url.searchParams.set("filter", selected_filter); + } else { + // remove the 'filter' param if it exists + if (url.searchParams.has("filter")) { + url.searchParams.delete("filter"); + } + } + + // Only update history if there are parameters, otherwise use the base URL + if (url.search) { + window.history.replaceState({}, "", url); + } else { + window.history.replaceState({}, "", url.pathname + url.hash); + } + } + + $(document).on("input", ".documenter-search-input", function (event) { + if (!worker_is_running) { + launch_search(); } }); - let filter_html = ` -
- Filters: - ${str} -
- `; - - return filter_html; -} - -/** - * Make the result component given a minisearch result data object and the value of the search input as queryString. - * To view the result object structure, refer: https://lucaong.github.io/minisearch/modules/_minisearch_.html#searchresult - * - * @param {object} result - * @param {string} querystring - * @returns string - */ -function make_search_result(result, querystring) { - let search_divider = `
`; - let display_link = - result.location.slice(Math.max(0), Math.min(50, result.location.length)) + - (result.location.length > 30 ? "..." : ""); // To cut-off the link because it messes with the overflow of the whole div - - if (result.page !== "") { - display_link += ` (${result.page})`; + function launch_search() { + worker_is_running = true; + last_search_text = $(".documenter-search-input").val(); + updateSearchURL(last_search_text); + worker.postMessage(last_search_text); } - let textindex = new RegExp(`\\b${querystring}\\b`, "i").exec(result.text); - let text = - textindex !== null - ? result.text.slice( - Math.max(textindex.index - 100, 0), - Math.min( - textindex.index + querystring.length + 100, - result.text.length - ) - ) - : ""; // cut-off text before and after from the match + worker.onmessage = function (e) { + if (last_search_text !== $(".documenter-search-input").val()) { + launch_search(); + } else { + worker_is_running = false; + } - let display_result = text.length - ? "..." + - text.replace( - new RegExp(`\\b${querystring}\\b`, "i"), // For first occurrence - '$&' - ) + - "..." 
- : ""; // highlights the match + unfiltered_results = e.data; + update_search(); + }; - let in_code = false; - if (!["page", "section"].includes(result.category.toLowerCase())) { - in_code = true; + $(document).on("click", ".search-filter", function () { + let search_input = $(".documenter-search-input"); + let cursor_position = search_input[0].selectionStart; + + if ($(this).hasClass("search-filter-selected")) { + selected_filter = ""; + } else { + selected_filter = $(this).text().toLowerCase(); + } + + // This updates search results and toggles classes for UI: + update_search(); + + search_input.focus(); + search_input.setSelectionRange(cursor_position, cursor_position); + }); + + /** + * Make/Update the search component + */ + function update_search() { + let querystring = $(".documenter-search-input").val(); + updateSearchURL(querystring); + + if (querystring.trim()) { + if (selected_filter == "") { + results = unfiltered_results; + } else { + results = unfiltered_results.filter((result) => { + return selected_filter == result.category.toLowerCase(); + }); + } + + let search_result_container = ``; + let modal_filters = make_modal_body_filters(); + let search_divider = `
`; + + if (results.length) { + let links = []; + let count = 0; + let search_results = ""; + + for (var i = 0, n = results.length; i < n && count < 200; ++i) { + let result = results[i]; + if (result.location && !links.includes(result.location)) { + search_results += result.div; + count++; + links.push(result.location); + } + } + + if (count == 1) { + count_str = "1 result"; + } else if (count == 200) { + count_str = "200+ results"; + } else { + count_str = count + " results"; + } + let result_count = `
${count_str}
`; + + search_result_container = ` +
+ ${modal_filters} + ${search_divider} + ${result_count} +
+ ${search_results} +
+
+ `; + } else { + search_result_container = ` +
+ ${modal_filters} + ${search_divider} +
0 result(s)
+
+
No result found!
+ `; + } + + if ($(".search-modal-card-body").hasClass("is-justify-content-center")) { + $(".search-modal-card-body").removeClass("is-justify-content-center"); + } + + $(".search-modal-card-body").html(search_result_container); + } else { + if (!$(".search-modal-card-body").hasClass("is-justify-content-center")) { + $(".search-modal-card-body").addClass("is-justify-content-center"); + } + + $(".search-modal-card-body").html(` +
Type something to get started!
+ `); + } } - // We encode the full url to escape some special characters which can lead to broken links - let result_div = ` - -
-
${result.title}
-
${result.category}
-
-

- ${display_result} -

-
- ${display_link} -
-
- ${search_divider} - `; + //url param checking + function checkURLForSearch() { + const urlParams = new URLSearchParams(window.location.search); + const searchQuery = urlParams.get("q"); + const filterParam = urlParams.get("filter"); - return result_div; + // Set the selected filter if present in URL + if (filterParam) { + selected_filter = filterParam.toLowerCase(); + } + + // Trigger input event if there's a search query to perform the search + if (searchQuery) { + $(".documenter-search-input").val(searchQuery).trigger("input"); + } + } + setTimeout(checkURLForSearch, 100); + + /** + * Make the modal filter html + * + * @returns string + */ + function make_modal_body_filters() { + let str = filters + .map((val) => { + if (selected_filter == val.toLowerCase()) { + return `${val}`; + } else { + return `${val}`; + } + }) + .join(""); + + return ` +
+ Filters: + ${str} +
`; + } } -/** - * Get selected filters, remake the filter html and lastly update the search modal - */ -function get_filters() { - let ele = $(".search-filters .search-filter-selected").get(); - filter_results = ele.map((x) => $(x).text().toLowerCase()); - modal_filters = make_modal_body_filters(filters, filter_results); - update_search(filter_results); +function waitUntilSearchIndexAvailable() { + // It is possible that the documenter.js script runs before the page + // has finished loading and documenterSearchIndex gets defined. + // So we need to wait until the search index actually loads before setting + // up all the search-related stuff. + if ( + typeof documenterSearchIndex !== "undefined" && + typeof $ !== "undefined" + ) { + runSearchMainCode(); + } else { + console.warn("Search Index or jQuery not available, waiting"); + setTimeout(waitUntilSearchIndexAvailable, 100); + } } +// The actual entry point to the search code +waitUntilSearchIndexAvailable(); + }) //////////////////////////////////////////////////////////////////////////////// require(['jquery'], function($) { @@ -635,104 +903,176 @@ $(document).ready(function () { //////////////////////////////////////////////////////////////////////////////// require(['jquery'], function($) { -let search_modal_header = ` - -`; - -let initial_search_body = ` -
Type something to get started!
-`; - -let search_modal_footer = ` - -`; - -$(document.body).append( - ` -
diff --git a/0.4/tutorials/decomposition/index.html b/0.4/tutorials/decomposition/index.html index 75be047..7ed7b8e 100644 --- a/0.4/tutorials/decomposition/index.html +++ b/0.4/tutorials/decomposition/index.html @@ -1,5 +1,5 @@ -Decomposition methods · UnitCommitment.jl

Decomposition methods

1. Time decomposition

Solving unit commitment instances that have long time horizons (for example, year-long 8760-hour instances) requires a substantial amount of computational power. To address this issue, UC.jl offers a time decomposition method, which breaks the instance down into multiple overlapping subproblems, solves them sequentially, then reassembles the solution.

When solving a unit commitment instance with a dense time slot structure, computational complexity can become a significant challenge. For instance, if the instance contains hourly data for an entire year (8760 hours), solving such a model can require a substantial amount of computational power. To address this issue, UC.jl provides a time_decomposition method within the optimize! function. This method decomposes the problem into multiple sub-problems, solving them sequentially.

The optimize! function takes 5 parameters: a unit commitment instance, a TimeDecomposition method, an optimizer, and two optional functions after_build and after_optimize. It returns a solution dictionary. The TimeDecomposition method itself requires four arguments: time_window, time_increment, inner_method (optional), and formulation (optional). These arguments define the time window for each sub-problem, the time increment to move to the next sub-problem, the method used to solve each sub-problem, and the formulation employed, respectively. The two functions, namely after_build and after_optimize, are invoked subsequent to the construction and optimization of each sub-model, respectively. It is imperative that the after_build function requires its two arguments to be consistently mapped to model and instance, while the after_optimize function necessitates its three arguments to be consistently mapped to solution, model, and instance.

The code snippet below illustrates an example of solving an instance by decomposing the model into multiple 36-hour sub-problems using the XavQiuWanThi2019 method. Each sub-problem advances 24 hours at a time. The first sub-problem covers time steps 1 to 36, the second covers time steps 25 to 60, the third covers time steps 49 to 84, and so on. The initial power levels and statuses of the second and subsequent sub-problems are set based on the results of the first 24 hours from each of their immediate prior sub-problems. In essence, this approach addresses the complexity of solving a large problem by tackling it in 24-hour intervals, while incorporating an additional 12-hour buffer to mitigate the closing window effect for each sub-problem. Furthermore, the after_build function imposes the restriction that g3 and g4 cannot be activated simultaneously during the initial time slot of each sub-problem. On the other hand, the after_optimize function is invoked to calculate the conventional Locational Marginal Prices (LMPs) for each sub-problem, and subsequently appends the computed values to the lmps vector.

Warning Specifying TimeDecomposition as the value of the inner_method field of another TimeDecomposition causes errors when calling the optimize! function due to the different argument structures between the two optimize! functions.

using UnitCommitment, JuMP, Cbc, HiGHS
+Decomposition methods · UnitCommitment.jl

Decomposition methods

1. Time decomposition for production cost modeling

Solving unit commitment instances that have long time horizons (for example, year-long 8760-hour instances in production cost modeling) requires a substantial amount of computational power. To address this issue, UC.jl offers a time decomposition method, which breaks the instance down into multiple overlapping subproblems, solves them sequentially, then reassembles the solution.

Concretely, UC.jl exposes this as a time_decomposition method within the optimize! function, which decomposes the problem into multiple sub-problems and solves them sequentially.

The optimize! function takes 5 parameters: a unit commitment instance, a TimeDecomposition method, an optimizer, and two optional functions after_build and after_optimize. It returns a solution dictionary. The TimeDecomposition method itself requires four arguments: time_window, time_increment, inner_method (optional), and formulation (optional). These arguments define the time window for each sub-problem, the time increment to move to the next sub-problem, the method used to solve each sub-problem, and the formulation employed, respectively. The two functions, after_build and after_optimize, are invoked after each sub-model is built and after it is optimized, respectively. The after_build function must accept exactly two arguments, model and instance, in that order, and the after_optimize function must accept three arguments, solution, model, and instance, also in that order.

The code snippet below illustrates an example of solving an instance by decomposing the model into multiple 36-hour sub-problems using the XavQiuWanThi2019 method. Each sub-problem advances 24 hours at a time. The first sub-problem covers time steps 1 to 36, the second covers time steps 25 to 60, the third covers time steps 49 to 84, and so on. The initial power levels and statuses of the second and subsequent sub-problems are set based on the results of the first 24 hours from each of their immediate prior sub-problems. In essence, this approach addresses the complexity of solving a large problem by tackling it in 24-hour intervals, while incorporating an additional 12-hour buffer to mitigate the closing window effect for each sub-problem. Furthermore, the after_build function imposes the restriction that g3 and g4 cannot be activated simultaneously during the initial time slot of each sub-problem. On the other hand, the after_optimize function is invoked to calculate the conventional Locational Marginal Prices (LMPs) for each sub-problem, and subsequently appends the computed values to the lmps vector.

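The window arithmetic described above can be sketched as follows; sub_windows is a hypothetical helper for illustration only, not part of the UC.jl API:

```julia
# Hypothetical helper enumerating the overlapping sub-problem windows described above.
# Each window starts every `time_increment` steps and spans `time_window` steps,
# clipped at the end of the horizon.
function sub_windows(horizon::Int, time_window::Int, time_increment::Int)
    return [(t, min(t + time_window - 1, horizon)) for t in 1:time_increment:horizon]
end

sub_windows(84, 36, 24)  # [(1, 36), (25, 60), (49, 84), (73, 84)]
```

With time_window = 36 and time_increment = 24, consecutive windows overlap by 12 hours, which is the buffer that mitigates the closing window effect.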
Warning Specifying TimeDecomposition as the value of the inner_method field of another TimeDecomposition causes errors when calling the optimize! function due to the different argument structures between the two optimize! functions.

using UnitCommitment, JuMP, Cbc, HiGHS
 
 import UnitCommitment:
     TimeDecomposition,
@@ -39,7 +39,7 @@ solution = UnitCommitment.optimize!(
     optimizer = Cbc.Optimizer,
     after_build = after_build,
     after_optimize = after_optimize,
-)

2. Scenario decomposition with Progressive Hedging

By default, UC.jl uses the Extensive Form (EF) when solving stochastic instances. This approach involves constructing a single JuMP model that contains data and decision variables for all scenarios. Although EF has optimality guarantees and performs well with small test cases, it can become computationally intractable for large instances or substantial number of scenarios.

Progressive Hedging (PH) is an alternative (heuristic) solution method provided by UC.jl in which the problem is decomposed into smaller scenario-based subproblems, which are then solved in parallel in separate Julia processes, potentially across multiple machines. Quadratic penalty terms are used to enforce convergence of first-stage decision variables. The method is closely related to the Alternative Direction Method of Multipliers (ADMM) and can handle larger instances, although it is not guaranteed to converge to the optimal solution. Our implementation of PH relies on Message Passing Interface (MPI) for communication. We refer to MPI.jl Documentation for more details on installing MPI.

The following example shows how to solve SCUC instances using progressive hedging. The script should be saved in a file, say ph.jl, and executed using mpiexec -n <num-scenarios> julia ph.jl.

using HiGHS
+)

2. Scenario decomposition with Progressive Hedging for stochastic UC

By default, UC.jl uses the Extensive Form (EF) when solving stochastic instances. This approach involves constructing a single JuMP model that contains data and decision variables for all scenarios. Although EF has optimality guarantees and performs well with small test cases, it can become computationally intractable for large instances or a substantial number of scenarios.

Progressive Hedging (PH) is an alternative (heuristic) solution method provided by UC.jl in which the problem is decomposed into smaller scenario-based subproblems, which are then solved in parallel in separate Julia processes, potentially across multiple machines. Quadratic penalty terms are used to enforce convergence of first-stage decision variables. The method is closely related to the Alternating Direction Method of Multipliers (ADMM) and can handle larger instances, although it is not guaranteed to converge to the optimal solution. Our implementation of PH relies on the Message Passing Interface (MPI) for communication. We refer to the MPI.jl documentation for more details on installing MPI.

The following example shows how to solve SCUC instances using progressive hedging. The script should be saved in a file, say ph.jl, and executed using mpiexec -n <num-scenarios> julia ph.jl.

using HiGHS
 using MPI
 using UnitCommitment
 using Glob
@@ -66,4 +66,4 @@ UnitCommitment.optimize!(model, ph)
 solution = UnitCommitment.solution(model, ph)
 
 # 7. Close MPI
-MPI.Finalize()

When using PH, the model can be customized as usual, with different formulations or additional user-provided constraints. Note that read, in this case, takes ph as an argument. This allows each Julia process to read only the instance files that are relevant to it. Similarly, the solution function gathers the optimal solution of each processes and returns a combined dictionary.

Each process solves a sub-problem with $\frac{s}{p}$ scenarios, where $s$ is the total number of scenarios and $p$ is the number of MPI processes. For instance, if we have 15 scenario files and 5 processes, then each process will solve a JuMP model that contains data for 3 scenarios. If the total number of scenarios is not divisible by the number of processes, then an error will be thrown.

Warning

Currently, PH can handle only equiprobable scenarios. Further, solution(model, ph) can only handle cases where only one scenario is modeled in each process.

+MPI.Finalize()

When using PH, the model can be customized as usual, with different formulations or additional user-provided constraints. Note that read, in this case, takes ph as an argument. This allows each Julia process to read only the instance files that are relevant to it. Similarly, the solution function gathers the optimal solution from each process and returns a combined dictionary.

Each process solves a sub-problem with $\frac{s}{p}$ scenarios, where $s$ is the total number of scenarios and $p$ is the number of MPI processes. For instance, if we have 15 scenario files and 5 processes, then each process will solve a JuMP model that contains data for 3 scenarios. If the total number of scenarios is not divisible by the number of processes, then an error will be thrown.

Warning

Currently, PH can handle only equiprobable scenarios. Further, solution(model, ph) can only handle cases where only one scenario is modeled in each process.

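The scenarios-per-process rule above can be made concrete with a small sketch; scenarios_per_process is a hypothetical helper for illustration, not part of the UC.jl API:

```julia
# Hypothetical helper mirroring the PH scenario-distribution rule described above.
function scenarios_per_process(s::Int, p::Int)
    # PH requires the scenario count to divide evenly across the MPI processes.
    s % p == 0 || error("$s scenarios cannot be split evenly across $p processes")
    return s ÷ p
end

scenarios_per_process(15, 5)  # each of the 5 processes solves a 3-scenario sub-problem
```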
diff --git a/0.4/tutorials/lmp/index.html b/0.4/tutorials/lmp/index.html index 3224700..94bfe73 100644 --- a/0.4/tutorials/lmp/index.html +++ b/0.4/tutorials/lmp/index.html @@ -1,5 +1,5 @@ -Locational Marginal Prices · UnitCommitment.jl

Locational Marginal Prices

Locational Marginal Prices (LMPs) refer to the cost of supplying electricity at specific locations of the network. LMPs are crucial for the operation of electricity markets and have many other applications, such as indicating what areas of the network may require additional generation or transmission capacity. UnitCommitment.jl implements two methods for calculating LMPS: Conventional LMPs and Approximated Extended LMPs (AELMPs). In this tutorial, we introduce each method and illustrate their usage.

Conventional LMPs

Conventional LMPs work by (1) solving the original SCUC problem, (2) fixing all binary variables to their optimal values, and (3) re-solving the resulting linear programming model. In this approach, the LMPs are defined as the values of the dual variables associated with the net injection constraints.

The first step to use this method is to load and optimize an instance, as explained in previous tutorials:

using UnitCommitment
+Locational Marginal Prices · UnitCommitment.jl

Locational Marginal Prices

Locational Marginal Prices (LMPs) refer to the cost of supplying electricity at specific locations of the network. LMPs are crucial for the operation of electricity markets and have many other applications, such as indicating what areas of the network may require additional generation or transmission capacity. UnitCommitment.jl implements two methods for calculating LMPs: Conventional LMPs and Approximated Extended LMPs (AELMPs). In this tutorial, we introduce each method and illustrate their usage.

Conventional LMPs

Conventional LMPs work by (1) solving the original SCUC problem, (2) fixing all binary variables to their optimal values, and (3) re-solving the resulting linear programming model. In this approach, the LMPs are defined as the values of the dual variables associated with the net injection constraints.

The first step to use this method is to load and optimize an instance, as explained in previous tutorials:

using UnitCommitment
 using HiGHS
 
 instance = UnitCommitment.read_benchmark("matpower/case14/2017-01-01")
@@ -15,38 +15,52 @@ UnitCommitment.optimize!(model)
[ Info: Built model in 0.01 seconds
 [ Info: Setting MILP time limit to 86400.00 seconds
 [ Info: Solving MILP...
-Running HiGHS 1.6.0: Copyright (c) 2023 HiGHS under MIT licence terms
+Running HiGHS 1.12.0 (git hash: 755a8e027a): Copyright (c) 2025 HiGHS under MIT licence terms
+MIP has 4744 rows; 4104 cols; 15633 nonzeros; 1080 integer variables (1080 binary)
+Coefficient ranges:
+  Matrix  [1e+00, 3e+02]
+  Cost    [3e+01, 3e+04]
+  Bound   [1e+00, 1e+02]
+  RHS     [1e+00, 4e+02]
 Presolving model
-4382 rows, 2704 cols, 14776 nonzeros
-3279 rows, 2195 cols, 11913 nonzeros
-3160 rows, 2085 cols, 12983 nonzeros
+4382 rows, 2704 cols, 14776 nonzeros  0s
+3177 rows, 1985 cols, 11301 nonzeros  0s
+3148 rows, 1965 cols, 11570 nonzeros  0s
+Presolve reductions: rows 3148(-1596); columns 1965(-2139); nonzeros 11570(-4063)
 
 Solving MIP model with:
-   3160 rows
-   2085 cols (865 binary, 0 integer, 0 implied int., 1220 continuous)
-   12983 nonzeros
+   3148 rows
+   1965 cols (862 binary, 0 integer, 0 implied int., 1103 continuous, 0 domain fixed)
+   11570 nonzeros
+
+Src: B => Branching; C => Central rounding; F => Feasibility pump; H => Heuristic;
+     I => Shifting; J => Feasibility jump; L => Sub-MIP; P => Empty MIP; R => Randomized rounding;
+     S => Solve LP; T => Evaluate node; U => Unbounded; X => User solution; Y => HiGHS solution;
+     Z => ZI Round; l => Trivial lower; p => Trivial point; u => Trivial upper; z => Trivial zero
 
         Nodes      |    B&B Tree     |            Objective Bounds              |  Dynamic Constraints |       Work
-     Proc. InQueue |  Leaves   Expl. | BestBound       BestSol              Gap |   Cuts   InLp Confl. | LpIters     Time
+Src  Proc. InQueue |  Leaves   Expl. | BestBound       BestSol              Gap |   Cuts   InLp Confl. | LpIters     Time
 
-         0       0         0   0.00%   1.86264515e-09  inf                  inf        0      0      0         0     0.0s
- R       0       0         0   0.00%   360642.328869   360642.544974      0.00%        0      0      0       861     0.1s
+         0       0         0   0.00%   1.86264515e-09  inf                  inf        0      0      0         0     0.1s
+ R       0       0         0   0.00%   360642.328869   360642.544974      0.00%        0      0      0       958     0.1s
+         1       0         1 100.00%   360642.328869   360642.544974      0.00%        0      0      0       958     0.1s
 
 Solving report
   Status            Optimal
   Primal bound      360642.544974
   Dual bound        360642.328869
   Gap               6e-05% (tolerance: 0.01%)
+  P-D integral      5.20032049852e-10
   Solution status   feasible
                     360642.544974 (objective)
                     0 (bound viol.)
                     0 (int. viol.)
                     0 (row viol.)
-  Timing            0.05 (total)
-                    0.03 (presolve)
-                    0.00 (postsolve)
+  Timing            0.11
+  Max sub-MIP depth 0
   Nodes             1
-  LP iterations     861 (total)
+  Repair LPs        0
+  LP iterations     958
                     0 (strong br.)
                     0 (separation)
                     0 (heuristics)
@@ -76,7 +90,7 @@ Solving report
   ("s1", "b3", 2)  => 44.4253
   ("s1", "b4", 2)  => 44.4253
   ("s1", "b5", 2)  => 44.4253
-  ⋮                => ⋮

For example, the following code queries the LMP of bus b1 in scenario s1 at time 1:

@show lmp["s1", "b1", 1]
44.425275404366815

Approximate Extended LMPs

Approximate Extended LMPs (AELMPs) are an alternative method to calculate locational marginal prices that attempts to minimize uplift payments. The method works internally by modifying the instance data in three ways: (1) it sets the minimum power output of each generator to zero, (2) it averages the start-up cost over the offer blocks for each generator, and (3) it relaxes all integrality constraints. To compute AELMPs, as shown in the example below, we call compute_lmp and provide UnitCommitment.AELMP() as the second argument.

This method has two configurable parameters: allow_offline_participation and consider_startup_costs. If allow_offline_participation = true, offline generators are allowed to participate in the pricing; if allow_offline_participation = false, offline generators are excluded from the system. A solved UC model is optional if offline participation is allowed, but required if it is not. The method forces offline participation to be allowed if the UC model supplied by the user is not solved. If consider_startup_costs = true, start-up costs are integrated and averaged over each unit's production; otherwise, the production costs stay the same. By default, both fields are set to true.
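As a minimal sketch of configuring these parameters, assuming that the AELMP fields can be set via keyword arguments and that `model` is a UC model that has already been built and solved:

```julia
using HiGHS
using UnitCommitment

# Assumption: AELMP fields can be set via keyword arguments.
method = UnitCommitment.AELMP(
    allow_offline_participation = false,  # exclude offline generators
    consider_startup_costs = true,        # average start-up costs (default)
)

# A solved UC model is required when offline participation is disallowed.
lmp = UnitCommitment.compute_lmp(model, method, optimizer = HiGHS.Optimizer)
```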

Warning

This method is still under active research and has several limitations. The implementation provided in the package is based on MISO Phase I only, and it supports only fast-start resources. More specifically, the minimum up/down time of all generators must be 1, the initial power of all generators must be 0, and the initial status of all generators must be negative. The method does not support time-varying start-up costs and currently works only for deterministic instances. If offline participation is not allowed, AELMP treats an asset as offline if it is never on throughout all time periods.

instance = UnitCommitment.read_benchmark("test/aelmp_simple")
+  ⋮                => ⋮

 
 model =
     UnitCommitment.build_model(instance = instance, optimizer = HiGHS.Optimizer)
@@ -92,4 +106,4 @@ lmp = UnitCommitment.compute_lmp(
     optimizer = HiGHS.Optimizer,
 )
 
-@show lmp["s1", "B1", 1]
274.3333333333333
+@show lmp["s1", "B1", 1]
274.3333333333333
diff --git a/0.4/tutorials/market/index.html b/0.4/tutorials/market/index.html index e44fa77..8f90db5 100644 --- a/0.4/tutorials/market/index.html +++ b/0.4/tutorials/market/index.html @@ -1,5 +1,5 @@ -Market Clearing · UnitCommitment.jl

Market Clearing

In North America, electricity markets are structured around two primary types of markets: the day-ahead (DA) market and the real-time (RT) market. The DA market schedules electricity generation and consumption for the next day, based on forecasts and bids from electricity suppliers and consumers. The RT market, on the other hand, operates continuously throughout the day, addressing the discrepancies between the DA schedule and actual demand, typically every five minutes. UnitCommitment.jl is able to simulate the DA and RT market clearing process. Specifically, the package provides the function UnitCommitment.solve_market which performs the following steps:

  1. Solve the DA market problem.
  2. Extract commitment status of all generators.
  3. Solve a sequence of RT market problems, fixing the commitment status of each generator to the corresponding optimal solution of the DA problem.

To use this function, we need to prepare an instance file corresponding to the DA market problem and multiple instance files corresponding to the RT market problems. The number of required files depends on the time granularity and window. For example, suppose that the DA problem is solved at hourly granularity and has 24 time periods, whereas the RT problems are solved at 5-minute granularity and have a single time period. Then we would need to prepare one file for the DA problem and 288 files $\left(24 \times \frac{60}{5}\right)$ for the RT market problems.
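Assuming the files above have been prepared, the simulation would be invoked roughly as follows (the file names below are hypothetical placeholders, not files shipped with the package):

```julia
using HiGHS
using UnitCommitment

# Hypothetical file names: one DA instance, 288 single-period RT instances
da_file = "da.json"
rt_files = ["rt_$(i).json" for i in 1:288]

solution = UnitCommitment.solve_market(
    da_file,
    rt_files,
    optimizer = HiGHS.Optimizer,
)
```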

A small example

For simplicity, in this tutorial we illustrate the usage of UnitCommitment.solve_market with a very small example, in which the DA problem has only two time periods. We start by creating the DA instance file:

da_contents = """
 {
     "Parameters": {
         "Version": "0.4",
@@ -171,4 +171,4 @@ UnitCommitment.solve_market(
     optimizer = HiGHS.Optimizer,
 )
OrderedCollections.OrderedDict{Any, Any} with 2 entries:
   "DA" => OrderedDict{Any, Any}("Thermal production (MW)"=>OrderedDict("g1"=>[2…
-  "RT" => Any[OrderedDict{Any, Any}("Thermal production (MW)"=>OrderedDict("g1"…
+ "RT" => Any[OrderedDict{Any, Any}("Thermal production (MW)"=>OrderedDict("g1"…

Additional considerations

  • UC.jl supports two-stage stochastic DA market problems. In this case, we need one file for each DA market scenario. All RT market problems must be deterministic.
  • UC.jl also supports multi-period RT market problems. Assume, for example, that the DA market problem is an hourly problem with 24 time periods, whereas the RT market problem uses 5-minute granularity with 4 time periods. UC.jl assumes that the first RT file covers period 0:00 to 0:20, the second covers 0:05 to 0:25 and so on. We therefore still need 288 RT market files. To avoid going beyond the 24-hour period covered by the DA market solution, however, the last few RT market problems must have only 3, 2, and 1 time periods, covering 23:45 to 24:00, 23:50 to 24:00 and 23:55 to 24:00, respectively.
  • Some MILP solvers (such as Cbc) have issues handling linear programming problems, which are required for the RT market. In this case, a separate linear programming solver can be provided to solve_market using the lp_optimizer argument. For example, solve_market(da_file, rt_files, optimizer=Cbc.Optimizer, lp_optimizer=Clp.Optimizer).
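The window-length bookkeeping described in the multi-period RT bullet above can be sketched in plain Julia (the variable names are ours, not part of the package):

```julia
# 24 hours of 5-minute periods, with RT windows of 4 periods each
horizon    = 24 * div(60, 5)   # 288 five-minute periods
rt_periods = 4

# One RT file per starting period; the last windows are truncated
# so they do not extend past the 24-hour horizon.
window_lengths = [min(rt_periods, horizon - t + 1) for t in 1:horizon]

length(window_lengths)     # 288 RT files in total
window_lengths[end-2:end]  # [3, 2, 1]
```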
diff --git a/0.4/tutorials/solution.json b/0.4/tutorials/solution.json index b87d703..576711a 100644 --- a/0.4/tutorials/solution.json +++ b/0.4/tutorials/solution.json @@ -3,40 +3,40 @@ "g1": [ 181.92024301829747, 172.8503730182975, - 166.8067830182975, - 163.2384530182975, + 166.80678301829755, + 163.23845301829746, 165.5149530182975, 169.95052301829747, 182.3719130182975, - 191.43277301829744, - 196.67234301829745, - 202.53524301829742, + 191.4327730182975, + 196.6723430182975, + 202.53524301829748, 200.33101301829748, 201.4963530182975, - 193.55569301829746, + 193.55569301829752, 191.4869630182975, 190.44809301829747, 194.39583301829748, 215.15536301829752, 239.7813630182975, 232.7169930182975, - 237.45065301829743, + 237.4506530182975, 236.98991301829744, 223.7916330182975, - 209.51833301829748, + 209.51833301829754, 192.3993830182975, 180.73681301829748, 172.16378301829752, 162.23570301829744, 165.0181030182975, - 167.24040301829746, + 167.2404030182975, 180.89039301829752, - 196.95238301829744, + 196.9523830182975, 206.5552530182975, - 214.18877301829747, - 225.48092301829746, - 226.81792301829745, - 222.7256530182974 + 214.18877301829752, + 225.48092301829752, + 226.8179230182975, + 222.72565301829746 ], "g2": [ 0.0, @@ -193,42 +193,42 @@ }, "Thermal production cost ($)": { "g1": [ - 7241.945066727594, + 7241.945066727592, 6839.01359409579, - 6570.525443914714, - 6412.001400931049, - 6513.13554038909, + 6570.525443914716, + 6412.001400931047, + 6513.135540389088, 6710.186959214436, 7262.010630869485, 7669.483607423414, 7905.5011921482255, - 8169.596814018547, + 8169.596814018549, 8070.306788161854, 8122.799785141226, - 7765.111006420659, + 7765.1110064206605, 7671.924607909377, 7625.1284799364585, 7802.955297771037, 8738.072899143339, 9847.356493299576, 9529.140390641836, - 9742.36914798422, + 9742.369147984222, 9721.615013201, 9127.095583529908, - 8484.151641173446, + 8484.151641173448, 7713.024767780247, - 7189.370863055805, + 7189.370863055803, 
6808.5116442559065, 6367.453956019317, 6491.062842304429, - 6589.789131835552, + 6589.789131835554, 7196.193696852408, 7918.115655630216, 8350.679049921417, - 8694.53263969091, + 8694.532639690911, 9203.19002366522, - 9263.415483154657, - 9079.078279635702 + 9263.415483154658, + 9079.078279635703 ], "g2": [ 0.0, @@ -1725,6 +1725,17 @@ 0.0 ], "b2": [ + 0.0, + -0.0, + -0.0, + 0.0, + 0.0, + -0.0, + -0.0, + 0.0, + -0.0, + 0.0, + 0.0, -0.0, -0.0, -0.0, @@ -1733,34 +1744,23 @@ -0.0, -0.0, -0.0, + 0.0, -0.0, -0.0, -0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + -0.0, + 0.0, -0.0, -0.0, -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0 + 0.0 ], "b3": [ 0.0, @@ -1802,28 +1802,6 @@ ], "b4": [ 0.0, - -0.0, - -0.0, - 0.0, - 0.0, - -0.0, - -0.0, - 0.0, - -0.0, - 0.0, - 0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - 0.0, - -0.0, - -0.0, - -0.0, 0.0, 0.0, 0.0, @@ -1831,11 +1809,33 @@ 0.0, 0.0, 0.0, - -0.0, 0.0, - -0.0, - -0.0, - -0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, 0.0 ], "b5": [ @@ -1877,42 +1877,42 @@ 0.0 ], "b6": [ - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0, - -0.0 + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0, + 0.0 ], "b7": [ 0.0, @@ -2985,39 +2985,39 @@ "r1": { "g1": [ 4.61, - 4.43, 0.0, + 4.31, 4.24, + 4.29, 0.0, 0.0, 0.0, 0.0, 0.0, + 
4.98, 0.0, - 0.0, - 0.0, - 0.0, + 4.85, 4.81, 4.79, 4.86, - 0.0, + 5.28, 5.77, 5.63, 5.73, + 5.72, 0.0, 0.0, - 5.17, + 4.82, + 0.0, 0.0, - 4.59, - 4.42, 0.0, 4.28, - 4.32, - 4.59, - 4.92, 0.0, 0.0, - 5.49, + 0.0, + 5.11, + 0.0, + 0.0, 5.51, 0.0 ], @@ -3060,40 +3060,40 @@ 0.0 ], "g3": [ + 0.0, + 4.43, 0.0, 0.0, - 4.31, 0.0, - 4.29, 4.38, 4.62, 4.81, 4.91, 5.03, - 4.98, + 0.0, 5.01, - 4.85, 0.0, 0.0, 0.0, - 5.28, 0.0, 0.0, 0.0, - 5.72, + 0.0, + 0.0, + 0.0, 5.45, + 5.17, 0.0, - 4.82, - 0.0, - 0.0, + 4.59, + 4.42, 4.22, 0.0, + 4.32, + 4.59, + 4.92, 0.0, - 0.0, - 0.0, - 5.11, 5.26, - 0.0, + 5.49, 0.0, 5.43 ], diff --git a/0.4/tutorials/usage/index.html b/0.4/tutorials/usage/index.html index 88044ac..f089391 100644 --- a/0.4/tutorials/usage/index.html +++ b/0.4/tutorials/usage/index.html @@ -1,5 +1,5 @@ -Getting started · UnitCommitment.jl

Getting started

Installing the package

UnitCommitment.jl was tested and developed with Julia 1.10. To install Julia, please follow the installation guide on the official Julia website. To install UnitCommitment.jl, run the Julia interpreter, type ] to open the package manager, then type:

pkg> add UnitCommitment@0.4

To solve the optimization models, a mixed-integer linear programming (MILP) solver is also required. Please see the JuMP installation guide for more instructions on installing a solver. Typical open-source choices are HiGHS, Cbc and GLPK. In the instructions below, HiGHS will be used, but any other MILP solver should also be compatible.

Solving a benchmark instance

We start this tutorial by illustrating how to use UnitCommitment.jl to solve one of the provided benchmark instances. The package contains a large number of deterministic benchmark instances collected from the literature and converted into a common data format, which can be used to evaluate the performance of different solution methods. See Instances for more details. The first step is to import UnitCommitment and HiGHS.

using HiGHS
 using UnitCommitment

Next, we use the function UnitCommitment.read_benchmark to read the instance.

instance = UnitCommitment.read_benchmark("matpower/case14/2017-01-01");

Now that we have the instance loaded in memory, we build the JuMP optimization model using UnitCommitment.build_model:

model =
     UnitCommitment.build_model(instance = instance, optimizer = HiGHS.Optimizer);
[ Info: Building model...
 [ Info: Building scenario s1 with probability 1.0
@@ -10,38 +10,52 @@ using UnitCommitment

[ Info: Applying PTDF and LODF cutoffs (0.00500, 0.00100)
[ Info: Built model in 0.01 seconds

Next, we run the optimization process, with UnitCommitment.optimize!:

UnitCommitment.optimize!(model)
[ Info: Setting MILP time limit to 86400.00 seconds
 [ Info: Solving MILP...
-Running HiGHS 1.6.0: Copyright (c) 2023 HiGHS under MIT licence terms
+Running HiGHS 1.12.0 (git hash: 755a8e027a): Copyright (c) 2025 HiGHS under MIT licence terms
+MIP has 4744 rows; 4104 cols; 15633 nonzeros; 1080 integer variables (1080 binary)
+Coefficient ranges:
+  Matrix  [1e+00, 3e+02]
+  Cost    [3e+01, 3e+04]
+  Bound   [1e+00, 1e+02]
+  RHS     [1e+00, 4e+02]
 Presolving model
-4382 rows, 2704 cols, 14776 nonzeros
-3279 rows, 2195 cols, 11913 nonzeros
-3160 rows, 2085 cols, 12983 nonzeros
+4382 rows, 2704 cols, 14776 nonzeros  0s
+3177 rows, 1985 cols, 11301 nonzeros  0s
+3148 rows, 1965 cols, 11570 nonzeros  0s
+Presolve reductions: rows 3148(-1596); columns 1965(-2139); nonzeros 11570(-4063)
 
 Solving MIP model with:
-   3160 rows
-   2085 cols (865 binary, 0 integer, 0 implied int., 1220 continuous)
-   12983 nonzeros
+   3148 rows
+   1965 cols (862 binary, 0 integer, 0 implied int., 1103 continuous, 0 domain fixed)
+   11570 nonzeros
+
+Src: B => Branching; C => Central rounding; F => Feasibility pump; H => Heuristic;
+     I => Shifting; J => Feasibility jump; L => Sub-MIP; P => Empty MIP; R => Randomized rounding;
+     S => Solve LP; T => Evaluate node; U => Unbounded; X => User solution; Y => HiGHS solution;
+     Z => ZI Round; l => Trivial lower; p => Trivial point; u => Trivial upper; z => Trivial zero
 
         Nodes      |    B&B Tree     |            Objective Bounds              |  Dynamic Constraints |       Work
-     Proc. InQueue |  Leaves   Expl. | BestBound       BestSol              Gap |   Cuts   InLp Confl. | LpIters     Time
+Src  Proc. InQueue |  Leaves   Expl. | BestBound       BestSol              Gap |   Cuts   InLp Confl. | LpIters     Time
 
-         0       0         0   0.00%   1.86264515e-09  inf                  inf        0      0      0         0     0.0s
- R       0       0         0   0.00%   360642.328869   360642.544974      0.00%        0      0      0       861     0.1s
+         0       0         0   0.00%   1.86264515e-09  inf                  inf        0      0      0         0     0.1s
+ R       0       0         0   0.00%   360642.328869   360642.544974      0.00%        0      0      0       958     0.1s
+         1       0         1 100.00%   360642.328869   360642.544974      0.00%        0      0      0       958     0.1s
 
 Solving report
   Status            Optimal
   Primal bound      360642.544974
   Dual bound        360642.328869
   Gap               6e-05% (tolerance: 0.01%)
+  P-D integral      4.76315069837e-10
   Solution status   feasible
                     360642.544974 (objective)
                     0 (bound viol.)
                     0 (int. viol.)
                     0 (row viol.)
-  Timing            0.05 (total)
-                    0.03 (presolve)
-                    0.00 (postsolve)
+  Timing            0.10
+  Max sub-MIP depth 0
   Nodes             1
-  LP iterations     861 (total)
+  Repair LPs        0
+  LP iterations     958
                     0 (strong br.)
                     0 (separation)
                     0 (heuristics)
@@ -65,24 +79,24 @@ Solving report
   "Down-flexiramp shortfall (MW)"   => OrderedDict{Any, Any}()

We can then explore the solution using Julia:

@show solution["Thermal production (MW)"]["g1"]
36-element Vector{Float64}:
  181.92024301829747
  172.8503730182975
- 166.8067830182975
- 163.2384530182975
+ 166.80678301829755
+ 163.23845301829746
  165.5149530182975
  169.95052301829747
  182.3719130182975
- 191.43277301829744
- 196.67234301829745
- 202.53524301829742
+ 191.4327730182975
+ 196.6723430182975
+ 202.53524301829748
    ⋮
  165.0181030182975
- 167.24040301829746
+ 167.2404030182975
  180.89039301829752
- 196.95238301829744
+ 196.9523830182975
  206.5552530182975
- 214.18877301829747
- 225.48092301829746
- 226.81792301829745
- 222.7256530182974

+ 214.18877301829752
+ 225.48092301829752
+ 226.8179230182975
+ 222.72565301829746

Or export the entire solution to a JSON file:

UnitCommitment.write("solution.json", solution)

Solving a custom deterministic instance

In the previous example, we solved a benchmark instance provided by the package. To solve a custom instance, the first step is to create an input file describing the list of elements (generators, loads and transmission lines) in the network. See Data Format for a complete description of the data format UC.jl expects. To keep this tutorial self-contained, we will create the input JSON file using Julia; however, this step can also be done with a simple text editor. First, we define the contents of the file:

json_contents = """
 {
     "Parameters": {
         "Version": "0.4",
@@ -122,36 +136,47 @@ UnitCommitment.optimize!(model)
[ Info: Built model in 0.00 seconds
 [ Info: Setting MILP time limit to 86400.00 seconds
 [ Info: Solving MILP...
-Running HiGHS 1.6.0: Copyright (c) 2023 HiGHS under MIT licence terms
+Running HiGHS 1.12.0 (git hash: 755a8e027a): Copyright (c) 2025 HiGHS under MIT licence terms
+MIP has 108 rows; 64 cols; 230 nonzeros; 32 integer variables (32 binary)
+Coefficient ranges:
+  Matrix  [1e+00, 3e+02]
+  Cost    [5e+00, 1e+03]
+  Bound   [1e+00, 3e+02]
+  RHS     [1e+00, 1e+06]
 Presolving model
-74 rows, 36 cols, 166 nonzeros
-63 rows, 29 cols, 164 nonzeros
-49 rows, 25 cols, 120 nonzeros
-39 rows, 19 cols, 97 nonzeros
-32 rows, 15 cols, 90 nonzeros
-24 rows, 13 cols, 64 nonzeros
-8 rows, 6 cols, 21 nonzeros
-0 rows, 0 cols, 0 nonzeros
+74 rows, 36 cols, 166 nonzeros  0s
+47 rows, 21 cols, 121 nonzeros  0s
+28 rows, 12 cols, 78 nonzeros  0s
+0 rows, 0 cols, 0 nonzeros  0s
+Presolve reductions: rows 0(-108); columns 0(-64); nonzeros 0(-230) - Reduced to empty
 Presolve: Optimal
 
+Src: B => Branching; C => Central rounding; F => Feasibility pump; H => Heuristic;
+     I => Shifting; J => Feasibility jump; L => Sub-MIP; P => Empty MIP; R => Randomized rounding;
+     S => Solve LP; T => Evaluate node; U => Unbounded; X => User solution; Y => HiGHS solution;
+     Z => ZI Round; l => Trivial lower; p => Trivial point; u => Trivial upper; z => Trivial zero
+
+        Nodes      |    B&B Tree     |            Objective Bounds              |  Dynamic Constraints |       Work
+Src  Proc. InQueue |  Leaves   Expl. | BestBound       BestSol              Gap |   Cuts   InLp Confl. | LpIters     Time
+
+         0       0         0   0.00%   3750            3750               0.00%        0      0      0         0     0.0s
+
 Solving report
   Status            Optimal
   Primal bound      3750
   Dual bound        3750
   Gap               0% (tolerance: 0.01%)
+  P-D integral      0
   Solution status   feasible
                     3750 (objective)
                     0 (bound viol.)
                     0 (int. viol.)
                     0 (row viol.)
-  Timing            0.00 (total)
-                    0.00 (presolve)
-                    0.00 (postsolve)
+  Timing            0.00
+  Max sub-MIP depth 0
   Nodes             0
-  LP iterations     0 (total)
-                    0 (strong br.)
-                    0 (separation)
-                    0 (heuristics)

+  Repair LPs        0
+  LP iterations     0

Finally, we extract and display the solution:

solution = UnitCommitment.solution(model)
OrderedCollections.OrderedDict{Any, Any} with 14 entries:
   "Thermal production (MW)"         => OrderedDict("g1"=>[100.0, 150.0, 200.0, …
   "Thermal production cost (\$)"    => OrderedDict("g1"=>[500.0, 750.0, 1000.0,…
   "Startup cost (\$)"               => OrderedDict("g1"=>[0.0, 0.0, 0.0, 0.0], …
@@ -249,54 +274,63 @@ instance = UnitCommitment.read(glob("example_s*.json"))

[ Info: Building model...
 [ Info: Building scenario s1 with probability 0.75
 [ Info: Building scenario s2 with probability 0.25
-[ Info: Built model in 0.00 seconds
+[ Info: Built model in 0.01 seconds
 [ Info: Setting MILP time limit to 86400.00 seconds
 [ Info: Solving MILP...
-Running HiGHS 1.6.0: Copyright (c) 2023 HiGHS under MIT licence terms
+Running HiGHS 1.12.0 (git hash: 755a8e027a): Copyright (c) 2025 HiGHS under MIT licence terms
+MIP has 174 rows; 96 cols; 366 nonzeros; 32 integer variables (32 binary)
+Coefficient ranges:
+  Matrix  [1e+00, 3e+02]
+  Cost    [1e+00, 8e+02]
+  Bound   [1e+00, 5e+02]
+  RHS     [1e+00, 1e+06]
 Presolving model
-115 rows, 47 cols, 251 nonzeros
-47 rows, 43 cols, 113 nonzeros
-33 rows, 33 cols, 93 nonzeros
-23 rows, 25 cols, 61 nonzeros
-7 rows, 9 cols, 21 nonzeros
-5 rows, 7 cols, 13 nonzeros
+115 rows, 47 cols, 251 nonzeros  0s
+47 rows, 43 cols, 113 nonzeros  0s
+31 rows, 29 cols, 96 nonzeros  0s
+5 rows, 9 cols, 13 nonzeros  0s
+Presolve reductions: rows 5(-169); columns 9(-87); nonzeros 13(-353)
 
 Solving MIP model with:
    5 rows
-   7 cols (4 binary, 0 integer, 0 implied int., 3 continuous)
+   9 cols (3 binary, 0 integer, 0 implied int., 6 continuous, 0 domain fixed)
    13 nonzeros
 
-        Nodes      |    B&B Tree     |            Objective Bounds              |  Dynamic Constraints |       Work
-     Proc. InQueue |  Leaves   Expl. | BestBound       BestSol              Gap |   Cuts   InLp Confl. | LpIters     Time
+Src: B => Branching; C => Central rounding; F => Feasibility pump; H => Heuristic;
+     I => Shifting; J => Feasibility jump; L => Sub-MIP; P => Empty MIP; R => Randomized rounding;
+     S => Solve LP; T => Evaluate node; U => Unbounded; X => User solution; Y => HiGHS solution;
+     Z => ZI Round; l => Trivial lower; p => Trivial point; u => Trivial upper; z => Trivial zero
 
-         0       0         0   0.00%   4562.5          inf                  inf        0      0      0         0     0.0s
- T       0       0         0   0.00%   4562.5          5312.5            14.12%        0      0      0         2     0.0s
+        Nodes      |    B&B Tree     |            Objective Bounds              |  Dynamic Constraints |       Work
+Src  Proc. InQueue |  Leaves   Expl. | BestBound       BestSol              Gap |   Cuts   InLp Confl. | LpIters     Time
+
+ J       0       0         0   0.00%   -inf            30312.5            Large        0      0      0         0     0.0s
+ T       0       0         0   0.00%   4300            5312.5            19.06%        0      0      0         0     0.0s
+         1       0         1 100.00%   5312.5          5312.5             0.00%        0      0      0         0     0.0s
 
 Solving report
   Status            Optimal
   Primal bound      5312.5
   Dual bound        5312.5
   Gap               0% (tolerance: 0.01%)
+  P-D integral      0.000122052910402
   Solution status   feasible
                     5312.5 (objective)
                     0 (bound viol.)
                     0 (int. viol.)
                     0 (row viol.)
-  Timing            0.00 (total)
-                    0.00 (presolve)
-                    0.00 (postsolve)
+  Timing            0.00
+  Max sub-MIP depth 0
   Nodes             1
-  LP iterations     2 (total)
-                    0 (strong br.)
-                    0 (separation)
-                    0 (heuristics)

+  Repair LPs        0
+  LP iterations     0

The solution to stochastic instances follows a slightly different format, as shown below:

solution = UnitCommitment.solution(model)
OrderedCollections.OrderedDict{Any, Any} with 2 entries:
   "s1" => OrderedDict{Any, Any}("Thermal production (MW)"=>OrderedDict("g1"=>[1…
   "s2" => OrderedDict{Any, Any}("Thermal production (MW)"=>OrderedDict("g1"=>[2…

The solution for each scenario can be accessed through solution[scenario_name]. For convenience, this includes both first- and second-stage optimal decisions:

solution["s1"]
OrderedCollections.OrderedDict{Any, Any} with 14 entries:
   "Thermal production (MW)"         => OrderedDict("g1"=>[100.0, 150.0, 200.0, …
   "Thermal production cost (\$)"    => OrderedDict("g1"=>[500.0, 750.0, 1000.0,…
   "Startup cost (\$)"               => OrderedDict("g1"=>[0.0, 0.0, 0.0, 0.0], …
   "Is on"                           => OrderedDict("g1"=>[1.0, 1.0, 1.0, 1.0], …
-  "Switch on"                       => OrderedDict("g1"=>[1.0, -0.0, 0.0, 0.0],…
+  "Switch on"                       => OrderedDict("g1"=>[1.0, 0.0, 0.0, 0.0], …
   "Switch off"                      => OrderedDict("g1"=>[0.0, 0.0, 0.0, 0.0], …
   "Net injection (MW)"              => OrderedDict("b1"=>[0.0, 0.0, 0.0, 0.0])
   "Load curtail (MW)"               => OrderedDict("b1"=>[0.0, 0.0, 0.0, 0.0])
@@ -305,4 +339,4 @@ Solving report
   "Up-flexiramp (MW)"               => OrderedDict{Any, Any}()
   "Up-flexiramp shortfall (MW)"     => OrderedDict{Any, Any}()
   "Down-flexiramp (MW)"             => OrderedDict{Any, Any}()
-  "Down-flexiramp shortfall (MW)"   => OrderedDict{Any, Any}()
+ "Down-flexiramp shortfall (MW)" => OrderedDict{Any, Any}()