Merge pull request #31 from hejun0524/dev

Time Decomposition and Marketing
Alinson S. Xavier 2 years ago committed by GitHub
commit 81d4ff5b9d

@ -23,7 +23,7 @@ This section describes system-wide parameters, such as power balance penalty, an
| Key | Description | Default | Time series? | Uncertain?
| :----------------------------- | :------------------------------------------------ | :------: | :------------:| :----------:
| `Version` | Version of UnitCommitment.jl this file was written for. Required to ensure that the file remains readable in future versions of the package. If you are following this page to construct the file, this field should equal `0.3`. | Required | No | No
-| `Time horizon (h)` | Length of the planning horizon (in hours). | Required | No | No
+| `Time horizon (min)` or `Time horizon (h)` | Length of the planning horizon (in minutes or hours). Either `Time horizon (min)` or `Time horizon (h)` is required, but not both. | Required | No | No
| `Time step (min)` | Length of each time step (in minutes). Must be a divisor of 60 (e.g. 60, 30, 20, 15, etc). | `60` | No | No
| `Power balance penalty ($/MW)` | Penalty for system-wide shortage or surplus in production (in $/MW). This is charged per time step. For example, if there is a shortage of 1 MW for three time steps, three times this amount will be charged. | `1000.0` | No | Yes
| `Scenario name` | Name of the scenario. | `"s1"` | No | ---
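
For illustration, a minimal `Parameters` block using the fields above might look like the following sketch (the remaining sections of the instance file are omitted, and the values are merely examples):

```json
{
    "Parameters": {
        "Version": "0.3",
        "Time horizon (min)": 1440,
        "Time step (min)": 15,
        "Power balance penalty ($/MW)": 1000.0
    }
}
```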

@ -282,3 +282,87 @@ aelmp = UnitCommitment.compute_lmp(
# Note: although only a single scenario is supported, the query still keeps the scenario keys for consistency.
@show aelmp["s1", "b1", 1]
```
## Time Decomposition Method
Solving a unit commitment instance with a long, finely discretized planning horizon can be computationally challenging. For example, an instance with hourly data for an entire year (8,760 hours) may require substantial computational power to solve as a single model. To address this, UC.jl provides a `TimeDecomposition` method for the `optimize!` function, which decomposes the problem into multiple sub-problems and solves them sequentially.

This variant of `optimize!` takes five parameters: a unit commitment instance, a `TimeDecomposition` method, an optimizer, and two optional callback functions, `after_build` and `after_optimize`. It returns a solution dictionary. `TimeDecomposition` itself takes four arguments: `time_window`, `time_increment`, `inner_method` (optional), and `formulation` (optional). They specify, respectively, the time window of each sub-problem, the number of time steps by which the window advances to the next sub-problem, the method used to solve each sub-problem, and the formulation. The `after_build` and `after_optimize` callbacks are invoked after each sub-model is built and optimized, respectively. `after_build` must accept two arguments, `model` and `instance`, in that order; `after_optimize` must accept three arguments, `solution`, `model`, and `instance`, in that order.

The code snippet below solves an instance by decomposing it into 36-hour sub-problems with the `XavQiuWanThi2019` method, advancing 24 hours at a time. The first sub-problem covers time steps 1 to 36, the second covers 25 to 60, the third covers 49 to 84, and so on. The initial power levels and statuses of the second and subsequent sub-problems are set from the first 24 hours of their immediately preceding sub-problem. In essence, the large problem is solved in 24-hour pieces, with an extra 12-hour buffer added to each sub-problem to mitigate the end-of-window effect. In this example, the `after_build` callback adds a constraint preventing `g3` and `g4` from being on simultaneously in the first time step of each sub-problem, while the `after_optimize` callback computes conventional Locational Marginal Prices (LMPs) for each sub-problem and appends them to the `lmps` vector.
> **Warning**
> Specifying `TimeDecomposition` as the `inner_method` of another `TimeDecomposition` causes an error when `optimize!` is called, because the two `optimize!` methods have different argument structures.
```julia
using UnitCommitment, JuMP, Cbc, HiGHS
import UnitCommitment:
TimeDecomposition,
ConventionalLMP,
XavQiuWanThi2019,
Formulation
# specifying the after_build and after_optimize functions
function after_build(model, instance)
@constraint(
model,
model[:is_on]["g3", 1] + model[:is_on]["g4", 1] <= 1,
)
end
lmps = []
function after_optimize(solution, model, instance)
lmp = UnitCommitment.compute_lmp(
model,
ConventionalLMP(),
optimizer = HiGHS.Optimizer,
)
return push!(lmps, lmp)
end
# assume the instance is given as a 120h problem
instance = UnitCommitment.read("instance.json")
solution = UnitCommitment.optimize!(
instance,
TimeDecomposition(
time_window = 36, # solve 36h problems
time_increment = 24, # advance by 24h each time
inner_method = XavQiuWanThi2019.Method(),
formulation = Formulation(),
),
optimizer = Cbc.Optimizer,
after_build = after_build,
after_optimize = after_optimize,
)
```
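
After the call returns, the merged solution covers the full 120-hour horizon, and `lmps` holds one LMP dictionary per sub-problem, pushed by the `after_optimize` callback. A brief sketch of how these results might be inspected follows; the generator, bus, and scenario names are illustrative and depend on the contents of `instance.json`.

```julia
# Number of entries equals the number of time steps in the full horizon
@show length(solution["Thermal production (MW)"]["g1"])
@show solution["Is on"]["g3"][1:24]

# One LMP dictionary per sub-problem, collected by the after_optimize callback
@show length(lmps)
@show lmps[1]["s1", "b1", 1]
```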
## Day-ahead (DA) Market to Real-time (RT) Markets
UC.jl also provides functions for solving market clearing problems. The main function, `solve_market`, first solves the day-ahead (DA) market, which may be either deterministic or stochastic, and then sequentially maps the resulting commitment statuses to the real-time (RT) markets, which are deterministic instances. The time span of the DA market must cover all RT markets, and the RT market file paths must be supplied in chronological order. Each RT market should represent a single time slot; it is recommended to include a few additional time slots to mitigate the end-of-window effect.

The `solve_market` function accepts the file path of the DA market (or a list of file paths for a stochastic DA market), a list of file paths for the RT markets, a `MarketSettings` structure, and an optimizer. `MarketSettings` has three optional arguments: `inner_method`, `lmp_method`, and `formulation`. If Locational Marginal Prices (LMPs) are not needed, set `lmp_method = nothing`. Additional optional parameters include `lp_optimizer`, a linear programming optimizer used for the LMP computation when a solver other than the main optimizer is preferred; the callbacks `after_build_da` and `after_optimize_da`, invoked after the DA market is built and optimized; and the callbacks `after_build_rt` and `after_optimize_rt`, invoked after each RT market is built and optimized. As before, each `after_build` callback must accept two arguments, `model` and `instance`, and each `after_optimize` callback must accept three arguments, `solution`, `model`, and `instance`, in that order.

As an example, suppose the DA market contains hourly data for a 24-hour period, while the RT markets are cleared every 5 minutes. Each RT market file then corresponds to one 5-minute interval: the first file covers the first 5 minutes, the second the next 5 minutes, and so on, giving 12 RT market files per hour. To mitigate the end-of-window effect, each RT market (except the last few) should contain three time slots, spanning 15 minutes in total, although only the first slot is kept in the final solution. The last two RT markets should contain only 2 and 1 time slot(s), respectively, so that the total time covered by the RT markets never exceeds the time span of the DA market.
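For illustration only, the `Parameters` block of one such 15-minute RT file might look like the sketch below, assuming the JSON input format described earlier on this page; all other sections of the file are omitted, and the values are merely examples.

```json
{
    "Parameters": {
        "Version": "0.3",
        "Time horizon (min)": 15,
        "Time step (min)": 5
    }
}
```

The Julia snippet below then demonstrates a simplified call to `solve_market`. Please note that it is only a simplified example and may require further customization based on the specific requirements of your use case.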
```julia
using UnitCommitment, Cbc, HiGHS
import UnitCommitment:
MarketSettings,
XavQiuWanThi2019,
ConventionalLMP,
Formulation
solution = UnitCommitment.solve_market(
"da_instance.json",
["rt_instance_1.json", "rt_instance_2.json", "rt_instance_3.json"],
MarketSettings(
inner_method = XavQiuWanThi2019.Method(),
lmp_method = ConventionalLMP(),
formulation = Formulation(),
),
optimizer = Cbc.Optimizer,
lp_optimizer = HiGHS.Optimizer,
)
```
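
The returned dictionary stores the DA solution under the key `"Day-ahead market"` and one entry per RT file under `"Real-time markets"`. A brief sketch of how the results might be inspected follows; the generator, bus, and scenario names are illustrative and depend on the instance files.

```julia
da_solution = solution["Day-ahead market"]
@show da_solution["Is on"]["g1"]
@show da_solution["Locational marginal price"][("s1", "b1", 1)]

# Real-time solutions are keyed by the RT file paths passed to solve_market
rt_solutions = solution["Real-time markets"]
@show rt_solutions["rt_instance_1.json"]["Is on"]["g1"]
```

To skip the LMP computation entirely, pass `MarketSettings(lmp_method = nothing)`.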

@ -10,6 +10,7 @@ include("instance/structs.jl")
include("model/formulations/base/structs.jl")
include("solution/structs.jl")
include("lmp/structs.jl")
include("market/structs.jl")
include("model/formulations/ArrCon2000/structs.jl")
include("model/formulations/CarArr2006/structs.jl")
@ -21,6 +22,7 @@ include("model/formulations/PanGua2016/structs.jl")
include("solution/methods/XavQiuWanThi2019/structs.jl")
include("solution/methods/ProgressiveHedging/structs.jl")
include("model/formulations/WanHob2016/structs.jl")
include("solution/methods/TimeDecomposition/structs.jl")
include("import/egret.jl")
include("instance/read.jl")
@ -50,6 +52,7 @@ include("solution/methods/XavQiuWanThi2019/enforce.jl")
include("solution/methods/XavQiuWanThi2019/filter.jl")
include("solution/methods/XavQiuWanThi2019/find.jl")
include("solution/methods/XavQiuWanThi2019/optimize.jl")
include("solution/methods/TimeDecomposition/optimize.jl")
include("solution/methods/ProgressiveHedging/optimize.jl")
include("solution/methods/ProgressiveHedging/read.jl")
include("solution/methods/ProgressiveHedging/solution.jl")
@ -66,5 +69,6 @@ include("validation/repair.jl")
include("validation/validate.jl")
include("lmp/conventional.jl")
include("lmp/aelmp.jl")
include("market/market.jl")
end

@ -142,16 +142,28 @@ function _from_json(json; repair = true)::UnitCommitmentScenario
return x
end
time_horizon = json["Parameters"]["Time horizon (min)"]
if time_horizon === nothing
time_horizon = json["Parameters"]["Time (h)"]
if time_horizon === nothing
time_horizon = json["Parameters"]["Time horizon (h)"]
end
time_horizon !== nothing || error("Missing parameter: Time horizon (h)")
if time_horizon !== nothing
time_horizon *= 60
end
end
time_horizon !== nothing || error("Missing parameter: Time horizon (min)")
isinteger(time_horizon) ||
error("Time horizon must be an integer in minutes")
time_horizon = Int(time_horizon)
time_step = scalar(json["Parameters"]["Time step (min)"], default = 60)
(60 % time_step == 0) ||
error("Time step $time_step is not a divisor of 60")
(time_horizon % time_step == 0) || error(
"Time step $time_step is not a divisor of time horizon $time_horizon",
)
time_multiplier = 60 ÷ time_step
T = time_horizon * time_multiplier
T = time_horizon ÷ time_step
probability = json["Parameters"]["Scenario weight"]
probability !== nothing || (probability = 1)
@ -408,6 +420,7 @@ function _from_json(json; repair = true)::UnitCommitmentScenario
reserves = reserves,
reserves_by_name = name_to_reserve,
time = T,
time_step = time_step,
thermal_units_by_name = Dict(g.name => g for g in thermal_units),
thermal_units = thermal_units,
profiled_units_by_name = Dict(pu.name => pu for pu in profiled_units),

@ -104,6 +104,7 @@ Base.@kwdef mutable struct UnitCommitmentScenario
thermal_units_by_name::Dict{AbstractString,ThermalUnit}
thermal_units::Vector{ThermalUnit}
time::Int
time_step::Int
end
Base.@kwdef mutable struct UnitCommitmentInstance

@ -0,0 +1,220 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
"""
solve_market(
da_path::Union{String, Vector{String}},
rt_paths::Vector{String},
settings::MarketSettings;
optimizer,
lp_optimizer = nothing,
after_build_da = nothing,
after_optimize_da = nothing,
after_build_rt = nothing,
after_optimize_rt = nothing,
)::OrderedDict
Solve the day-ahead and the real-time markets by means of commitment status mapping.
The method first obtains commitment statuses by solving the day-ahead market,
and then solves each real-time market with the corresponding statuses fixed.
Arguments
---------
- `da_path`:
the data file path of the day-ahead market, can be stochastic.
- `rt_paths`:
the list of data file paths of the real-time markets, must be deterministic for each market.
- `settings`:
the MarketSettings which include the problem formulation, the solving method, and LMP method.
- `optimizer`:
the optimizer for solving the problem.
- `lp_optimizer`:
the linear programming optimizer for solving the LMP problem, defaults to `nothing`.
If not specified by the user, the program uses `optimizer` instead.
- `after_build_da`:
a user-defined function that allows modifying the DA model after building,
must have 2 arguments `model` and `instance` in order.
- `after_optimize_da`:
a user-defined function that allows handling additional steps after optimizing the DA model,
must have 3 arguments `solution`, `model` and `instance` in order.
- `after_build_rt`:
a user-defined function that allows modifying each RT model after building,
must have 2 arguments `model` and `instance` in order.
- `after_optimize_rt`:
a user-defined function that allows handling additional steps after optimizing each RT model,
must have 3 arguments `solution`, `model` and `instance` in order.
Examples
--------
```julia
using UnitCommitment, Cbc, HiGHS
import UnitCommitment:
MarketSettings,
XavQiuWanThi2019,
ConventionalLMP,
Formulation
solution = UnitCommitment.solve_market(
"da_instance.json",
["rt_instance_1.json", "rt_instance_2.json", "rt_instance_3.json"],
MarketSettings(
inner_method = XavQiuWanThi2019.Method(),
lmp_method = ConventionalLMP(),
formulation = Formulation(),
),
optimizer = Cbc.Optimizer,
lp_optimizer = HiGHS.Optimizer,
)
"""
function solve_market(
da_path::Union{String,Vector{String}},
rt_paths::Vector{String},
settings::MarketSettings;
optimizer,
lp_optimizer = nothing,
after_build_da = nothing,
after_optimize_da = nothing,
after_build_rt = nothing,
after_optimize_rt = nothing,
)::OrderedDict
# solve da instance as usual
@info "Solving the day-ahead market with file $da_path..."
instance_da = UnitCommitment.read(da_path)
# LP optimizer is optional: if not specified, use optimizer
lp_optimizer = lp_optimizer === nothing ? optimizer : lp_optimizer
# build and optimize the DA market
model_da, solution_da = _build_and_optimize(
instance_da,
settings,
optimizer = optimizer,
lp_optimizer = lp_optimizer,
after_build = after_build_da,
after_optimize = after_optimize_da,
)
# prepare the final solution
solution = OrderedDict()
solution["Day-ahead market"] = solution_da
solution["Real-time markets"] = OrderedDict()
# count the time, sc.time = n-slots, sc.time_step = slot-interval
# sufficient to look at only one scenario
sc = instance_da.scenarios[1]
# max time (min) of the DA market
max_time = sc.time * sc.time_step
# current time increments through the RT market list
current_time = 0
# DA market time slots in (min)
da_time_intervals = [sc.time_step * ts for ts in 1:sc.time]
# get the uc status and set each uc fixed
solution_rt = OrderedDict()
prev_initial_status = OrderedDict()
for rt_path in rt_paths
@info "Solving the real-time market with file $rt_path..."
instance_rt = UnitCommitment.read(rt_path)
# check instance time
sc = instance_rt.scenarios[1]
# check each time slot in the RT model
for ts in 1:sc.time
slot_t_end = current_time + ts * sc.time_step
# ensure this RT's slot time ub never exceeds max time of DA
slot_t_end <= max_time || error(
"The time of the real-time market cannot exceed the time of the day-ahead market.",
)
# get the slot start time to determine commitment status
slot_t_start = slot_t_end - sc.time_step
# find the index of the first DA time slot that covers slot_t_start
da_time_slot = findfirst(ti -> slot_t_start < ti, da_time_intervals)
# update thermal unit commitment status
for g in sc.thermal_units
g.commitment_status[ts] =
value(model_da[:is_on][g.name, da_time_slot]) == 1.0
end
end
# update current time by ONE slot only
current_time += sc.time_step
# set initial status for all generators in all scenarios
if !isempty(solution_rt) && !isempty(prev_initial_status)
for g in sc.thermal_units
g.initial_power =
solution_rt["Thermal production (MW)"][g.name][1]
g.initial_status = UnitCommitment._determine_initial_status(
prev_initial_status[g.name],
[solution_rt["Is on"][g.name][1]],
)
end
end
# build and optimize the RT market
_, solution_rt = _build_and_optimize(
instance_rt,
settings,
optimizer = optimizer,
lp_optimizer = lp_optimizer,
after_build = after_build_rt,
after_optimize = after_optimize_rt,
)
prev_initial_status =
OrderedDict(g.name => g.initial_status for g in sc.thermal_units)
# rt_name = first(split(last(split(rt_path, "/")), "."))
solution["Real-time markets"][rt_path] = solution_rt
end # end of for-loop that checks each RT market
return solution
end
function _build_and_optimize(
instance::UnitCommitmentInstance,
settings::MarketSettings;
optimizer,
lp_optimizer,
after_build = nothing,
after_optimize = nothing,
)::Tuple{JuMP.Model,OrderedDict}
# build model with after build
model = UnitCommitment.build_model(
instance = instance,
optimizer = optimizer,
formulation = settings.formulation,
)
if after_build !== nothing
after_build(model, instance)
end
# optimize model
UnitCommitment.optimize!(model, settings.inner_method)
solution = UnitCommitment.solution(model)
# compute lmp and add to solution
if settings.lmp_method !== nothing
lmp = UnitCommitment.compute_lmp(
model,
settings.lmp_method,
optimizer = lp_optimizer,
)
if length(instance.scenarios) == 1
solution["Locational marginal price"] = lmp
else
for sc in instance.scenarios
solution[sc.name]["Locational marginal price"] = OrderedDict(
key => val for (key, val) in lmp if key[1] == sc.name
)
end
end
end
# run after optimize with solution
if after_optimize !== nothing
after_optimize(solution, model, instance)
end
return model, solution
end

@ -0,0 +1,33 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import ..SolutionMethod
import ..PricingMethod
import ..Formulation
"""
struct MarketSettings
inner_method::SolutionMethod = XavQiuWanThi2019.Method()
lmp_method::Union{PricingMethod, Nothing} = ConventionalLMP()
formulation::Formulation = Formulation()
end
Market settings struct, typically used to map a day-ahead market to real-time markets.
Arguments
---------
- `inner_method`:
method to solve each marketing problem.
- `lmp_method`:
a PricingMethod method to calculate the locational marginal prices.
If it is set to `nothing`, the LMPs will not be calculated.
- `formulation`:
problem formulation.
"""
Base.@kwdef struct MarketSettings
inner_method::SolutionMethod = XavQiuWanThi2019.Method()
lmp_method::Union{PricingMethod,Nothing} = ConventionalLMP()
formulation::Formulation = Formulation()
end

@ -0,0 +1,259 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
"""
optimize!(
instance::UnitCommitmentInstance,
method::TimeDecomposition;
optimizer,
after_build = nothing,
after_optimize = nothing,
)::OrderedDict
Solve the given unit commitment instance with time decomposition.
The method solves a sequence of sub-problems, each covering `method.time_window` time steps,
and advances to the next sub-problem by `method.time_increment` time steps.
Arguments
---------
- `instance`:
the UnitCommitment instance.
- `method`:
the `TimeDecomposition` method.
- `optimizer`:
the optimizer for solving the problem.
- `after_build`:
a user-defined function that allows modifying the model after building,
must have 2 arguments `model` and `instance` in order.
- `after_optimize`:
a user-defined function that allows handling additional steps after optimizing,
must have 3 arguments `solution`, `model` and `instance` in order.
Examples
--------
```julia
using UnitCommitment, JuMP, Cbc, HiGHS
import UnitCommitment:
TimeDecomposition,
ConventionalLMP,
XavQiuWanThi2019,
Formulation
# specifying the after_build and after_optimize functions
function after_build(model, instance)
@constraint(
model,
model[:is_on]["g3", 1] + model[:is_on]["g4", 1] <= 1,
)
end
lmps = []
function after_optimize(solution, model, instance)
lmp = UnitCommitment.compute_lmp(
model,
ConventionalLMP(),
optimizer = HiGHS.Optimizer,
)
return push!(lmps, lmp)
end
# assume the instance is given as a 120h problem
instance = UnitCommitment.read("instance.json")
solution = UnitCommitment.optimize!(
instance,
TimeDecomposition(
time_window = 36, # solve 36h problems
time_increment = 24, # advance by 24h each time
inner_method = XavQiuWanThi2019.Method(),
formulation = Formulation(),
),
optimizer = Cbc.Optimizer,
after_build = after_build,
after_optimize = after_optimize,
)
"""
function optimize!(
instance::UnitCommitmentInstance,
method::TimeDecomposition;
optimizer,
after_build = nothing,
after_optimize = nothing,
)::OrderedDict
# get instance total length
T = instance.time
solution = OrderedDict()
if length(instance.scenarios) > 1
for sc in instance.scenarios
solution[sc.name] = OrderedDict()
end
end
# for each iteration, time increment by method.time_increment
for t_start in 1:method.time_increment:T
t_end = t_start + method.time_window - 1
# if t_end exceed total T
t_end = t_end > T ? T : t_end
# slice the model
@info "Solving the sub-problem of time $t_start to $t_end..."
sub_instance = UnitCommitment.slice(instance, t_start:t_end)
# build and optimize the model
sub_model = UnitCommitment.build_model(
instance = sub_instance,
optimizer = optimizer,
formulation = method.formulation,
)
if after_build !== nothing
@info "Calling after build..."
after_build(sub_model, sub_instance)
end
UnitCommitment.optimize!(sub_model, method.inner_method)
# get the result of each time period
sub_solution = UnitCommitment.solution(sub_model)
if after_optimize !== nothing
@info "Calling after optimize..."
after_optimize(sub_solution, sub_model, sub_instance)
end
# merge solution
if length(instance.scenarios) == 1
_update_solution!(solution, sub_solution, method.time_increment)
else
for sc in instance.scenarios
_update_solution!(
solution[sc.name],
sub_solution[sc.name],
method.time_increment,
)
end
end
# set the initial status for the next sub-problem
_set_initial_status!(instance, solution, method.time_increment)
end
return solution
end
"""
_set_initial_status!(
instance::UnitCommitmentInstance,
solution::OrderedDict,
time_increment::Int,
)
Set the thermal units' initial power levels and statuses based on the last `time_increment`
time slots of the solution dictionary.
"""
function _set_initial_status!(
instance::UnitCommitmentInstance,
solution::OrderedDict,
time_increment::Int,
)
for sc in instance.scenarios
for thermal_unit in sc.thermal_units
if length(instance.scenarios) == 1
prod = solution["Thermal production (MW)"][thermal_unit.name]
is_on = solution["Is on"][thermal_unit.name]
else
prod =
solution[sc.name]["Thermal production (MW)"][thermal_unit.name]
is_on = solution[sc.name]["Is on"][thermal_unit.name]
end
thermal_unit.initial_power = prod[end]
thermal_unit.initial_status = _determine_initial_status(
thermal_unit.initial_status,
is_on[end-time_increment+1:end],
)
end
end
end
"""
_determine_initial_status(
prev_initial_status::Union{Float64,Int},
status_sequence::Vector{Float64},
)::Union{Float64,Int}
Determine a thermal unit's initial status based on its previous initial status and
the on/off statuses over the most recent operating period.
"""
function _determine_initial_status(
prev_initial_status::Union{Float64,Int},
status_sequence::Vector{Float64},
)::Union{Float64,Int}
# initialize the two flags
on_status = prev_initial_status
off_status = prev_initial_status
# read through the status sequence
# at each time if the unit is on, reset off_status, increment on_status
# if the on_status < 0, set it to 1.0
# at each time if the unit is off, reset on_status, decrement off_status
# if the off_status > 0, set it to -1.0
for status in status_sequence
if status == 1.0
on_status = on_status < 0.0 ? 1.0 : on_status + 1.0
off_status = 0.0
else
on_status = 0.0
off_status = off_status > 0.0 ? -1.0 : off_status - 1.0
end
end
# only one of them has non-zero value
return on_status + off_status
end
"""
_update_solution!(
solution::OrderedDict,
sub_solution::OrderedDict,
time_increment::Int,
)
Update the solution (of each scenario) by appending the first `time_increment` time slots
of the newly generated sub-solution to the end of the final solution dictionary.
The function traverses the dictionary keys until it finds a vector and then performs
the concatenation. For now, it is hardcoded to traverse at most 3 layers of depth.
"""
function _update_solution!(
solution::OrderedDict,
sub_solution::OrderedDict,
time_increment::Int,
)
# the solution has at most 3 layers
for (l1_k, l1_v) in sub_solution
for (l2_k, l2_v) in l1_v
if l2_v isa Array
# slice the sub_solution
values_of_interest = l2_v[1:time_increment]
sub_solution[l1_k][l2_k] = values_of_interest
# append to the solution
if !isempty(solution)
append!(solution[l1_k][l2_k], values_of_interest)
end
elseif l2_v isa OrderedDict
for (l3_k, l3_v) in l2_v
# slice the sub_solution
values_of_interest = l3_v[1:time_increment]
sub_solution[l1_k][l2_k][l3_k] = values_of_interest
# append to the solution
if !isempty(solution)
append!(solution[l1_k][l2_k][l3_k], values_of_interest)
end
end
end
end
end
# if solution is never initialized, deep copy the sliced sub_solution
if isempty(solution)
merge!(solution, sub_solution)
end
end

@ -0,0 +1,35 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import ..SolutionMethod
import ..Formulation
"""
mutable struct TimeDecomposition <: SolutionMethod
time_window::Int
time_increment::Int
inner_method::SolutionMethod = XavQiuWanThi2019.Method()
formulation::Formulation = Formulation()
end
Time decomposition method for solving a problem with a moving time window.
Fields
------
- `time_window`:
the time window of each sub-problem during the entire optimization procedure.
- `time_increment`:
the number of time steps by which the window advances to the next sub-problem.
- `inner_method`:
method to solve each sub-problem.
- `formulation`:
problem formulation.
"""
Base.@kwdef mutable struct TimeDecomposition <: SolutionMethod
time_window::Int
time_increment::Int
inner_method::SolutionMethod = XavQiuWanThi2019.Method()
formulation::Formulation = Formulation()
end

@ -13,12 +13,16 @@ include("solution/methods/XavQiuWanThi19/filter_test.jl")
include("solution/methods/XavQiuWanThi19/find_test.jl")
include("solution/methods/XavQiuWanThi19/sensitivity_test.jl")
include("solution/methods/ProgressiveHedging/usage_test.jl")
include("solution/methods/TimeDecomposition/initial_status_test.jl")
include("solution/methods/TimeDecomposition/optimize_test.jl")
include("solution/methods/TimeDecomposition/update_solution_test.jl")
include("transform/initcond_test.jl")
include("transform/slice_test.jl")
include("transform/randomize/XavQiuAhm2021_test.jl")
include("validation/repair_test.jl")
include("lmp/conventional_test.jl")
include("lmp/aelmp_test.jl")
include("market/market_test.jl")
basedir = dirname(@__FILE__)
@ -39,12 +43,17 @@ function runtests()
solution_methods_XavQiuWanThi19_find_test()
solution_methods_XavQiuWanThi19_sensitivity_test()
solution_methods_ProgressiveHedging_usage_test()
solution_methods_TimeDecomposition_initial_status_test()
solution_methods_TimeDecomposition_optimize_test()
solution_methods_TimeDecomposition_update_solution_test()
transform_initcond_test()
transform_slice_test()
transform_randomize_XavQiuAhm2021_test()
validation_repair_test()
lmp_conventional_test()
lmp_aelmp_test()
simple_market_test()
stochastic_market_test()
end
return
end

@ -21,6 +21,7 @@ function instance_read_test()
@test length(sc.contingencies) == 19
@test length(sc.price_sensitive_loads) == 1
@test instance.time == 4
@test sc.time_step == 60
@test sc.lines[5].name == "l5"
@test sc.lines[5].source.name == "b2"

@ -12,7 +12,10 @@ function lmp_aelmp_test()
instance = UnitCommitment.read(path)
model = UnitCommitment.build_model(
instance = instance,
optimizer = Cbc.Optimizer,
optimizer = optimizer_with_attributes(
Cbc.Optimizer,
"logLevel" => 0,
),
variable_names = true,
)
JuMP.set_silent(model)
@ -22,7 +25,10 @@ function lmp_aelmp_test()
aelmp_1 = UnitCommitment.compute_lmp(
model,
AELMP(),
optimizer = HiGHS.Optimizer,
optimizer = optimizer_with_attributes(
HiGHS.Optimizer,
"log_to_console" => false,
),
)
@test aelmp_1["s1", "B1", 1] ≈ 231.7 atol = 0.1
@ -33,7 +39,10 @@ function lmp_aelmp_test()
allow_offline_participation = false,
consider_startup_costs = true,
),
optimizer = HiGHS.Optimizer,
optimizer = optimizer_with_attributes(
HiGHS.Optimizer,
"log_to_console" => false,
),
)
@test aelmp_2["s1", "B1", 1] ≈ 274.3 atol = 0.1
end

@ -3,13 +3,12 @@
# Released under the modified BSD license. See COPYING.md for more details.
using UnitCommitment, Cbc, HiGHS, JuMP
import UnitCommitment: ConventionalLMP
function solve_conventional_testcase(path::String)
instance = UnitCommitment.read(path)
model = UnitCommitment.build_model(
instance = instance,
optimizer = Cbc.Optimizer,
optimizer = optimizer_with_attributes(Cbc.Optimizer, "logLevel" => 0),
variable_names = true,
)
JuMP.set_silent(model)
@ -17,7 +16,10 @@ function solve_conventional_testcase(path::String)
lmp = UnitCommitment.compute_lmp(
model,
ConventionalLMP(),
optimizer = HiGHS.Optimizer,
optimizer = optimizer_with_attributes(
HiGHS.Optimizer,
"log_to_console" => false,
),
)
return lmp
end

@ -0,0 +1,151 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using UnitCommitment, Cbc, HiGHS, JuMP
import UnitCommitment: MarketSettings
function simple_market_test()
@testset "da-to-rt simple market" begin
da_path = fixture("market_da_simple.json.gz")
rt_paths = [
fixture("market_rt1_simple.json.gz"),
fixture("market_rt2_simple.json.gz"),
fixture("market_rt3_simple.json.gz"),
fixture("market_rt4_simple.json.gz"),
]
# solve market with default setting
solution = UnitCommitment.solve_market(
da_path,
rt_paths,
MarketSettings(), # keep everything default
optimizer = optimizer_with_attributes(
Cbc.Optimizer,
"logLevel" => 0,
),
lp_optimizer = optimizer_with_attributes(
HiGHS.Optimizer,
"log_to_console" => false,
),
)
# the commitment status must agree with DA market
da_solution = solution["Day-ahead market"]
@test da_solution["Is on"]["GenY"] == [0.0, 1.0]
@test da_solution["Locational marginal price"][("s1", "B1", 1)] == 50.0
@test da_solution["Locational marginal price"][("s1", "B1", 2)] == 56.0
rt_solution = solution["Real-time markets"]
@test length(rt_solution) == 4
@test rt_solution[rt_paths[1]]["Is on"]["GenY"] == [0.0, 0.0]
@test rt_solution[rt_paths[2]]["Is on"]["GenY"] == [0.0, 1.0]
@test rt_solution[rt_paths[3]]["Is on"]["GenY"] == [1.0, 1.0]
@test rt_solution[rt_paths[4]]["Is on"]["GenY"] == [1.0]
@test length(rt_solution[rt_paths[1]]["Locational marginal price"]) == 2
@test length(rt_solution[rt_paths[2]]["Locational marginal price"]) == 2
@test length(rt_solution[rt_paths[3]]["Locational marginal price"]) == 2
@test length(rt_solution[rt_paths[4]]["Locational marginal price"]) == 1
# solve market with no lmp method
solution_no_lmp = UnitCommitment.solve_market(
da_path,
rt_paths,
MarketSettings(lmp_method = nothing), # no lmp
optimizer = optimizer_with_attributes(
Cbc.Optimizer,
"logLevel" => 0,
),
)
# the commitment status must agree with DA market
da_solution = solution_no_lmp["Day-ahead market"]
@test haskey(da_solution, "Locational marginal price") == false
rt_solution = solution_no_lmp["Real-time markets"]
@test haskey(rt_solution, "Locational marginal price") == false
end
end
function stochastic_market_test()
@testset "da-to-rt stochastic market" begin
da_path = [
fixture("market_da_simple.json.gz"),
fixture("market_da_scenario.json.gz"),
]
rt_paths = [
fixture("market_rt1_simple.json.gz"),
fixture("market_rt2_simple.json.gz"),
fixture("market_rt3_simple.json.gz"),
fixture("market_rt4_simple.json.gz"),
]
# after build and after optimize
function after_build(model, instance)
@constraint(model, model[:is_on]["GenY", 1] == 1,)
end
lmps_da = []
lmps_rt = []
function after_optimize_da(solution, model, instance)
lmp = UnitCommitment.compute_lmp(
model,
ConventionalLMP(),
optimizer = optimizer_with_attributes(
HiGHS.Optimizer,
"log_to_console" => false,
),
)
return push!(lmps_da, lmp)
end
function after_optimize_rt(solution, model, instance)
lmp = UnitCommitment.compute_lmp(
model,
ConventionalLMP(),
optimizer = optimizer_with_attributes(
HiGHS.Optimizer,
"log_to_console" => false,
),
)
return push!(lmps_rt, lmp)
end
# solve the stochastic market with callbacks
solution = UnitCommitment.solve_market(
da_path,
rt_paths,
MarketSettings(), # keep everything default
optimizer = optimizer_with_attributes(
Cbc.Optimizer,
"logLevel" => 0,
),
lp_optimizer = optimizer_with_attributes(
HiGHS.Optimizer,
"log_to_console" => false,
),
after_build_da = after_build,
after_optimize_da = after_optimize_da,
after_optimize_rt = after_optimize_rt,
)
# the commitment status must agree with DA market
da_solution_sp = solution["Day-ahead market"]["market_da_simple"]
da_solution_sc = solution["Day-ahead market"]["market_da_scenario"]
@test da_solution_sc["Is on"]["GenY"] == [1.0, 1.0]
@test da_solution_sp["Locational marginal price"][(
"market_da_simple",
"B1",
1,
)] == 25.0
@test da_solution_sc["Locational marginal price"][(
"market_da_scenario",
"B1",
2,
)] == 0.0
rt_solution = solution["Real-time markets"]
@test rt_solution[rt_paths[1]]["Is on"]["GenY"] == [1.0, 1.0]
@test rt_solution[rt_paths[2]]["Is on"]["GenY"] == [1.0, 1.0]
@test rt_solution[rt_paths[3]]["Is on"]["GenY"] == [1.0, 1.0]
@test rt_solution[rt_paths[4]]["Is on"]["GenY"] == [1.0]
@test length(lmps_rt) == 4
end
end

@ -0,0 +1,159 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using UnitCommitment, DataStructures
function solution_methods_TimeDecomposition_initial_status_test()
@testset "determine_initial_status" begin
hot_start = 100
cold_start = -100
# all on throughout
stat_seq = ones(36)
# hot start
new_stat = UnitCommitment._determine_initial_status(hot_start, stat_seq)
@test new_stat == 136
# cold start
new_stat =
UnitCommitment._determine_initial_status(cold_start, stat_seq)
@test new_stat == 36
# off in the last 12 periods
stat_seq = ones(36)
stat_seq[25:end] .= 0
# hot start
new_stat = UnitCommitment._determine_initial_status(hot_start, stat_seq)
@test new_stat == -12
# cold start
new_stat =
UnitCommitment._determine_initial_status(cold_start, stat_seq)
@test new_stat == -12
# off in one period
stat_seq = ones(36)
stat_seq[10] = 0
# hot start
new_stat = UnitCommitment._determine_initial_status(hot_start, stat_seq)
@test new_stat == 26
# cold start
new_stat =
UnitCommitment._determine_initial_status(cold_start, stat_seq)
@test new_stat == 26
# off in several of the first 24 periods
stat_seq = ones(36)
stat_seq[[10, 11, 20]] .= 0
# hot start
new_stat = UnitCommitment._determine_initial_status(hot_start, stat_seq)
@test new_stat == 16
# cold start
new_stat =
UnitCommitment._determine_initial_status(cold_start, stat_seq)
@test new_stat == 16
# all off throughout
stat_seq = zeros(36)
# hot start
new_stat = UnitCommitment._determine_initial_status(hot_start, stat_seq)
@test new_stat == -36
# cold start
new_stat =
UnitCommitment._determine_initial_status(cold_start, stat_seq)
@test new_stat == -136
# on in the last 12 periods
stat_seq = zeros(36)
stat_seq[25:end] .= 1
# hot start
new_stat = UnitCommitment._determine_initial_status(hot_start, stat_seq)
@test new_stat == 12
# cold start
new_stat =
UnitCommitment._determine_initial_status(cold_start, stat_seq)
@test new_stat == 12
end
@testset "set_initial_status" begin
# read one scenario
instance = UnitCommitment.read(fixture("case14.json.gz"))
psuedo_solution = OrderedDict(
"Thermal production (MW)" => OrderedDict(
"g1" => [0.0, 112.0, 114.0, 116.0],
"g2" => [0.0, 102.0, 0.0, 0.0],
"g3" => [0.0, 0.0, 0.0, 0.0],
"g4" => [0.0, 34.0, 66.0, 99.0],
"g5" => [0.0, 34.0, 66.0, 99.0],
"g6" => [0.0, 100.0, 100.0, 100.0],
),
"Is on" => OrderedDict(
"g1" => [0.0, 1.0, 1.0, 1.0],
"g2" => [0.0, 1.0, 0.0, 0.0],
"g3" => [0.0, 0.0, 0.0, 0.0],
"g4" => [0.0, 1.0, 1.0, 1.0],
"g5" => [0.0, 1.0, 1.0, 1.0],
"g6" => [0.0, 1.0, 1.0, 1.0],
),
)
UnitCommitment._set_initial_status!(instance, psuedo_solution, 3)
thermal_units = instance.scenarios[1].thermal_units
@test thermal_units[1].initial_power == 116.0
@test thermal_units[1].initial_status == 3.0
@test thermal_units[2].initial_power == 0.0
@test thermal_units[2].initial_status == -2.0
@test thermal_units[3].initial_power == 0.0
@test thermal_units[3].initial_status == -9.0
# read multiple scenarios
instance = UnitCommitment.read([
fixture("case14.json.gz"),
fixture("case14-profiled.json.gz"),
])
psuedo_solution = OrderedDict(
"case14" => OrderedDict(
"Thermal production (MW)" => OrderedDict(
"g1" => [0.0, 112.0, 114.0, 116.0],
"g2" => [0.0, 102.0, 0.0, 0.0],
"g3" => [0.0, 0.0, 0.0, 0.0],
"g4" => [0.0, 34.0, 66.0, 99.0],
"g5" => [0.0, 34.0, 66.0, 99.0],
"g6" => [0.0, 100.0, 100.0, 100.0],
),
"Is on" => OrderedDict(
"g1" => [0.0, 1.0, 1.0, 1.0],
"g2" => [0.0, 1.0, 0.0, 0.0],
"g3" => [0.0, 0.0, 0.0, 0.0],
"g4" => [0.0, 1.0, 1.0, 1.0],
"g5" => [0.0, 1.0, 1.0, 1.0],
"g6" => [0.0, 1.0, 1.0, 1.0],
),
),
"case14-profiled" => OrderedDict(
"Thermal production (MW)" => OrderedDict(
"g1" => [0.0, 113.0, 116.0, 115.0],
"g2" => [0.0, 0.0, 0.0, 0.0],
"g3" => [0.0, 0.0, 0.0, 20.0],
"g4" => [0.0, 34.0, 66.0, 98.0],
"g5" => [0.0, 34.0, 66.0, 97.0],
"g6" => [0.0, 100.0, 100.0, 100.0],
),
"Is on" => OrderedDict(
"g1" => [0.0, 1.0, 1.0, 1.0],
"g2" => [0.0, 0.0, 0.0, 0.0],
"g3" => [0.0, 0.0, 0.0, 1.0],
"g4" => [0.0, 1.0, 1.0, 1.0],
"g5" => [0.0, 1.0, 1.0, 1.0],
"g6" => [0.0, 1.0, 1.0, 1.0],
),
),
)
UnitCommitment._set_initial_status!(instance, psuedo_solution, 3)
thermal_units_sc2 = instance.scenarios[2].thermal_units
@test thermal_units_sc2[1].initial_power == 115.0
@test thermal_units_sc2[1].initial_status == 3.0
@test thermal_units_sc2[2].initial_power == 0.0
@test thermal_units_sc2[2].initial_status == -11.0
@test thermal_units_sc2[3].initial_power == 20.0
@test thermal_units_sc2[3].initial_status == 1.0
end
end

@ -0,0 +1,88 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using UnitCommitment, DataStructures, Cbc, HiGHS
import UnitCommitment: TimeDecomposition, ConventionalLMP
function solution_methods_TimeDecomposition_optimize_test()
@testset "optimize_time_decomposition" begin
# read one scenario
instance = UnitCommitment.read(fixture("case14.json.gz"))
solution = UnitCommitment.optimize!(
instance,
TimeDecomposition(time_window = 3, time_increment = 2),
optimizer = optimizer_with_attributes(
Cbc.Optimizer,
"logLevel" => 0,
),
)
@test length(solution["Thermal production (MW)"]["g1"]) == 4
@test length(solution["Is on"]["g2"]) == 4
@test length(solution["Spinning reserve (MW)"]["r1"]["g2"]) == 4
# read one scenario with after_build and after_optimize
function after_build(model, instance)
@constraint(
model,
model[:is_on]["g3", 1] + model[:is_on]["g4", 1] <= 1,
)
end
lmps = []
function after_optimize(solution, model, instance)
lmp = UnitCommitment.compute_lmp(
model,
ConventionalLMP(),
optimizer = optimizer_with_attributes(
HiGHS.Optimizer,
"log_to_console" => false,
),
)
return push!(lmps, lmp)
end
instance = UnitCommitment.read(fixture("case14-profiled.json.gz"))
solution = UnitCommitment.optimize!(
instance,
TimeDecomposition(time_window = 3, time_increment = 2),
optimizer = optimizer_with_attributes(
Cbc.Optimizer,
"logLevel" => 0,
),
after_build = after_build,
after_optimize = after_optimize,
)
@test length(lmps) == 2
@test lmps[1]["s1", "b1", 1] == 50.0
@test lmps[2]["s1", "b10", 2] ≈ 38.04 atol = 0.1
@test solution["Is on"]["g3"][1] == 1.0
@test solution["Is on"]["g4"][1] == 0.0
# read multiple scenarios
instance = UnitCommitment.read([
fixture("case14.json.gz"),
fixture("case14-profiled.json.gz"),
])
solution = UnitCommitment.optimize!(
instance,
TimeDecomposition(time_window = 3, time_increment = 2),
optimizer = optimizer_with_attributes(
Cbc.Optimizer,
"logLevel" => 0,
),
)
@test length(solution["case14"]["Thermal production (MW)"]["g3"]) == 4
@test length(solution["case14"]["Is on"]["g4"]) == 4
@test length(
solution["case14-profiled"]["Thermal production (MW)"]["g5"],
) == 4
@test length(solution["case14-profiled"]["Is on"]["g6"]) == 4
@test length(
solution["case14-profiled"]["Profiled production (MW)"]["g7"],
) == 4
@test length(
solution["case14-profiled"]["Spinning reserve (MW)"]["r1"]["g3"],
) == 4
end
end

@ -0,0 +1,55 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using UnitCommitment, DataStructures
function solution_methods_TimeDecomposition_update_solution_test()
@testset "update_solution" begin
psuedo_solution = OrderedDict()
time_increment = 4
psuedo_sub_solution = OrderedDict(
"Thermal production (MW)" => OrderedDict(
"g1" => [100.0, 200.0, 300.0, 400.0, 500.0, 600.0],
),
"Is on" => OrderedDict("g1" => [1.0, 0.0, 1.0, 1.0, 0.0, 1.0]),
"Profiled production (MW)" => OrderedDict(
"g1" => [199.0, 299.0, 399.0, 499.0, 599.0, 699.0],
),
"Spinning reserve (MW)" => OrderedDict(
"r1" => OrderedDict(
"g1" => [31.0, 32.0, 33.0, 34.0, 35.0, 36.0],
),
),
)
# first update should directly copy the first 4 entries of sub solution
UnitCommitment._update_solution!(
psuedo_solution,
psuedo_sub_solution,
time_increment,
)
@test psuedo_solution["Thermal production (MW)"]["g1"] ==
[100.0, 200.0, 300.0, 400.0]
@test psuedo_solution["Is on"]["g1"] == [1.0, 0.0, 1.0, 1.0]
@test psuedo_solution["Profiled production (MW)"]["g1"] ==
[199.0, 299.0, 399.0, 499.0]
@test psuedo_solution["Spinning reserve (MW)"]["r1"]["g1"] ==
[31.0, 32.0, 33.0, 34.0]
# second update should append the first 4 entries of sub solution
UnitCommitment._update_solution!(
psuedo_solution,
psuedo_sub_solution,
time_increment,
)
@test psuedo_solution["Thermal production (MW)"]["g1"] ==
[100.0, 200.0, 300.0, 400.0, 100.0, 200.0, 300.0, 400.0]
@test psuedo_solution["Is on"]["g1"] ==
[1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0]
@test psuedo_solution["Profiled production (MW)"]["g1"] ==
[199.0, 299.0, 399.0, 499.0, 199.0, 299.0, 399.0, 499.0]
@test psuedo_solution["Spinning reserve (MW)"]["r1"]["g1"] ==
[31.0, 32.0, 33.0, 34.0, 31.0, 32.0, 33.0, 34.0]
end
end