Compare commits

...

184 Commits

Author SHA1 Message Date
9af877ca60 web: Move to separate repository 2025-11-20 10:18:08 -06:00
ccda7dde9b web: refine style 2025-11-14 15:40:33 -06:00
95ad6eca00 web: Minor changes to icons 2025-11-14 11:14:27 -06:00
61179bb7c7 web: add redo 2025-11-13 08:55:03 -06:00
8b170cdbbf web: update buttons 2025-11-13 08:47:27 -06:00
12c5f9ccca web/backend: Show elapsed time 2025-11-12 15:33:22 -06:00
f53d704e74 web: Show position in line 2025-11-12 15:09:18 -06:00
22f1f9dae5 web/backend: Allow multiple workers 2025-11-12 13:20:45 -06:00
1ef7fc5535 web: Fix timezone 2025-11-12 12:37:16 -06:00
8fe330857d web: Update Dockerfile, Makefile 2025-11-12 12:26:05 -06:00
c9e18d4fe4 web/backend: use multiprocessing instead of threads; improve logging 2025-11-12 11:52:30 -06:00
117bf478f4 web/backend: Run tasks in separate thread 2025-11-12 09:34:45 -06:00
6c57e960aa web: frontend: Switch to express 2025-11-11 15:19:34 -06:00
4c34931b34 web: Prepare containers 2025-11-11 14:46:16 -06:00
18ab2c40ba web: Implement Jobs component 2025-11-11 11:54:22 -06:00
3168036bca web: implement onSolve 2025-11-11 10:49:40 -06:00
60fdf129a1 web: backend: Add CORS endpoints 2025-11-11 10:48:36 -06:00
49e4cdef59 web: backend: Allow user to choose HOST 2025-11-11 10:29:17 -06:00
a465154fec web: Add routes, solve button 2025-11-11 09:44:54 -06:00
1254780e42 web: backend: Implement view endpoint 2025-11-07 11:32:14 -06:00
ad8ee6fe6b web: backend: Make JobProcessor more abstract 2025-11-07 11:16:13 -06:00
e52798da7a web: backend: minor fixes 2025-11-07 10:59:00 -06:00
35dd5ab1a9 web: backend: Implement job queue 2025-11-06 15:22:19 -06:00
5c7b8038a1 web: Initial backend implementation 2025-11-06 13:49:02 -06:00
c2d5e58c75 web: Reorganize into frontend/backend 2025-11-06 12:41:03 -06:00
54b5b9dd7f docs: Fix broken image link 2025-11-05 09:57:17 -06:00
395c041202 Merge branch 'hotfix/0.4.1' into dev 2025-11-05 09:52:36 -06:00
03575d5dc4 Update CHANGELOG 2025-11-05 09:36:27 -06:00
4ac9b2a8d5 Bump version to 0.4.1 2025-11-05 09:33:30 -06:00
8763c8d8f7 Bump min julia version to 1.10; disable flaky tests 2025-11-05 09:27:55 -06:00
bbe57f88cd Fix some multi-threading issues
Replace nthreads by maxthreadid and use :static scheduling to disable
task migration. Fixes #56.
2025-11-05 09:09:45 -06:00
8e2769dc0e web: Update favicon 2025-09-10 15:02:04 -05:00
e96557bed8 web: Add placeholder text 2025-09-10 14:54:26 -05:00
5b9727b0ba web: Add support for transmission contingencies 2025-09-10 14:28:14 -05:00
9f560df4f5 web: Add support for price-sensitive loads 2025-09-10 12:30:11 -05:00
356046be7b web: Standardize capitalization in section headers 2025-09-10 11:55:18 -05:00
201dd34b30 web: Add support for storage units 2025-09-10 11:54:17 -05:00
fd95cefefc web: Handle error during table data update 2025-09-10 10:58:54 -05:00
930c6a3277 web: Optimize table data updates 2025-09-10 10:04:24 -05:00
3eb4cceb54 web: Clean up console logs and reset active cell after edit 2025-09-09 12:11:54 -05:00
5fbf9af286 web: Fix failing tests 2025-09-09 12:05:01 -05:00
1c821dde14 web: changeTimeHorizon, changeTimeStep: Adjust profiled units 2025-09-09 11:39:20 -05:00
055faefa28 web: Sync TextInputRow value with initialValue changes 2025-09-09 11:28:38 -05:00
af7cb92282 web: Adjust padding and margins 2025-09-09 11:10:04 -05:00
872cb7a66e web: Preserve active cell state during table updates 2025-09-09 10:56:57 -05:00
771eb5fa6d web: Fix update columns 2025-09-09 10:56:56 -05:00
840eea9879 web: Add Dockerfile 2025-06-27 11:54:49 -05:00
0dc0a5b460 Implement web case builder
Co-authored-by: Alinson S. Xavier <git@axavier.org>
Co-authored-by: Shaoming Xu <xsm90827@gmail.com>
2025-06-27 11:42:03 -05:00
a09e25db0f web: Remove TEST_SCENARIO and update scenario initialization 2025-06-27 11:41:03 -05:00
53489c1638 web: Update nullable number handling 2025-06-27 11:37:50 -05:00
fff70cce67 Fix ucjl-0.2.json.gz fixture 2025-06-27 10:59:16 -05:00
869498fa97 web: implement data migration, reorganize data folder 2025-06-27 10:59:02 -05:00
cac9d7e230 web: Transmission lines 2025-06-27 10:30:14 -05:00
eb3d39b1ab web: ThermalUnits: onDataChanged 2025-06-25 13:59:07 -05:00
3bf028577e web: ThermalUnits: CSV upload 2025-06-25 13:17:04 -05:00
3f10ad23ca web: ProfiledUnits: Revise CSV upload 2025-06-25 12:15:09 -05:00
7c752e4c31 web: ThermalUnits: Add, delete, and rename 2025-06-25 11:47:44 -05:00
dea5217916 web: ThermalUnits: Implement CSV download 2025-06-25 10:55:45 -05:00
012331c4bd web: DataTable: Use list editor for boolean values 2025-06-25 10:46:55 -05:00
1fea873ddf web: ThermalUnits: Build table data 2025-06-25 10:32:04 -05:00
d78700bdc6 web: Start implementation of ThermalUnitsComponent 2025-06-25 10:12:32 -05:00
02ddaf20dc web: Flatten dir structure 2025-06-25 08:58:39 -05:00
be500b920e web: Add undo functionality 2025-06-24 12:19:41 -05:00
9d48112bb9 web: Propagate bus deletion and renaming 2025-06-24 11:28:21 -05:00
5bfc3ffa55 web: Improve CSV validation 2025-06-24 11:06:26 -05:00
1b37af82e3 web: ProfiledUnits: Add data change and rename functionality 2025-06-24 10:39:06 -05:00
86aababf33 web: ProfiledUnits: Rename and delete 2025-06-24 09:21:54 -05:00
8397571c11 web: Add createProfiledUnit 2025-06-23 16:48:10 -05:00
8827f9e6c8 web: profiled units: Allow CSV upload 2025-06-23 16:06:23 -05:00
eb862e5701 web: Update busOperations to support time-indexed loads 2025-06-23 10:57:46 -05:00
80d8bb838c web: Profiled units 2025-05-29 12:33:06 -05:00
ee7a948a78 web: display toast, maintain table stage, localStorage 2025-05-21 12:01:04 -05:00
0cf93e7aa0 web: use defaults; calculate table height 2025-05-20 10:27:24 -05:00
6d9bbaab4e web: Reorganize 2025-05-16 14:37:53 -05:00
957294f220 web: Accept gz files 2025-05-16 14:32:03 -05:00
d8feef5431 web: Allow changing parameters 2025-05-16 13:44:14 -05:00
6469840f0a Validation; reformat source code 2025-05-15 14:04:16 -05:00
062b38514b Buses 2025-05-15 11:49:42 -05:00
ea58cf1615 web: Initial version 2025-05-12 14:36:57 -05:00
facc9faabf Update README.md 2024-08-19 10:17:16 -05:00
Feng
d34378c660 Update decomposition.md
put application names in subtitle: production cost modeling for time decomposition; stochastic UC for scenario decomposition.
2024-07-14 14:35:02 -05:00
4f04f0dd66 Minor fixes 2024-05-21 10:56:24 -05:00
b928baeeda Minor fixes 2024-05-21 10:47:26 -05:00
4b234a49c7 Bump version to 0.4 2024-05-21 10:36:23 -05:00
afcf8cfabb Update docs; prepare for v0.4 release 2024-05-21 10:33:51 -05:00
c638aaf4ec Docs: Rewrite model customization 2024-05-09 11:03:40 -05:00
de0339f7be Update docs 2024-05-09 09:59:07 -05:00
0835f0bf8f Revise usage.jl 2024-05-09 09:51:12 -05:00
8b78f38c25 Convert usage.md to Literate.jl 2024-05-08 11:08:43 -05:00
007de88c3f Update docs 2024-02-22 10:24:21 -06:00
010558b3ae Document energy storage 2024-02-22 10:14:23 -06:00
4e8c281713 Update docs 2024-02-21 10:54:47 -06:00
12cf9d4b7b Update docs 2024-02-20 15:13:02 -06:00
b6ab8fcf4b Update docs 2024-02-20 11:23:37 -06:00
365cf0d522 Update docs 2024-02-20 10:35:23 -06:00
14dbf795fb Update docs 2024-02-19 16:04:50 -06:00
bfa967db6f Start documenting constraints & obj 2024-02-19 14:51:06 -06:00
58cc33ac69 Remove unused 'reactance' field 2023-08-01 12:31:19 -05:00
b555f9885a Minor correction to ISF definition 2023-08-01 12:21:22 -05:00
b39b14afa4 docs: Minor changes; add examples to repository 2023-07-27 12:02:13 -05:00
d49712f41b initcond: Apply to instance instead of scenario 2023-07-27 11:49:47 -05:00
beaf0b785f Add zenodo.json 2023-07-27 11:10:41 -05:00
9853b15f1c Merge pull request #40 from hejun0524/storage_units
Storage units
2023-07-26 09:22:37 -05:00
81d4ff5b9d Merge pull request #31 from hejun0524/dev
Time Decomposition and Marketing
2023-07-26 09:17:49 -05:00
Jun He
ad50cdd935 update doc for storage units 2023-07-18 16:04:52 -04:00
Jun He
8f0661c93f reformat one line 2023-07-17 12:04:16 -04:00
Jun He
ca092a67ce storage units 2023-07-17 11:39:31 -04:00
Jun He
82cefe2652 disable HiGHS logging 2023-07-16 16:57:53 -04:00
Jun He
cd96b28076 market json gz 2023-06-16 17:11:41 -04:00
Jun He
3086e71611 updated doc with solve_market 2023-06-16 17:02:06 -04:00
Jun He
0bb175078b da to rt market with tests 2023-06-16 15:35:51 -04:00
Jun He
2fb89045cd disable optimizer logging 2023-06-16 15:35:10 -04:00
Jun He
f31921fc4f added Time horizon (min) 2023-06-13 15:05:37 -04:00
Jun He
6ea769a68c add in after_build and after_optimize 2023-06-07 13:24:27 -04:00
Jun He
2d510ca7ea updated doc for time decomp 2023-06-07 13:22:56 -04:00
Jun He
d602b686bc add default values 2023-06-07 13:22:38 -04:00
Jun He
53052ec895 standalone test integration 2023-05-27 15:43:39 -04:00
Jun He
f59914f265 Merge remote-tracking branch 'upstream/dev' into dev 2023-05-27 14:54:44 -04:00
Jun He
7201acde78 time decomp bug fix 2023-05-27 14:49:43 -04:00
7a96f8cc1e Merge pull request #32 from oyurdakul/progressive-hedging
progressive hedging
2023-05-26 11:58:18 -05:00
b8ada6432a Format source code 2023-05-26 11:50:41 -05:00
03bf1c4c04 PH: Rename vars, remove return value 2023-05-26 11:47:34 -05:00
3961aedaf5 Revise docs and struct name; add basic MPI test 2023-05-26 10:52:23 -05:00
oyurdakul
9dc3607c56 progressive hedging 2023-05-22 16:41:00 -05:00
Jun He
ec2d56602b updated the warning block syntax 2023-05-20 12:13:28 -04:00
40270b0030 Make test/ a standalone project 2023-05-19 15:35:49 -05:00
Jun He
7c41a9761c warning on nested time decomp 2023-05-19 13:57:56 -04:00
Jun He
6f9420874d added more comments 2023-05-19 13:57:33 -04:00
Jun He
eff5908b13 time decomposition doc 2023-05-19 13:31:44 -04:00
Jun He
adcaf6fc55 time decomposition tests 2023-05-19 13:31:32 -04:00
Jun He
46259f7c1c time decomposition src code 2023-05-19 13:31:20 -04:00
oyurdakul
e8d8272510 Fix pwlcosts bug 2023-05-19 11:34:02 -05:00
6db2ca76e8 Fix formatting 2023-05-19 10:40:25 -05:00
4adb3344ac Profiled units: minor changes 2023-05-19 10:38:35 -05:00
Jun He
316d0bdf5a added profiled units in slice 2023-05-05 14:48:42 -04:00
Jun He
33f8ec26d5 renamed capacity to max_power 2023-05-05 14:48:15 -04:00
Jun He
41790db448 new test case gz file 2023-04-22 14:09:40 -04:00
Jun He
baf529a15d added commitment status to thermal 2023-04-22 14:02:03 -04:00
Jun He
b71a1c3d5f Updated randomize, validate and initial conditions 2023-04-07 16:42:03 -04:00
Jun He
bea42d174c Reformatted code 2023-04-06 16:21:58 -04:00
Jun He
896ef0f3e3 Added min power, fixed typo 2023-04-06 16:16:30 -04:00
Jun He
cb7f9e3b27 Added minimum power to profiled generator 2023-04-06 16:16:04 -04:00
319a787904 Merge pull request #26 from hejun0524/dev
LMP Methods & Profiled Units
2023-04-06 13:11:04 -05:00
b1c963f217 Rename 'production' to 'thermal production' 2023-04-04 15:59:41 -05:00
19534a128f Rename Unit to ThermalUnit 2023-04-04 15:40:44 -05:00
Jun He
51f6aa9a80 Create case14-profiled.json.gz 2023-03-31 15:19:46 -04:00
Jun He
f2c0388cac Updated the docs 2023-03-31 15:11:59 -04:00
Jun He
3564358a63 Re-formatted the codes 2023-03-31 15:11:47 -04:00
Jun He
b2ed0f67c1 Added the profiled units 2023-03-31 15:11:37 -04:00
Jun He
2a6c206e08 updated LMP for UC scenario 2023-03-30 23:19:24 -04:00
Jun He
30a4284119 Merge remote-tracking branch 'upstream/dev' into dev 2023-03-30 14:35:09 -04:00
Jun He
71ed55cb40 Formatted codes on the LMP dev branch 2023-03-30 14:30:10 -04:00
Jun He
0b95df25ec typo fix in generator json example 2023-03-24 10:56:41 -04:00
Jun He
5f5c8b66eb more condition checking on AELMP 2023-03-19 14:28:39 -04:00
52f1ff9a27 Merge pull request #25 from oyurdakul/stochastic-extension
stochastic extension w/ scenarios
2023-03-16 12:10:13 -05:00
414128cc0b Correct optimize!, add stochastic test case 2023-03-16 12:03:40 -05:00
20939dc4b7 Minor edits to instance/structs.jl 2023-03-16 10:43:30 -05:00
d8741f04a0 Minor edits to instance/read.jl 2023-03-16 10:38:08 -05:00
3b6d810884 Remove duplicate format.jl file 2023-03-16 10:24:31 -05:00
204c5d900f Remove unused dependency 2023-03-16 10:23:40 -05:00
cb9334c0a3 Minor changes to tests 2023-03-16 10:21:31 -05:00
31e0613134 Remove unused dependency & debug statements 2023-03-16 10:09:01 -05:00
4827c29230 Add Jun to authors 2023-03-15 12:41:09 -05:00
19e84bac07 Reformat source code 2023-03-15 12:27:43 -05:00
d7d2a3fcf6 AELMP: Convert warnings into errors; update docstrings 2023-03-15 12:23:18 -05:00
784ebfa199 ConventionalLMP: turn warnings into errors, remove some inline comments 2023-03-15 12:15:57 -05:00
d2e11eee42 Flatten dir structure, update docstrings 2023-03-15 12:08:35 -05:00
34ca6952fb Revise docs 2023-03-15 11:34:50 -05:00
Jun He
bc3aee38f8 modified the tests for LMP and AELMP 2023-03-08 13:35:33 -05:00
Jun He
415732f0ec updated the doc with LMP and AELMP 2023-03-08 13:34:10 -05:00
Jun He
5c91dc2ac9 re-designed the LMP methods
The LMP and AELMP methods are re-designed to be dependent on the instance object instead of input files, and to have a unified API style for purposes of flexibility and consistency.
2023-03-08 13:33:47 -05:00
oyurdakul
ad4a754d63 read and repair scenario 2023-03-06 17:07:54 -06:00
oyurdakul
481f5a904c read and repair scenario 2023-03-06 17:03:34 -06:00
oyurdakul
7e8a2ee026 stochastic extension 2023-02-22 12:44:46 -06:00
oyurdakul
c95b01dadf stochastic extension w/ scenarios 2023-02-08 23:46:10 -06:00
Feng
8fc84412eb Update README.md
minor corrections on grammer.
2022-08-19 11:03:21 -05:00
6573bb7ea2 Update README.md 2022-07-18 09:54:15 -06:00
1769f2a932 Project.toml: Remove Revise.jl 2022-07-18 09:42:00 -06:00
4dc39363e8 Update references, copyright notices, links 2022-07-18 09:40:52 -06:00
5fef01cd99 Improve docs 2022-07-17 15:50:42 -06:00
18daaf5358 Switch to Documenter.jl 2022-07-17 14:44:58 -06:00
b68b4ff9e4 Update CHANGELOG and docs 2022-07-13 10:14:42 -05:00
6e30645084 Allow v0.3 to read v0.2 instance files 2022-07-12 11:57:55 -05:00
678e6aa2f5 Update docs 2022-07-11 12:16:06 -05:00
144 changed files with 9558 additions and 2969 deletions


@@ -1,4 +1,4 @@
-name: Tests
+name: Build & Test
 on:
   push:
   pull_request:
@@ -6,19 +6,30 @@ on:
     - cron: '45 10 * * *'
 jobs:
   test:
+    name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }}
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        julia-version: ['1.6', '1.7']
-        julia-arch: [x64]
-        os: [ubuntu-latest, windows-latest, macOS-latest]
-        exclude:
-          - os: macOS-latest
-            julia-arch: x86
+        version: ['1.10', '1.12']
+        os:
+          - ubuntu-latest
+        arch:
+          - x64
     steps:
       - uses: actions/checkout@v2
-      - uses: julia-actions/setup-julia@latest
+      - uses: julia-actions/setup-julia@v1
         with:
-          version: ${{ matrix.julia-version }}
-      - uses: julia-actions/julia-buildpkg@latest
-      - uses: julia-actions/julia-runtest@latest
+          version: ${{ matrix.version }}
+          arch: ${{ matrix.arch }}
+      - name: Run tests
+        shell: julia --color=yes --project=test {0}
+        run: |
+          using Pkg
+          Pkg.develop(path=".")
+          Pkg.update()
+          using UnitCommitmentT
+          try
+              runtests()
+          catch
+              exit(1)
+          end

6
.gitignore vendored

@@ -1,3 +1,4 @@
+*-off.md
 *.bak
 *.gz
 *.ipynb
@@ -19,6 +20,7 @@
 .apdisk
 .com.apple.timemachine.donotpresent
 .fseventsd
+.idea
 .ipy*
 .vscode
 Icon
@@ -32,6 +34,10 @@ benchmark/tables
 benchmark/tmp.json
 build
 docs/_build
+docs/src/tutorials/customizing.md
+docs/src/tutorials/lmp.md
+docs/src/tutorials/market.md
+docs/src/tutorials/usage.md
 instances/**/*.json
 instances/_source
 local

27
.zenodo.json Normal file

@@ -0,0 +1,27 @@
{
"creators": [
{
"orcid": "0000-0002-5022-9802",
"affiliation": "Argonne National Laboratory",
"name": "Santos Xavier, Alinson"
},
{
"affiliation": "University of Florida",
"name": "Kazachkov, Aleksandr M."
},
{
"affiliation": "Technische Universität Berlin",
"name": "Yurdakul, Ogün"
},
{
"affiliation": "Purdue University",
"name": "He, Jun"
},
{
"affiliation": "Argonne National Laboratory",
"name": "Qiu, Feng"
}
],
"title": "UnitCommitment.jl: A Julia/JuMP Optimization Package for Security-Constrained Unit Commitment",
"description": "<b>UnitCommitment.jl</b> (UC.jl) is an optimization package for the Security-Constrained Unit Commitment Problem (SCUC), a fundamental optimization problem in power systems used, for example, to clear the day-ahead electricity markets. The package provides benchmark instances for the problem and Julia/JuMP implementations of state-of-the-art mixed-integer programming formulations."
}


@@ -11,15 +11,35 @@ All notable changes to this project will be documented in this file.
 [semver]: https://semver.org/spec/v2.0.0.html
 [pkjjl]: https://pkgdocs.julialang.org/v1/compatibility/#compat-pre-1.0
-## [Unreleased]
-### Added
-- Add multiple reserve products
+## [0.4.1] - 2025-11-05
+### Fixed
+- Fix multi-threading issues in Julia 1.12
 ### Changed
-- To support multiple reserve products, the input data format has been modified as follows:
+- The package now requires Julia 1.10 or newer
+## [0.4.0] - 2024-05-21
+### Added
+- Add support for two-stage stochastic problems
+- Add support for day-ahead and real-time market clearing simulation
+- Add time decomposition methods
+- Add scenario decomposition methods (progressive hedging)
+- Add support for energy storage units
+- Rewrite documentation with runnable examples
+## [0.3.0] - 2022-07-18
+### Added
+- Add support for multiple reserve products and zonal reserves.
+- Add flexiramp reserve products, following WanHob2016's formulation (@oyurdakul, #21).
+- Add 365 variations for each MATPOWER instance, corresponding to each day of the year.
+### Changed
+- To support multiple/zonal reserves, the input data format has been modified as follows:
   - In `Generators`, replace `Provides spinning reserves?` by `Reserve eligibility`
   - In `Parameters`, remove `Reserve shortfall penalty`
   - Revise `Reserves` section
+- To allow new versions of UnitCommitment.jl to read old instance files, a new required field `Version` has been added to the `Parameters` section. To load v0.2 files in v0.3, please add `{"Parameters":{"Version":"0.2"}}` to the file.
+- Benchmark test cases are now downloaded on-the-fly as needed, instead of being stored in our GitHub repository. Test cases can also be directly downloaded from: https://axavier.org/UnitCommitment.jl/
 ## [0.2.2] - 2021-07-21


@@ -1,4 +1,4 @@
-Copyright © 2020, UChicago Argonne, LLC
+Copyright © 2020-2022, UChicago Argonne, LLC
 All Rights Reserved


@@ -2,22 +2,10 @@
 # Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
 # Released under the modified BSD license. See COPYING.md for more details.
-VERSION := 0.2
+VERSION := 0.4
-clean:
-    rm -rfv build Manifest.toml test/Manifest.toml deps/formatter/build deps/formatter/Manifest.toml
 docs:
-    cd docs; make clean; make dirhtml
-    rsync -avP --delete-after docs/_build/dirhtml/ ../docs/$(VERSION)/
+    cd docs; julia --project=. -e 'include("make.jl"); make()'; cd ..
+    rsync -avP --delete-after docs/build/ ../docs/$(VERSION)/
-format:
-    cd deps/formatter; ../../juliaw format.jl
-test: test/Manifest.toml
-    ./juliaw test/runtests.jl
-test/Manifest.toml: test/Project.toml
-    julia --project=test -e "using Pkg; Pkg.instantiate()"
-.PHONY: docs test format install-deps
+.PHONY: docs


@@ -2,7 +2,7 @@ name = "UnitCommitment"
 uuid = "64606440-39ea-11e9-0f29-3303a1d3d877"
 authors = ["Santos Xavier, Alinson <axavier@anl.gov>"]
 repo = "https://github.com/ANL-CEEESA/UnitCommitment.jl"
-version = "0.3.0"
+version = "0.4.1"
 [deps]
 DataStructures = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
@@ -17,8 +17,9 @@ MathOptInterface = "b8f27783-ece8-5eb3-8dc8-9495eed66fee"
 PackageCompiler = "9b87118b-4619-50d2-8e1e-99f35a4d4d9d"
 Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
 Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
-Revise = "295af30f-e4ad-537b-8983-00126c2a3abe"
 SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
+TimerOutputs = "a759f4b9-e2f1-59dc-863e-4aeb61b1ea8f"
+MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"
 [compat]
 DataStructures = "0.18"
@@ -27,5 +28,7 @@ GZip = "0.5"
 JSON = "0.21"
 JuMP = "1"
 MathOptInterface = "1"
+MPI = "0.20"
 PackageCompiler = "1"
-julia = "1"
+julia = "1.10"
+TimerOutputs = "0.5"


@@ -87,20 +87,18 @@ UnitCommitment.write("/tmp/output.json", solution)
 ## Documentation
-1. [Usage](https://anl-ceeesa.github.io/UnitCommitment.jl/0.2/usage/)
-2. [Data Format](https://anl-ceeesa.github.io/UnitCommitment.jl/0.2/format/)
-3. [Instances](https://anl-ceeesa.github.io/UnitCommitment.jl/0.2/instances/)
-4. [JuMP Model](https://anl-ceeesa.github.io/UnitCommitment.jl/0.2/model/)
+See official documentation at: https://anl-ceeesa.github.io/UnitCommitment.jl/
 ## Authors
 * **Alinson S. Xavier** (Argonne National Laboratory)
 * **Aleksandr M. Kazachkov** (University of Florida)
 * **Ogün Yurdakul** (Technische Universität Berlin)
+* **Jun He** (Purdue University)
 * **Feng Qiu** (Argonne National Laboratory)
 ## Acknowledgments
-* We would like to **Yonghong Chen** (Midcontinent Independent System Operator), **Feng Pan** (Pacific Northwest National Laboratory) for valuable feedback on early versions of this package.
+* We would like to thank **Yonghong Chen** (Midcontinent Independent System Operator), **Feng Pan** (Pacific Northwest National Laboratory) for valuable feedback on early versions of this package.
 * Based upon work supported by **Laboratory Directed Research and Development** (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-06CH11357
@@ -110,15 +108,15 @@ UnitCommitment.write("/tmp/output.json", solution)
 If you use UnitCommitment.jl in your research (instances, models or algorithms), we kindly request that you cite the package as follows:
-* **Alinson S. Xavier, Aleksandr M. Kazachkov, Feng Qiu**. "UnitCommitment.jl: A Julia/JuMP Optimization Package for Security-Constrained Unit Commitment". Zenodo (2020). [DOI: 10.5281/zenodo.4269874](https://doi.org/10.5281/zenodo.4269874).
+* **Alinson S. Xavier, Aleksandr M. Kazachkov, Ogün Yurdakul, Jun He, Feng Qiu**. "UnitCommitment.jl: A Julia/JuMP Optimization Package for Security-Constrained Unit Commitment (Version 0.4)". Zenodo (2024). [DOI: 10.5281/zenodo.4269874](https://doi.org/10.5281/zenodo.4269874).
-If you use the instances, we additionally request that you cite the original sources, as described in the [instances page](docs/instances.md).
+If you use the instances, we additionally request that you cite the original sources, as described in the documentation.
 ## License
 ```text
 UnitCommitment.jl: A Julia/JuMP Optimization Package for Security-Constrained Unit Commitment
-Copyright © 2020-2021, UChicago Argonne, LLC. All Rights Reserved.
+Copyright © 2020-2024, UChicago Argonne, LLC. All Rights Reserved.
 Redistribution and use in source and binary forms, with or without modification, are permitted
 provided that the following conditions are met:


@@ -1,5 +0,0 @@
[deps]
JuliaFormatter = "98e50ef6-434e-11e9-1051-2b60c6c9e899"
[compat]
JuliaFormatter = "0.14.4"


@@ -1,9 +0,0 @@
using JuliaFormatter
format(
[
"../../src",
"../../test",
"../../benchmark/run.jl",
],
verbose=true,
)


@@ -1,14 +0,0 @@
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

10
docs/Project.toml Normal file

@@ -0,0 +1,10 @@
[deps]
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
Glob = "c27321d9-0574-5035-807b-f59d2c89b15c"
HiGHS = "87dc4568-4c63-4d18-b0c0-bb2238e4078b"
JSON = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"
MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"
Revise = "295af30f-e4ad-537b-8983-00126c2a3abe"
UnitCommitment = "64606440-39ea-11e9-0f29-3303a1d3d877"


@@ -1,49 +0,0 @@
h1.site-logo {
font-size: 30px !important;
}
h1.site-logo small {
font-size: 20px !important;
}
h1.site-logo {
font-size: 30px !important;
}
h1.site-logo small {
font-size: 20px !important;
}
tbody, thead, pre {
border: 1px solid rgba(0, 0, 0, 0.25);
}
table td, th {
padding: 8px;
}
table p {
margin-bottom: 0;
}
table td code {
white-space: nowrap;
}
table tr,
table th {
border-bottom: 1px solid rgba(0, 0, 0, 0.1);
}
table tr:last-child {
border-bottom: 0;
}
pre {
box-shadow: inherit !important;
background-color: #fff;
}
.text-align\:center {
text-align: center;
}


@@ -1,16 +0,0 @@
project = "UnitCommitment.jl"
copyright = "2020-2021, UChicago Argonne, LLC"
author = ""
release = "0.2"
extensions = ["myst_parser"]
templates_path = ["_templates"]
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
html_theme = "sphinx_book_theme"
html_static_path = ["_static"]
html_css_files = ["custom.css"]
html_theme_options = {
"repository_url": "https://github.com/ANL-CEEESA/UnitCommitment.jl/",
"use_repository_button": True,
"extra_navbar": "",
}
html_title = f"UnitCommitment.jl<br/><small>{release}</small>"

1158
docs/example/out.json Normal file

File diff suppressed because it is too large

495
docs/example/s1.json Normal file

@@ -0,0 +1,495 @@
{
"Parameters": {
"Version": "0.3",
"Time horizon (h)": 4
},
"Generators": {
"g1": {
"Bus": "b1",
"Production cost curve (MW)": [
100,
110,
130,
135
],
"Production cost curve ($)": [
1400,
1600,
2200,
2400
],
"Startup delays (h)": [
1,
2,
3
],
"Startup costs ($)": [
1000.0,
1500.0,
2000.0
],
"Initial status (h)": -100,
"Initial power (MW)": 0
},
"g2": {
"Bus": "b2",
"Production cost curve (MW)": [
0,
47,
94,
140
],
"Production cost curve ($)": [
0,
2256.00,
4733.37,
7395.39
],
"Startup delays (h)": [
1,
4
],
"Startup costs ($)": [
3000.0,
4000.0
],
"Ramp up limit (MW)": 98.0,
"Ramp down limit (MW)": 98.0,
"Startup limit (MW)": 98.0,
"Shutdown limit (MW)": 98.0,
"Minimum uptime (h)": 4,
"Minimum downtime (h)": 4,
"Maximum daily energy (MWh)": null,
"Maximum daily starts": null,
"Initial status (h)": -8,
"Initial power (MW)": 0,
"Reserve eligibility": [
"r1"
]
},
"g3": {
"Bus": "b3",
"Production cost curve (MW)": [
0,
33,
66,
100
],
"Production cost curve ($)": [
0,
1113.75,
2369.07,
3891.54
],
"Startup delays (h)": [
1,
4,
8
],
"Startup costs ($)": [
1000.0,
2000.0,
3000.0
],
"Ramp up limit (MW)": 70.0,
"Ramp down limit (MW)": 70.0,
"Startup limit (MW)": 70.0,
"Shutdown limit (MW)": 70.0,
"Must run?": true,
"Minimum uptime (h)": 1,
"Minimum downtime (h)": 1,
"Maximum daily energy (MWh)": null,
"Maximum daily starts": null,
"Initial status (h)": -6,
"Initial power (MW)": 0,
"Reserve eligibility": [
"r1"
]
},
"g4": {
"Bus": "b6",
"Production cost curve (MW)": [
33,
66,
100
],
"Production cost curve ($)": [
1113.75,
2369.07,
3891.54
],
"Initial status (h)": -100,
"Initial power (MW)": 0,
"Reserve eligibility": [
"r1"
]
},
"g5": {
"Bus": "b8",
"Production cost curve (MW)": [
33,
66,
100
],
"Production cost curve ($)": [
1113.75,
2369.07,
3891.54
],
"Initial status (h)": -100,
"Initial power (MW)": 0,
"Reserve eligibility": [
"r1"
]
},
"g6": {
"Bus": "b8",
"Production cost curve (MW)": [
100
],
"Production cost curve ($)": [
10000.00
],
"Initial status (h)": -100,
"Initial power (MW)": 0,
"Reserve eligibility": [
"r1"
]
}
},
"Buses": {
"b1": {
"Load (MW)": 0.0
},
"b2": {
"Load (MW)": [
26.01527,
24.46212,
23.29725,
22.90897
]
},
"b3": {
"Load (MW)": [
112.93263,
106.19039,
101.1337,
99.44814
]
},
"b4": {
"Load (MW)": [
57.30552,
53.88429,
51.31838,
50.46307
]
},
"b5": {
"Load (MW)": [
9.11134,
8.56738,
8.15941,
8.02342
]
},
"b6": {
"Load (MW)": [
13.42723,
12.62561,
12.02439,
11.82398
]
},
"b7": {
"Load (MW)": 0.0
},
"b8": {
"Load (MW)": 0.0
},
"b9": {
"Load (MW)": [
35.36638,
33.25495,
31.67138,
31.14353
]
},
"b10": {
"Load (MW)": [
10.78974,
10.14558,
9.66246,
9.50141
]
},
"b11": {
"Load (MW)": [
4.19601,
3.9455,
3.75762,
3.69499
]
},
"b12": {
"Load (MW)": [
7.31305,
6.87645,
6.549,
6.43985
]
},
"b13": {
"Load (MW)": [
16.18461,
15.21837,
14.49368,
14.25212
]
},
"b14": {
"Load (MW)": [
17.86302,
16.79657,
15.99673,
15.73012
]
}
},
"Transmission lines": {
"l1": {
"Source bus": "b1",
"Target bus": "b2",
"Reactance (ohms)": 0.05917000000000001,
"Susceptance (S)": 29.496860773945063,
"Normal flow limit (MW)": 300.0,
"Emergency flow limit (MW)": 400.0,
"Flow limit penalty ($/MW)": 1000.0
},
"l2": {
"Source bus": "b1",
"Target bus": "b5",
"Reactance (ohms)": 0.22304000000000002,
"Susceptance (S)": 7.825184953346168
},
"l3": {
"Source bus": "b2",
"Target bus": "b3",
"Reactance (ohms)": 0.19797,
"Susceptance (S)": 8.816129979261149
},
"l4": {
"Source bus": "b2",
"Target bus": "b4",
"Reactance (ohms)": 0.17632,
"Susceptance (S)": 9.898645939169292
},
"l5": {
"Source bus": "b2",
"Target bus": "b5",
"Reactance (ohms)": 0.17388,
"Susceptance (S)": 10.037550333530765
},
"l6": {
"Source bus": "b3",
"Target bus": "b4",
"Reactance (ohms)": 0.17103,
"Susceptance (S)": 10.204813494675376
},
"l7": {
"Source bus": "b4",
"Target bus": "b5",
"Reactance (ohms)": 0.04211,
"Susceptance (S)": 41.44690695783257
},
"l8": {
"Source bus": "b4",
"Target bus": "b7",
"Reactance (ohms)": 0.20911999999999997,
"Susceptance (S)": 8.346065665619404
},
"l9": {
"Source bus": "b4",
"Target bus": "b9",
"Reactance (ohms)": 0.55618,
"Susceptance (S)": 3.1380654680037567
},
"l10": {
"Source bus": "b5",
"Target bus": "b6",
"Reactance (ohms)": 0.25201999999999997,
"Susceptance (S)": 6.92536009838239
},
"l11": {
"Source bus": "b6",
"Target bus": "b11",
"Reactance (ohms)": 0.1989,
"Susceptance (S)": 8.774908255376218
},
"l12": {
"Source bus": "b6",
"Target bus": "b12",
"Reactance (ohms)": 0.25581,
"Susceptance (S)": 6.8227561549365925
},
"l13": {
"Source bus": "b6",
"Target bus": "b13",
"Reactance (ohms)": 0.13027,
"Susceptance (S)": 13.397783465067395
},
"l14": {
"Source bus": "b7",
"Target bus": "b8",
"Reactance (ohms)": 0.17615,
"Susceptance (S)": 9.908198989465395
},
"l15": {
"Source bus": "b7",
"Target bus": "b9",
"Reactance (ohms)": 0.11001,
"Susceptance (S)": 15.865187273832648
},
"l16": {
"Source bus": "b9",
"Target bus": "b10",
"Reactance (ohms)": 0.0845,
"Susceptance (S)": 20.65478404727017
},
"l17": {
"Source bus": "b9",
"Target bus": "b14",
"Reactance (ohms)": 0.27038,
"Susceptance (S)": 6.4550974628091184
},
"l18": {
"Source bus": "b10",
"Target bus": "b11",
"Reactance (ohms)": 0.19207,
"Susceptance (S)": 9.08694357262628
},
"l19": {
"Source bus": "b12",
"Target bus": "b13",
"Reactance (ohms)": 0.19988,
"Susceptance (S)": 8.73188539120637
},
"l20": {
"Source bus": "b13",
"Target bus": "b14",
"Reactance (ohms)": 0.34802,
"Susceptance (S)": 5.0150257226433235
}
},
"Contingencies": {
"c1": {
"Affected lines": [
"l1"
]
},
"c2": {
"Affected lines": [
"l2"
]
},
"c3": {
"Affected lines": [
"l3"
]
},
"c4": {
"Affected lines": [
"l4"
]
},
"c5": {
"Affected lines": [
"l5"
]
},
"c6": {
"Affected lines": [
"l6"
]
},
"c7": {
"Affected lines": [
"l7"
]
},
"c8": {
"Affected lines": [
"l8"
]
},
"c9": {
"Affected lines": [
"l9"
]
},
"c10": {
"Affected lines": [
"l10"
]
},
"c11": {
"Affected lines": [
"l11"
]
},
"c12": {
"Affected lines": [
"l12"
]
},
"c13": {
"Affected lines": [
"l13"
]
},
"c15": {
"Affected lines": [
"l15"
]
},
"c16": {
"Affected lines": [
"l16"
]
},
"c17": {
"Affected lines": [
"l17"
]
},
"c18": {
"Affected lines": [
"l18"
]
},
"c19": {
"Affected lines": [
"l19"
]
},
"c20": {
"Affected lines": [
"l20"
]
}
},
"Price-sensitive loads": {
"ps1": {
"Bus": "b3",
"Revenue ($/MW)": 100.0,
"Demand (MW)": 50.0
}
},
"Reserves": {
"r1": {
"Type": "Spinning",
"Amount (MW)": 100.0,
"Shortfall penalty ($/MW)": 1000.0
}
}
}

495
docs/example/s2.json Normal file

@@ -0,0 +1,495 @@
{
"Parameters": {
"Version": "0.3",
"Time horizon (h)": 4
},
"Generators": {
"g1": {
"Bus": "b1",
"Production cost curve (MW)": [
100,
110,
130,
135
],
"Production cost curve ($)": [
1400,
1600,
2200,
2400
],
"Startup delays (h)": [
1,
2,
3
],
"Startup costs ($)": [
1000.0,
1500.0,
2000.0
],
"Initial status (h)": -100,
"Initial power (MW)": 0
},
"g2": {
"Bus": "b2",
"Production cost curve (MW)": [
0,
47,
94,
140
],
"Production cost curve ($)": [
0,
2256.00,
4733.37,
7395.39
],
"Startup delays (h)": [
1,
4
],
"Startup costs ($)": [
3000.0,
4000.0
],
"Ramp up limit (MW)": 98.0,
"Ramp down limit (MW)": 98.0,
"Startup limit (MW)": 98.0,
"Shutdown limit (MW)": 98.0,
"Minimum uptime (h)": 4,
"Minimum downtime (h)": 4,
"Maximum daily energy (MWh)": null,
"Maximum daily starts": null,
"Initial status (h)": -8,
"Initial power (MW)": 0,
"Reserve eligibility": [
"r1"
]
},
"g3": {
"Bus": "b3",
"Production cost curve (MW)": [
0,
33,
66,
100
],
"Production cost curve ($)": [
0,
1113.75,
2369.07,
3891.54
],
"Startup delays (h)": [
1,
4,
8
],
"Startup costs ($)": [
1000.0,
2000.0,
3000.0
],
"Ramp up limit (MW)": 70.0,
"Ramp down limit (MW)": 70.0,
"Startup limit (MW)": 70.0,
"Shutdown limit (MW)": 70.0,
"Must run?": true,
"Minimum uptime (h)": 1,
"Minimum downtime (h)": 1,
"Maximum daily energy (MWh)": null,
"Maximum daily starts": null,
"Initial status (h)": -6,
"Initial power (MW)": 0,
"Reserve eligibility": [
"r1"
]
},
"g4": {
"Bus": "b6",
"Production cost curve (MW)": [
33,
66,
100
],
"Production cost curve ($)": [
1113.75,
2369.07,
3891.54
],
"Initial status (h)": -100,
"Initial power (MW)": 0,
"Reserve eligibility": [
"r1"
]
},
"g5": {
"Bus": "b8",
"Production cost curve (MW)": [
33,
66,
100
],
"Production cost curve ($)": [
1113.75,
2369.07,
3891.54
],
"Initial status (h)": -100,
"Initial power (MW)": 0,
"Reserve eligibility": [
"r1"
]
},
"g6": {
"Bus": "b8",
"Production cost curve (MW)": [
100
],
"Production cost curve ($)": [
10000.00
],
"Initial status (h)": -100,
"Initial power (MW)": 0,
"Reserve eligibility": [
"r1"
]
}
},
"Buses": {
"b1": {
"Load (MW)": 0.0
},
"b2": {
"Load (MW)": [
26.01527,
24.46212,
23.29725,
22.90897
]
},
"b3": {
"Load (MW)": [
112.93263,
106.19039,
101.1337,
99.44814
]
},
"b4": {
"Load (MW)": [
57.30552,
53.88429,
51.31838,
50.46307
]
},
"b5": {
"Load (MW)": [
9.11134,
8.56738,
8.15941,
8.02342
]
},
"b6": {
"Load (MW)": [
13.42723,
12.62561,
12.02439,
11.82398
]
},
"b7": {
"Load (MW)": 0.0
},
"b8": {
"Load (MW)": 0.0
},
"b9": {
"Load (MW)": [
35.36638,
33.25495,
31.67138,
31.14353
]
},
"b10": {
"Load (MW)": [
10.78974,
10.14558,
9.66246,
9.50141
]
},
"b11": {
"Load (MW)": [
4.19601,
3.9455,
3.75762,
3.69499
]
},
"b12": {
"Load (MW)": [
7.31305,
6.87645,
6.549,
6.43985
]
},
"b13": {
"Load (MW)": [
16.18461,
15.21837,
14.49368,
14.25212
]
},
"b14": {
"Load (MW)": [
17.86302,
16.79657,
15.99673,
15.73012
]
}
},
"Transmission lines": {
"l1": {
"Source bus": "b1",
"Target bus": "b2",
"Reactance (ohms)": 0.05917000000000001,
"Susceptance (S)": 29.496860773945063,
"Normal flow limit (MW)": 300.0,
"Emergency flow limit (MW)": 400.0,
"Flow limit penalty ($/MW)": 1000.0
},
"l2": {
"Source bus": "b1",
"Target bus": "b5",
"Reactance (ohms)": 0.22304000000000002,
"Susceptance (S)": 7.825184953346168
},
"l3": {
"Source bus": "b2",
"Target bus": "b3",
"Reactance (ohms)": 0.19797,
"Susceptance (S)": 8.816129979261149
},
"l4": {
"Source bus": "b2",
"Target bus": "b4",
"Reactance (ohms)": 0.17632,
"Susceptance (S)": 9.898645939169292
},
"l5": {
"Source bus": "b2",
"Target bus": "b5",
"Reactance (ohms)": 0.17388,
"Susceptance (S)": 10.037550333530765
},
"l6": {
"Source bus": "b3",
"Target bus": "b4",
"Reactance (ohms)": 0.17103,
"Susceptance (S)": 10.204813494675376
},
"l7": {
"Source bus": "b4",
"Target bus": "b5",
"Reactance (ohms)": 0.04211,
"Susceptance (S)": 41.44690695783257
},
"l8": {
"Source bus": "b4",
"Target bus": "b7",
"Reactance (ohms)": 0.20911999999999997,
"Susceptance (S)": 8.346065665619404
},
"l9": {
"Source bus": "b4",
"Target bus": "b9",
"Reactance (ohms)": 0.55618,
"Susceptance (S)": 3.1380654680037567
},
"l10": {
"Source bus": "b5",
"Target bus": "b6",
"Reactance (ohms)": 0.25201999999999997,
"Susceptance (S)": 6.92536009838239
},
"l11": {
"Source bus": "b6",
"Target bus": "b11",
"Reactance (ohms)": 0.1989,
"Susceptance (S)": 8.774908255376218
},
"l12": {
"Source bus": "b6",
"Target bus": "b12",
"Reactance (ohms)": 0.25581,
"Susceptance (S)": 6.8227561549365925
},
"l13": {
"Source bus": "b6",
"Target bus": "b13",
"Reactance (ohms)": 0.13027,
"Susceptance (S)": 13.397783465067395
},
"l14": {
"Source bus": "b7",
"Target bus": "b8",
"Reactance (ohms)": 0.17615,
"Susceptance (S)": 9.908198989465395
},
"l15": {
"Source bus": "b7",
"Target bus": "b9",
"Reactance (ohms)": 0.11001,
"Susceptance (S)": 15.865187273832648
},
"l16": {
"Source bus": "b9",
"Target bus": "b10",
"Reactance (ohms)": 0.0845,
"Susceptance (S)": 20.65478404727017
},
"l17": {
"Source bus": "b9",
"Target bus": "b14",
"Reactance (ohms)": 0.27038,
"Susceptance (S)": 6.4550974628091184
},
"l18": {
"Source bus": "b10",
"Target bus": "b11",
"Reactance (ohms)": 0.19207,
"Susceptance (S)": 9.08694357262628
},
"l19": {
"Source bus": "b12",
"Target bus": "b13",
"Reactance (ohms)": 0.19988,
"Susceptance (S)": 8.73188539120637
},
"l20": {
"Source bus": "b13",
"Target bus": "b14",
"Reactance (ohms)": 0.34802,
"Susceptance (S)": 5.0150257226433235
}
},
"Contingencies": {
"c1": {
"Affected lines": [
"l1"
]
},
"c2": {
"Affected lines": [
"l2"
]
},
"c3": {
"Affected lines": [
"l3"
]
},
"c4": {
"Affected lines": [
"l4"
]
},
"c5": {
"Affected lines": [
"l5"
]
},
"c6": {
"Affected lines": [
"l6"
]
},
"c7": {
"Affected lines": [
"l7"
]
},
"c8": {
"Affected lines": [
"l8"
]
},
"c9": {
"Affected lines": [
"l9"
]
},
"c10": {
"Affected lines": [
"l10"
]
},
"c11": {
"Affected lines": [
"l11"
]
},
"c12": {
"Affected lines": [
"l12"
]
},
"c13": {
"Affected lines": [
"l13"
]
},
"c15": {
"Affected lines": [
"l15"
]
},
"c16": {
"Affected lines": [
"l16"
]
},
"c17": {
"Affected lines": [
"l17"
]
},
"c18": {
"Affected lines": [
"l18"
]
},
"c19": {
"Affected lines": [
"l19"
]
},
"c20": {
"Affected lines": [
"l20"
]
}
},
"Price-sensitive loads": {
"ps1": {
"Bus": "b3",
"Revenue ($/MW)": 100.0,
"Demand (MW)": 50.0
}
},
"Reserves": {
"r1": {
"Type": "Spinning",
"Amount (MW)": 100.0,
"Shortfall penalty ($/MW)": 1000.0
}
}
}


@@ -1,305 +0,0 @@
```{sectnum}
---
start: 2
depth: 2
suffix: .
---
```
Data Format
===========
Input Data Format
-----------------
Instances are specified by JSON files containing the following main sections:
* Parameters
* Buses
* Generators
* Price-sensitive loads
* Transmission lines
* Reserves
* Contingencies
Each section is described in detail below. For a complete example, see [case14](https://github.com/ANL-CEEESA/UnitCommitment.jl/tree/dev/instances/matpower/case14).
### Parameters
This section describes system-wide parameters, such as power balance penalty, and optimization parameters, such as the length of the planning horizon and the time.
| Key | Description | Default | Time series?
| :----------------------------- | :------------------------------------------------ | :------: | :------------:
| `Time horizon (h)` | Length of the planning horizon (in hours). | Required | N
| `Time step (min)` | Length of each time step (in minutes). Must be a divisor of 60 (e.g. 60, 30, 20, 15, etc). | `60` | N
| `Power balance penalty ($/MW)` | Penalty for system-wide shortage or surplus in production (in $/MW). This is charged per time step. For example, if there is a shortage of 1 MW for three time steps, three times this amount will be charged. | `1000.0` | Y
#### Example
```json
{
"Parameters": {
"Time horizon (h)": 4,
"Power balance penalty ($/MW)": 1000.0,
}
}
```
### Buses
This section describes the characteristics of each bus in the system.
| Key | Description | Default | Time series?
| :----------------- | :------------------------------------------------------------ | ------- | :-------------:
| `Load (MW)` | Fixed load connected to the bus (in MW). | Required | Y
#### Example
```json
{
"Buses": {
"b1": {
"Load (MW)": 0.0
},
"b2": {
"Load (MW)": [
26.01527,
24.46212,
23.29725,
22.90897
]
}
}
}
```
### Generators
This section describes all generators in the system, including thermal units, renewable units and virtual units.
| Key | Description | Default | Time series?
| :------------------------ | :------------------------------------------------| ------- | :-----------:
| `Bus` | Identifier of the bus where this generator is located (string). | Required | N
| `Production cost curve (MW)` and `Production cost curve ($)` | Parameters describing the piecewise-linear production costs. See below for more details. | Required | Y
| `Startup costs ($)` and `Startup delays (h)` | Parameters describing how much it costs to start the generator after it has been shut down for a certain amount of time. If `Startup costs ($)` and `Startup delays (h)` are set to `[300.0, 400.0]` and `[1, 4]`, for example, and the generator is shut down at time `00:00` (h:min), then it costs \$300 to start up the generator at any time between `01:00` and `03:59`, and \$400 to start the generator at time `04:00` or any time after that. The number of startup cost points is unlimited, and may be different for each generator. Startup delays must be strictly increasing and the first entry must equal `Minimum downtime (h)`. | `[0.0]` and `[1]` | N
| `Minimum uptime (h)` | Minimum amount of time the generator must stay operational after starting up (in hours). For example, if the generator starts up at time `00:00` (h:min) and `Minimum uptime (h)` is set to 4, then the generator can only shut down at time `04:00`. | `1` | N
| `Minimum downtime (h)` | Minimum amount of time the generator must stay offline after shutting down (in hours). For example, if the generator shuts down at time `00:00` (h:min) and `Minimum downtime (h)` is set to 4, then the generator can only start producing power again at time `04:00`. | `1` | N
| `Ramp up limit (MW)` | Maximum increase in production from one time step to the next (in MW). For example, if the generator is producing 100 MW at time step 1 and if this parameter is set to 40 MW, then the generator will produce at most 140 MW at time step 2. | `+inf` | N
| `Ramp down limit (MW)` | Maximum decrease in production from one time step to the next (in MW). For example, if the generator is producing 100 MW at time step 1 and this parameter is set to 40 MW, then the generator will produce at least 60 MW at time step 2. | `+inf` | N
| `Startup limit (MW)` | Maximum amount of power a generator can produce immediately after starting up (in MW). For example, if `Startup limit (MW)` is set to 100 MW and the unit is off at time step 1, then it may produce at most 100 MW at time step 2.| `+inf` | N
| `Shutdown limit (MW)` | Maximum amount of power a generator can produce immediately before shutting down (in MW). Specifically, the generator can only shut down at time step `t+1` if its production at time step `t` is below this limit. | `+inf` | N
| `Initial status (h)` | If set to a positive number, indicates the amount of time (in hours) the generator has been on at the beginning of the simulation, and if set to a negative number, the amount of time the generator has been off. For example, if `Initial status (h)` is `-2`, this means that the generator was off since `-02:00` (h:min). The simulation starts at time `00:00`. If `Initial status (h)` is `3`, this means that the generator was on since `-03:00`. A value of zero is not acceptable. | Required | N
| `Initial power (MW)` | Amount of power the generator at time step `-1`, immediately before the planning horizon starts. | Required | N
| `Must run?` | If `true`, the generator should be committed, even if that is not economical (Boolean). | `false` | Y
| `Reserve eligibility` | List of reserve products this generator is eligibe to provide. By default, the generator is not eligible to provide any reserves. | `[]` | N
#### Production costs and limits
Production costs are represented as piecewise-linear curves. Figure 1 shows an example cost curve with three segments, where it costs \$1400, \$1600, \$2200 and \$2400 to generate, respectively, 100, 110, 130 and 135 MW of power. To model this generator, `Production cost curve (MW)` should be set to `[100, 110, 130, 135]`, and `Production cost curve ($)` should be set to `[1400, 1600, 2200, 2400]`.
Note that this curve also specifies the production limits. Specifically, the first point identifies the minimum power output when the unit is operational, while the last point identifies the maximum power output.
<center>
<img src="../_static/cost_curve.png" style="max-width: 500px"/>
<div><b>Figure 1.</b> Piecewise-linear production cost curve.</div>
<br/>
</center>
#### Additional remarks:
* For time-dependent production limits or time-dependent production costs, the usage of nested arrays is allowed. For example, if `Production cost curve (MW)` is set to `[5.0, [10.0, 12.0, 15.0, 20.0]]`, then the unit may generate at most 10, 12, 15 and 20 MW of power during time steps 1, 2, 3 and 4, respectively. The minimum output for all time periods is fixed to at 5 MW.
* There is no limit to the number of piecewise-linear segments, and different generators may have a different number of segments.
* If `Production cost curve (MW)` and `Production cost curve ($)` both contain a single element, then the generator must produce exactly that amount of power when operational. To specify that the generator may produce any amount of power up to a certain limit `P`, the parameter `Production cost curve (MW)` should be set to `[0, P]`.
* Production cost curves must be convex.
#### Example
```json
{
"Generators": {
"gen1": {
"Bus": "b1",
"Production cost curve (MW)": [100.0, 110.0, 130.0, 135.0],
"Production cost curve ($)": [1400.0, 1600.0, 2200.0, 2400.0],
"Startup costs ($)": [300.0, 400.0],
"Startup delays (h)": [1, 4],
"Ramp up limit (MW)": 232.68,
"Ramp down limit (MW)": 232.68,
"Startup limit (MW)": 232.68,
"Shutdown limit (MW)": 232.68,
"Minimum downtime (h)": 4,
"Minimum uptime (h)": 4,
"Initial status (h)": 12,
"Must run?": false,
"Reserve eligibility": ["r1"],
},
"gen2": {
"Bus": "b5",
"Production cost curve (MW)": [0.0, [10.0, 8.0, 0.0, 3.0]],
"Production cost curve ($)": [0.0, 0.0],
"Reserve eligibility": ["r1", "r2"],
}
}
}
```
### Price-sensitive loads
This section describes components in the system which may increase or reduce their energy consumption according to the energy prices. Fixed loads (as described in the `buses` section) are always served, regardless of the price, unless there is significant congestion in the system or insufficient production capacity. Price-sensitive loads, on the other hand, are only served if it is economical to do so.
| Key | Description | Default | Time series?
| :---------------- | :------------------------------------------------ | :------: | :------------:
| `Bus` | Bus where the load is located. Multiple price-sensitive loads may be placed at the same bus. | Required | N
| `Revenue ($/MW)` | Revenue obtained for serving each MW of power to this load. | Required | Y
| `Demand (MW)` | Maximum amount of power required by this load. Any amount lower than this may be served. | Required | Y
#### Example
```json
{
"Price-sensitive loads": {
"p1": {
"Bus": "b3",
"Revenue ($/MW)": 23.0,
"Demand (MW)": 50.0
}
}
}
```
### Transmission Lines
This section describes the characteristics of transmission system, such as its topology and the susceptance of each transmission line.
| Key | Description | Default | Time series?
| :--------------------- | :----------------------------------------------- | ------- | :------------:
| `Source bus` | Identifier of the bus where the transmission line originates. | Required | N
| `Target bus` | Identifier of the bus where the transmission line reaches. | Required | N
| `Reactance (ohms)` | Reactance of the transmission line (in ohms). | Required | N
| `Susceptance (S)` | Susceptance of the transmission line (in siemens). | Required | N
| `Normal flow limit (MW)` | Maximum amount of power (in MW) allowed to flow through the line when the system is in its regular, fully-operational state. | `+inf` | Y
| `Emergency flow limit (MW)` | Maximum amount of power (in MW) allowed to flow through the line when the system is in degraded state (for example, after the failure of another transmission line). | `+inf` | Y
| `Flow limit penalty ($/MW)` | Penalty for violating the flow limits of the transmission line (in $/MW). This is charged per time step. For example, if there is a thermal violation of 1 MW for three time steps, then three times this amount will be charged. | `5000.0` | Y
#### Example
```json
{
"Transmission lines": {
"l1": {
"Source bus": "b1",
"Target bus": "b2",
"Reactance (ohms)": 0.05917,
"Susceptance (S)": 29.49686,
"Normal flow limit (MW)": 15000.0,
"Emergency flow limit (MW)": 20000.0,
"Flow limit penalty ($/MW)": 5000.0
}
}
}
```
### Reserves
This section describes the hourly amount of reserves required.
| Key | Description | Default | Time series?
| :-------------------- | :------------------------------------------------- | --------- | :----:
| `Type` | Type of reserve product. Must be either "spinning" or "flexiramp". | Required | N
| `Amount (MW)` | Amount of reserves required. | Required | Y
| `Shortfall penalty ($/MW)` | Penalty for shortage in meeting the reserve requirements (in $/MW). This is charged per time step. Negative value implies reserve constraints must always be satisfied. | `-1` | Y
#### Example 1
```json
{
"Reserves": {
"r1": {
"Type": "spinning",
"Amount (MW)": [
57.30552,
53.88429,
51.31838,
50.46307
],
"Shortfall penalty ($/MW)": 5.0
},
"r2": {
"Type": "flexiramp",
"Amount (MW)": [
20.31042,
23.65273,
27.41784,
25.34057
],
}
}
}
```
### Contingencies
This section describes credible contingency scenarios in the optimization, such as the loss of a transmission line or generator.
| Key | Description | Default
| :-------------------- | :----------------------------------------------- | ----------
| `Affected generators` | List of generators affected by this contingency. May be omitted if no generators are affected. | `[]`
| `Affected lines` | List of transmission lines affected by this contingency. May be omitted if no lines are affected. | `[]`
#### Example
```json
{
"Contingencies": {
"c1": {
"Affected lines": ["l1", "l2", "l3"],
"Affected generators": ["g1"]
},
"c2": {
"Affected lines": ["l4"]
},
}
}
```
### Additional remarks
#### Time series parameters
Many numerical properties in the JSON file can be specified either as a single floating point number if they are time-independent, or as an array containing exactly `T` elements, if they are time-dependent, where `T` is the number of time steps in the planning horizon. For example, both formats below are valid when `T=3`:
```json
{
"Load (MW)": 800.0,
"Load (MW)": [800.0, 850.0, 730.0]
}
```
The value `T` depends on both `Time horizon (h)` and `Time step (min)`, as the table below illustrates.
Time horizon (h) | Time step (min) | T
:---------------:|:---------------:|:----:
24 | 60 | 24
24 | 15 | 96
24 | 5 | 288
36 | 60 | 36
36 | 15 | 144
36 | 5 | 432
Output Data Format
------------------
The output data format is also JSON-based, but it is not currently documented since we expect it to change significantly in a future version of the package.
Current limitations
-------------------
* Network topology remains the same for all time periods
* Only N-1 transmission contingencies are supported. Generator contingencies are not currently supported.
* Time-varying minimum production amounts are not currently compatible with ramp/startup/shutdown limits.
* Flexible ramping products can only be acquired under the `WanHob2016` formulation, which does not support spinning reserves.


@@ -1,316 +0,0 @@
```{sectnum}
---
start: 3
depth: 2
suffix: .
---
```
Instances
=========
UnitCommitment.jl provides a large collection of benchmark instances collected from the literature and converted to a [common data format](format.md). In some cases, as indicated below, the original instances have been extended, with realistic parameters, using data-driven methods. If you use these instances in your research, we request that you cite UnitCommitment.jl, as well as the original sources, as listed below. Benchmark instances can be loaded with `UnitCommitment.read_benchmark(name)`, as explained in the [usage section](usage.md).
```{warning}
The instances included in UC.jl are still under development and may change in the future. If you use these instances in your research, for reproducibility, you should specify what version of UC.jl they came from.
```
MATPOWER
--------
[MATPOWER](https://github.com/MATPOWER/matpower) is an open-source package for solving power flow problems in MATLAB and Octave. It contains a number of power flow test cases, which have been widely used in the power systems literature.
Because most MATPOWER test cases were originally designed for power flow studies, they lack a number of important unit commitment parameters, such as time-varying loads, production cost curves, ramp limits, reserves and initial conditions. The test cases included in UnitCommitment.jl are extended versions of the original MATPOWER test cases, modified as following:
* **Production cost** curves were generated using a data-driven approach, based on publicly available data. More specifically, machine learning models were trained to predict typical production cost curves, for each day of the year, based on a generator's maximum and minimum power output.
* **Load profiles** were generated using a similar data-driven approach.
* **Ramp-up, ramp-down, startup and shutdown rates** were set to a fixed proportion of the generator's maximum output.
* **Minimum reserves** were set to a fixed proportion of the total demand.
* **Contingencies** were set to include all N-1 transmission line contingencies that do not generate islands or isolated buses. More specifically, there is one contingency for each transmission line, as long as that transmission line is not a bridge in the network graph.
For each MATPOWER test case, UC.jl provides two variations (`2017-02-01` and `2017-08-01`) corresponding respectively to a winter and to a summer test case.
### MATPOWER/UW-PSTCA
A variety of smaller IEEE test cases, [compiled by University of Washington](http://labs.ece.uw.edu/pstca/), corresponding mostly to small portions of the American Electric Power System in the 1960s.
| Name | Buses | Generators | Lines | Contingencies | References |
|------|-------|------------|-------|---------------|--------|
| `matpower/case14/2017-02-01` | 14 | 5 | 20 | 19 | [MTPWR, PSTCA]
| `matpower/case30/2017-02-01` | 30 | 6 | 41 | 38 | [MTPWR, PSTCA]
| `matpower/case57/2017-02-01` | 57 | 7 | 80 | 79 | [MTPWR, PSTCA]
| `matpower/case118/2017-02-01` | 118 | 54 | 186 | 177 | [MTPWR, PSTCA]
| `matpower/case300/2017-02-01` | 300 | 69 | 411 | 320 | [MTPWR, PSTCA]
### MATPOWER/Polish
Test cases based on the Polish 400, 220 and 110 kV networks, originally provided by **Roman Korab** (Politechnika Śląska) and corrected by the MATPOWER team.
| Name | Buses | Generators | Lines | Contingencies | References |
|------|-------|------------|-------|---------------|--------|
| `matpower/case2383wp/2017-02-01` | 2383 | 323 | 2896 | 2240 | [MTPWR]
| `matpower/case2736sp/2017-02-01` | 2736 | 289 | 3504 | 3159 | [MTPWR]
| `matpower/case2737sop/2017-02-01` | 2737 | 267 | 3506 | 3161 | [MTPWR]
| `matpower/case2746wop/2017-02-01` | 2746 | 443 | 3514 | 3155 | [MTPWR]
| `matpower/case2746wp/2017-02-01` | 2746 | 457 | 3514 | 3156 | [MTPWR]
| `matpower/case3012wp/2017-02-01` | 3012 | 496 | 3572 | 2854 | [MTPWR]
| `matpower/case3120sp/2017-02-01` | 3120 | 483 | 3693 | 2950 | [MTPWR]
| `matpower/case3375wp/2017-02-01` | 3374 | 590 | 4161 | 3245 | [MTPWR]
### MATPOWER/PEGASE
Test cases from the [Pan European Grid Advanced Simulation and State Estimation (PEGASE) project](https://cordis.europa.eu/project/id/211407), describing part of the European high voltage transmission network.
| Name | Buses | Generators | Lines | Contingencies | References |
|------|-------|------------|-------|---------------|--------|
| `matpower/case89pegase/2017-02-01` | 89 | 12 | 210 | 192 | [JoFlMa16, FlPaCa13, MTPWR]
| `matpower/case1354pegase/2017-02-01` | 1354 | 260 | 1991 | 1288 | [JoFlMa16, FlPaCa13, MTPWR]
| `matpower/case2869pegase/2017-02-01` | 2869 | 510 | 4582 | 3579 | [JoFlMa16, FlPaCa13, MTPWR]
| `matpower/case9241pegase/2017-02-01` | 9241 | 1445 | 16049 | 13932 | [JoFlMa16, FlPaCa13, MTPWR]
| `matpower/case13659pegase/2017-02-01` | 13659 | 4092 | 20467 | 13932 | [JoFlMa16, FlPaCa13, MTPWR]
### MATPOWER/RTE
Test cases from the R&D Division at [Reseau de Transport d'Electricite](https://www.rte-france.com) representing the size and complexity of the French very high voltage transmission network.
| Name | Buses | Generators | Lines | Contingencies | References |
|------|-------|------------|-------|---------------|--------|
| `matpower/case1888rte/2017-02-01` | 1888 | 296 | 2531 | 1484 | [MTPWR, JoFlMa16]
| `matpower/case1951rte/2017-02-01` | 1951 | 390 | 2596 | 1497 | [MTPWR, JoFlMa16]
| `matpower/case2848rte/2017-02-01` | 2848 | 544 | 3776 | 2242 | [MTPWR, JoFlMa16]
| `matpower/case2868rte/2017-02-01` | 2868 | 596 | 3808 | 2260 | [MTPWR, JoFlMa16]
| `matpower/case6468rte/2017-02-01` | 6468 | 1262 | 9000 | 6094 | [MTPWR, JoFlMa16]
| `matpower/case6470rte/2017-02-01` | 6470 | 1306 | 9005 | 6085 | [MTPWR, JoFlMa16]
| `matpower/case6495rte/2017-02-01` | 6495 | 1352 | 9019 | 6060 | [MTPWR, JoFlMa16]
| `matpower/case6515rte/2017-02-01` | 6515 | 1368 | 9037 | 6063 | [MTPWR, JoFlMa16]
PGLIB-UC Instances
------------------
[PGLIB-UC](https://github.com/power-grid-lib/pglib-uc) is a benchmark library curated and maintained by the [IEEE PES Task Force on Benchmarks for Validation of Emerging Power System Algorithms](https://power-grid-lib.github.io/). These test cases have been used in [KnOsWa20].
### PGLIB-UC/California
Test cases based on publicly available data from the California ISO. For more details, see [PGLIB-UC case file overview](https://github.com/power-grid-lib/pglib-uc).
| Name | Buses | Generators | Lines | Contingencies | References |
|------|-------|------------|-------|---------------|--------|
| `pglib-uc/ca/2014-09-01_reserves_0` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2014-09-01_reserves_1` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2014-09-01_reserves_3` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2014-09-01_reserves_5` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2014-12-01_reserves_0` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2014-12-01_reserves_1` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2014-12-01_reserves_3` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2014-12-01_reserves_5` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2015-03-01_reserves_0` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2015-03-01_reserves_1` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2015-03-01_reserves_3` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2015-03-01_reserves_5` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2015-06-01_reserves_0` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2015-06-01_reserves_1` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2015-06-01_reserves_3` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/2015-06-01_reserves_5` | 1 | 610 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/Scenario400_reserves_0` | 1 | 611 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/Scenario400_reserves_1` | 1 | 611 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/Scenario400_reserves_3` | 1 | 611 | 0 | 0 | [KnOsWa20]
| `pglib-uc/ca/Scenario400_reserves_5` | 1 | 611 | 0 | 0 | [KnOsWa20]
### PGLIB-UC/FERC
Test cases based on a publicly available [unit commitment test case produced by the Federal Energy Regulatory Commission](https://www.ferc.gov/industries-data/electric/power-sales-and-markets/increasing-efficiency-through-improved-software-1). For more details, see [PGLIB-UC case file overview](https://github.com/power-grid-lib/pglib-uc).
| Name | Buses | Generators | Lines | Contingencies | References |
|------|-------|------------|-------|---------------|--------|
| `pglib-uc/ferc/2015-01-01_hw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-01-01_lw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-02-01_hw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-02-01_lw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-03-01_hw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-03-01_lw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-04-01_hw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-04-01_lw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-05-01_hw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-05-01_lw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-06-01_hw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-06-01_lw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-07-01_hw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-07-01_lw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-08-01_hw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-08-01_lw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-09-01_hw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-09-01_lw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-10-01_hw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-10-01_lw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-11-02_hw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-11-02_lw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-12-01_hw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12]
| `pglib-uc/ferc/2015-12-01_lw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12]
### PGLIB-UC/RTS-GMLC
[RTS-GMLC](https://github.com/GridMod/RTS-GMLC) is an updated version of the RTS-96 test system produced by the United States Department of Energy's [Grid Modernization Laboratory Consortium](https://gmlc.doe.gov/). The PGLIB-UC/RTS-GMLC instances are modified versions of the original RTS-GMLC instances, with modified ramp-rates and without a transmission network. For more details, see [PGLIB-UC case file overview](https://github.com/power-grid-lib/pglib-uc).
| Name | Buses | Generators | Lines | Contingencies | References |
|------|-------|------------|-------|---------------|--------|
| `pglib-uc/rts_gmlc/2020-01-27` | 1 | 154 | 0 | 0 | [BaBlEh19]
| `pglib-uc/rts_gmlc/2020-02-09` | 1 | 154 | 0 | 0 | [BaBlEh19]
| `pglib-uc/rts_gmlc/2020-03-05` | 1 | 154 | 0 | 0 | [BaBlEh19]
| `pglib-uc/rts_gmlc/2020-04-03` | 1 | 154 | 0 | 0 | [BaBlEh19]
| `pglib-uc/rts_gmlc/2020-05-05` | 1 | 154 | 0 | 0 | [BaBlEh19]
| `pglib-uc/rts_gmlc/2020-06-09` | 1 | 154 | 0 | 0 | [BaBlEh19]
| `pglib-uc/rts_gmlc/2020-07-06` | 1 | 154 | 0 | 0 | [BaBlEh19]
| `pglib-uc/rts_gmlc/2020-08-12` | 1 | 154 | 0 | 0 | [BaBlEh19]
| `pglib-uc/rts_gmlc/2020-09-20` | 1 | 154 | 0 | 0 | [BaBlEh19]
| `pglib-uc/rts_gmlc/2020-10-27` | 1 | 154 | 0 | 0 | [BaBlEh19]
| `pglib-uc/rts_gmlc/2020-11-25` | 1 | 154 | 0 | 0 | [BaBlEh19]
| `pglib-uc/rts_gmlc/2020-12-23` | 1 | 154 | 0 | 0 | [BaBlEh19]
OR-LIB/UC
---------
[OR-LIB](http://people.brunel.ac.uk/~mastjjb/jeb/info.html) is a collection of test data sets for a variety of operations research problems, including unit commitment. The UC instances in OR-LIB are synthetic instances generated by a [random problem generator](http://groups.di.unipi.it/optimize/Data/UC.html) developed by the [Operations Research Group at University of Pisa](http://groups.di.unipi.it/optimize/). These test cases have been used in [FrGe06] and many other publications.
| Name | Hours | Buses | Generators | Lines | Contingencies | References |
|------|-------|-------|------------|-------|---------------|------------|
| `or-lib/10_0_1_w` | 24 | 1 | 10 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/10_0_2_w` | 24 | 1 | 10 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/10_0_3_w` | 24 | 1 | 10 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/10_0_4_w` | 24 | 1 | 10 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/10_0_5_w` | 24 | 1 | 10 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/20_0_1_w` | 24 | 1 | 20 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/20_0_2_w` | 24 | 1 | 20 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/20_0_3_w` | 24 | 1 | 20 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/20_0_4_w` | 24 | 1 | 20 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/20_0_5_w` | 24 | 1 | 20 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/50_0_1_w` | 24 | 1 | 50 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/50_0_2_w` | 24 | 1 | 50 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/50_0_3_w` | 24 | 1 | 50 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/50_0_4_w` | 24 | 1 | 50 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/50_0_5_w` | 24 | 1 | 50 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/75_0_1_w` | 24 | 1 | 75 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/75_0_2_w` | 24 | 1 | 75 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/75_0_3_w` | 24 | 1 | 75 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/75_0_4_w` | 24 | 1 | 75 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/75_0_5_w` | 24 | 1 | 75 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/100_0_1_w` | 24 | 1 | 100 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/100_0_2_w` | 24 | 1 | 100 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/100_0_3_w` | 24 | 1 | 100 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/100_0_4_w` | 24 | 1 | 100 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/100_0_5_w` | 24 | 1 | 100 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/150_0_1_w` | 24 | 1 | 150 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/150_0_2_w` | 24 | 1 | 150 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/150_0_3_w` | 24 | 1 | 150 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/150_0_4_w` | 24 | 1 | 150 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/150_0_5_w` | 24 | 1 | 150 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/200_0_10_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/200_0_11_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/200_0_12_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/200_0_1_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/200_0_2_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/200_0_3_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/200_0_4_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/200_0_5_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/200_0_6_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/200_0_7_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/200_0_8_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06]
| `or-lib/200_0_9_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06]
Tejada19
--------
Test cases used in [TeLuSa19]. These instances are similar to OR-LIB/UC, in the sense that they use the same random problem generator, but are much larger.
| Name | Hours | Buses | Generators | Lines | Contingencies | References |
|------|-------|-------|------------|-------|---------------|------------|
| `tejada19/UC_24h_214g` | 24 | 1 | 214 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_250g` | 24 | 1 | 250 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_290g` | 24 | 1 | 290 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_480g` | 24 | 1 | 480 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_505g` | 24 | 1 | 505 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_623g` | 24 | 1 | 623 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_647g` | 24 | 1 | 647 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_836g` | 24 | 1 | 836 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_850g` | 24 | 1 | 850 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_918g` | 24 | 1 | 918 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_931g` | 24 | 1 | 931 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_940g` | 24 | 1 | 940 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_957g` | 24 | 1 | 957 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_959g` | 24 | 1 | 959 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_1069g` | 24 | 1 | 1069 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_1130g` | 24 | 1 | 1130 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_1376g` | 24 | 1 | 1376 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_1393g` | 24 | 1 | 1393 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_1577g` | 24 | 1 | 1577 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_1615g` | 24 | 1 | 1615 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_1632g` | 24 | 1 | 1632 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_1768g` | 24 | 1 | 1768 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_1804g` | 24 | 1 | 1804 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_1820g` | 24 | 1 | 1820 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_1823g` | 24 | 1 | 1823 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_24h_1888g` | 24 | 1 | 1888 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_36g` | 168 | 1 | 36 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_38g` | 168 | 1 | 38 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_40g` | 168 | 1 | 40 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_53g` | 168 | 1 | 53 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_58g` | 168 | 1 | 58 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_59g` | 168 | 1 | 59 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_72g` | 168 | 1 | 72 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_84g` | 168 | 1 | 84 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_86g` | 168 | 1 | 86 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_88g` | 168 | 1 | 88 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_93g` | 168 | 1 | 93 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_105g` | 168 | 1 | 105 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_110g` | 168 | 1 | 110 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_125g` | 168 | 1 | 125 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_130g` | 168 | 1 | 130 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_131g` | 168 | 1 | 131 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_140g` | 168 | 1 | 140 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_165g` | 168 | 1 | 165 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_175g` | 168 | 1 | 175 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_179g` | 168 | 1 | 179 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_188g` | 168 | 1 | 188 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_192g` | 168 | 1 | 192 | 0 | 0 | [TeLuSa19]
| `tejada19/UC_168h_199g` | 168 | 1 | 199 | 0 | 0 | [TeLuSa19]
References
----------
* [UCJL] **Alinson S. Xavier, Aleksandr M. Kazachkov, Feng Qiu.** "UnitCommitment.jl: A Julia/JuMP Optimization Package for Security-Constrained Unit Commitment". Zenodo (2020). [DOI: 10.5281/zenodo.4269874](https://doi.org/10.5281/zenodo.4269874)
* [KnOsWa20] **Bernard Knueven, James Ostrowski and Jean-Paul Watson.** "On Mixed-Integer Programming Formulations for the Unit Commitment Problem". INFORMS Journal on Computing (2020). [DOI: 10.1287/ijoc.2019.0944](https://doi.org/10.1287/ijoc.2019.0944)
* [KrHiOn12] **Eric Krall, Michael Higgins and Richard P. ONeill.** "RTO unit commitment test system." Federal Energy Regulatory Commission. Available at: <https://www.ferc.gov/industries-data/electric/power-sales-and-markets/increasing-efficiency-through-improved-software-1> (Accessed: Nov 14, 2020)
* [BaBlEh19] **Clayton Barrows, Aaron Bloom, Ali Ehlen, Jussi Ikaheimo, Jennie Jorgenson, Dheepak Krishnamurthy, Jessica Lau et al.** "The IEEE Reliability Test System: A Proposed 2019 Update." IEEE Transactions on Power Systems (2019). [DOI: 10.1109/TPWRS.2019.2925557](https://doi.org/10.1109/TPWRS.2019.2925557)
* [JoFlMa16] **C. Josz, S. Fliscounakis, J. Maeght, and P. Panciatici.** "AC Power Flow Data in MATPOWER and QCQP Format: iTesla, RTE Snapshots, and PEGASE". [ArXiv (2016)](https://arxiv.org/abs/1603.01533).
* [FlPaCa13] **S. Fliscounakis, P. Panciatici, F. Capitanescu, and L. Wehenkel.** "Contingency ranking with respect to overloads in very large power systems taking into account uncertainty, preventive and corrective actions", IEEE Transactions on Power Systems, 28(4):4909-4917, 2013. [DOI: 10.1109/TPWRS.2013.2251015](https://doi.org/10.1109/TPWRS.2013.2251015)
* [MTPWR] **R. D. Zimmerman, C. E. Murillo-Sánchez and R. J. Thomas.** "MATPOWER: Steady-state operations, planning, and analysis tools for power systems research and education", IEEE Transactions on Power Systems, vol. 26, no. 1, pp. 12-19, Feb. 2011. [DOI: 10.1109/TPWRS.2010.2051168](https://doi.org/10.1109/TPWRS.2010.2051168)
* [PSTCA] **University of Washington, Dept. of Electrical Engineering.** "Power Systems Test Case Archive". Available at: <http://www.ee.washington.edu/research/pstca/> (Accessed: Nov 14, 2020)
* [ORLIB] **J. E. Beasley.** "OR-Library: distributing test problems by electronic mail", Journal of the Operational Research Society 41(11) (1990). [DOI: 10.2307/2582903](https://doi.org/10.2307/2582903)
* [FrGe06] **A. Frangioni, C. Gentile.** "Solving nonlinear single-unit commitment problems with ramping constraints" Operations Research 54(4), p. 767 - 775, 2006. [DOI: 10.1287/opre.1060.0309](https://doi.org/10.1287/opre.1060.0309)
* [TeLuSa19] **D. A. Tejada-Arango, S. Lumbreras, P. Sanchez-Martin and A. Ramos.** "Which Unit-Commitment Formulation is Best? A Systematic Comparison," in IEEE Transactions on Power Systems. [DOI: 10.1109/TPWRS.2019.2962024](https://ieeexplore.ieee.org/document/8941313/).

43
docs/make.jl Normal file

@@ -0,0 +1,43 @@
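# Build script for the UnitCommitment.jl documentation: tutorials written
# with Literate.jl are converted to Markdown, then Documenter.jl assembles
# the HTML site.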
using Documenter
using UnitCommitment
using JuMP
using Literate
function make()
literate_sources = [
"src/tutorials/usage.jl",
"src/tutorials/customizing.jl",
"src/tutorials/lmp.jl",
"src/tutorials/market.jl",
]
for src in literate_sources
Literate.markdown(
src,
dirname(src);
documenter = true,
credit = false,
)
end
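# Assemble the HTML documentation with Documenter.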
return makedocs(
sitename = "UnitCommitment.jl",
pages = [
"Home" => "index.md",
"Tutorials" => [
"tutorials/usage.md",
"tutorials/customizing.md",
"tutorials/lmp.md",
"tutorials/market.md",
"tutorials/decomposition.md",
],
"User guide" => [
"guides/problem.md",
"guides/format.md",
"guides/instances.md",
],
"api.md",
],
format = Documenter.HTML(assets = ["assets/custom.css"]),
)
end


@@ -1,244 +0,0 @@
```{sectnum}
---
start: 4
depth: 2
suffix: .
---
```
JuMP Model
==========
This page describes the JuMP optimization model produced by the function `UnitCommitment.build_model`. A detailed understanding of this model is not necessary if you are only interested in using the package to solve standard unit commitment cases, but it may be useful, for example, if you need to solve a slightly different problem, with additional variables and constraints. The notation in this page generally follows [KnOsWa20].
Decision variables
------------------
### Generators
Name | Symbol | Description | Unit
-----|:--------:|-------------|:------:
`is_on[g,t]` | $u_{g}(t)$ | True if generator `g` is on at time `t`. | Binary
`switch_on[g,t]` | $v_{g}(t)$ | True if generator `g` switches on at time `t`. | Binary
`switch_off[g,t]` | $w_{g}(t)$ | True if generator `g` switches off at time `t`. | Binary
`prod_above[g,t]` |$p'_{g}(t)$ | Amount of power produced by generator `g` above its minimum power output at time `t`. For example, if the minimum power of generator `g` is 100 MW and `g` is producing 115 MW of power at time `t`, then `prod_above[g,t]` equals `15.0`. | MW
`segprod[g,t,k]` | $p^k_g(t)$ | Amount of power from piecewise linear segment `k` produced by generator `g` at time `t`. For example, if the cost curve for generator `g` is defined by the points `(100, 1400)`, `(110, 1600)`, `(130, 2200)` and `(135, 2400)`, and if the generator is producing 115 MW of power at time `t`, then `segprod[g,t,:]` equals `[10.0, 5.0, 0.0]`. | MW
`reserve[r,g,t]` | $r_g(t)$ | Amount of reserve `r` provided by unit `g` at time `t`. | MW
`startup[g,t,s]` | $\delta^s_g(t)$ | True if generator `g` switches on at time `t` incurring start-up costs from start-up category `s`. | Binary
### Buses
Name | Symbol | Description | Unit
-----|:------:|-------------|:------:
`net_injection[b,t]` | $n_b(t)$ | Net injection at bus `b` at time `t`. | MW
`curtail[b,t]` | $s^+_b(t)$ | Amount of load curtailed at bus `b` at time `t` | MW
### Price-sensitive loads
Name | Symbol | Description | Unit
-----|:------:|-------------|:------:
`loads[s,t]` | $d_{s}(t)$ | Amount of power served to price-sensitive load `s` at time `t`. | MW
### Transmission lines
Name | Symbol | Description | Unit
-----|:------:|-------------|:------:
`flow[l,t]` | $f_l(t)$ | Power flow on line `l` at time `t`. | MW
`overflow[l,t]` | $f^+_l(t)$ | Amount of flow above the limit for line `l` at time `t`. | MW
```{warning}
Since transmission and N-1 security constraints are enforced in a lazy way, most of the `flow[l,t]` variables are never added to the model. Accessing `model[:flow][l,t]` without first checking that the variable exists will likely generate an error.
```
Objective function
------------------
$$
\begin{align}
\text{minimize} \;\; &
\sum_{t \in \mathcal{T}}
\sum_{g \in \mathcal{G}}
C^\text{min}_g(t) u_g(t) \\
&
+ \sum_{t \in \mathcal{T}}
\sum_{g \in \mathcal{G}}
\sum_{k \in \mathcal{K}_g}
C^k_g(t) p^k_g(t) \\
&
+ \sum_{t \in \mathcal{T}}
\sum_{g \in \mathcal{G}}
\sum_{s \in \mathcal{S}_g}
C^s_{g}(t) \delta^s_g(t) \\
&
+ \sum_{t \in \mathcal{T}}
\sum_{l \in \mathcal{L}}
C^\text{overflow}_{l}(t) f^+_l(t) \\
&
+ \sum_{t \in \mathcal{T}}
\sum_{b \in \mathcal{B}}
C^\text{curtail}(t) s^+_b(t) \\
&
- \sum_{t \in \mathcal{T}}
\sum_{s \in \mathcal{PS}}
R_{s}(t) d_{s}(t) \\
\end{align}
$$
where
- $\mathcal{B}$ is the set of buses
- $\mathcal{G}$ is the set of generators
- $\mathcal{K}_g$ is the set of piecewise-linear cost segments for generator $g$
- $\mathcal{L}$ is the set of transmission lines
- $\mathcal{PS}$ is the set of price-sensitive loads
- $\mathcal{S}_g$ is the set of start-up categories for generator $g$
- $\mathcal{T}$ is the set of time steps
- $C^\text{curtail}(t)$ is the curtailment penalty (in \$/MW)
- $C^\text{min}_g(t)$ is the cost of keeping generator $g$ on and producing at minimum power during time $t$ (in \$)
- $C^\text{overflow}_{l}(t)$ is the flow limit penalty for line $l$ at time $t$ (in \$/MW)
- $C^k_g(t)$ is the cost for generator $g$ to produce 1 MW of power at time $t$ under piecewise-linear segment $k$ (in \$/MW)
- $C^s_{g}(t)$ is the cost of starting up generator $g$ at time $t$ under start-up category $s$ (in \$)
- $R_{s}(t)$ is the revenue obtained from serving price-sensitive load $s$ at time $t$ (in \$/MW)
Constraints
-----------
TODO
Inspecting and modifying the model
----------------------------------
### Accessing decision variables
After building a model using `UnitCommitment.build_model`, it is possible to obtain a reference to the decision variables by calling `model[:varname][index]`. For example, `model[:is_on]["g1",1]` returns a direct reference to the JuMP variable indicating whether generator named "g1" is on at time 1. The script below illustrates how to build a model, solve it and display the solution without using the function `UnitCommitment.solution`.
```julia
using Cbc
using Printf
using JuMP
using UnitCommitment
# Load benchmark instance
instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
# Build JuMP model
model = UnitCommitment.build_model(
instance=instance,
optimizer=Cbc.Optimizer,
)
# Solve the model
UnitCommitment.optimize!(model)
# Display commitment status
for g in instance.units
for t in 1:instance.time
@printf(
"%-10s %5d %5.1f %5.1f %5.1f\n",
g.name,
t,
value(model[:is_on][g.name, t]),
value(model[:switch_on][g.name, t]),
value(model[:switch_off][g.name, t]),
)
end
end
```
### Fixing variables, modifying objective function and adding constraints
Since we now have a direct reference to the JuMP decision variables, it is possible to fix variables, change the coefficients in the objective function, or even add new constraints to the model before solving it. The script below shows how this can be accomplished. For more information on modifying an existing model, [see the JuMP documentation](https://jump.dev/JuMP.jl/stable/manual/variables/).
```julia
using Cbc
using JuMP
using UnitCommitment
# Load benchmark instance
instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
# Construct JuMP model
model = UnitCommitment.build_model(
instance=instance,
optimizer=Cbc.Optimizer,
)
# Fix a decision variable to 1.0
JuMP.fix(
model[:is_on]["g1",1],
1.0,
force=true,
)
# Change the objective function
JuMP.set_objective_coefficient(
model,
model[:switch_on]["g2",1],
1000.0,
)
# Create a new constraint
@constraint(
model,
model[:is_on]["g3",1] + model[:is_on]["g4",1] <= 1,
)
# Solve the model
UnitCommitment.optimize!(model)
```
### Adding a new component to a bus
The following snippet shows how to add a new grid component to a particular bus. For each time step, we create decision variables for the new grid component, add these variables to the objective function, then attach the component to the bus by modifying some existing model constraints.
```julia
using Cbc
using JuMP
using UnitCommitment
# Load instance and build base model
instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
model = UnitCommitment.build_model(
instance=instance,
optimizer=Cbc.Optimizer,
)
# Get the number of time steps in the original instance
T = instance.time
# Create decision variables for the new grid component.
# In this example, we assume that the new component can
# inject up to 10 MW of power at each time step, so we
# create new continuous variables 0 ≤ x[t] ≤ 10.
@variable(model, x[1:T], lower_bound=0.0, upper_bound=10.0)
# For each time step
for t in 1:T
# Add production costs to the objective function.
# In this example, we assume a cost of $5/MW.
set_objective_coefficient(model, x[t], 5.0)
# Attach the new component to bus b1, by modifying the
# constraint `eq_net_injection`.
set_normalized_coefficient(
model[:eq_net_injection]["b1", t],
x[t],
1.0,
)
end
# Solve the model
UnitCommitment.optimize!(model)
# Show optimal values for the x variables
@show value.(x)
```
References
----------
* [KnOsWa20] **Bernard Knueven, James Ostrowski and Jean-Paul Watson.** "On Mixed-Integer Programming Formulations for the Unit Commitment Problem". INFORMS Journal on Computing (2020). [DOI: 10.1287/ijoc.2019.0944](https://doi.org/10.1287/ijoc.2019.0944)

63
docs/src/api.md Normal file

@@ -0,0 +1,63 @@
# API Reference
## Read data, build model & optimize
```@docs
UnitCommitment.read
UnitCommitment.read_benchmark
UnitCommitment.build_model
UnitCommitment.optimize!
UnitCommitment.solution
UnitCommitment.validate
UnitCommitment.write
```
## Locational Marginal Prices
### Conventional LMPs
```@docs
UnitCommitment.compute_lmp(::JuMP.Model,::UnitCommitment.ConventionalLMP)
```
### Approximated Extended LMPs
```@docs
UnitCommitment.AELMP
UnitCommitment.compute_lmp(::JuMP.Model,::UnitCommitment.AELMP)
```
## Modify instance
```@docs
UnitCommitment.slice
UnitCommitment.randomize!(::UnitCommitment.UnitCommitmentInstance)
UnitCommitment.generate_initial_conditions!
```
## Formulations
```@docs
UnitCommitment.Formulation
UnitCommitment.ShiftFactorsFormulation
UnitCommitment.ArrCon2000
UnitCommitment.CarArr2006
UnitCommitment.DamKucRajAta2016
UnitCommitment.Gar1962
UnitCommitment.KnuOstWat2018
UnitCommitment.MorLatRam2013
UnitCommitment.PanGua2016
UnitCommitment.WanHob2016
```
## Solution Methods
```@docs
UnitCommitment.XavQiuWanThi2019.Method
```
## Randomization Methods
```@docs
UnitCommitment.XavQiuAhm2021.Randomization
```



@@ -0,0 +1,36 @@
@media screen and (min-width: 1056px) {
#documenter .docs-main {
max-width: 50rem !important;
}
}
tbody, thead, pre {
border: 1px solid rgba(0, 0, 0, 0.25);
}
table td, th {
padding: 8px;
}
table p {
margin-bottom: 0;
}
table td code {
white-space: nowrap;
}
table tr,
table th {
border-bottom: 1px solid rgba(0, 0, 0, 0.1);
}
table tr:last-child {
border-bottom: 0;
}
code {
background-color: transparent;
color: rgb(232, 62, 140);
}

380
docs/src/guides/format.md Normal file

@@ -0,0 +1,380 @@
# JSON data format
An instance of the stochastic security-constrained unit commitment (SCUC) problem is composed of multiple scenarios. Each scenario should be described in an individual JSON file containing the main sections listed below. For deterministic instances, a single scenario file, following the same format, may also be provided. Fields that are allowed to differ among scenarios are marked as "uncertain". Fields that are allowed to be time-dependent are marked as "time series".
- [Parameters](#Parameters)
- [Buses](#Buses)
- [Generators](#Generators)
- [Storage units](#Storage-units)
- [Price-sensitive loads](#Price-sensitive-loads)
- [Transmission lines](#Transmission-lines)
- [Reserves](#Reserves)
- [Contingencies](#Contingencies)
Each section is described in detail below. See [case118/2017-01-01.json.gz](https://axavier.org/UnitCommitment.jl/0.4/instances/matpower/case118/2017-01-01.json.gz) for a complete example.
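For reference, a minimal sketch of how scenario files are loaded (assuming `UnitCommitment.read` accepts either a single file or a list of scenario files; the file names below are hypothetical):
```julia
using UnitCommitment

# Deterministic instance: a single scenario file.
instance = UnitCommitment.read("example/s1.json")

# Stochastic instance: one JSON file per scenario.
sc_instance = UnitCommitment.read(["example/s1.json", "example/s2.json"])
```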
### Parameters
This section describes system-wide parameters, such as the power balance penalty, and optimization parameters, such as the length of the planning horizon and the time step.
| Key | Description | Default | Time series? | Uncertain? |
| :----------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------: | :----------: | :--------: |
| `Version` | Version of UnitCommitment.jl this file was written for. Required to ensure that the file remains readable in future versions of the package. If you are following this page to construct the file, this field should equal `0.4`. | Required | No | No |
| `Time horizon (min)` or `Time horizon (h)` | Length of the planning horizon (in minutes or hours). Either `Time horizon (min)` or `Time horizon (h)` is required, but not both. | Required | No | No |
| `Time step (min)` | Length of each time step (in minutes). Must be a divisor of 60 (e.g. 60, 30, 20, 15, etc). | `60` | No | No |
| `Power balance penalty ($/MW)` | Penalty for system-wide shortage or surplus in production (in $/MW). This is charged per time step. For example, if there is a shortage of 1 MW for three time steps, three times this amount will be charged. | `1000.0` | No | Yes |
| `Scenario name` | Name of the scenario. | `"s1"` | No | --- |
| `Scenario weight` | Weight of the scenario. The scenario weight can be any positive real number; that is, it does not have to be between zero and one. The package normalizes the weights to ensure that the probabilities of all scenarios sum to one. | 1.0 | No | --- |
#### Example
```json
{
"Parameters": {
"Version": "0.4",
"Time horizon (h)": 4,
"Power balance penalty ($/MW)": 1000.0,
"Scenario name": "s1",
"Scenario weight": 0.5
}
}
```
### Buses
This section describes the characteristics of each bus in the system.
| Key | Description | Default | Time series? | Uncertain? |
| :---------- | :--------------------------------------- | -------- | :----------: | :--------: |
| `Load (MW)` | Fixed load connected to the bus (in MW). | Required | Yes | Yes |
#### Example
```json
{
"Buses": {
"b1": {
"Load (MW)": 0.0
},
"b2": {
"Load (MW)": [26.01527, 24.46212, 23.29725, 22.90897]
}
}
}
```
### Generators
This section describes all generators in the system. Two types of units can be specified:
- **Thermal units:** Units that produce power by converting heat into electrical energy, such as coal and oil power plants. These units use a more complex model, with binary decision variables, and various constraints to enforce ramp rates and minimum up/down time.
- **Profiled units:** Simplified model for units that do not require the constraints mentioned above, only a maximum and minimum power output for each time period. Typically used for renewables and hydro.
#### Thermal Units
| Key | Description | Default | Time series? | Uncertain? |
| :----------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- | :----------: | :--------: |
| `Bus` | Identifier of the bus where this generator is located (string). | Required | No | Yes |
| `Type` | Type of the generator (string). For thermal generators, this must be `Thermal`. | Required | No | No |
| `Production cost curve (MW)` and `Production cost curve ($)` | Parameters describing the piecewise-linear production costs. See below for more details. | Required | Yes | Yes |
| `Startup costs ($)` and `Startup delays (h)` | Parameters describing how much it costs to start the generator after it has been shut down for a certain amount of time. If `Startup costs ($)` and `Startup delays (h)` are set to `[300.0, 400.0]` and `[1, 4]`, for example, and the generator is shut down at time `00:00` (h:min), then it costs \$300 to start up the generator at any time between `01:00` and `03:59`, and \$400 to start the generator at time `04:00` or any time after that. The number of startup cost points is unlimited, and may be different for each generator. Startup delays must be strictly increasing and the first entry must equal `Minimum downtime (h)`. | `[0.0]` and `[1]` | No | Yes |
| `Minimum uptime (h)` | Minimum amount of time the generator must stay operational after starting up (in hours). For example, if the generator starts up at time `00:00` (h:min) and `Minimum uptime (h)` is set to 4, then the generator can only shut down at time `04:00`. | `1` | No | Yes |
| `Minimum downtime (h)` | Minimum amount of time the generator must stay offline after shutting down (in hours). For example, if the generator shuts down at time `00:00` (h:min) and `Minimum downtime (h)` is set to 4, then the generator can only start producing power again at time `04:00`. | `1` | No | Yes |
| `Ramp up limit (MW)` | Maximum increase in production from one time step to the next (in MW). For example, if the generator is producing 100 MW at time step 1 and if this parameter is set to 40 MW, then the generator will produce at most 140 MW at time step 2. | `+inf` | No | Yes |
| `Ramp down limit (MW)` | Maximum decrease in production from one time step to the next (in MW). For example, if the generator is producing 100 MW at time step 1 and this parameter is set to 40 MW, then the generator will produce at least 60 MW at time step 2. | `+inf` | No | Yes |
| `Startup limit (MW)` | Maximum amount of power a generator can produce immediately after starting up (in MW). For example, if `Startup limit (MW)` is set to 100 MW and the unit is off at time step 1, then it may produce at most 100 MW at time step 2. | `+inf` | No | Yes |
| `Shutdown limit (MW)` | Maximum amount of power a generator can produce immediately before shutting down (in MW). Specifically, the generator can only shut down at time step `t+1` if its production at time step `t` is below this limit. | `+inf` | No | Yes |
| `Initial status (h)` | If set to a positive number, indicates the amount of time (in hours) the generator has been on at the beginning of the simulation, and if set to a negative number, the amount of time the generator has been off. For example, if `Initial status (h)` is `-2`, this means that the generator was off since `-02:00` (h:min). The simulation starts at time `00:00`. If `Initial status (h)` is `3`, this means that the generator was on since `-03:00`. A value of zero is not acceptable. | Required | No | No |
| `Initial power (MW)` | Amount of power produced by the generator at time step `-1`, immediately before the planning horizon starts. | Required | No | No |
| `Must run?` | If `true`, the generator should be committed, even if that is not economical (Boolean). | `false` | Yes | Yes |
| `Reserve eligibility` | List of reserve products this generator is eligible to provide. By default, the generator is not eligible to provide any reserves. | `[]` | No | Yes |
| `Commitment status` | List of commitment statuses over the time horizon. At time `t`, if `true`, the generator must be committed at that time period; if `false`, the generator must not be committed at that time period. If `null` at time `t`, the generator's commitment status is then decided by the model. By default, the status is a list of `null` values. | `null` | Yes | Yes |
#### Profiled Units
| Key | Description | Default | Time series? | Uncertain? |
| :------------------- | :-------------------------------------------------------------------------------- | :------: | :----------: | :--------: |
| `Bus` | Identifier of the bus where this generator is located (string). | Required | No | Yes |
| `Type` | Type of the generator (string). For profiled generators, this must be `Profiled`. | Required | No | No |
| `Cost ($/MW)` | Cost incurred for serving each MW of power by this generator. | Required | Yes | Yes |
| `Minimum power (MW)` | Minimum amount of power this generator may supply. | `0.0` | Yes | Yes |
| `Maximum power (MW)` | Maximum amount of power this generator may supply. | Required | Yes | Yes |
#### Production costs and limits
Production costs are represented as piecewise-linear curves. Figure 1 shows an example cost curve with three segments, where it costs \$1400, \$1600, \$2200 and \$2400 to generate, respectively, 100, 110, 130 and 135 MW of power. To model this generator, `Production cost curve (MW)` should be set to `[100, 110, 130, 135]`, and `Production cost curve ($)` should be set to `[1400, 1600, 2200, 2400]`.
Note that this curve also specifies the production limits. Specifically, the first point identifies the minimum power output when the unit is operational, while the last point identifies the maximum power output.
```@raw html
<center>
<img src="../../assets/cost_curve.png" style="max-width: 500px"/>
<div><b>Figure 1.</b> Piecewise-linear production cost curve.</div>
<br/>
</center>
```
#### Additional remarks:
- For time-dependent production limits or time-dependent production costs, the usage of nested arrays is allowed. For example, if `Production cost curve (MW)` is set to `[5.0, [10.0, 12.0, 15.0, 20.0]]`, then the unit may generate at most 10, 12, 15 and 20 MW of power during time steps 1, 2, 3 and 4, respectively. The minimum output for all time periods is fixed at 5 MW.
- There is no limit to the number of piecewise-linear segments, and different generators may have a different number of segments.
- If `Production cost curve (MW)` and `Production cost curve ($)` both contain a single element, then the generator must produce exactly that amount of power when operational. To specify that the generator may produce any amount of power up to a certain limit `P`, the parameter `Production cost curve (MW)` should be set to `[0, P]`.
- Production cost curves must be convex.
#### Example
```json
{
"Generators": {
"gen1": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [100.0, 110.0, 130.0, 135.0],
"Production cost curve ($)": [1400.0, 1600.0, 2200.0, 2400.0],
"Startup costs ($)": [300.0, 400.0],
"Startup delays (h)": [1, 4],
"Ramp up limit (MW)": 232.68,
"Ramp down limit (MW)": 232.68,
"Startup limit (MW)": 232.68,
"Shutdown limit (MW)": 232.68,
"Minimum downtime (h)": 4,
"Minimum uptime (h)": 4,
"Initial status (h)": 12,
"Initial power (MW)": 115,
"Must run?": false,
"Reserve eligibility": ["r1"]
},
"gen2": {
"Bus": "b5",
"Type": "Thermal",
"Production cost curve (MW)": [0.0, [10.0, 8.0, 0.0, 3.0]],
"Production cost curve ($)": [0.0, 0.0],
"Initial status (h)": -100,
"Initial power (MW)": 0,
"Reserve eligibility": ["r1", "r2"],
"Commitment status": [true, false, null, true]
},
"gen3": {
"Bus": "b6",
"Type": "Profiled",
"Minimum power (MW)": 10.0,
"Maximum power (MW)": 120.0,
"Cost ($/MW)": 100.0
}
}
}
```
### Storage units
This section describes energy storage units in the system which charge and discharge power. The storage units consume power while charging, and generate power while discharging.
| Key | Description | Default | Time series? | Uncertain? |
| :-------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------: | :----------: | :--------: |
| `Bus` | Bus where the storage unit is located. Multiple storage units may be placed at the same bus. | Required | No | Yes |
| `Minimum level (MWh)` | Minimum energy level this storage unit may contain. | `0.0` | Yes | Yes |
| `Maximum level (MWh)` | Maximum energy level this storage unit may contain. | Required | Yes | Yes |
| `Allow simultaneous charging and discharging` | If `false`, the storage unit is not allowed to charge and discharge at the same time (Boolean). | `true` | Yes | Yes |
| `Charge cost ($/MW)` | Cost incurred for charging each MW of power into this storage unit. | Required | Yes | Yes |
| `Discharge cost ($/MW)` | Cost incurred for discharging each MW of power from this storage unit. | Required | Yes | Yes |
| `Charge efficiency` | Efficiency rate to charge power into this storage unit. This value must be greater than or equal to `0.0`, and less than or equal to `1.0`. | `1.0` | Yes | Yes |
| `Discharge efficiency` | Efficiency rate to discharge power from this storage unit. This value must be greater than or equal to `0.0`, and less than or equal to `1.0`. | `1.0` | Yes | Yes |
| `Loss factor` | The energy dissipation rate of this storage unit. This value must be greater than or equal to `0.0`, and less than or equal to `1.0`. | `0.0` | Yes | Yes |
| `Minimum charge rate (MW)` | Minimum rate at which this storage unit may charge (in MW). | `0.0` | Yes | Yes |
| `Maximum charge rate (MW)` | Maximum rate at which this storage unit may charge (in MW). | Required | Yes | Yes |
| `Minimum discharge rate (MW)` | Minimum rate at which this storage unit may discharge (in MW). | `0.0` | Yes | Yes |
| `Maximum discharge rate (MW)` | Maximum rate at which this storage unit may discharge (in MW). | Required | Yes | Yes |
| `Initial level (MWh)` | Amount of energy stored in this storage unit at time step `-1`, immediately before the planning horizon starts. | `0.0` | No | Yes |
| `Last period minimum level (MWh)` | Minimum energy level this storage unit may contain in the last time step. By default, this value is the same as the last value of `Minimum level (MWh)`. | `Minimum level (MWh)` | No | Yes |
| `Last period maximum level (MWh)` | Maximum energy level this storage unit may contain in the last time step. By default, this value is the same as the last value of `Maximum level (MWh)`. | `Maximum level (MWh)` | No | Yes |
#### Example
```json
{
"Storage units": {
"su1": {
"Bus": "b2",
"Maximum level (MWh)": 100.0,
"Charge cost ($/MW)": 2.0,
"Discharge cost ($/MW)": 2.5,
"Maximum charge rate (MW)": 10.0,
"Maximum discharge rate (MW)": 8.0
},
"su2": {
"Bus": "b2",
"Minimum level (MWh)": 10.0,
"Maximum level (MWh)": 100.0,
"Allow simultaneous charging and discharging": false,
"Charge cost ($/MW)": 3.0,
"Discharge cost ($/MW)": 3.5,
"Charge efficiency": 0.8,
"Discharge efficiency": 0.85,
"Loss factor": 0.01,
"Minimum charge rate (MW)": 5.0,
"Maximum charge rate (MW)": 10.0,
"Minimum discharge rate (MW)": 2.0,
"Maximum discharge rate (MW)": 10.0,
"Initial level (MWh)": 70.0,
"Last period minimum level (MWh)": 80.0,
"Last period maximum level (MWh)": 85.0
},
"su3": {
"Bus": "b9",
"Minimum level (MWh)": [10.0, 11.0, 12.0, 13.0],
"Maximum level (MWh)": [100.0, 110.0, 120.0, 130.0],
"Allow simultaneous charging and discharging": [false, false, true, true],
"Charge cost ($/MW)": [2.0, 2.1, 2.2, 2.3],
"Discharge cost ($/MW)": [1.0, 1.1, 1.2, 1.3],
"Charge efficiency": [0.8, 0.81, 0.82, 0.82],
"Discharge efficiency": [0.85, 0.86, 0.87, 0.88],
"Loss factor": [0.01, 0.01, 0.02, 0.02],
"Minimum charge rate (MW)": [5.0, 5.1, 5.2, 5.3],
"Maximum charge rate (MW)": [10.0, 10.1, 10.2, 10.3],
"Minimum discharge rate (MW)": [4.0, 4.1, 4.2, 4.3],
"Maximum discharge rate (MW)": [8.0, 8.1, 8.2, 8.3],
"Initial level (MWh)": 20.0,
"Last period minimum level (MWh)": 21.0,
"Last period maximum level (MWh)": 22.0
}
}
}
```
### Price-sensitive loads
This section describes components in the system which may increase or reduce their energy consumption according to the energy prices. Fixed loads (as described in the `buses` section) are always served, regardless of the price, unless there is significant congestion in the system or insufficient production capacity. Price-sensitive loads, on the other hand, are only served if it is economical to do so.
| Key | Description | Default | Time series? | Uncertain? |
| :--------------- | :------------------------------------------------------------------------------------------- | :------: | :----------: | :--------: |
| `Bus` | Bus where the load is located. Multiple price-sensitive loads may be placed at the same bus. | Required | No | Yes |
| `Revenue ($/MW)` | Revenue obtained for serving each MW of power to this load. | Required | Yes | Yes |
| `Demand (MW)` | Maximum amount of power required by this load. Any amount lower than this may be served. | Required | Yes | Yes |
#### Example
```json
{
"Price-sensitive loads": {
"p1": {
"Bus": "b3",
"Revenue ($/MW)": 23.0,
"Demand (MW)": 50.0
}
}
}
```
### Transmission lines
This section describes the characteristics of the transmission system, such as its topology and the susceptance of each transmission line.
| Key | Description | Default | Time series? | Uncertain? |
| :-------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | :----------: | :--------: |
| `Source bus` | Identifier of the bus where the transmission line originates. | Required | No | Yes |
| `Target bus` | Identifier of the bus where the transmission line terminates. | Required | No | Yes |
| `Susceptance (S)` | Susceptance of the transmission line (in siemens). | Required | No | Yes |
| `Normal flow limit (MW)` | Maximum amount of power (in MW) allowed to flow through the line when the system is in its regular, fully-operational state. | `+inf` | Yes | Yes |
| `Emergency flow limit (MW)` | Maximum amount of power (in MW) allowed to flow through the line when the system is in a degraded state (for example, after the failure of another transmission line). | `+inf` | Yes | Yes |
| `Flow limit penalty ($/MW)` | Penalty for violating the flow limits of the transmission line (in $/MW). This is charged per time step. For example, if there is a thermal violation of 1 MW for three time steps, then three times this amount will be charged. | `5000.0` | Yes | Yes |
#### Example
```json
{
"Transmission lines": {
"l1": {
"Source bus": "b1",
"Target bus": "b2",
"Susceptance (S)": 29.49686,
"Normal flow limit (MW)": 15000.0,
"Emergency flow limit (MW)": 20000.0,
"Flow limit penalty ($/MW)": 5000.0
}
}
}
```
### Reserves
This section describes the amount of operating reserves required in each time period.
| Key | Description | Default | Time series? | Uncertain? |
| :------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | :----------: | :--------: |
| `Type` | Type of reserve product. Must be either "spinning" or "flexiramp". | Required | No | No |
| `Amount (MW)` | Amount of reserves required. | Required | Yes | Yes |
| `Shortfall penalty ($/MW)` | Penalty for shortage in meeting the reserve requirements (in $/MW). This is charged per time step. A negative value implies that the reserve constraints must always be satisfied. | `-1` | Yes | Yes |
#### Example 1
```json
{
"Reserves": {
"r1": {
"Type": "spinning",
"Amount (MW)": [57.30552, 53.88429, 51.31838, 50.46307],
"Shortfall penalty ($/MW)": 5.0
},
"r2": {
"Type": "flexiramp",
"Amount (MW)": [20.31042, 23.65273, 27.41784, 25.34057]
}
}
}
```
### Contingencies
This section describes credible contingency scenarios considered in the optimization, such as the loss of a transmission line or generator.
| Key | Description | Default | Uncertain? |
| :-------------------- | :------------------------------------------------------------------------------------------------ | :-----: | :--------: |
| `Affected generators` | List of generators affected by this contingency. May be omitted if no generators are affected. | `[]` | Yes |
| `Affected lines` | List of transmission lines affected by this contingency. May be omitted if no lines are affected. | `[]` | Yes |
#### Example
```json
{
"Contingencies": {
"c1": {
"Affected lines": ["l1", "l2", "l3"],
"Affected generators": ["g1"]
},
"c2": {
"Affected lines": ["l4"]
}
}
}
```
### Additional remarks
#### Time series parameters
Many numerical properties in the JSON file can be specified either as a single floating point number if they are time-independent, or as an array containing exactly `T` elements, if they are time-dependent, where `T` is the number of time steps in the planning horizon. For example, both formats below are valid when `T=3`:
```json
{
"Load (MW)": 800.0,
"Load (MW)": [800.0, 850.0, 730.0]
}
```
The value `T` depends on both `Time horizon (h)` and `Time step (min)`, as the table below illustrates.
| Time horizon (h) | Time step (min) | T |
| :--------------: | :-------------: | :-: |
| 24 | 60 | 24 |
| 24 | 15 | 96 |
| 24 | 5 | 288 |
| 36 | 60 | 36 |
| 36 | 15 | 144 |
| 36 | 5 | 432 |
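In other words, `T` is simply the horizon length divided by the time step, as in the short sketch below:
```julia
# Number of time steps implied by the horizon and step length.
time_horizon_h = 36   # "Time horizon (h)"
time_step_min = 15    # "Time step (min)"
T = time_horizon_h * 60 ÷ time_step_min   # 144
```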
## Current limitations
- Network topology must remain the same for all time periods.
- Only N-1 transmission contingencies are supported. Generator contingencies are not currently supported.
- Time-varying minimum production amounts are not currently compatible with ramp/startup/shutdown limits.
- Flexible ramping products can only be acquired under the `WanHob2016` formulation, which does not support spinning reserves.
- The set of generators must be the same in all scenarios.


@@ -0,0 +1,289 @@
# Benchmark instances
UnitCommitment.jl provides a large collection of benchmark instances collected from the literature and converted to a [common data format](../guides/format.md). In some cases, as indicated below, the original instances have been extended, with realistic parameters, using data-driven methods. If you use these instances in your research, we request that you cite UnitCommitment.jl, as well as the original sources, as listed below. Benchmark instances can be loaded with `UnitCommitment.read_benchmark(name)`, as explained in the [tutorials](../tutorials/usage.md). Instance files can also be [directly downloaded from our website](https://axavier.org/UnitCommitment.jl/0.4/instances/).
!!! warning
The instances included in UC.jl are still under development and may change in the future. If you use these instances in your research, for reproducibility, you should specify what version of UC.jl they came from.
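For example, the sketch below loads one of the instances listed on this page and solves it, following the same `build_model`/`optimize!` pattern used in the package's examples (the choice of Cbc as solver is only illustrative):
```julia
using Cbc
using UnitCommitment

# Load a benchmark instance by name.
instance = UnitCommitment.read_benchmark("matpower/case14/2017-01-01")

# Build and solve the security-constrained unit commitment model.
model = UnitCommitment.build_model(
    instance = instance,
    optimizer = Cbc.Optimizer,
)
UnitCommitment.optimize!(model)
```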
## MATPOWER
[MATPOWER](https://github.com/MATPOWER/matpower) is an open-source package for solving power flow problems in MATLAB and Octave. It contains a number of power flow test cases, which have been widely used in the power systems literature.
Because most MATPOWER test cases were originally designed for power flow studies, they lack a number of important unit commitment parameters, such as time-varying loads, production cost curves, ramp limits, reserves and initial conditions. The test cases included in UnitCommitment.jl are extended versions of the original MATPOWER test cases, modified as follows:
- **Production cost** curves were generated using a data-driven approach, based on publicly available data. More specifically, machine learning models were trained to predict typical production cost curves, for each day of the year, based on a generator's maximum and minimum power output.
- **Load profiles** were generated using a similar data-driven approach.
- **Ramp-up, ramp-down, startup and shutdown rates** were set to a fixed proportion of the generator's maximum output.
- **Minimum reserves** were set to a fixed proportion of the total demand.
- **Contingencies** were set to include all N-1 transmission line contingencies that do not generate islands or isolated buses. More specifically, there is one contingency for each transmission line, as long as that transmission line is not a bridge in the network graph.
For each MATPOWER test case, UC.jl provides 365 variations (`2017-01-01` to `2017-12-31`), corresponding to different days of the year.
### MATPOWER/UW-PSTCA
A variety of smaller IEEE test cases, [compiled by University of Washington](http://labs.ece.uw.edu/pstca/), corresponding mostly to small portions of the American Electric Power System in the 1960s.
| Name | Buses | Generators | Lines | Contingencies | References |
| ----------------------------- | ----- | ---------- | ----- | ------------- | -------------- |
| `matpower/case14/2017-01-01` | 14 | 5 | 20 | 19 | [MTPWR, PSTCA] |
| `matpower/case30/2017-01-01` | 30 | 6 | 41 | 38 | [MTPWR, PSTCA] |
| `matpower/case57/2017-01-01` | 57 | 7 | 80 | 79 | [MTPWR, PSTCA] |
| `matpower/case118/2017-01-01` | 118 | 54 | 186 | 177 | [MTPWR, PSTCA] |
| `matpower/case300/2017-01-01` | 300 | 69 | 411 | 320 | [MTPWR, PSTCA] |
### MATPOWER/Polish
Test cases based on the Polish 400, 220 and 110 kV networks, originally provided by **Roman Korab** (Politechnika Śląska) and corrected by the MATPOWER team.
| Name | Buses | Generators | Lines | Contingencies | References |
| --------------------------------- | ----- | ---------- | ----- | ------------- | ---------- |
| `matpower/case2383wp/2017-01-01` | 2383 | 323 | 2896 | 2240 | [MTPWR] |
| `matpower/case2736sp/2017-01-01` | 2736 | 289 | 3504 | 3159 | [MTPWR] |
| `matpower/case2737sop/2017-01-01` | 2737 | 267 | 3506 | 3161 | [MTPWR] |
| `matpower/case2746wop/2017-01-01` | 2746 | 443 | 3514 | 3155 | [MTPWR] |
| `matpower/case2746wp/2017-01-01` | 2746 | 457 | 3514 | 3156 | [MTPWR] |
| `matpower/case3012wp/2017-01-01` | 3012 | 496 | 3572 | 2854 | [MTPWR] |
| `matpower/case3120sp/2017-01-01` | 3120 | 483 | 3693 | 2950 | [MTPWR] |
| `matpower/case3375wp/2017-01-01` | 3374 | 590 | 4161 | 3245 | [MTPWR] |
### MATPOWER/PEGASE
Test cases from the [Pan European Grid Advanced Simulation and State Estimation (PEGASE) project](https://cordis.europa.eu/project/id/211407), describing part of the European high voltage transmission network.
| Name | Buses | Generators | Lines | Contingencies | References |
| ------------------------------------- | ----- | ---------- | ----- | ------------- | --------------------------- |
| `matpower/case89pegase/2017-01-01` | 89 | 12 | 210 | 192 | [JoFlMa16, FlPaCa13, MTPWR] |
| `matpower/case1354pegase/2017-01-01` | 1354 | 260 | 1991 | 1288 | [JoFlMa16, FlPaCa13, MTPWR] |
| `matpower/case2869pegase/2017-01-01` | 2869 | 510 | 4582 | 3579 | [JoFlMa16, FlPaCa13, MTPWR] |
| `matpower/case9241pegase/2017-01-01` | 9241 | 1445 | 16049 | 13932 | [JoFlMa16, FlPaCa13, MTPWR] |
| `matpower/case13659pegase/2017-01-01` | 13659 | 4092 | 20467 | 13932 | [JoFlMa16, FlPaCa13, MTPWR] |
### MATPOWER/RTE
Test cases from the R&D Division at [Reseau de Transport d'Electricite](https://www.rte-france.com) representing the size and complexity of the French very high voltage transmission network.
| Name | Buses | Generators | Lines | Contingencies | References |
| --------------------------------- | ----- | ---------- | ----- | ------------- | ----------------- |
| `matpower/case1888rte/2017-01-01` | 1888 | 296 | 2531 | 1484 | [MTPWR, JoFlMa16] |
| `matpower/case1951rte/2017-01-01` | 1951 | 390 | 2596 | 1497 | [MTPWR, JoFlMa16] |
| `matpower/case2848rte/2017-01-01` | 2848 | 544 | 3776 | 2242 | [MTPWR, JoFlMa16] |
| `matpower/case2868rte/2017-01-01` | 2868 | 596 | 3808 | 2260 | [MTPWR, JoFlMa16] |
| `matpower/case6468rte/2017-01-01` | 6468 | 1262 | 9000 | 6094 | [MTPWR, JoFlMa16] |
| `matpower/case6470rte/2017-01-01` | 6470 | 1306 | 9005 | 6085 | [MTPWR, JoFlMa16] |
| `matpower/case6495rte/2017-01-01` | 6495 | 1352 | 9019 | 6060 | [MTPWR, JoFlMa16] |
| `matpower/case6515rte/2017-01-01` | 6515 | 1368 | 9037 | 6063 | [MTPWR, JoFlMa16] |
## PGLIB-UC Instances
[PGLIB-UC](https://github.com/power-grid-lib/pglib-uc) is a benchmark library curated and maintained by the [IEEE PES Task Force on Benchmarks for Validation of Emerging Power System Algorithms](https://power-grid-lib.github.io/). These test cases have been used in [KnOsWa20].
### PGLIB-UC/California
Test cases based on publicly available data from the California ISO. For more details, see [PGLIB-UC case file overview](https://github.com/power-grid-lib/pglib-uc).
| Name | Buses | Generators | Lines | Contingencies | References |
| ------------------------------------ | ----- | ---------- | ----- | ------------- | ---------- |
| `pglib-uc/ca/2014-09-01_reserves_0` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2014-09-01_reserves_1` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2014-09-01_reserves_3` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2014-09-01_reserves_5` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2014-12-01_reserves_0` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2014-12-01_reserves_1` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2014-12-01_reserves_3` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2014-12-01_reserves_5` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2015-03-01_reserves_0` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2015-03-01_reserves_1` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2015-03-01_reserves_3` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2015-03-01_reserves_5` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2015-06-01_reserves_0` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2015-06-01_reserves_1` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2015-06-01_reserves_3` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/2015-06-01_reserves_5` | 1 | 610 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/Scenario400_reserves_0` | 1 | 611 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/Scenario400_reserves_1` | 1 | 611 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/Scenario400_reserves_3` | 1 | 611 | 0 | 0 | [KnOsWa20] |
| `pglib-uc/ca/Scenario400_reserves_5` | 1 | 611 | 0 | 0 | [KnOsWa20] |
### PGLIB-UC/FERC
Test cases based on a publicly available [unit commitment test case produced by the Federal Energy Regulatory Commission](https://www.ferc.gov/industries-data/electric/power-sales-and-markets/increasing-efficiency-through-improved-software-1). For more details, see [PGLIB-UC case file overview](https://github.com/power-grid-lib/pglib-uc).
| Name | Buses | Generators | Lines | Contingencies | References |
| ----------------------------- | ----- | ---------- | ----- | ------------- | -------------------- |
| `pglib-uc/ferc/2015-01-01_hw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-01-01_lw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-02-01_hw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-02-01_lw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-03-01_hw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-03-01_lw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-04-01_hw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-04-01_lw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-05-01_hw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-05-01_lw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-06-01_hw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-06-01_lw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-07-01_hw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-07-01_lw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-08-01_hw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-08-01_lw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-09-01_hw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-09-01_lw` | 1 | 979 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-10-01_hw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-10-01_lw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-11-02_hw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-11-02_lw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-12-01_hw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12] |
| `pglib-uc/ferc/2015-12-01_lw` | 1 | 935 | 0 | 0 | [KnOsWa20, KrHiOn12] |
### PGLIB-UC/RTS-GMLC
[RTS-GMLC](https://github.com/GridMod/RTS-GMLC) is an updated version of the RTS-96 test system produced by the United States Department of Energy's [Grid Modernization Laboratory Consortium](https://gmlc.doe.gov/). The PGLIB-UC/RTS-GMLC instances are modified versions of the original RTS-GMLC instances, with modified ramp-rates and without a transmission network. For more details, see [PGLIB-UC case file overview](https://github.com/power-grid-lib/pglib-uc).
| Name | Buses | Generators | Lines | Contingencies | References |
| ------------------------------ | ----- | ---------- | ----- | ------------- | ---------- |
| `pglib-uc/rts_gmlc/2020-01-27` | 1 | 154 | 0 | 0 | [BaBlEh19] |
| `pglib-uc/rts_gmlc/2020-02-09` | 1 | 154 | 0 | 0 | [BaBlEh19] |
| `pglib-uc/rts_gmlc/2020-03-05` | 1 | 154 | 0 | 0 | [BaBlEh19] |
| `pglib-uc/rts_gmlc/2020-04-03` | 1 | 154 | 0 | 0 | [BaBlEh19] |
| `pglib-uc/rts_gmlc/2020-05-05` | 1 | 154 | 0 | 0 | [BaBlEh19] |
| `pglib-uc/rts_gmlc/2020-06-09` | 1 | 154 | 0 | 0 | [BaBlEh19] |
| `pglib-uc/rts_gmlc/2020-07-06` | 1 | 154 | 0 | 0 | [BaBlEh19] |
| `pglib-uc/rts_gmlc/2020-08-12` | 1 | 154 | 0 | 0 | [BaBlEh19] |
| `pglib-uc/rts_gmlc/2020-09-20` | 1 | 154 | 0 | 0 | [BaBlEh19] |
| `pglib-uc/rts_gmlc/2020-10-27` | 1 | 154 | 0 | 0 | [BaBlEh19] |
| `pglib-uc/rts_gmlc/2020-11-25` | 1 | 154 | 0 | 0 | [BaBlEh19] |
| `pglib-uc/rts_gmlc/2020-12-23` | 1 | 154 | 0 | 0 | [BaBlEh19] |
## OR-LIB/UC
[OR-LIB](http://people.brunel.ac.uk/~mastjjb/jeb/info.html) is a collection of test data sets for a variety of operations research problems, including unit commitment. The UC instances in OR-LIB are synthetic instances generated by a [random problem generator](http://groups.di.unipi.it/optimize/Data/UC.html) developed by the [Operations Research Group at University of Pisa](http://groups.di.unipi.it/optimize/). These test cases have been used in [FrGe06] and many other publications.
| Name | Hours | Buses | Generators | Lines | Contingencies | References |
| ------------------- | ----- | ----- | ---------- | ----- | ------------- | --------------- |
| `or-lib/10_0_1_w` | 24 | 1 | 10 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/10_0_2_w` | 24 | 1 | 10 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/10_0_3_w` | 24 | 1 | 10 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/10_0_4_w` | 24 | 1 | 10 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/10_0_5_w` | 24 | 1 | 10 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/20_0_1_w` | 24 | 1 | 20 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/20_0_2_w` | 24 | 1 | 20 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/20_0_3_w` | 24 | 1 | 20 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/20_0_4_w` | 24 | 1 | 20 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/20_0_5_w` | 24 | 1 | 20 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/50_0_1_w` | 24 | 1 | 50 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/50_0_2_w` | 24 | 1 | 50 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/50_0_3_w` | 24 | 1 | 50 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/50_0_4_w` | 24 | 1 | 50 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/50_0_5_w` | 24 | 1 | 50 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/75_0_1_w` | 24 | 1 | 75 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/75_0_2_w` | 24 | 1 | 75 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/75_0_3_w` | 24 | 1 | 75 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/75_0_4_w` | 24 | 1 | 75 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/75_0_5_w` | 24 | 1 | 75 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/100_0_1_w` | 24 | 1 | 100 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/100_0_2_w` | 24 | 1 | 100 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/100_0_3_w` | 24 | 1 | 100 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/100_0_4_w` | 24 | 1 | 100 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/100_0_5_w` | 24 | 1 | 100 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/150_0_1_w` | 24 | 1 | 150 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/150_0_2_w` | 24 | 1 | 150 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/150_0_3_w` | 24 | 1 | 150 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/150_0_4_w` | 24 | 1 | 150 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/150_0_5_w` | 24 | 1 | 150 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/200_0_10_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/200_0_11_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/200_0_12_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/200_0_1_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/200_0_2_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/200_0_3_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/200_0_4_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/200_0_5_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/200_0_6_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/200_0_7_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/200_0_8_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06] |
| `or-lib/200_0_9_w` | 24 | 1 | 200 | 0 | 0 | [ORLIB, FrGe06] |
## Tejada19
Test cases used in [TeLuSa19]. These instances are similar to OR-LIB/UC, in the sense that they use the same random problem generator, but are much larger.
| Name | Hours | Buses | Generators | Lines | Contingencies | References |
| ----------------------- | ----- | ----- | ---------- | ----- | ------------- | ---------- |
| `tejada19/UC_24h_214g` | 24 | 1 | 214 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_250g` | 24 | 1 | 250 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_290g` | 24 | 1 | 290 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_480g` | 24 | 1 | 480 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_505g` | 24 | 1 | 505 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_623g` | 24 | 1 | 623 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_647g` | 24 | 1 | 647 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_836g` | 24 | 1 | 836 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_850g` | 24 | 1 | 850 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_918g` | 24 | 1 | 918 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_931g` | 24 | 1 | 931 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_940g` | 24 | 1 | 940 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_957g` | 24 | 1 | 957 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_959g` | 24 | 1 | 959 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_1069g` | 24 | 1 | 1069 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_1130g` | 24 | 1 | 1130 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_1376g` | 24 | 1 | 1376 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_1393g` | 24 | 1 | 1393 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_1577g` | 24 | 1 | 1577 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_1615g` | 24 | 1 | 1615 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_1632g` | 24 | 1 | 1632 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_1768g` | 24 | 1 | 1768 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_1804g` | 24 | 1 | 1804 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_1820g` | 24 | 1 | 1820 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_1823g` | 24 | 1 | 1823 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_24h_1888g` | 24 | 1 | 1888 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_36g` | 168 | 1 | 36 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_38g` | 168 | 1 | 38 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_40g` | 168 | 1 | 40 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_53g` | 168 | 1 | 53 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_58g` | 168 | 1 | 58 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_59g` | 168 | 1 | 59 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_72g` | 168 | 1 | 72 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_84g` | 168 | 1 | 84 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_86g` | 168 | 1 | 86 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_88g` | 168 | 1 | 88 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_93g` | 168 | 1 | 93 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_105g` | 168 | 1 | 105 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_110g` | 168 | 1 | 110 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_125g` | 168 | 1 | 125 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_130g` | 168 | 1 | 130 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_131g` | 168 | 1 | 131 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_140g` | 168 | 1 | 140 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_165g` | 168 | 1 | 165 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_175g` | 168 | 1 | 175 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_179g` | 168 | 1 | 179 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_188g` | 168 | 1 | 188 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_192g` | 168 | 1 | 192 | 0 | 0 | [TeLuSa19] |
| `tejada19/UC_168h_199g` | 168 | 1 | 199 | 0 | 0 | [TeLuSa19] |
## References
- [UCJL] **Alinson S. Xavier, Aleksandr M. Kazachkov, Ogün Yurdakul, Feng Qiu.** "UnitCommitment.jl: A Julia/JuMP Optimization Package for Security-Constrained Unit Commitment (Version 0.3)". Zenodo (2022). [DOI: 10.5281/zenodo.4269874](https://doi.org/10.5281/zenodo.4269874)
- [KnOsWa20] **Bernard Knueven, James Ostrowski and Jean-Paul Watson.** "On Mixed-Integer Programming Formulations for the Unit Commitment Problem". INFORMS Journal on Computing (2020). [DOI: 10.1287/ijoc.2019.0944](https://doi.org/10.1287/ijoc.2019.0944)
- [KrHiOn12] **Eric Krall, Michael Higgins and Richard P. O'Neill.** "RTO unit commitment test system." Federal Energy Regulatory Commission. Available at: <https://www.ferc.gov/industries-data/electric/power-sales-and-markets/increasing-efficiency-through-improved-software-1> (Accessed: Nov 14, 2020)
- [BaBlEh19] **Clayton Barrows, Aaron Bloom, Ali Ehlen, Jussi Ikaheimo, Jennie Jorgenson, Dheepak Krishnamurthy, Jessica Lau et al.** "The IEEE Reliability Test System: A Proposed 2019 Update." IEEE Transactions on Power Systems (2019). [DOI: 10.1109/TPWRS.2019.2925557](https://doi.org/10.1109/TPWRS.2019.2925557)
- [JoFlMa16] **C. Josz, S. Fliscounakis, J. Maeght, and P. Panciatici.** "AC Power Flow Data in MATPOWER and QCQP Format: iTesla, RTE Snapshots, and PEGASE". [ArXiv (2016)](https://arxiv.org/abs/1603.01533).
- [FlPaCa13] **S. Fliscounakis, P. Panciatici, F. Capitanescu, and L. Wehenkel.** "Contingency ranking with respect to overloads in very large power systems taking into account uncertainty, preventive and corrective actions", IEEE Transactions on Power Systems, 28(4):4909-4917, 2013. [DOI: 10.1109/TPWRS.2013.2251015](https://doi.org/10.1109/TPWRS.2013.2251015)
- [MTPWR] **R. D. Zimmerman, C. E. Murillo-Sánchez and R. J. Thomas.** "MATPOWER: Steady-state operations, planning, and analysis tools for power systems research and education", IEEE Transactions on Power Systems, vol. 26, no. 1, pp. 12-19, Feb. 2011. [DOI: 10.1109/TPWRS.2010.2051168](https://doi.org/10.1109/TPWRS.2010.2051168)
- [PSTCA] **University of Washington, Dept. of Electrical Engineering.** "Power Systems Test Case Archive". Available at: <http://www.ee.washington.edu/research/pstca/> (Accessed: Nov 14, 2020)
- [ORLIB] **J. E. Beasley.** "OR-Library: distributing test problems by electronic mail", Journal of the Operational Research Society 41(11) (1990). [DOI: 10.2307/2582903](https://doi.org/10.2307/2582903)
- [FrGe06] **A. Frangioni, C. Gentile.** "Solving nonlinear single-unit commitment problems with ramping constraints", Operations Research 54(4), pp. 767-775, 2006. [DOI: 10.1287/opre.1060.0309](https://doi.org/10.1287/opre.1060.0309)
- [TeLuSa19] **D. A. Tejada-Arango, S. Lumbreras, P. Sanchez-Martin and A. Ramos.** "Which Unit-Commitment Formulation is Best? A Systematic Comparison," in IEEE Transactions on Power Systems. [DOI: 10.1109/TPWRS.2019.2962024](https://ieeexplore.ieee.org/document/8941313/).

docs/src/guides/model.md (new file)
JuMP Model
==========
On this page, we describe the JuMP optimization model produced by the function `build_model`. A detailed understanding of this model is not necessary if you are only interested in using the package to solve standard unit commitment cases, but it may be useful, for example, if you need to solve a slightly different problem with additional variables and constraints. The notation on this page generally follows [KnOsWa20].
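As a small illustration (a sketch only; the generator name `"g1"` and the specific restriction below are hypothetical), one could add extra constraints on top of the variables documented in the tables below:
```julia
using HiGHS
using JuMP
using UnitCommitment

instance = UnitCommitment.read_benchmark("matpower/case14/2017-01-01")
model = UnitCommitment.build_model(instance = instance, optimizer = HiGHS.Optimizer)

# Hypothetical extra requirement: keep thermal unit "g1" offline during the first four hours
for t in 1:4
    @constraint(model, model[:is_on]["g1", t] == 0)
end
```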
Decision variables
------------------
UC.jl models the security-constrained unit commitment problem as a two-stage stochastic program. In this approach, some of the decision variables are *first-stage decisions*, which are taken before the uncertainty is realized and must therefore be the same across all scenarios, while the remaining variables are *second-stage decisions*, which can attain a different value in each scenario. In the current version of the package, all binary variables (which model commitment decisions of thermal units) are first-stage decisions, and all continuous variables are second-stage decisions.
!!! note
UC.jl treats deterministic SCUC instances as a special case of the stochastic problem in which there is only one scenario, named `"s1"` by default. To access second-stage decisions, therefore, you must provide this scenario name as the value for `sn`. For example, `model[:prod_above]["s1", g, t]`.
### Generators
In this section, we describe the decision variables associated with the generators, which include both thermal units (e.g., natural gas-fired power plant) and profiled units (e.g., wind turbine).
#### Thermal Units
Name | Description | Unit | Stage
:-----|:-------------|:------: | :------:
`is_on[g,t]` | True if generator `g` is on at time `t`. | Binary | 1
`switch_on[g,t]` | True if generator `g` switches on at time `t`. | Binary| 1
`switch_off[g,t]` | True if generator `g` switches off at time `t`. | Binary| 1
`startup[g,t,s]` | True if generator `g` switches on at time `t` incurring start-up costs from start-up category `s`. | Binary| 1
`prod_above[sn,g,t]` | Amount of power produced by generator `g` above its minimum power output at time `t` in scenario `sn`. For example, if the minimum power of generator `g` is 100 MW and `g` is producing 115 MW of power at time `t` in scenario `sn`, then `prod_above[sn,g,t]` equals `15.0`. | MW | 2
`segprod[sn,g,t,k]` | Amount of power from piecewise linear segment `k` produced by generator `g` at time `t` in scenario `sn`. For example, if cost curve for generator `g` is defined by the points `(100, 1400)`, `(110, 1600)`, `(130, 2200)` and `(135, 2400)`, and if the generator is producing 115 MW of power at time `t` in scenario `sn`, then `segprod[sn,g,t,:]` equals `[10.0, 5.0, 0.0]`.| MW | 2
`reserve[sn,r,g,t]` | Amount of reserve `r` provided by unit `g` at time `t` in scenario `sn`. | MW | 2
!!! warning
The first-stage decision variables of the JuMP model are `is_on[g,t]`, `switch_on[g,t]`, `switch_off[g,t]`, and `startup[g,t,s]`. As such, the dictionaries corresponding to these variables do not include the scenario index in their keys. In contrast, all other variables of the created JuMP model are allowed to obtain a different value in each scenario and are thus modeled as second-stage decision variables. Accordingly, the dictionaries of all second-stage decision variables have the scenario index in their keys. This is true even if the model is created to solve the deterministic SCUC, in which case the default scenario index `s1` is included in the dictionary key.
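For example, continuing the sketch above (the unit name `"g1"` is again hypothetical), first- and second-stage values are queried with different key patterns after the model has been optimized:
```julia
using JuMP
using UnitCommitment

UnitCommitment.optimize!(model)

# First-stage variable: keyed by generator name and time only
value(model[:is_on]["g1", 1])

# Second-stage variable: the scenario name comes first, even for deterministic instances
value(model[:prod_above]["s1", "g1", 1])
```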
#### Profiled Units
Name | Description | Unit | Stage
:-----|:-------------|:------: | :------:
`prod_profiled[sn,g,t]` | Amount of power produced by profiled unit `g` at time `t` in scenario `sn`. | MW | 2
### Buses
Name | Description | Unit | Stage
:-----|:-------------|:------:| :------:
`net_injection[sn,b,t]` | Net injection at bus `b` at time `t` in scenario `sn`. | MW | 2
`curtail[sn,b,t]` | Amount of load curtailed at bus `b` at time `t` in scenario `sn`. | MW | 2
### Price-sensitive loads
Name | Description | Unit | Stage
:-----|:-------------|:------:| :------:
`loads[sn,s,t]` | Amount of power served to price-sensitive load `s` at time `t` in scenario `sn`. | MW | 2
### Transmission lines
Name | Description | Unit | Stage
:-----|:-------------|:------:| :------:
`flow[sn,l,t]` | Power flow on line `l` at time `t` in scenario `sn`. | MW | 2
`overflow[sn,l,t]` | Amount of flow above the limit for line `l` at time `t` in scenario `sn`. | MW | 2
!!! warning
Since transmission and N-1 security constraints are enforced lazily, most of the `flow[sn,l,t]` variables are never added to the model. Accessing `model[:flow][sn,l,t]` without first checking that the variable exists will likely generate an error.
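A safe access pattern, therefore, is to check for the key first. The sketch below assumes that the `flow` variables are stored in a dictionary keyed by `(scenario, line, time)` tuples and that a line named `"l1"` exists:
```julia
# Query the lazily-created flow variable only if it was actually added during optimize!
key = ("s1", "l1", 1)
if haskey(model[:flow], key)
    @show value(model[:flow][key])
end
```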
Objective function
------------------
TODO
Constraints
-----------
TODO

docs/src/guides/problem.md (new file)
# Problem definition
The **Security-Constrained Unit Commitment Problem** (SCUC) is formulated in
UC.jl as a two-stage stochastic mixed-integer linear optimization problem that
aims to find the minimum-cost schedule for electricity generation while
satisfying various physical, operational and economic constraints. In its most
basic form, the problem is composed of:
- A set of generators, which produce power, at a given cost;
- A set of loads, which consume power;
- A transmission network, which delivers power from generators to the loads.
In addition to the basic components above, SCUC also includes a wide variety of
additional components, such as _energy storage devices_, _reserves_ and _network
interfaces_, to name a few. On this page, we present a complete definition of
the problem, as modeled in UC.jl. Please note that different sources in the
literature may have significantly different definitions and assumptions.
!!! note
UC.jl treats deterministic SCUC instances as a special case of the stochastic problem in which there is only one scenario, named `"s1"` by default. To access second-stage decisions, therefore, you must provide this scenario name as the value for `s`. For example, `model[:prod_above]["s1", g, t]`.
!!! warning
The problem definition presented in this page is mathematically equivalent to the one solved by UC.jl. However, some constraints (ramping, piecewise-linear costs and start-up costs) have been simplified in this page for clarity. The set of constraints actually enforced by UC.jl better describes the convex hull of the problem and leads to better computational performance, but it is much more complex to describe. For further details, we refer to the package's source code and associated references.
## 1. General modeling assumptions
- **Time discretization:** SCUC is a multi-period problem, with decisions
typically covering a 24-hour or 36-hour time window. UC.jl assumes that this
time window is discretized into time steps of fixed length. The number of time
steps, as well as the duration of each time step, are configurable. In the
equations below, the set of time steps is denoted by $T=\{1,2,\ldots,|T|\}$.
- **Decision under uncertainty:** SCUC is a two-stage stochastic problem. In the
first stage, we must decide the _commitment status_ of all thermal generators.
In the second stage, we determine the remaining decision variables, such as the
power output of all generators, the operation of energy storage devices and load
shedding. Stochasticity is modeled through a discrete set of scenarios
$s \in S$, each with a given probability $p(s)$. The goal is to minimize the
expected total cost, as sketched below.
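Schematically, and omitting all constraints, the problem therefore has the following two-stage form, where $x$ collects the first-stage commitment decisions, $y_s$ collects the second-stage decisions in scenario $s$, and $c$ and $q_s$ are the corresponding cost vectors (a simplified sketch, not the exact objective used by the package):
```math
\min_{x, y} \;\; c^\top x + \sum_{s \in S} p(s) \, q_s^\top y_s
```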
## 2. Thermal generators
A _thermal generator_ is a power generation unit that converts thermal energy,
typically from the combustion of coal, natural gas or oil, into electrical
energy. Scheduling thermal generators is particularly complex due to their
operational characteristics, including minimum up and down times, ramping rates,
and start-up and shutdown limits.
### Important concepts
- **Commitment, power output and startup costs:** Thermal generators can either
be online (on) or offline (off). When a thermal generator is on, it can
produce between a minimum and a maximum amount of power; when it is off, it
cannot produce any power. Switching a generator on incurs a startup cost,
which depends on how long the unit has been offline. More precisely, each
thermal generator $g$ has a number $K^{start}_g$ of startup categories (e.g.,
cold, warm and hot). Each category $k$ has a corresponding startup cost
$Z^{\text{start}}_{gk}$, and is available only if the unit has spent at most
$M^{\text{delay}}_{gk}$ time steps offline.
- **Piecewise-linear production cost curve:** Besides startup costs, thermal
generators also incur production costs based on their power output. The
relationship between production cost and power output is not linear, but is
given by a convex curve, which is simplified using a piecewise-linear approximation. For
this purpose, each thermal generator $g$ has a number $K^{\text{cost}}_g$ of
piecewise-linear segments, and its power output $y^{\text{prod-above}}_{gts}$
is broken down into
$\sum_{k=1}^{K^{\text{cost}}_g} y^{\text{seg-prod}}_{gtks}$, so that
production costs can be more easily calculated.
- **Ramping, minimum up/down:** Due to physical and operational limits, such as
thermal inertia and mechanical stress, thermal generators cannot vary their
power output too dramatically from one time period to the next. Similarly,
thermal generators cannot switch on and off too frequently; after switching on
or off, units must remain at that state for a minimum specified number of time
steps.
- **Startup and shutdown limit:** A thermal generator cannot shut off if its
output power level in the immediately preceding time step is very high (above
a specified value); the unit must first ramp down, over potentially multiple
time steps, and only then shut off. Similarly, the unit cannot produce a very
large amount of power (above a specified limit) immediately after starting up;
it must ramp up over potentially multiple time steps.
- **Initial status:** The optimization process finds optimal commitment status
and power output level for all thermal generators starting at time period 1.
Many constraints, however, require knowledge of previous time periods (0, -1,
-2, ...) which are not part of the optimization model. For this reason, part
of the input data is the initial power output $M^{\text{init-power}}_{g}$ of
unit $g$ (that is, the output at time 0) and the initial status
$M^{\text{init-status}}_{g}$ of unit $g$ (how many time steps it has been
online or offline as of time 0). If $M^{\text{init-status}}_{g}$ is positive,
its magnitude indicates how many time periods the unit has been online; if it is
negative, how many time periods it has been offline. For example,
$M^{\text{init-status}}_{g} = -3$ indicates that the unit has been offline for
the three time steps immediately preceding the optimization window.
- **Must-run:** Due to various factors, including reliability considerations,
some units must remain operational regardless of whether it is economical for
them to do so. Must-run constraints are used to enforce such requirements.
### Sets and constants
| Symbol | Unit | Description |
| :------------------------------ | :----- | :----------------------------------------------------------------------------------------- |
| $K^{cost}_g$ | | Number of piecewise linear segments in the production cost curve. |
| $K^{start}_g$ | | Number of startup categories (e.g. cold, warm, hot). |
| $M^{\text{delay}}_{gk}$ | | Delay for startup category $k$. |
| $M^{\text{init-power}}_{g}$ | MW | Initial power output of unit $g$. |
| $M^{\text{init-status}}_{g}$ | | Initial status of unit $g$. |
| $M^{\text{min-down}}_{g}$ | | Minimum amount of time $g$ must stay off after switching off. |
| $M^{\text{min-up}}_{g}$ | | Minimum amount of time $g$ must stay on after switching on. |
| $M^{\text{must-run}}_{gt}$ | Binary | One if unit $g$ must be on at time $t$. |
| $M^{\text{pmax}}_{gt}$ | MW | Maximum power output at time $t$. |
| $M^{\text{pmin}}_{gt}$ | MW | Minimum power output at time $t$. |
| $M^{\text{ramp-down}}_{g}$ | MW | Ramp down limit. |
| $M^{\text{ramp-up}}_{g}$ | MW | Ramp up limit. |
| $M^{\text{seg-pmax}}_{gtks}$ | MW | Maximum power output for piecewise-linear segment $k$ at time $t$ and scenario $s$. |
| $M^{\text{shutdown-limit}}_{g}$ | MW | Maximum power unit $g$ may produce immediately before shutting down. |
| $M^{\text{startup-limit}}_{g}$ | MW | Maximum power unit $g$ may produce immediately after starting up. |
| $R_g$ | | Set of spinning reserves that may be served by $g$. |
| $Z^{\text{pmin}}_{gt}$ | \$ | Cost to keep $g$ operational at time $t$ generating at minimum power. |
| $Z^{\text{pvar}}_{gtks}$ | \$/MW | Cost for unit $g$ to produce 1 MW of power under piecewise-linear segment $k$ at time $t$. |
| $Z^{\text{start}}_{gk}$ | \$ | Cost to start unit $g$ at startup category $k$. |
| $G^\text{therm}$ | | Set of thermal generators. |
### Decision variables
| Symbol | JuMP name | Description | Unit | Stage |
| :---------------------------- | :------------------ | :-------------------------------------------------------------------------------------------- | :----- | :---- |
| $x^{\text{is-on}}_{gt}$ | `is_on[g,t]` | One if generator $g$ is on at time $t$. | Binary | 1 |
| $x^{\text{switch-on}}_{gt}$ | `switch_on[g,t]` | One if generator $g$ switches on at time $t$. | Binary | 1 |
| $x^{\text{switch-off}}_{gt}$ | `switch_off[g,t]` | One if generator $g$ switches off at time $t$. | Binary | 1 |
| $x^{\text{start}}_{gtk}$ | `startup[g,t,s]` | One if generator $g$ starts up at time $t$ under startup category $k$. | Binary | 1 |
| $y^{\text{prod-above}}_{gts}$ | `prod_above[s,g,t]` | Amount of power produced by $g$ at time $t$ in scenario $s$ above the minimum power. | MW | 2 |
| $y^{\text{seg-prod}}_{gtks}$ | `segprod[s,g,t,k]` | Amount of power produced by $g$ at time $t$ in piecewise-linear segment $k$ and scenario $s$. | MW | 2 |
| $y^{\text{res}}_{grts}$ | `reserve[s,r,g,t]` | Amount of spinning reserve $r$ supplied by $g$ at time $t$ in scenario $s$. | MW | 2 |
### Objective function terms
- Production costs:
```math
\sum_{g \in G^\text{therm}} \sum_{t \in T} x^{\text{is-on}}_{gt} Z^{\text{pmin}}_{gt}
+ \sum_{s \in S} p(s) \left[
\sum_{g \in G^\text{therm}} \sum_{t \in T} \sum_{k=1}^{K^{cost}_g}
y^{\text{seg-prod}}_{gtks} Z^{\text{pvar}}_{gtks}
\right]
```
- Start-up costs:
```math
\sum_{g \in G^\text{therm}} \sum_{t \in T} \sum_{k=1}^{K^{start}_g} x^{\text{start}}_{gtk} Z^{\text{start}}_{gk}
```
### Constraints
- Some units must remain on, even if it is not economical for them to do so:
```math
x^{\text{is-on}}_{gt} \geq M^{\text{must-run}}_{gt}
```
- After switching on, unit must remain on for some amount of time
(`eq_min_uptime[g,t]`):
```math
\sum_{i=max(1,t-M^{\text{min-up}}_{g}+1)}^t x^{\text{switch-on}}_{gi} \leq x^{\text{is-on}}_{gt}
```
- Same as above, but covering the initial time steps (`eq_min_uptime[g,0]`):
```math
\sum_{i=1}^{min(|T|,M^{\text{min-up}}_{g}-M^{\text{init-status}}_{g})} x^{\text{switch-off}}_{gi} = 0 \; \text{ if } \; M^{\text{init-status}}_{g} > 0
```
- After switching off, unit must remain offline for some amount of time
(`eq_min_downtime[g,t]`):
```math
\sum_{i=max(1,t-M^{\text{min-down}}_{g}+1)}^t x^{\text{switch-off}}_{gi} \leq 1 - x^{\text{is-on}}_{gt}
```
- Same as above, but covering the initial time steps (`eq_min_downtime[g,0]`):
```math
\sum_{i=1}^{min(|T|,M^{\text{min-down}}_{g}+M^{\text{init-status}}_{g})} x^{\text{switch-on}}_{gi} = 0 \; \text{ if } \; M^{\text{init-status}}_{g} < 0
```
- If the unit switches on, it must choose exactly one startup category
(`eq_startup_choose[g,t]`):
```math
x^{\text{switch-on}}_{gt} = \sum_{k=1}^{K^{start}_g} x^{\text{start}}_{gtk}
```
- If the unit has not switched off within the last "delay" time periods, then the startup
category is forbidden (`eq_startup_restrict[g,t,s]`). The last startup
category is always allowed. In the equation below, $L^{\text{start}}_{gtk}=1$
if the category should be allowed based on the initial status.
```math
x^{\text{start}}_{gtk} \leq L^{\text{start}}_{gtk} + \sum_{i=max\left(1,t - M^{\text{delay}}_{g,k+1} + 1\right)}^{t - M^{\text{delay}}_{gk}} x^{\text{switch-off}}_{gi}
```
- Link the binary variables together (`eq_binary_link[g,t]`):
```math
\begin{align*}
& x^{\text{is-on}}_{gt} - x^{\text{is-on}}_{g,t-1} = x^{\text{switch-on}}_{gt} - x^{\text{switch-off}}_{gt} & \forall t > 1 \\
\end{align*}
```
- Cannot switch on and off at the same time (`eq_switch_on_off[g,t]`):
```math
x^{\text{switch-on}}_{gt} + x^{\text{switch-off}}_{gt} \leq 1
```
- If the unit is off, it cannot produce power or provide reserves. If it is on,
it must do so within the specified production limits (`eq_prod_limit[s,g,t]`):
```math
y^{\text{prod-above}}_{gts} + \sum_{r \in R_g} y^{\text{res}}_{grts} \leq
(M^{\text{pmax}}_{gt} - M^{\text{pmin}}_{gt}) x^{\text{is-on}}_{gt}
```
- Break down the "production above" variable into smaller "segment production"
variables, to simplify the objective function (`eq_prod_above_def[s,g,t]`):
```math
y^{\text{prod-above}}_{gts} = \sum_{k=1}^{K^{cost}_g} y^{\text{seg-prod}}_{gtks}
```
- Impose upper limit on segment production variables
(`eq_segprod_limit[s,g,t,k]`):
```math
0 \leq y^{\text{seg-prod}}_{gtks} \leq M^{\text{seg-pmax}}_{gtks}
```
- Unit cannot increase its production too quickly (`eq_ramp_up[s,g,t]`):
```math
y^{\text{prod-above}}_{gts} + \sum_{r \in R_g} y^{\text{res}}_{grts} \leq
y^{\text{prod-above}}_{g,t-1,s} + M^{\text{ramp-up}}_{g}
```
- Same as above, for initial time (`eq_ramp_up[s,g,1]`):
```math
y^{\text{prod-above}}_{g,1,s} + \sum_{r \in R_g} y^{\text{res}}_{gr,1,s} \leq
\left(M^{\text{init-power}}_{g} - M^{\text{pmin}}_{gt}\right) + M^{\text{ramp-up}}_{g}
```
- Unit cannot decrease its production too quickly (`eq_ramp_down[s,g,t]`):
```math
y^{\text{prod-above}}_{gts} \geq
y^{\text{prod-above}}_{g,t-1,s} - M^{\text{ramp-down}}_{g}
```
- Same as above, for initial time (`eq_ramp_down[s,g,1]`):
```math
y^{\text{prod-above}}_{g,1,s} \geq
\left(M^{\text{init-power}}_{g} - M^{\text{pmin}}_{gt}\right) - M^{\text{ramp-down}}_{g}
```
- Unit cannot produce excessive amount of power immediately after starting up
(`eq_startup_limit[s,g,t]`):
```math
y^{\text{prod-above}}_{gts} + \sum_{r \in R_g} y^{\text{res}}_{grts} \leq
(M^{\text{pmax}}_{gt} - M^{\text{pmin}}_{gt}) x^{\text{is-on}}_{gt} -
max\left\{0,M^{\text{pmax}}_{gt} - M^{\text{startup-limit}}_{g}\right\}
x^{\text{switch-on}}_{gt}
```
- Unit cannot shut off if it is producing too much power
(`eq_shutdown_limit[s,g,t]`):
```math
y^{\text{prod-above}}_{gts} \leq
(M^{\text{pmax}}_{gt} - M^{\text{pmin}}_{gt}) x^{\text{is-on}}_{gt} -
max\left\{0,M^{\text{pmax}}_{gt} - M^{\text{shutdown-limit}}_{g}\right\}
x^{\text{switch-off}}_{g,t+1}
```
## 3. Profiled generators
A _profiled generator_ is a simplified generator model that can be used to
represent renewable energy resources, including wind, solar and hydro. Unlike
thermal generators, which can be either on or off, profiled generators do not
have status variables; the only optimization decision is on their power output
level, which must remain between minimum and maximum time-varying amounts.
Production cost curves for profiled generators are linear, which also makes them
much simpler to model than thermal units.
### Constants
| Symbol | Unit | Description |
| :---------------------- | :---- | :------------------------------------------------- |
| $M^{\text{pmax}}_{sgt}$ | MW | Maximum power output at time $t$ and scenario $s$. |
| $M^{\text{pmin}}_{sgt}$ | MW | Minimum power output at time $t$ and scenario $s$. |
| $Z^{\text{pvar}}_{sgt}$ | \$/MW | Generation cost at time $t$ and scenario $s$. |
| $G^\text{prof}$ | | Set of profiled generators. |
### Decision variables
| Symbol | JuMP name | Unit | Description | Stage |
| :-------------------- | :--------------------- | :--- | :------------------------------------------------------------ | :---- |
| $y^\text{prod}_{sgt}$ | `prod_profiled[s,g,t]` | MW | Amount of power produced by $g$ in time $t$ and scenario $s$. | 2 |
### Objective function terms
- Production cost:
```math
\sum_{s \in S} p(s) \left[
\sum_{g \in G^\text{prof}} \sum_{t \in T} y^\text{prod}_{sgt} Z^{\text{pvar}}_{sgt}
\right]
```
### Constraints
- Variable bounds:
```math
M^{\text{pmin}}_{sgt} \leq y^\text{prod}_{sgt} \leq M^{\text{pmax}}_{sgt}
```
## 4. Conventional loads
Loads represent the demand for electrical power by consumers and devices
connected to the system. This section describes _conventional_ (or inelastic)
loads, which are not sensitive to changes in electricity prices, and must always
be served. Each bus in the transmission network has exactly one load; multiple
loads in the same bus can be modelled by aggregating them. If there is not
enough production or transmission capacity to serve all loads, some load can be
curtailed, at a penalty.
### Constants
| Symbol | Unit | Description |
| :---------------------- | :---- | :--------------------------------------------------------- |
| $M^\text{load}_{sbt}$ | MW | Conventional load on bus $b$ at time $t$ and scenario $s$. |
| $Z^\text{curtail}_{st}$ | \$/MW | Load curtailment penalty at time $t$ in scenario $s$. |
### Decision variables
| Symbol | JuMP name | Unit | Description | Stage |
| :----------------------- | :--------------- | :--- | :--------------------------------------------------------------- | :---- |
| $y^\text{curtail}_{sbt}$ | `curtail[s,b,t]` | MW | Amount of load curtailed at bus $b$ in time $t$ and scenario $s$ | 2 |
### Objective function terms
- Load curtailment penalty:
```math
\sum_{s \in S} p(s) \left[
\sum_{b \in B} \sum_{t \in T} y^\text{curtail}_{sbt} Z^\text{curtail}_{st}
\right]
```
### Constraints
- Variable bounds:
```math
0 \leq y^\text{curtail}_{sbt} \leq M^\text{load}_{sbt}
```
## 5. Price-sensitive loads
_Price-sensitive loads_ refer to components in the system which may increase or
reduce their power consumption according to energy prices. Unlike conventional
loads, described above, price-sensitive loads are only served if it is
economical to do so. More specifically, there are no constraints forcing these
loads to be served; instead, there is a term in the objective function rewarding
each MW served. Unlike conventional loads, there may be multiple price-sensitive
loads per bus.
!!! note
Some unit commitment models allow price-sensitive loads to have a piecewise-linear convex revenue curve, similar to thermal generators. This can be achieved in UC.jl by adding multiple price-sensitive loads to the bus, one for each piecewise-linear segment.
### Sets and constants
| Symbol | Unit | Description |
| :--------------------------- | :---- | :--------------------------------------------------------------- |
| $M^\text{psl-demand}_{spt}$ | MW | Demand of price-sensitive load $p$ at time $t$ and scenario $s$. |
| $Z^\text{psl-revenue}_{spt}$ | \$/MW | Revenue from serving load $p$ at $t$ in scenario $s$. |
| $\text{PSL}$ | | Set of price-sensitive loads. |
### Decision variables
| Symbol | JuMP name | Unit | Description | Stage |
| :------------------- | :------------- | :--- | :------------------------------------------------ | :---- |
| $y^\text{psl}_{spt}$ | `loads[s,p,t]` | MW | Amount served to $p$ in time $t$ and scenario $s$ | 2 |
### Objective function terms
- Revenue from serving price-sensitive loads:
```math
- \sum_{s \in S} p(s) \left[
\sum_{p \in \text{PSL}} \sum_{t \in T} y^\text{psl}_{spt} Z^\text{psl-revenue}_{spt}
\right]
```
### Constraints
- Variable bounds:
```math
0 \leq y^\text{psl}_{spt} \leq M^\text{psl-demand}_{spt}
```
## 6. Energy storage
_Energy storage_ units are able to store energy during periods of low demand,
then release energy back to the grid during periods of high demand. These
devices include _batteries_, _pumped hydroelectric storage_, _compressed air
energy storage_ and _flywheels_. They are becoming increasingly important in the
modern power grid, and can help to enhance grid reliability, efficiency and
integration of renewable energy resources.
### Concepts
- **Min/max energy level and charge rate:** Energy storage units can only store
a limited amount of energy (in MWh). To maintain the operational safety and
longevity of these devices, a minimum energy level may also be imposed. The
rate (in MW) at which these units can charge and discharge is also limited,
due to chemical, physical and operational considerations.
- **Operational costs:** Charging and discharging energy storage units may incur
a cost or revenue. We assume that this cost/revenue is linear in the
charge/discharge rate (\$/MW).
- **Efficiency:** Charging an energy storage unit for one hour with an input of
1 MW might not result in an increase of the energy level in the device by
exactly 1 MWh, due to various inefficiencies in the charging process,
including conversion losses and heat generation. For similar reasons,
discharging a storage unit for one hour at 1 MW might reduce the energy level
by more than 1 MWh. Furthermore, even when the unit is not charging or
discharging, some energy level may be gradually lost over time, due to
unwanted chemical reactions, thermal effects or mechanical losses.
- **Myopic effect:** Because the optimization process considers a fixed time
window, there is an inherent bias towards exploiting energy storage units to
their maximum within the window, completely ignoring their operation just
beyond this horizon. For instance, without further constraints, the
optimization algorithm will often ensure that all storage units are fully
discharged at the end of the last time step, which may not be desirable. To
mitigate this myopic effect, a minimum and maximum energy level may be imposed
at the last time step.
- **Simultaneous charging and discharging:** Depending on charge and discharge
costs/revenue, it may make sense mathematically to simultaneously charge and
discharge the storage unit, thus keeping its energy level unchanged while
potentially collecting revenue. Additional binary variables and constraints
are required to prevent this incorrect model behavior.
### Sets and constants
| Symbol | Unit | Description |
| :------------------------------------ | :---- | :---------------------------------------------------------------------------------------------------- |
| $\text{SU}$ | | Set of storage units |
| $Z^\text{charge}_{sut}$ | \$/MW | Linear charge cost/revenue for unit $u$ at time $t$ in scenario $s$. |
| $Z^\text{discharge}_{sut}$ | \$/MW | Linear discharge cost/revenue for unit $u$ at time $t$ in scenario $s$. |
| $M^\text{discharge-max}_{sut}$ | MW | Maximum discharge rate for unit $u$ at time $t$ in scenario $s$. |
| $M^\text{discharge-min}_{sut}$ | MW | Minimum discharge rate for unit $u$ at time $t$ in scenario $s$. |
| $M^\text{charge-max}_{sut}$ | MW | Maximum charge rate for unit $u$ at time $t$ in scenario $s$. |
| $M^\text{charge-min}_{sut}$ | MW | Minimum charge rate for unit $u$ at time $t$ in scenario $s$. |
| $M^\text{max-end-level}_{su}$ | MWh | Maximum storage level of unit $u$ at the last time step in scenario $s$. |
| $M^\text{min-end-level}_{su}$ | MWh | Minimum storage level of unit $u$ at the last time step in scenario $s$. |
| $\gamma^\text{loss}_{s,u,t}$ | | Self-discharge factor. |
| $\gamma^\text{charge-eff}_{s,u,t}$ | | Charging efficiency factor. |
| $\gamma^\text{discharge-eff}_{s,u,t}$ | | Discharging efficiency factor. |
| $\gamma^\text{time-step}$ | | Length of a time step, in hours (1.0 for hourly time steps, 0.5 for half-hour time steps, etc.). |
### Decision variables
| Symbol | JuMP name | Unit | Description | Stage |
| :------------------------------ | :---------------------- | :----- | :----------------------------------------------------------- | :---- |
| $y^\text{level}_{sut}$ | `storage_level[s,u,t]` | MWh | Storage level of unit $u$ at time $t$ in scenario $s$. | 2 |
| $y^\text{charge}_{sut}$ | `charge_rate[s,u,t]` | MW | Charge rate of unit $u$ at time $t$ in scenario $s$. | 2 |
| $y^\text{discharge}_{sut}$ | `discharge_rate[s,u,t]` | MW | Discharge rate of unit $u$ at time $t$ in scenario $s$. | 2 |
| $x^\text{is-charging}_{sut}$ | `is_charging[s,u,t]` | Binary | True if unit $u$ is charging at time $t$ in scenario $s$. | 2 |
| $x^\text{is-discharging}_{sut}$ | `is_discharging[s,u,t]` | Binary | True if unit $u$ is discharging at time $t$ in scenario $s$. | 2 |
### Objective function terms
- Charge and discharge cost/revenue:
```math
\sum_{s \in S} p(s) \left[
\sum_{u \in \text{SU}} \sum_{t \in T} \left(
y^\text{charge}_{sut} Z^\text{charge}_{sut} +
y^\text{discharge}_{sut} Z^\text{discharge}_{sut}
\right)
\right]
```
### Constraints
- Prevent simultaneous charge and discharge
(`eq_simultaneous_charge_and_discharge[s,u,t]`):
```math
x^\text{is-charging}_{sut} + x^\text{is-discharging}_{sut} \leq 1
```
- Limit charge/discharge rate (`eq_min_charge_rate[s,u,t]`,
`eq_max_charge_rate[s,u,t]`, `eq_min_discharge_rate[s,u,t]` and
`eq_max_discharge_rate[s,u,t]`):
```math
\begin{align*}
y^\text{charge}_{sut} \leq x^\text{is-charging}_{sut} M^\text{charge-max}_{sut} \\
y^\text{charge}_{sut} \geq x^\text{is-charging}_{sut} M^\text{charge-min}_{sut} \\
y^\text{discharge}_{sut} \leq x^\text{is-discharging}_{sut} M^\text{discharge-max}_{sut} \\
y^\text{discharge}_{sut} \geq x^\text{is-discharging}_{sut} M^\text{discharge-min}_{sut} \\
\end{align*}
```
- Calculate current storage level (`eq_storage_transition[s,u,t]`):
```math
y^\text{level}_{sut} =
(1 - \gamma^\text{loss}_{s,u,t}) y^\text{level}_{su,t-1} +
\gamma^\text{time-step} \gamma^\text{charge-eff}_{s,u,t} y^\text{charge}_{sut} -
\frac{\gamma^\text{time-step}}{\gamma^\text{discharge-eff}_{s,u,t}} y^\text{discharge}_{sut}
```
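As a numerical illustration (with made-up values), suppose hourly time steps ($\gamma^\text{time-step} = 1$), no self-discharge, a charging efficiency of $0.9$, a previous level of $10$ MWh, and a charge rate of $5$ MW with no discharging in the current time step; the new level is then
```math
y^\text{level}_{sut} = (1 - 0) \cdot 10 + 1 \cdot 0.9 \cdot 5 - \frac{1}{0.9} \cdot 0 = 14.5 \text{ MWh.}
```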
- Enforce storage level at last time step (`eq_ending_level[s,u]`):
```math
M^\text{min-end-level}_{su} \leq y^\text{level}_{su,|T|} \leq M^\text{max-end-level}_{su}
```
## 7. Buses and transmission lines
So far, we have described generators, which produce power, loads, which consume
power, and storage units, which store energy for later use. Another important
element is the transmission network, which delivers the power produced by the
generators to the loads and storage units. Mathematically, the network is
represented as a graph $(B,L)$ where $B$ is the set of **buses** and $L$ is the
set of **transmission lines**. Each generator, load and storage unit is located
at a bus. The **net injection** at the bus is the sum of all power injected
minus withdrawn at the bus. To balance production and consumption, we must
enforce that the sum of all net injections over the entire network equals
zero.
Besides the net balance equations, we must also enforce flow limits on the
transmission lines. Unlike flows in other optimization problems, power flows are
directly determined by net injections and transmission line parameters, and must
follow physical laws. UC.jl uses the DC linearization of AC power flow
equations. Under this linearization, the flow $f_l$ in transmission line $l$ is
given by $\sum_{b \in B} \delta_{bl} n_b$, where $\delta_{bl}$ is a constant
known as _injection shift factor_ (also commonly called _power transfer
distribution factor_), computed from the line parameters, and $n_b$ is the net
injection at bus $b$.
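As a toy illustration of this linear relationship (all numbers below are made up), the flow on each line is simply a weighted sum of the bus net injections:
```julia
# Made-up injection shift factors δ[l, b] for two lines and three buses;
# the third bus acts as the reference, so its column is zero.
isf = [ 0.5  -0.25  0.0;
       -0.5  -0.75  0.0 ]
inj = [100.0, 50.0, -150.0]   # net injections (MW); they sum to zero
flows = isf * inj             # => [37.5, -87.5] MW
```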
!!! warning
To improve computational performance, power flow variables and constraints are generated on-the-fly, during `UnitCommitment.optimize!`; they are **not** added by `UnitCommitment.build_model`.
### Sets and constants
| Symbol | Unit | Description |
| :------------------------ | :---- | :---------------------------------------------------------- |
| $M^\text{limit}_{slt}$ | MW | Flow limit for line $l$ at time $t$ and scenario $s$. |
| $Z^\text{overflow}_{slt}$ | \$/MW | Overflow penalty for line $l$ at time $t$ and scenario $s$. |
| $L$ | | Set of transmission lines. |
| $B$ | | Set of buses. |
### Decision variables
| Symbol | JuMP name | Unit | Description | Stage |
| :------------------------ | :--------------------- | :--- | :-------------------------------------------------------------------- | :---- |
| $y^\text{flow}_{slt}$ | _(added on-the-fly)_ | MW | Flow in line $l$ at time $t$ and scenario $s$. | 2 |
| $y^\text{inj}_{sbt}$ | `net_injection[s,b,t]` | MW | Total net injection at bus $b$, time $t$ and scenario $s$. | 2 |
| $y^\text{overflow}_{slt}$ | `overflow[s,l,t]` | MW | Amount of flow above limit for line $l$ at time $t$ and scenario $s$. | 2 |
### Objective function terms
- Penalty for exceeding line limits:
```math
\sum_{s \in S} p(s) \left[
\sum_{l \in L} \sum_{t \in T} y^\text{overflow}_{slt} Z^\text{overflow}_{slt}
\right]
```
### Constraints
- Power produced equal power consumed (`eq_power_balance[s,t]`):
```math
\sum_{b \in B} y^\text{inj}_{sbt} = 0
```
- Definition of flow (_enforced on-the-fly_):
```math
y^\text{flow}_{slt} = \sum_{b \in B} \delta_{sbl} y^\text{inj}_{sbt}
```
- Flow limits (_enforced on-the-fly_):
```math
\begin{align*}
y^\text{flow}_{slt} & \leq M^\text{limit}_{slt} + y^\text{overflow}_{slt} \\
-y^\text{flow}_{slt} & \leq M^\text{limit}_{slt} + y^\text{overflow}_{slt}
\end{align*}
```

# UnitCommitment.jl
**UnitCommitment.jl** (UC.jl) is an optimization package for the Security-Constrained Unit Commitment Problem (SCUC), a fundamental optimization problem in power systems used, for example, to clear the electricity markets. Both deterministic and two-stage stochastic versions of the problem are supported. The package provides benchmark instances for the problem, a flexible and well-documented data format, as well as Julia/JuMP implementations of state-of-the-art mixed-integer programming formulations and solution methods.
## Package Components
- **Data Format:** The package proposes an extensible and fully-documented JSON-based data specification format for SCUC, developed in collaboration with Independent System Operators (ISOs), which describes the most important aspects of the problem. The format supports all the most common thermal generator characteristics (including ramping, piecewise-linear production cost curves and time-dependent startup costs), as well as profiled generators, reserves, price-sensitive loads, battery storage, transmission, and contingencies.
- **Benchmark Instances:** The package provides a diverse collection of large-scale benchmark instances collected from the literature, converted into a common data format, and extended using data-driven methods to make them more challenging and realistic.
- **Model Implementation:** The package provides Julia/JuMP implementations of state-of-the-art formulations and solution methods for the deterministic and stochastic SCUC, including multiple ramping formulations ([ArrCon2000](https://doi.org/10.1109/59.871739), [MorLatRam2013](https://doi.org/10.1109/TPWRS.2013.2251373), [DamKucRajAta2016](https://doi.org/10.1007/s10107-015-0919-9), [PanGua2016](https://doi.org/10.1287/opre.2016.1520)), piecewise-linear cost formulations ([Gar1962](https://doi.org/10.1109/AIEEPAS.1962.4501405), [CarArr2006](https://doi.org/10.1109/TPWRS.2006.876672), [KnuOstWat2018](https://doi.org/10.1109/TPWRS.2017.2783850)), contingency screening methods ([XavQiuWanThi2019](https://doi.org/10.1109/TPWRS.2019.2892620)) and decomposition methods. Our goal is to keep these implementations up-to-date as new methods are proposed in the literature.
- **Benchmark Tools:** The package provides automated benchmark scripts to accurately evaluate the performance impact of proposed code changes.
## Authors
- **Alinson S. Xavier** (Argonne National Laboratory)
- **Aleksandr M. Kazachkov** (University of Florida)
- **Ogün Yurdakul** (Technische Universität Berlin)
- **Jun He** (Purdue University)
- **Feng Qiu** (Argonne National Laboratory)
## Acknowledgments
- We would like to thank **Yonghong Chen** (Midcontinent Independent System Operator) and **Feng Pan** (Pacific Northwest National Laboratory) for valuable feedback on early versions of this package.
- Based upon work supported by **Laboratory Directed Research and Development** (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-06CH11357.
- Based upon work supported by the **U.S. Department of Energy Advanced Grid Modeling Program** under Grant DE-OE0000875.
## Citing
If you use UnitCommitment.jl in your research (instances, models or algorithms), we kindly request that you cite the package as follows:
- **Alinson S. Xavier, Aleksandr M. Kazachkov, Ogün Yurdakul, Jun He, Feng Qiu**, "UnitCommitment.jl: A Julia/JuMP Optimization Package for Security-Constrained Unit Commitment (Version 0.4)". Zenodo (2024). [DOI: 10.5281/zenodo.4269874](https://doi.org/10.5281/zenodo.4269874).
If you use the instances, we additionally request that you cite the original sources, as described in the [instances page](guides/instances.md).
## License
```text
UnitCommitment.jl: A Julia/JuMP Optimization Package for Security-Constrained Unit Commitment
Copyright © 2020-2024, UChicago Argonne, LLC. All Rights Reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted
provided that the following conditions are met:
...
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
```


@@ -0,0 +1,122 @@
# # Model customization
# In the previous tutorial, we used UnitCommitment.jl to solve benchmark and user-provided instances using a default mathematical formulation for the problem. In this tutorial, we will explore how to customize this formulation.
# !!! warning
# This tutorial is not required for using UnitCommitment.jl, unless you plan to make changes to the problem formulation. In this page, we assume familiarity with the JuMP modeling language. Please see [JuMP's official documentation](https://jump.dev/JuMP.jl/stable/) for resources on getting started with JuMP.
# ## Selecting modeling components
# By default, `UnitCommitment.build_model` uses a formulation that combines modeling components from different publications, and that has been carefully tested, using our own benchmark scripts, to provide good performance across a wide variety of instances. This default formulation is expected to change over time, as new methods are proposed in the literature. You can, however, construct your own formulation, based on the modeling components that you choose, as shown in the next example.
# We start by importing the necessary packages and reading a benchmark instance:
using HiGHS
using JuMP
using UnitCommitment
instance = UnitCommitment.read_benchmark("matpower/case14/2017-01-01");
# Next, instead of calling `UnitCommitment.build_model` with default arguments, we can provide a `UnitCommitment.Formulation` object, which describes what modeling components to use and how they should be configured. For a complete list of modeling components available in UnitCommitment.jl, see the [API docs](../api.md).
# In the example below, we switch to piecewise-linear cost modeling as defined in [KnuOstWat2018](https://doi.org/10.1109/TPWRS.2017.2783850), as well as ramping and startup costs formulation as defined in [MorLatRam2013](https://doi.org/10.1109/TPWRS.2013.2251373). In addition, we specify custom cutoffs for the shift factors formulation.
model = UnitCommitment.build_model(
instance = instance,
optimizer = HiGHS.Optimizer,
formulation = UnitCommitment.Formulation(
pwl_costs = UnitCommitment.KnuOstWat2018.PwlCosts(),
ramping = UnitCommitment.MorLatRam2013.Ramping(),
startup_costs = UnitCommitment.MorLatRam2013.StartupCosts(),
transmission = UnitCommitment.ShiftFactorsFormulation(
isf_cutoff = 0.008,
lodf_cutoff = 0.003,
),
),
);
# ## Accessing decision variables
# In the previous tutorial, we saw how to access the optimal solution through `UnitCommitment.solution`. While this approach works well for basic usage, it is also possible to get a direct reference to the JuMP decision variables and query their values, as the next example illustrates.
# First, we load a benchmark instance and solve it, as before.
instance = UnitCommitment.read_benchmark("matpower/case14/2017-01-01");
model =
UnitCommitment.build_model(instance = instance, optimizer = HiGHS.Optimizer);
UnitCommitment.optimize!(model)
# At this point, it is possible to obtain a reference to the decision variables by calling `model[:varname][index]`. For example, `model[:is_on]["g1",1]` returns a direct reference to the JuMP variable indicating whether the generator named "g1" is on at time 1. For a complete list of decision variables available, and how they are indexed, see the [problem definition](../guides/problem.md).
@show JuMP.value(model[:is_on]["g1", 1])
# To access second-stage decisions, it is necessary to specify the scenario name. UnitCommitment.jl models deterministic instances as a particular case in which there is a single scenario named "s1", so we need to use this key.
@show JuMP.value(model[:prod_above]["s1", "g1", 1])
# ## Modifying variables and constraints
# When testing variations of the unit commitment problem, it is often necessary to modify the objective function, variables and constraints of the formulation. UnitCommitment.jl makes this process relatively easy. The first step is to construct the standard model using `UnitCommitment.build_model`:
instance = UnitCommitment.read_benchmark("matpower/case14/2017-01-01");
model =
UnitCommitment.build_model(instance = instance, optimizer = HiGHS.Optimizer);
# Now, before calling `UnitCommitment.optimize!`, we can make any desired changes to the formulation. In the previous section, we saw how to obtain a direct reference to the decision variables. It is possible to modify them using standard JuMP methods. For example, to fix the commitment status of a particular generator, we can use `JuMP.fix`:
JuMP.fix(model[:is_on]["g1", 1], 1.0, force = true)
# To modify the cost coefficient of a particular variable, we can use `JuMP.set_objective_coefficient`:
JuMP.set_objective_coefficient(model, model[:switch_on]["g1", 1], 1000.0)
# It is also possible to make changes to the set of constraints. For example, we can add a custom constraint, using the `JuMP.@constraint` macro:
@constraint(model, model[:is_on]["g3", 1] + model[:is_on]["g4", 1] <= 1,);
# We can also remove an existing model constraint using `JuMP.delete`. See the [problem definition](../guides/problem.md) for a list of constraint names and indices.
JuMP.delete(model, model[:eq_min_uptime]["g1", 1])
# After we are done with all changes, we can call `UnitCommitment.optimize!` and extract the optimal solution:
UnitCommitment.optimize!(model)
@show UnitCommitment.solution(model)
# ## Modeling new grid components
# In this section we demonstrate how to add a new grid component to a particular bus in the network. This is useful, for example, when developing formulations for a new type of generator, energy storage, or any other grid device. We start by reading the instance data and building a standard model:
instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
model =
UnitCommitment.build_model(instance = instance, optimizer = HiGHS.Optimizer);
# Next, we create decision variables for the new grid component. In this example, we assume that the new component can inject up to 10 MW of power at each time step, so we create new continuous variables $0 \leq x_t \leq 10$.
T = instance.time
@variable(model, x[1:T], lower_bound = 0.0, upper_bound = 10.0);
# Next, we add the production costs to the objective function. In this example, we assume a generation cost of \$5/MW:
for t in 1:T
set_objective_coefficient(model, x[t], 5.0)
end
# We then attach the new component to bus `b1` by modifying the net injection constraint (`eq_net_injection`):
for t in 1:T
set_normalized_coefficient(
model[:eq_net_injection]["s1", "b1", t],
x[t],
1.0,
)
end
# Next, we solve the model:
UnitCommitment.optimize!(model)
# We then finally extract the optimal value of the $x$ variables:
@show value.(x)


@@ -0,0 +1,105 @@
# Decomposition methods
## 1. Time decomposition for production cost modeling
Solving unit commitment instances that have long time horizons (for example, year-long 8760-hour instances in production cost modeling) requires a substantial amount of computational power. To address this issue, UC.jl provides a time decomposition method within the `optimize!` function, which breaks the instance down into multiple overlapping subproblems, solves them sequentially, and then reassembles the solution.
The `optimize!` function takes five parameters: a unit commitment instance, a `TimeDecomposition` method, an optimizer, and two optional callback functions, `after_build` and `after_optimize`. It returns a solution dictionary. The `TimeDecomposition` method itself takes four arguments: `time_window`, `time_increment`, `inner_method` (optional), and `formulation` (optional). These define, respectively, the time window of each subproblem, the time increment used to advance to the next subproblem, the method used to solve each subproblem, and the formulation employed. The `after_build` and `after_optimize` functions are invoked after each submodel is built and optimized, respectively. The `after_build` function must accept two arguments, `model` and `instance`, while the `after_optimize` function must accept three arguments: `solution`, `model`, and `instance`.
The code snippet below illustrates an example of solving an instance by decomposing the model into multiple 36-hour sub-problems using the `XavQiuWanThi2019` method. Each sub-problem advances 24 hours at a time. The first sub-problem covers time steps 1 to 36, the second covers time steps 25 to 60, the third covers time steps 49 to 84, and so on. The initial power levels and statuses of the second and subsequent sub-problems are set based on the results of the first 24 hours from each of their immediate prior sub-problems. In essence, this approach addresses the complexity of solving a large problem by tackling it in 24-hour intervals, while incorporating an additional 12-hour buffer to mitigate the closing window effect for each sub-problem. Furthermore, the `after_build` function imposes the restriction that `g3` and `g4` cannot be activated simultaneously during the initial time slot of each sub-problem. On the other hand, the `after_optimize` function is invoked to calculate the conventional Locational Marginal Prices (LMPs) for each sub-problem, and subsequently appends the computed values to the `lmps` vector.
> **Warning**
> Specifying `TimeDecomposition` as the value of the `inner_method` field of another `TimeDecomposition` causes errors when calling the `optimize!` function due to the different argument structures between the two `optimize!` functions.
```julia
using UnitCommitment, JuMP, Cbc, HiGHS
import UnitCommitment:
TimeDecomposition,
ConventionalLMP,
XavQiuWanThi2019,
Formulation
# specifying the after_build and after_optimize functions
function after_build(model, instance)
@constraint(
model,
model[:is_on]["g3", 1] + model[:is_on]["g4", 1] <= 1,
)
end
lmps = []
function after_optimize(solution, model, instance)
lmp = UnitCommitment.compute_lmp(
model,
ConventionalLMP(),
optimizer = HiGHS.Optimizer,
)
return push!(lmps, lmp)
end
# assume the instance is given as a 120h problem
instance = UnitCommitment.read("instance.json")
solution = UnitCommitment.optimize!(
instance,
TimeDecomposition(
time_window = 36, # solve 36h problems
time_increment = 24, # advance by 24h each time
inner_method = XavQiuWanThi2019.Method(),
formulation = Formulation(),
),
optimizer = Cbc.Optimizer,
after_build = after_build,
after_optimize = after_optimize,
)
```
## 2. Scenario decomposition with Progressive Hedging for stochastic UC
By default, UC.jl uses the Extensive Form (EF) when solving stochastic instances. This approach involves constructing a single JuMP model that contains data and decision variables for all scenarios. Although EF has optimality guarantees and performs well with small test cases, it can become computationally intractable for large instances or a substantial number of scenarios.
Progressive Hedging (PH) is an alternative (heuristic) solution method provided by UC.jl in which the problem is decomposed into smaller scenario-based subproblems, which are then solved in parallel in separate Julia processes, potentially across multiple machines. Quadratic penalty terms are used to enforce convergence of first-stage decision variables. The method is closely related to the Alternating Direction Method of Multipliers (ADMM) and can handle larger instances, although it is not guaranteed to converge to the optimal solution. Our implementation of PH relies on the Message Passing Interface (MPI) for communication. We refer to the [MPI.jl documentation](https://github.com/JuliaParallel/MPI.jl) for more details on installing MPI.
The following example shows how to solve SCUC instances using progressive hedging. The script should be saved in a file, say `ph.jl`, and executed using `mpiexec -n <num-scenarios> julia ph.jl`.
```julia
using HiGHS
using MPI
using UnitCommitment
using Glob
# 1. Initialize MPI
MPI.Init()
# 2. Configure progressive hedging method
ph = UnitCommitment.ProgressiveHedging()
# 3. Read problem instance
instance = UnitCommitment.read(["example/s1.json", "example/s2.json"], ph)
# 4. Build JuMP model
model = UnitCommitment.build_model(
instance = instance,
optimizer = HiGHS.Optimizer,
)
# 5. Run the decentralized optimization algorithm
UnitCommitment.optimize!(model, ph)
# 6. Fetch the solution
solution = UnitCommitment.solution(model, ph)
# 7. Close MPI
MPI.Finalize()
```
When using PH, the model can be customized as usual, with different formulations or additional user-provided constraints. Note that `read`, in this case, takes `ph` as an argument. This allows each Julia process to read only the instance files that are relevant to it. Similarly, the `solution` function gathers the optimal solution of each process and returns a combined dictionary.
Each process solves a sub-problem with $\frac{s}{p}$ scenarios, where $s$ is the total number of scenarios and $p$ is the number of MPI processes. For instance, if we have 15 scenario files and 5 processes, then each process will solve a JuMP model that contains data for 3 scenarios. If the total number of scenarios is not divisible by the number of processes, then an error will be thrown.
!!! warning
Currently, PH can handle only equiprobable scenarios. Further, `solution(model, ph)` can only handle cases where only one scenario is modeled in each process.
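The sketch below is an illustrative variation of the script above, not part of the package documentation; it assumes 15 hypothetical scenario files matching `example/s*.json`. Launched with `mpiexec -n 5 julia ph.jl`, each of the 5 processes would then receive 3 scenarios, as described above.
```julia
using Glob
using HiGHS
using MPI
using UnitCommitment

MPI.Init()
ph = UnitCommitment.ProgressiveHedging()

# Each process keeps only the scenarios assigned to it
# (15 scenarios / 5 processes = 3 scenarios per process).
instance = UnitCommitment.read(glob("example/s*.json"), ph)

model = UnitCommitment.build_model(
    instance = instance,
    optimizer = HiGHS.Optimizer,
)
UnitCommitment.optimize!(model, ph)
solution = UnitCommitment.solution(model, ph)
MPI.Finalize()
```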

docs/src/tutorials/lmp.jl Normal file

@@ -0,0 +1,57 @@
# # Locational Marginal Prices
# Locational Marginal Prices (LMPs) refer to the cost of supplying electricity at specific locations of the network. LMPs are crucial for the operation of electricity markets and have many other applications, such as indicating which areas of the network may require additional generation or transmission capacity. UnitCommitment.jl implements two methods for calculating LMPs: Conventional LMPs and Approximate Extended LMPs (AELMPs). In this tutorial, we introduce each method and illustrate their usage.
# ### Conventional LMPs
# Conventional LMPs work by (1) solving the original SCUC problem, (2) fixing all binary variables to their optimal values, and (3) re-solving the resulting linear programming model. In this approach, the LMPs are defined as the values of the dual variables associated with the net injection constraints.
# The first step to use this method is to load and optimize an instance, as explained in previous tutorials:
using UnitCommitment
using HiGHS
instance = UnitCommitment.read_benchmark("matpower/case14/2017-01-01")
model =
UnitCommitment.build_model(instance = instance, optimizer = HiGHS.Optimizer)
UnitCommitment.optimize!(model)
# Next, we call `UnitCommitment.compute_lmp`, as shown below. The function accepts three arguments -- a solved SCUC model, the LMP method, and a linear optimizer -- and it returns a dictionary mapping `(scenario_name, bus_name, time)` to the marginal price.
lmp = UnitCommitment.compute_lmp(
model,
UnitCommitment.ConventionalLMP(),
optimizer = HiGHS.Optimizer,
)
# For example, the following code queries the LMP of bus `b1` in scenario `s1` at time 1:
@show lmp["s1", "b1", 1]
# ### Approximate Extended LMPs
# Approximate Extended LMPs (AELMPs) are an alternative method for calculating locational marginal prices which attempts to minimize uplift payments. The method internally works by modifying the instance data in three ways: (1) it sets the minimum power output of each generator to zero, (2) it averages the start-up cost over the offer blocks for each generator, and (3) it relaxes all integrality constraints. To compute AELMPs, as shown in the example below, we call `compute_lmp` and provide `UnitCommitment.AELMP()` as the second argument.
# This method has two configurable parameters: `allow_offline_participation` and `consider_startup_costs`. If `allow_offline_participation = true`, then offline generators are allowed to participate in the pricing; if `allow_offline_participation = false`, offline generators are excluded from the system. A solved UC model is optional if offline participation is allowed, but is required otherwise; the method forces offline participation to be allowed if the UC model supplied by the user is not solved. If `consider_startup_costs = true`, then start-up costs are averaged over each unit's production and integrated into the prices; otherwise the production costs stay the same. By default, both parameters are set to `true`.
# !!! warning
# This method is still under active research and has several limitations. The implementation provided in the package is based on MISO Phase I only. It only supports fast-start resources. More specifically, the minimum up/down time of all generators must be 1, the initial power of all generators must be 0, and the initial status of all generators must be negative. The method does not support time-varying start-up costs, and it currently works only for deterministic instances. If offline participation is not allowed, AELMP treats an asset as offline if it is never on throughout all time periods.
instance = UnitCommitment.read_benchmark("test/aelmp_simple")
model =
UnitCommitment.build_model(instance = instance, optimizer = HiGHS.Optimizer)
UnitCommitment.optimize!(model)
lmp = UnitCommitment.compute_lmp(
model,
UnitCommitment.AELMP(
allow_offline_participation = false,
consider_startup_costs = true,
),
optimizer = HiGHS.Optimizer,
)
@show lmp["s1", "B1", 1]


@@ -0,0 +1,183 @@
# # Market Clearing
# In North America, electricity markets are structured around two primary types of markets: the day-ahead (DA) market and the real-time (RT) market. The DA market schedules electricity generation and consumption for the next day, based on forecasts and bids from electricity suppliers and consumers. The RT market, on the other hand, operates continuously throughout the day, addressing the discrepancies between the DA schedule and actual demand, typically every five minutes. UnitCommitment.jl is able to simulate the DA and RT market clearing process. Specifically, the package provides the function `UnitCommitment.solve_market` which performs the following steps:
# 1. Solve the DA market problem.
# 2. Extract commitment status of all generators.
# 3. Solve a sequence of RT market problems, fixing the commitment status of each generator to the corresponding optimal solution of the DA problem.
# To use this function, we need to prepare an instance file corresponding to the DA market problem and multiple instance files corresponding to the RT market problems. The number of required files depends on the time granularity and window. For example, suppose that the DA problem is solved at hourly granularity and has 24 time periods, whereas the RT problems are solved at 5-minute granularity and have a single time period. Then we would need to prepare one file for the DA problem and 288 files $\left(24 \times \frac{60}{5}\right)$ for the RT market problems.
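# As a rough illustration (the file names here are hypothetical and not created in this tutorial), such a list of real-time instance files could be generated programmatically instead of being typed by hand:
# ```julia
# rt_files = ["rt_$(i).json" for i in 1:288]  # 288 = 24 hours * 12 five-minute steps per hour
# ```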
# ## A small example
# For simplicity, in this tutorial we illustrate the usage of `UnitCommitment.solve_market` with a very small example, in which the DA problem has only two time periods. We start by creating the DA instance file:
da_contents = """
{
"Parameters": {
"Version": "0.4",
"Time horizon (h)": 2
},
"Buses": {
"b1": {
"Load (MW)": [200, 400]
}
},
"Generators": {
"g1": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [0, 200],
"Production cost curve (\$)": [0, 1000],
"Initial status (h)": -24,
"Initial power (MW)": 0
},
"g2": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [0, 300],
"Production cost curve (\$)": [0, 3000],
"Initial status (h)": -24,
"Initial power (MW)": 0
}
}
}
""";
open("da.json", "w") do file
return write(file, da_contents)
end;
# Next, we create eight single-period RT market problems, each one with a 15-minute time granularity:
for i in 1:8
rt_contents = """
{
"Parameters": {
"Version": "0.4",
"Time horizon (min)": 15,
"Time step (min)": 15
},
"Buses": {
"b1": {
"Load (MW)": [$(150 + 50 * i)]
}
},
"Generators": {
"g1": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [0, 200],
"Production cost curve (\$)": [0, 1000],
"Initial status (h)": -24,
"Initial power (MW)": 0
},
"g2": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [0, 300],
"Production cost curve (\$)": [0, 3000],
"Initial status (h)": -24,
"Initial power (MW)": 0
}
}
}
"""
open("rt_$i.json", "w") do file
return write(file, rt_contents)
end
end
# Finally, we call `UnitCommitment.solve_market`, providing as arguments (1) the path to the DA problem; (2) a list of paths to the RT problems; (3) the mixed-integer linear optimizer.
using UnitCommitment
using HiGHS
solution = UnitCommitment.solve_market(
"da.json",
[
"rt_1.json",
"rt_2.json",
"rt_3.json",
"rt_4.json",
"rt_5.json",
"rt_6.json",
"rt_7.json",
"rt_8.json",
],
optimizer = HiGHS.Optimizer,
)
# To retrieve the day-ahead market solution, we can query `solution["DA"]`:
@show solution["DA"]
# To query each real-time market solution, we can query `solution["RT"][i]`. Note that LMPs are automatically calculated.
@show solution["RT"][1]
# ## Customizing the model and LMPs
# When using the `solve_market` function it is still possible to customize the problem formulation and the LMP calculation method. In the next example, we use a custom formulation and explicitly specify the LMP method through the `settings` keyword argument:
UnitCommitment.solve_market(
"da.json",
[
"rt_1.json",
"rt_2.json",
"rt_3.json",
"rt_4.json",
"rt_5.json",
"rt_6.json",
"rt_7.json",
"rt_8.json",
],
settings = UnitCommitment.MarketSettings(
lmp_method = UnitCommitment.ConventionalLMP(),
formulation = UnitCommitment.Formulation(
pwl_costs = UnitCommitment.KnuOstWat2018.PwlCosts(),
ramping = UnitCommitment.MorLatRam2013.Ramping(),
startup_costs = UnitCommitment.MorLatRam2013.StartupCosts(),
transmission = UnitCommitment.ShiftFactorsFormulation(
isf_cutoff = 0.008,
lodf_cutoff = 0.003,
),
),
),
optimizer = HiGHS.Optimizer,
)
# It is also possible to add custom variables and constraints to either the DA or RT market problems, through the usage of `after_build_da` and `after_build_rt` callback functions. Similarly, the `after_optimize_da` and `after_optimize_rt` can be used to directly analyze the JuMP models, after they have been optimized:
using JuMP
function after_build_da(model, instance)
@constraint(model, model[:is_on]["g1", 1] <= model[:is_on]["g2", 1])
end
function after_optimize_da(solution, model, instance)
@show value(model[:is_on]["g1", 1])
end
UnitCommitment.solve_market(
"da.json",
[
"rt_1.json",
"rt_2.json",
"rt_3.json",
"rt_4.json",
"rt_5.json",
"rt_6.json",
"rt_7.json",
"rt_8.json",
],
after_build_da = after_build_da,
after_optimize_da = after_optimize_da,
optimizer = HiGHS.Optimizer,
)
# ## Additional considerations
# - UC.jl supports two-stage stochastic DA market problems. In this case, we need one file for each DA market scenario. All RT market problems must be deterministic.
# - UC.jl also supports multi-period RT market problems. Assume, for example, that the DA market problem is an hourly problem with 24 time periods, whereas the RT market problem uses 5-minute granularity with 4 time periods. UC.jl assumes that the first RT file covers period `0:00` to `0:20`, the second covers `0:05` to `0:25` and so on. We therefore still need 288 RT market files. To avoid going beyond the 24-hour period covered by the DA market solution, however, the last few RT market problems must have only 3, 2, and 1 time periods, covering `23:45` to `24:00`, `23:50` to `24:00` and `23:55` to `24:00`, respectively (see the sketch after this list).
# - Some MILP solvers (such as Cbc) have issues handling linear programming problems, which are required for the RT market. In this case, a separate linear programming solver can be provided to `solve_market` using the `lp_optimizer` argument. For example, `solve_market(da_file, rt_files, optimizer=Cbc.Optimizer, lp_optimizer=Clp.Optimizer)`.
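# The sketch below (illustrative only, not part of the package API) spells out the window arithmetic described in the second bullet above, for a 24-hour DA problem and 4-period, 5-minute RT problems:
# ```julia
# n_rt = 24 * 60 ÷ 5                      # 288 RT files, one starting every 5 minutes
# for i in 1:n_rt
#     start = (i - 1) * 5                 # start minute of RT problem i
#     stop = min(start + 4 * 5, 24 * 60)  # clipped at the end of the DA horizon
#     n_periods = (stop - start) ÷ 5      # 4 periods, except 3, 2 and 1 for the last three files
# end
# ```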

docs/src/tutorials/usage.jl Normal file

@@ -0,0 +1,211 @@
# # Getting started
# ## Installing the package
# UnitCommitment.jl was tested and developed with [Julia 1.10](https://julialang.org/). To install Julia, please follow the [installation guide on the official Julia website](https://julialang.org/downloads/). To install UnitCommitment.jl, run the Julia interpreter, type `]` to open the package manager, then type:
# ```text
# pkg> add UnitCommitment@0.4
# ```
# To solve the optimization models, a mixed-integer linear programming (MILP) solver is also required. Please see the [JuMP installation guide](https://jump.dev/JuMP.jl/stable/installation/) for more instructions on installing a solver. Typical open-source choices are [HiGHS](https://github.com/jump-dev/HiGHS.jl), [Cbc](https://github.com/JuliaOpt/Cbc.jl) and [GLPK](https://github.com/JuliaOpt/GLPK.jl). In the instructions below, HiGHS will be used, but any other MILP solver should also be compatible.
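# For example, to install HiGHS, type:
# ```text
# pkg> add HiGHS
# ```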
# ## Solving a benchmark instance
# We start this tutorial by illustrating how to use UnitCommitment.jl to solve one of the provided benchmark instances. The package contains a large number of deterministic benchmark instances collected from the literature and converted into a common data format, which can be used to evaluate the performance of different solution methods. See [Instances](../guides/instances.md) for more details. The first step is to import `UnitCommitment` and HiGHS.
using HiGHS
using UnitCommitment
# Next, we use the function `UnitCommitment.read_benchmark` to read the instance.
instance = UnitCommitment.read_benchmark("matpower/case14/2017-01-01");
# Now that we have the instance loaded in memory, we build the JuMP optimization model using `UnitCommitment.build_model`:
model =
UnitCommitment.build_model(instance = instance, optimizer = HiGHS.Optimizer);
# Next, we run the optimization process, with `UnitCommitment.optimize!`:
UnitCommitment.optimize!(model)
# Finally, we extract the optimal solution from the model:
solution = UnitCommitment.solution(model)
# We can then explore the solution using Julia:
@show solution["Thermal production (MW)"]["g1"]
# Or export the entire solution to a JSON file:
UnitCommitment.write("solution.json", solution)
# ## Solving a custom deterministic instance
# In the previous example, we solved a benchmark instance provided by the package. To solve a custom instance, the first step is to create an input file describing the list of elements (generators, loads and transmission lines) in the network. See [Data Format](../guides/format.md) for a complete description of the data format UC.jl expects. To keep this tutorial self-contained, we will create the input JSON file using Julia; however, this step can also be done with a simple text editor. First, we define the contents of the file:
json_contents = """
{
"Parameters": {
"Version": "0.4",
"Time horizon (h)": 4
},
"Buses": {
"b1": {
"Load (MW)": [100, 150, 200, 250]
}
},
"Generators": {
"g1": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [0, 200],
"Production cost curve (\$)": [0, 1000],
"Initial status (h)": -24,
"Initial power (MW)": 0
},
"g2": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [0, 300],
"Production cost curve (\$)": [0, 3000],
"Initial status (h)": -24,
"Initial power (MW)": 0
}
}
}
""";
# Next, we write it to `example.json`.
open("example.json", "w") do file
return write(file, json_contents)
end;
# Now that we have the input file, we can proceed as before, but using `UnitCommitment.read` instead of `UnitCommitment.read_benchmark`:
instance = UnitCommitment.read("example.json");
model =
UnitCommitment.build_model(instance = instance, optimizer = HiGHS.Optimizer);
UnitCommitment.optimize!(model)
# Finally, we extract and display the solution:
solution = UnitCommitment.solution(model)
#
@show solution["Thermal production (MW)"]["g1"]
#
@show solution["Thermal production (MW)"]["g2"]
# ## Solving a custom stochastic instance
# In addition to deterministic test cases, UnitCommitment.jl can also solve two-stage stochastic instances of the problem. In this section, we demonstrate the simplest form, which builds a single (extensive form) model containing information for all scenarios. See [Decomposition](../tutorials/decomposition.md) for more advanced methods.
# First, we need to create one JSON input file for each scenario. Parameters that are allowed to change across scenarios are marked as "uncertain" in the [JSON data format](../guides/format.md) page. It is also possible to specify the name and weight of each scenario, as shown below.
# We start by creating `example_s1.json`, the first scenario file:
json_contents_s1 = """
{
"Parameters": {
"Version": "0.4",
"Time horizon (h)": 4,
"Scenario name": "s1",
"Scenario weight": 3.0
},
"Buses": {
"b1": {
"Load (MW)": [100, 150, 200, 250]
}
},
"Generators": {
"g1": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [0, 200],
"Production cost curve (\$)": [0, 1000],
"Initial status (h)": -24,
"Initial power (MW)": 0
},
"g2": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [0, 300],
"Production cost curve (\$)": [0, 3000],
"Initial status (h)": -24,
"Initial power (MW)": 0
}
}
}
"""
open("example_s1.json", "w") do file
return write(file, json_contents_s1)
end;
# Next, we create `example_s2.json`, the second scenario file:
json_contents_s2 = """
{
"Parameters": {
"Version": "0.4",
"Time horizon (h)": 4,
"Scenario name": "s2",
"Scenario weight": 1.0
},
"Buses": {
"b1": {
"Load (MW)": [200, 300, 400, 500]
}
},
"Generators": {
"g1": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [0, 200],
"Production cost curve (\$)": [0, 1000],
"Initial status (h)": -24,
"Initial power (MW)": 0
},
"g2": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [0, 300],
"Production cost curve (\$)": [0, 3000],
"Initial status (h)": -24,
"Initial power (MW)": 0
}
}
}
""";
open("example_s2.json", "w") do file
return write(file, json_contents_s2)
end;
# Now that we have our two scenario files, we can read them using `UnitCommitment.read`. Note that, instead of a single file, we now provide a list.
instance = UnitCommitment.read(["example_s1.json", "example_s2.json"])
# If we have a large number of scenario files, the [Glob](https://github.com/vtjnash/Glob.jl) package can also be used to avoid having to list them individually:
using Glob
instance = UnitCommitment.read(glob("example_s*.json"))
# Finally, we build the model and optimize as before:
model =
UnitCommitment.build_model(instance = instance, optimizer = HiGHS.Optimizer);
UnitCommitment.optimize!(model)
# The solution to stochastic instances follows a slightly different format, as shown below:
solution = UnitCommitment.solution(model)
# The solution for each scenario can be accessed through `solution[scenario_name]`. For convenience, this includes both first- and second-stage optimal decisions:
solution["s1"]


@@ -0,0 +1,74 @@
# ## Generating initial conditions
# When creating random unit commitment instances for benchmark purposes, it is often hard to compute, in advance, sensible initial conditions for all thermal generators. Setting initial conditions naively (for example, making all generators initially off and producing no power) can easily cause the instance to become infeasible due to excessive ramping. Initial conditions can also make it hard to modify existing instances. For example, increasing the system load without carefully modifying the initial conditions may make the problem infeasible or unrealistically challenging to solve.
# To help with this issue, UC.jl provides a utility function which can generate feasible initial conditions by solving a single-period optimization problem. To illustrate its usage, we first generate a JSON file without initial conditions:
json_contents = """
{
"Parameters": {
"Version": "0.4",
"Time horizon (h)": 4
},
"Buses": {
"b1": {
"Load (MW)": [100, 150, 200, 250]
}
},
"Generators": {
"g1": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [0, 200],
"Production cost curve (\$)": [0, 1000]
},
"g2": {
"Bus": "b1",
"Type": "Thermal",
"Production cost curve (MW)": [0, 300],
"Production cost curve (\$)": [0, 3000]
}
}
}
""";
open("example_initial.json", "w") do file
return write(file, json_contents)
end;
# Next, we read the instance and generate the initial conditions (in-place):
instance = UnitCommitment.read("example_initial.json")
UnitCommitment.generate_initial_conditions!(instance, HiGHS.Optimizer)
# Finally, we optimize the resulting problem:
model =
UnitCommitment.build_model(instance = instance, optimizer = HiGHS.Optimizer)
UnitCommitment.optimize!(model)
# !!! warning
# The function `generate_initial_conditions!` may return different initial conditions after each call, even if the same instance and the same optimizer is provided. The particular algorithm may also change in a future version of UC.jl. For these reasons, it is recommended that you generate initial conditions exactly once for each instance and store them for later use.
# ## Verifying solutions
# When developing new formulations, it is very easy to introduce subtle errors in the model that result in incorrect solutions. To help avoid this, UC.jl includes a utility function that verifies whether a given solution is feasible and, if not, prints all the validation errors it found. The implementation of this function is completely independent from the implementation of the optimization model, and it can therefore be used to validate it.
# ```jldoctest; output = false
# using JSON
# using UnitCommitment
# # Read instance
# instance = UnitCommitment.read("example/s1.json")
# # Read solution (potentially produced by other packages)
# solution = JSON.parsefile("example/out.json")
# # Validate solution and print validation errors
# UnitCommitment.validate(instance, solution)
# # output
# true
# ```


@@ -1,149 +0,0 @@
```{sectnum}
---
start: 1
depth: 2
suffix: .
---
```
Usage
=====
Installation
------------
UnitCommitment.jl was tested and developed with [Julia 1.6](https://julialang.org/). To install Julia, please follow the [installation guide on the official Julia website](https://julialang.org/downloads/platform.html). To install UnitCommitment.jl, run the Julia interpreter, type `]` to open the package manager, then type:
```text
pkg> add UnitCommitment@0.2
```
To test that the package has been correctly installed, run:
```text
pkg> test UnitCommitment
```
If all tests pass, the package should now be ready to be used by any Julia script on the machine.
To solve the optimization models, a mixed-integer linear programming (MILP) solver is also required. Please see the [JuMP installation guide](https://jump.dev/JuMP.jl/stable/installation/) for more instructions on installing a solver. Typical open-source choices are [Cbc](https://github.com/JuliaOpt/Cbc.jl) and [GLPK](https://github.com/JuliaOpt/GLPK.jl). In the instructions below, Cbc will be used, but any other MILP solver listed in JuMP installation guide should also be compatible.
Typical Usage
-------------
### Solving user-provided instances
The first step to use UC.jl is to construct a JSON file describing your unit commitment instance. See [Data Format](format.md) for a complete description of the data format UC.jl expects. The next steps, as shown below, are to: (1) read the instance from file; (2) construct the optimization model; (3) run the optimization; and (4) extract the optimal solution.
```julia
using Cbc
using JSON
using UnitCommitment
# 1. Read instance
instance = UnitCommitment.read("/path/to/input.json")
# 2. Construct optimization model
model = UnitCommitment.build_model(
instance=instance,
optimizer=Cbc.Optimizer,
)
# 3. Solve model
UnitCommitment.optimize!(model)
# 4. Write solution to a file
solution = UnitCommitment.solution(model)
UnitCommitment.write("/path/to/output.json", solution)
```
### Solving benchmark instances
UnitCommitment.jl contains a large number of benchmark instances collected from the literature and converted into a common data format. To solve one of these instances individually, instead of constructing your own, the function `read_benchmark` can be used, as shown below. See [Instances](instances.md) for the complete list of available instances.
```julia
using UnitCommitment
instance = UnitCommitment.read_benchmark("matpower/case3375wp/2017-02-01")
```
Advanced usage
--------------
### Customizing the formulation
By default, `build_model` uses a formulation that combines modeling components from different publications, and that has been carefully tested, using our own benchmark scripts, to provide good performance across a wide variety of instances. This default formulation is expected to change over time, as new methods are proposed in the literature. You can, however, construct your own formulation, based on the modeling components that you choose, as shown in the next example.
```julia
using Cbc
using UnitCommitment
import UnitCommitment:
Formulation,
KnuOstWat2018,
MorLatRam2013,
ShiftFactorsFormulation
instance = UnitCommitment.read_benchmark(
"matpower/case118/2017-02-01",
)
model = UnitCommitment.build_model(
instance = instance,
optimizer = Cbc.Optimizer,
formulation = Formulation(
pwl_costs = KnuOstWat2018.PwlCosts(),
ramping = MorLatRam2013.Ramping(),
startup_costs = MorLatRam2013.StartupCosts(),
transmission = ShiftFactorsFormulation(
isf_cutoff = 0.005,
lodf_cutoff = 0.001,
),
),
)
```
### Generating initial conditions
When creating random unit commitment instances for benchmark purposes, it is often hard to compute, in advance, sensible initial conditions for all generators. Setting initial conditions naively (for example, making all generators initially off and producing no power) can easily cause the instance to become infeasible due to excessive ramping. Initial conditions can also make it hard to modify existing instances. For example, increasing the system load without carefully modifying the initial conditions may make the problem infeasible or unrealistically challenging to solve.
To help with this issue, UC.jl provides a utility function which can generate feasible initial conditions by solving a single-period optimization problem, as shown below:
```julia
using Cbc
using UnitCommitment
# Read original instance
instance = UnitCommitment.read("instance.json")
# Generate initial conditions (in-place)
UnitCommitment.generate_initial_conditions!(instance, Cbc.Optimizer)
# Construct and solve optimization model
model = UnitCommitment.build_model(
instance=instance,
optimizer=Cbc.Optimizer,
)
UnitCommitment.optimize!(model)
```
```{warning}
The function `generate_initial_conditions!` may return different initial conditions after each call, even if the same instance and the same optimizer is provided. The particular algorithm may also change in a future version of UC.jl. For these reasons, it is recommended that you generate initial conditions exactly once for each instance and store them for later use.
```
### Verifying solutions
When developing new formulations, it is very easy to introduce subtle errors in the model that result in incorrect solutions. To help with this, UC.jl includes a utility function that verifies if a given solution is feasible, and, if not, prints all the validation errors it found. The implementation of this function is completely independent from the implementation of the optimization model, and therefore can be used to validate it. The function can also be used to verify solutions produced by other optimization packages, as long as they follow the [UC.jl data format](format.md).
```julia
using JSON
using UnitCommitment
# Read instance
instance = UnitCommitment.read("instance.json")
# Read solution (potentially produced by other packages)
solution = JSON.parsefile("solution.json")
# Validate solution and print validation errors
UnitCommitment.validate(instance, solution)
```

Binary file not shown.

juliaw

@@ -1,75 +0,0 @@
#!/bin/bash
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
if [ ! -e Project.toml ]; then
echo "juliaw: Project.toml not found"
exit 1
fi
if [ ! -e Manifest.toml ]; then
julia --project=. -e 'using Pkg; Pkg.instantiate()' || exit 1
fi
if [ ! -e build/sysimage.so -o Project.toml -nt build/sysimage.so ]; then
echo "juliaw: rebuilding system image..."
# Generate temporary project folder
rm -rf $HOME/.juliaw
mkdir -p $HOME/.juliaw/src
cp Project.toml Manifest.toml $HOME/.juliaw
NAME=$(julia -e 'using TOML; toml = TOML.parsefile("Project.toml"); "name" in keys(toml) && print(toml["name"])')
if [ ! -z $NAME ]; then
cat > $HOME/.juliaw/src/$NAME.jl << EOF
module $NAME
end
EOF
fi
# Add PackageCompiler dependencies to temporary project
julia --project=$HOME/.juliaw -e 'using Pkg; Pkg.add(["PackageCompiler", "TOML", "Logging"])'
# Generate system image scripts
cat > $HOME/.juliaw/sysimage.jl << EOF
using PackageCompiler
using TOML
using Logging
Logging.disable_logging(Logging.Info)
mkpath("$PWD/build")
println("juliaw: generating precompilation statements...")
run(\`julia --project="$PWD" --trace-compile="$PWD"/build/precompile.jl \$(ARGS)\`)
println("juliaw: finding dependencies...")
project = TOML.parsefile("Project.toml")
manifest = TOML.parsefile("Manifest.toml")
deps = Symbol[]
for dep in keys(project["deps"])
if dep in keys(manifest)
# Up to Julia 1.6
dep_entry = manifest[dep][1]
else
# Julia 1.7+
dep_entry = manifest["deps"][dep][1]
end
if "path" in keys(dep_entry)
println(" - \$(dep) [skip]")
else
println(" - \$(dep)")
push!(deps, Symbol(dep))
end
end
println("juliaw: building system image...")
create_sysimage(
deps,
precompile_statements_file = "$PWD/build/precompile.jl",
sysimage_path = "$PWD/build/sysimage.so",
)
EOF
julia --project=$HOME/.juliaw $HOME/.juliaw/sysimage.jl $*
else
julia --project=. --sysimage build/sysimage.so $*
fi


@@ -4,9 +4,13 @@
 module UnitCommitment
-using Base: String
 include("instance/structs.jl")
 include("model/formulations/base/structs.jl")
 include("solution/structs.jl")
+include("lmp/structs.jl")
+include("market/structs.jl")
 include("model/formulations/ArrCon2000/structs.jl")
 include("model/formulations/CarArr2006/structs.jl")
@@ -16,10 +20,13 @@ include("model/formulations/KnuOstWat2018/structs.jl")
 include("model/formulations/MorLatRam2013/structs.jl")
 include("model/formulations/PanGua2016/structs.jl")
 include("solution/methods/XavQiuWanThi2019/structs.jl")
+include("solution/methods/ProgressiveHedging/structs.jl")
 include("model/formulations/WanHob2016/structs.jl")
+include("solution/methods/TimeDecomposition/structs.jl")
 include("import/egret.jl")
 include("instance/read.jl")
+include("instance/migrate.jl")
 include("model/build.jl")
 include("model/formulations/ArrCon2000/ramp.jl")
 include("model/formulations/base/bus.jl")
@@ -28,6 +35,8 @@ include("model/formulations/base/psload.jl")
 include("model/formulations/base/sensitivity.jl")
 include("model/formulations/base/system.jl")
 include("model/formulations/base/unit.jl")
+include("model/formulations/base/punit.jl")
+include("model/formulations/base/storage.jl")
 include("model/formulations/CarArr2006/pwlcosts.jl")
 include("model/formulations/DamKucRajAta2016/ramp.jl")
 include("model/formulations/Gar1962/pwlcosts.jl")
@@ -44,6 +53,10 @@ include("solution/methods/XavQiuWanThi2019/enforce.jl")
 include("solution/methods/XavQiuWanThi2019/filter.jl")
 include("solution/methods/XavQiuWanThi2019/find.jl")
 include("solution/methods/XavQiuWanThi2019/optimize.jl")
+include("solution/methods/TimeDecomposition/optimize.jl")
+include("solution/methods/ProgressiveHedging/optimize.jl")
+include("solution/methods/ProgressiveHedging/read.jl")
+include("solution/methods/ProgressiveHedging/solution.jl")
 include("solution/optimize.jl")
 include("solution/solution.jl")
 include("solution/warmstart.jl")
@@ -55,5 +68,8 @@ include("utils/log.jl")
 include("utils/benchmark.jl")
 include("validation/repair.jl")
 include("validation/validate.jl")
+include("lmp/conventional.jl")
+include("lmp/aelmp.jl")
+include("market/market.jl")
 end


@@ -18,9 +18,9 @@ function read_egret_solution(path::String)::OrderedDict
     solution = OrderedDict()
     is_on = solution["Is on"] = OrderedDict()
-    production = solution["Production (MW)"] = OrderedDict()
+    production = solution["Thermal production (MW)"] = OrderedDict()
     reserve = solution["Reserve (MW)"] = OrderedDict()
-    production_cost = solution["Production cost (\$)"] = OrderedDict()
+    production_cost = solution["Thermal production cost (\$)"] = OrderedDict()
     startup_cost = solution["Startup cost (\$)"] = OrderedDict()
     for (gen_name, gen_dict) in egret["elements"]["generator"]

src/instance/migrate.jl Normal file

@@ -0,0 +1,50 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using DataStructures
using JSON
function _migrate(json)
version = json["Parameters"]["Version"]
if version === nothing
error(
"The provided input file cannot be loaded because it does not " *
"specify what version of UnitCommitment.jl it was written for. " *
"Please modify the \"Parameters\" section of the file and include " *
"a \"Version\" entry. For example: {\"Parameters\":{\"Version\":\"0.3\"}}",
)
end
version = VersionNumber(version)
version >= v"0.3" || _migrate_to_v03(json)
version >= v"0.4" || _migrate_to_v04(json)
return
end
function _migrate_to_v03(json)
# Migrate reserves
if json["Reserves"] !== nothing &&
json["Reserves"]["Spinning (MW)"] !== nothing
amount = json["Reserves"]["Spinning (MW)"]
json["Reserves"] = DefaultOrderedDict(nothing)
json["Reserves"]["r1"] = DefaultOrderedDict(nothing)
json["Reserves"]["r1"]["Type"] = "spinning"
json["Reserves"]["r1"]["Amount (MW)"] = amount
for (gen_name, gen) in json["Generators"]
if gen["Provides spinning reserves?"] == true
gen["Reserve eligibility"] = ["r1"]
end
end
end
end
function _migrate_to_v04(json)
# Migrate thermal units
if json["Generators"] !== nothing
for (gen_name, gen) in json["Generators"]
if gen["Type"] === nothing
gen["Type"] = "Thermal"
end
end
end
end


@@ -8,20 +8,18 @@ using DataStructures
 using GZip
 import Base: getindex, time
-const INSTANCES_URL = "https://axavier.org/UnitCommitment.jl/0.3/instances"
+const INSTANCES_URL = "https://axavier.org/UnitCommitment.jl/0.4/instances"
 """
     read_benchmark(name::AbstractString)::UnitCommitmentInstance
-Read one of the benchmark unit commitment instances included in the package.
-See "Instances" section of the documentation for the entire list of benchmark
-instances available.
-Example
--------
-    import UnitCommitment
-    instance = UnitCommitment.read_benchmark("matpower/case3375wp/2017-02-01")
+Read one of the benchmark instances included in the package. See
+[Instances](guides/instances.md) for the entire list of benchmark instances available.
+# Example
+```julia
+instance = UnitCommitment.read_benchmark("matpower/case3375wp/2017-02-01")
+```
 """
 function read_benchmark(
     name::AbstractString;
@@ -45,26 +43,76 @@ function read_benchmark(
     return UnitCommitment.read(filename)
 end
+function _repair_scenario_names_and_probabilities!(
+    scenarios::Vector{UnitCommitmentScenario},
+    path::Vector{String},
+)::Nothing
+    total_weight = sum([sc.probability for sc in scenarios])
+    for (sc_path, sc) in zip(path, scenarios)
+        sc.name !== "" ||
+            (sc.name = first(split(last(split(sc_path, "/")), ".")))
+        sc.probability = (sc.probability / total_weight)
+    end
+    return
+end
 """
     read(path::AbstractString)::UnitCommitmentInstance
-Read a unit commitment instance from a file. The file may be gzipped.
-Example
--------
-    import UnitCommitment
-    instance = UnitCommitment.read("/path/to/input.json.gz")
+Read a deterministic test case from the given file. The file may be gzipped.
+# Example
+```julia
+instance = UnitCommitment.read("s1.json.gz")
+```
 """
-function read(path::AbstractString)::UnitCommitmentInstance
-    if endswith(path, ".gz")
-        return _read(gzopen(path))
-    else
-        return _read(open(path))
-    end
+function read(path::String)::UnitCommitmentInstance
+    scenarios = Vector{UnitCommitmentScenario}()
+    scenario = _read_scenario(path)
+    scenario.name = "s1"
+    scenario.probability = 1.0
+    scenarios = [scenario]
+    instance =
+        UnitCommitmentInstance(time = scenario.time, scenarios = scenarios)
+    return instance
 end
-function _read(file::IO)::UnitCommitmentInstance
+"""
+    read(path::Vector{String})::UnitCommitmentInstance
+Read a stochastic unit commitment instance from the given files. Each file
+describes a scenario. The files may be gzipped.
+# Example
+```julia
+instance = UnitCommitment.read(["s1.json.gz", "s2.json.gz"])
+```
+"""
+function read(paths::Vector{String})::UnitCommitmentInstance
+    scenarios = UnitCommitmentScenario[]
+    for p in paths
+        push!(scenarios, _read_scenario(p))
+    end
+    _repair_scenario_names_and_probabilities!(scenarios, paths)
+    instance =
+        UnitCommitmentInstance(time = scenarios[1].time, scenarios = scenarios)
+    return instance
+end
+function _read_scenario(path::String)::UnitCommitmentScenario
+    if endswith(path, ".gz")
+        scenario = _read(gzopen(path))
+    elseif endswith(path, ".json")
+        scenario = _read(open(path))
+    else
+        error("Unsupported input format")
+    end
+    return scenario
+end
+function _read(file::IO)::UnitCommitmentScenario
     return _from_json(
         JSON.parse(file, dicttype = () -> DefaultOrderedDict(nothing)),
     )
@@ -79,33 +127,53 @@ function _read_json(path::String)::OrderedDict
     return JSON.parse(file, dicttype = () -> DefaultOrderedDict(nothing))
 end
-function _from_json(json; repair = true)
-    units = Unit[]
+function _from_json(json; repair = true)::UnitCommitmentScenario
+    _migrate(json)
+    thermal_units = ThermalUnit[]
     buses = Bus[]
     contingencies = Contingency[]
     lines = TransmissionLine[]
     loads = PriceSensitiveLoad[]
     reserves = Reserve[]
+    profiled_units = ProfiledUnit[]
+    storage_units = StorageUnit[]
     function scalar(x; default = nothing)
         x !== nothing || return default
         return x
     end
-    time_horizon = json["Parameters"]["Time (h)"]
+    time_horizon = json["Parameters"]["Time horizon (min)"]
     if time_horizon === nothing
-        time_horizon = json["Parameters"]["Time horizon (h)"]
+        time_horizon = json["Parameters"]["Time (h)"]
+        if time_horizon === nothing
+            time_horizon = json["Parameters"]["Time horizon (h)"]
+        end
+        if time_horizon !== nothing
+            time_horizon *= 60
+        end
     end
-    time_horizon !== nothing || error("Missing parameter: Time horizon (h)")
+    time_horizon !== nothing || error("Missing parameter: Time horizon (min)")
+    isinteger(time_horizon) ||
+        error("Time horizon must be an integer in minutes")
+    time_horizon = Int(time_horizon)
     time_step = scalar(json["Parameters"]["Time step (min)"], default = 60)
     (60 % time_step == 0) ||
         error("Time step $time_step is not a divisor of 60")
+    (time_horizon % time_step == 0) || error(
+        "Time step $time_step is not a divisor of time horizon $time_horizon",
+    )
     time_multiplier = 60 ÷ time_step
-    T = time_horizon * time_multiplier
+    T = time_horizon ÷ time_step
+    probability = json["Parameters"]["Scenario weight"]
+    probability !== nothing || (probability = 1)
+    scenario_name = json["Parameters"]["Scenario name"]
+    scenario_name !== nothing || (scenario_name = "")
     name_to_bus = Dict{String,Bus}()
     name_to_line = Dict{String,TransmissionLine}()
-    name_to_unit = Dict{String,Unit}()
+    name_to_unit = Dict{String,ThermalUnit}()
     name_to_reserve = Dict{String,Reserve}()
     function timeseries(x; default = nothing)
@@ -119,15 +187,6 @@ function _from_json(json; repair = true)
         json["Parameters"]["Power balance penalty (\$/MW)"],
         default = [1000.0 for t in 1:T],
     )
-    # Penalty price for shortage in meeting system-wide flexiramp requirements
-    flexiramp_shortfall_penalty = timeseries(
-        json["Parameters"]["Flexiramp penalty (\$/MW)"],
-        default = [500.0 for t in 1:T],
-    )
-    shortfall_penalty = timeseries(
-        json["Parameters"]["Reserve shortfall penalty (\$/MW)"],
-        default = [-1.0 for t in 1:T],
-    )
     # Read buses
     for (bus_name, dict) in json["Buses"]
@@ -135,8 +194,10 @@ function _from_json(json; repair = true)
             bus_name,
             length(buses),
             timeseries(dict["Load (MW)"]),
-            Unit[],
+            ThermalUnit[],
             PriceSensitiveLoad[],
+            ProfiledUnit[],
+            StorageUnit[],
         )
         name_to_bus[bus_name] = bus
         push!(buses, bus)
@@ -149,7 +210,7 @@ function _from_json(json; repair = true)
             name = reserve_name,
             type = lowercase(dict["Type"]),
             amount = timeseries(dict["Amount (MW)"]),
-            units = [],
+            thermal_units = [],
            shortfall_penalty = scalar(
                 dict["Shortfall penalty (\$/MW)"],
                 default = -1,
@@ -162,90 +223,127 @@ function _from_json(json; repair = true)
# Read units # Read units
for (unit_name, dict) in json["Generators"] for (unit_name, dict) in json["Generators"]
# Read and validate unit type
unit_type = scalar(dict["Type"], default = nothing)
unit_type !== nothing || error("unit $unit_name has no type specified")
bus = name_to_bus[dict["Bus"]] bus = name_to_bus[dict["Bus"]]
# Read production cost curve if lowercase(unit_type) === "thermal"
K = length(dict["Production cost curve (MW)"]) # Read production cost curve
curve_mw = hcat( K = length(dict["Production cost curve (MW)"])
[timeseries(dict["Production cost curve (MW)"][k]) for k in 1:K]..., curve_mw = hcat(
) [
curve_cost = hcat( timeseries(dict["Production cost curve (MW)"][k]) for
[timeseries(dict["Production cost curve (\$)"][k]) for k in 1:K]..., k in 1:K
) ]...,
min_power = curve_mw[:, 1]
max_power = curve_mw[:, K]
min_power_cost = curve_cost[:, 1]
segments = CostSegment[]
for k in 2:K
amount = curve_mw[:, k] - curve_mw[:, k-1]
cost = (curve_cost[:, k] - curve_cost[:, k-1]) ./ amount
replace!(cost, NaN => 0.0)
push!(segments, CostSegment(amount, cost))
end
# Read startup costs
startup_delays = scalar(dict["Startup delays (h)"], default = [1])
startup_costs = scalar(dict["Startup costs (\$)"], default = [0.0])
startup_categories = StartupCategory[]
for k in 1:length(startup_delays)
push!(
startup_categories,
StartupCategory(
startup_delays[k] .* time_multiplier,
startup_costs[k],
),
) )
end curve_cost = hcat(
[
# Read reserve eligibility timeseries(dict["Production cost curve (\$)"][k]) for
unit_reserves = Reserve[] k in 1:K
if "Reserve eligibility" in keys(dict) ]...,
unit_reserves = )
[name_to_reserve[n] for n in dict["Reserve eligibility"]] min_power = curve_mw[:, 1]
end max_power = curve_mw[:, K]
min_power_cost = curve_cost[:, 1]
# Read and validate initial conditions segments = CostSegment[]
initial_power = scalar(dict["Initial power (MW)"], default = nothing) for k in 2:K
initial_status = scalar(dict["Initial status (h)"], default = nothing) amount = curve_mw[:, k] - curve_mw[:, k-1]
if initial_power === nothing cost = (curve_cost[:, k] - curve_cost[:, k-1]) ./ amount
initial_status === nothing || replace!(cost, NaN => 0.0)
error("unit $unit_name has initial status but no initial power") push!(segments, CostSegment(amount, cost))
else
initial_status !== nothing ||
error("unit $unit_name has initial power but no initial status")
initial_status != 0 ||
error("unit $unit_name has invalid initial status")
if initial_status < 0 && initial_power > 1e-3
error("unit $unit_name has invalid initial power")
end end
initial_status *= time_multiplier
end
unit = Unit( # Read startup costs
unit_name, startup_delays = scalar(dict["Startup delays (h)"], default = [1])
bus, startup_costs = scalar(dict["Startup costs (\$)"], default = [0.0])
max_power, startup_categories = StartupCategory[]
min_power, for k in 1:length(startup_delays)
timeseries(dict["Must run?"], default = [false for t in 1:T]), push!(
min_power_cost, startup_categories,
segments, StartupCategory(
scalar(dict["Minimum uptime (h)"], default = 1) * time_multiplier, startup_delays[k] .* time_multiplier,
scalar(dict["Minimum downtime (h)"], default = 1) * time_multiplier, startup_costs[k],
scalar(dict["Ramp up limit (MW)"], default = 1e6), ),
scalar(dict["Ramp down limit (MW)"], default = 1e6), )
scalar(dict["Startup limit (MW)"], default = 1e6), end
scalar(dict["Shutdown limit (MW)"], default = 1e6),
initial_status, # Read reserve eligibility
initial_power, unit_reserves = Reserve[]
startup_categories, if "Reserve eligibility" in keys(dict)
unit_reserves, unit_reserves =
) [name_to_reserve[n] for n in dict["Reserve eligibility"]]
push!(bus.units, unit) end
for r in unit_reserves
push!(r.units, unit) # Read and validate initial conditions
initial_power =
scalar(dict["Initial power (MW)"], default = nothing)
initial_status =
scalar(dict["Initial status (h)"], default = nothing)
if initial_power === nothing
initial_status === nothing || error(
"unit $unit_name has initial status but no initial power",
)
else
initial_status !== nothing || error(
"unit $unit_name has initial power but no initial status",
)
initial_status != 0 ||
error("unit $unit_name has invalid initial status")
if initial_status < 0 && initial_power > 1e-3
error("unit $unit_name has invalid initial power")
end
initial_status *= time_multiplier
end
# Read commitment status
commitment_status = scalar(
dict["Commitment status"],
default = Vector{Union{Bool,Nothing}}(nothing, T),
)
unit = ThermalUnit(
unit_name,
bus,
max_power,
min_power,
timeseries(dict["Must run?"], default = [false for t in 1:T]),
min_power_cost,
segments,
scalar(dict["Minimum uptime (h)"], default = 1) *
time_multiplier,
scalar(dict["Minimum downtime (h)"], default = 1) *
time_multiplier,
scalar(dict["Ramp up limit (MW)"], default = 1e6),
scalar(dict["Ramp down limit (MW)"], default = 1e6),
scalar(dict["Startup limit (MW)"], default = 1e6),
scalar(dict["Shutdown limit (MW)"], default = 1e6),
initial_status,
initial_power,
startup_categories,
unit_reserves,
commitment_status,
)
push!(bus.thermal_units, unit)
for r in unit_reserves
push!(r.thermal_units, unit)
end
name_to_unit[unit_name] = unit
push!(thermal_units, unit)
elseif lowercase(unit_type) === "profiled"
bus = name_to_bus[dict["Bus"]]
pu = ProfiledUnit(
unit_name,
bus,
timeseries(scalar(dict["Minimum power (MW)"], default = 0.0)),
timeseries(dict["Maximum power (MW)"]),
timeseries(dict["Cost (\$/MW)"]),
)
push!(bus.profiled_units, pu)
push!(profiled_units, pu)
else
error("unit $unit_name has an invalid type")
end end
name_to_unit[unit_name] = unit
push!(units, unit)
end end
# Read transmission lines # Read transmission lines
@@ -256,7 +354,6 @@ function _from_json(json; repair = true)
length(lines) + 1, length(lines) + 1,
name_to_bus[dict["Source bus"]], name_to_bus[dict["Source bus"]],
name_to_bus[dict["Target bus"]], name_to_bus[dict["Target bus"]],
scalar(dict["Reactance (ohms)"]),
scalar(dict["Susceptance (S)"]), scalar(dict["Susceptance (S)"]),
timeseries( timeseries(
dict["Normal flow limit (MW)"], dict["Normal flow limit (MW)"],
@@ -279,7 +376,7 @@ function _from_json(json; repair = true)
# Read contingencies # Read contingencies
if "Contingencies" in keys(json) if "Contingencies" in keys(json)
for (cont_name, dict) in json["Contingencies"] for (cont_name, dict) in json["Contingencies"]
affected_units = Unit[] affected_units = ThermalUnit[]
affected_lines = TransmissionLine[] affected_lines = TransmissionLine[]
if "Affected lines" in keys(dict) if "Affected lines" in keys(dict)
affected_lines = affected_lines =
@@ -309,7 +406,55 @@ function _from_json(json; repair = true)
end end
end end
instance = UnitCommitmentInstance( # Read storage units
if "Storage units" in keys(json)
for (storage_name, dict) in json["Storage units"]
bus = name_to_bus[dict["Bus"]]
min_level =
timeseries(scalar(dict["Minimum level (MWh)"], default = 0.0))
max_level = timeseries(dict["Maximum level (MWh)"])
storage = StorageUnit(
storage_name,
bus,
min_level,
max_level,
timeseries(
scalar(
dict["Allow simultaneous charging and discharging"],
default = true,
),
),
timeseries(dict["Charge cost (\$/MW)"]),
timeseries(dict["Discharge cost (\$/MW)"]),
timeseries(scalar(dict["Charge efficiency"], default = 1.0)),
timeseries(scalar(dict["Discharge efficiency"], default = 1.0)),
timeseries(scalar(dict["Loss factor"], default = 0.0)),
timeseries(
scalar(dict["Minimum charge rate (MW)"], default = 0.0),
),
timeseries(dict["Maximum charge rate (MW)"]),
timeseries(
scalar(dict["Minimum discharge rate (MW)"], default = 0.0),
),
timeseries(dict["Maximum discharge rate (MW)"]),
scalar(dict["Initial level (MWh)"], default = 0.0),
scalar(
dict["Last period minimum level (MWh)"],
default = min_level[T],
),
scalar(
dict["Last period maximum level (MWh)"],
default = max_level[T],
),
)
push!(bus.storage_units, storage)
push!(storage_units, storage)
end
end
scenario = UnitCommitmentScenario(
name = scenario_name,
probability = probability,
buses_by_name = Dict(b.name => b for b in buses), buses_by_name = Dict(b.name => b for b in buses),
buses = buses, buses = buses,
contingencies_by_name = Dict(c.name => c for c in contingencies), contingencies_by_name = Dict(c.name => c for c in contingencies),
@@ -321,14 +466,19 @@ function _from_json(json; repair = true)
price_sensitive_loads = loads, price_sensitive_loads = loads,
reserves = reserves, reserves = reserves,
reserves_by_name = name_to_reserve, reserves_by_name = name_to_reserve,
shortfall_penalty = shortfall_penalty,
flexiramp_shortfall_penalty = flexiramp_shortfall_penalty,
time = T, time = T,
units_by_name = Dict(g.name => g for g in units), time_step = time_step,
units = units, thermal_units_by_name = Dict(g.name => g for g in thermal_units),
thermal_units = thermal_units,
profiled_units_by_name = Dict(pu.name => pu for pu in profiled_units),
profiled_units = profiled_units,
storage_units_by_name = Dict(su.name => su for su in storage_units),
storage_units = storage_units,
isf = spzeros(Float64, length(lines), length(buses) - 1),
lodf = spzeros(Float64, length(lines), length(lines)),
) )
if repair if repair
UnitCommitment.repair!(instance) UnitCommitment.repair!(scenario)
end end
return instance return scenario
end end

View File

@@ -6,8 +6,10 @@ mutable struct Bus
name::String name::String
offset::Int offset::Int
load::Vector{Float64} load::Vector{Float64}
units::Vector thermal_units::Vector
price_sensitive_loads::Vector price_sensitive_loads::Vector
profiled_units::Vector
storage_units::Vector
end end
mutable struct CostSegment mutable struct CostSegment
@@ -24,11 +26,11 @@ Base.@kwdef mutable struct Reserve
name::String name::String
type::String type::String
amount::Vector{Float64} amount::Vector{Float64}
units::Vector thermal_units::Vector
shortfall_penalty::Float64 shortfall_penalty::Float64
end end
mutable struct Unit mutable struct ThermalUnit
name::String name::String
bus::Bus bus::Bus
max_power::Vector{Float64} max_power::Vector{Float64}
@@ -46,6 +48,7 @@ mutable struct Unit
initial_power::Union{Float64,Nothing} initial_power::Union{Float64,Nothing}
startup_categories::Vector{StartupCategory} startup_categories::Vector{StartupCategory}
reserves::Vector{Reserve} reserves::Vector{Reserve}
commitment_status::Vector{Union{Bool,Nothing}}
end end
mutable struct TransmissionLine mutable struct TransmissionLine
@@ -53,7 +56,6 @@ mutable struct TransmissionLine
offset::Int offset::Int
source::Bus source::Bus
target::Bus target::Bus
reactance::Float64
susceptance::Float64 susceptance::Float64
normal_flow_limit::Vector{Float64} normal_flow_limit::Vector{Float64}
emergency_flow_limit::Vector{Float64} emergency_flow_limit::Vector{Float64}
@@ -63,7 +65,7 @@ end
mutable struct Contingency mutable struct Contingency
name::String name::String
lines::Vector{TransmissionLine} lines::Vector{TransmissionLine}
units::Vector{Unit} thermal_units::Vector{ThermalUnit}
end end
mutable struct PriceSensitiveLoad mutable struct PriceSensitiveLoad
@@ -73,35 +75,75 @@ mutable struct PriceSensitiveLoad
revenue::Vector{Float64} revenue::Vector{Float64}
end end
Base.@kwdef mutable struct UnitCommitmentInstance mutable struct ProfiledUnit
name::String
bus::Bus
min_power::Vector{Float64}
max_power::Vector{Float64}
cost::Vector{Float64}
end
mutable struct StorageUnit
name::String
bus::Bus
min_level::Vector{Float64}
max_level::Vector{Float64}
simultaneous_charge_and_discharge::Vector{Bool}
charge_cost::Vector{Float64}
discharge_cost::Vector{Float64}
charge_efficiency::Vector{Float64}
discharge_efficiency::Vector{Float64}
loss_factor::Vector{Float64}
min_charge_rate::Vector{Float64}
max_charge_rate::Vector{Float64}
min_discharge_rate::Vector{Float64}
max_discharge_rate::Vector{Float64}
initial_level::Float64
min_ending_level::Float64
max_ending_level::Float64
end
Base.@kwdef mutable struct UnitCommitmentScenario
buses_by_name::Dict{AbstractString,Bus} buses_by_name::Dict{AbstractString,Bus}
buses::Vector{Bus} buses::Vector{Bus}
contingencies_by_name::Dict{AbstractString,Contingency} contingencies_by_name::Dict{AbstractString,Contingency}
contingencies::Vector{Contingency} contingencies::Vector{Contingency}
isf::Array{Float64,2}
lines_by_name::Dict{AbstractString,TransmissionLine} lines_by_name::Dict{AbstractString,TransmissionLine}
lines::Vector{TransmissionLine} lines::Vector{TransmissionLine}
lodf::Array{Float64,2}
name::String
power_balance_penalty::Vector{Float64} power_balance_penalty::Vector{Float64}
price_sensitive_loads_by_name::Dict{AbstractString,PriceSensitiveLoad} price_sensitive_loads_by_name::Dict{AbstractString,PriceSensitiveLoad}
price_sensitive_loads::Vector{PriceSensitiveLoad} price_sensitive_loads::Vector{PriceSensitiveLoad}
reserves::Vector{Reserve} probability::Float64
profiled_units_by_name::Dict{AbstractString,ProfiledUnit}
profiled_units::Vector{ProfiledUnit}
reserves_by_name::Dict{AbstractString,Reserve} reserves_by_name::Dict{AbstractString,Reserve}
shortfall_penalty::Vector{Float64} reserves::Vector{Reserve}
flexiramp_shortfall_penalty::Vector{Float64} thermal_units_by_name::Dict{AbstractString,ThermalUnit}
thermal_units::Vector{ThermalUnit}
storage_units_by_name::Dict{AbstractString,StorageUnit}
storage_units::Vector{StorageUnit}
time::Int time::Int
units_by_name::Dict{AbstractString,Unit} time_step::Int
units::Vector{Unit} end
Base.@kwdef mutable struct UnitCommitmentInstance
time::Int
scenarios::Vector{UnitCommitmentScenario}
end end
function Base.show(io::IO, instance::UnitCommitmentInstance) function Base.show(io::IO, instance::UnitCommitmentInstance)
sc = instance.scenarios[1]
print(io, "UnitCommitmentInstance(") print(io, "UnitCommitmentInstance(")
print(io, "$(length(instance.units)) units, ") print(io, "$(length(instance.scenarios)) scenarios, ")
print(io, "$(length(instance.buses)) buses, ") print(io, "$(length(sc.thermal_units)) thermal units, ")
print(io, "$(length(instance.lines)) lines, ") print(io, "$(length(sc.profiled_units)) profiled units, ")
print(io, "$(length(instance.contingencies)) contingencies, ") print(io, "$(length(sc.buses)) buses, ")
print( print(io, "$(length(sc.lines)) lines, ")
io, print(io, "$(length(sc.contingencies)) contingencies, ")
"$(length(instance.price_sensitive_loads)) price sensitive loads, ", print(io, "$(length(sc.price_sensitive_loads)) price sensitive loads, ")
)
print(io, "$(instance.time) time steps") print(io, "$(instance.time) time steps")
print(io, ")") print(io, ")")
return return

212
src/lmp/aelmp.jl Normal file
View File

@@ -0,0 +1,212 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using JuMP
"""
function compute_lmp(
model::JuMP.Model,
method::AELMP;
optimizer,
)::OrderedDict{Tuple{String,Int},Float64}
Calculates the approximate extended locational marginal prices of the given unit commitment instance.
The AELPM does the following three things:
1. It sets the minimum power output of each generator to zero
2. It averages the start-up cost over the offer blocks for each generator
3. It relaxes all integrality constraints
Returns a dictionary mapping `(bus_name, time)` to the marginal price.
WARNING: This approximation method is not fully developed. The implementation is based on MISO Phase I only.
1. It only supports Fast Start resources. More specifically, the minimum up/down time has to be zero.
2. The method does NOT support time-varying start-up costs.
3. An asset is considered offline if it is never on throughout all time periods.
4. The method does NOT support multiple scenarios.
Arguments
---------
- `model`:
the UnitCommitment model, must be solved before calling this function if offline participation is not allowed.
- `method`:
the AELMP method.
- `optimizer`:
the optimizer for solving the LP problem.
Examples
--------
```julia
using UnitCommitment
using HiGHS
import UnitCommitment: AELMP
# Read benchmark instance
instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
# Build the model
model = UnitCommitment.build_model(
instance = instance,
optimizer = HiGHS.Optimizer,
)
# Optimize the model
UnitCommitment.optimize!(model)
# Compute the AELMPs
aelmp = UnitCommitment.compute_lmp(
model,
AELMP(
allow_offline_participation = false,
consider_startup_costs = true
),
optimizer = HiGHS.Optimizer
)
# Access the AELMPs
# Example: "s1" is the scenario name, "b1" is the bus name, 1 is the first time slot
# Note: although scenario is supported, the query still keeps the scenario keys for consistency.
@show aelmp["s1", "b1", 1]
```
"""
function compute_lmp(
model::JuMP.Model,
method::AELMP;
optimizer,
)::OrderedDict{Tuple{String,String,Int},Float64}
@info "Building the approximation model..."
instance = deepcopy(model[:instance])
_aelmp_check_parameters(instance, model, method)
_modify_scenario!(instance.scenarios[1], model, method)
# prepare the result dictionary and solve the model
elmp = OrderedDict()
@info "Solving the approximation model."
approx_model = build_model(instance = instance, variable_names = true)
# relax the binary constraint, and relax integrality
for v in all_variables(approx_model)
if is_binary(v)
unset_binary(v)
end
end
relax_integrality(approx_model)
set_optimizer(approx_model, optimizer)
# solve the model
set_silent(approx_model)
optimize!(approx_model)
# access the dual values
@info "Getting dual values (AELMPs)."
for (key, val) in approx_model[:eq_net_injection]
elmp[key] = dual(val)
end
return elmp
end
function _aelmp_check_parameters(
instance::UnitCommitmentInstance,
model::JuMP.Model,
method::AELMP,
)
# CHECK: model cannot have multiple scenarios
if length(instance.scenarios) > 1
error("The method does NOT support multiple scenarios.")
end
sc = instance.scenarios[1]
# CHECK: model must be solved if allow_offline_participation=false
if !method.allow_offline_participation
if isnothing(model) || !has_values(model)
error(
"A solved UC model is required if allow_offline_participation=false.",
)
end
end
all_units = sc.thermal_units
# CHECK: model cannot handle non-fast-starts (MISO Phase I: can ONLY solve fast-starts)
if any(u -> u.min_uptime > 1 || u.min_downtime > 1, all_units)
error(
"The minimum up/down time of all generators must be 1. AELMP only supports fast-starts.",
)
end
if any(u -> u.initial_power > 0, all_units)
error("The initial power of all generators must be 0.")
end
if any(u -> u.initial_status >= 0, all_units)
error("The initial status of all generators must be negative.")
end
# CHECK: model does not support startup costs (in time series)
if any(u -> length(u.startup_categories) > 1, all_units)
error("The method does NOT support time-varying start-up costs.")
end
end
function _modify_scenario!(
sc::UnitCommitmentScenario,
model::JuMP.Model,
method::AELMP,
)
# this function modifies the sc units (generators)
if !method.allow_offline_participation
# 1. remove (if NOT allowing) the offline generators
units_to_remove = []
for unit in sc.thermal_units
# remove based on the solved UC model result
# remove the unit if it is never on
if all(t -> value(model[:is_on][unit.name, t]) == 0, sc.time)
# unregister from the bus
filter!(x -> x.name != unit.name, unit.bus.thermal_units)
# unregister from the reserve
for r in unit.reserves
filter!(x -> x.name != unit.name, r.thermal_units)
end
# append the name to the remove list
push!(units_to_remove, unit.name)
end
end
# unregister the units from the remove list
filter!(x -> !(x.name in units_to_remove), sc.thermal_units)
end
for unit in sc.thermal_units
# 2. set min generation requirement to 0 by adding 0 to production curve and cost
# min_power & min_costs are vectors with dimension T
if unit.min_power[1] != 0
first_cost_segment = unit.cost_segments[1]
pushfirst!(
unit.cost_segments,
CostSegment(
ones(size(first_cost_segment.mw)) * unit.min_power[1],
ones(size(first_cost_segment.cost)) *
unit.min_power_cost[1] / unit.min_power[1],
),
)
unit.min_power = zeros(size(first_cost_segment.mw))
unit.min_power_cost = zeros(size(first_cost_segment.cost))
end
# 3. average the start-up costs (if considering)
# if consider_startup_costs = false, then use the current first_startup_cost
first_startup_cost = unit.startup_categories[1].cost
if method.consider_startup_costs
additional_unit_cost = first_startup_cost / unit.max_power[1]
for i in eachindex(unit.cost_segments)
unit.cost_segments[i].cost .+= additional_unit_cost
end
first_startup_cost = 0.0 # zero out the start up cost
end
unit.startup_categories =
StartupCategory[StartupCategory(0, first_startup_cost)]
end
return sc.thermal_units_by_name =
Dict(g.name => g for g in sc.thermal_units)
end

92
src/lmp/conventional.jl Normal file
View File

@@ -0,0 +1,92 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using JuMP
"""
function compute_lmp(
model::JuMP.Model,
method::ConventionalLMP;
optimizer,
)::OrderedDict{Tuple{String,String,Int},Float64}
Calculates conventional locational marginal prices of the given unit commitment
instance. Returns a dictionary mapping `(bus_name, time)` to the marginal price.
Arguments
---------
- `model`:
the UnitCommitment model, must be solved before calling this function.
- `method`:
the LMP method.
- `optimizer`:
the optimizer for solving the LP problem.
Examples
--------
```julia
using UnitCommitment
using HiGHS
import UnitCommitment: ConventionalLMP
# Read benchmark instance
instance = UnitCommitment.read_benchmark("matpower/case118/2018-01-01")
# Build the model
model = UnitCommitment.build_model(
instance = instance,
optimizer = HiGHS.Optimizer,
)
# Optimize the model
UnitCommitment.optimize!(model)
# Compute the LMPs using the conventional method
lmp = UnitCommitment.compute_lmp(
model,
ConventionalLMP(),
optimizer = HiGHS.Optimizer,
)
# Access the LMPs
# Example: "s1" is the scenario name, "b1" is the bus name, 1 is the first time slot
@show lmp["s1", "b1", 1]
```
"""
function compute_lmp(
model::JuMP.Model,
::ConventionalLMP;
optimizer,
)::OrderedDict{Tuple{String,String,Int},Float64}
if !has_values(model)
error("The UC model must be solved before calculating the LMPs.")
end
lmp = OrderedDict()
@info "Fixing binary variables and relaxing integrality..."
vals = Dict(v => value(v) for v in all_variables(model))
for v in all_variables(model)
if is_binary(v)
unset_binary(v)
fix(v, vals[v])
end
end
relax_integrality(model)
set_optimizer(model, optimizer)
@info "Solving the LP..."
JuMP.optimize!(model)
@info "Getting dual values (LMPs)..."
for (key, val) in model[:eq_net_injection]
lmp[key] = dual(val)
end
return lmp
end

28
src/lmp/structs.jl Normal file
View File

@@ -0,0 +1,28 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
abstract type PricingMethod end
struct ConventionalLMP <: PricingMethod end
"""
struct AELMP <: PricingMethod
allow_offline_participation::Bool = true
consider_startup_costs::Bool = true
end
Approximate Extended LMPs.
Arguments
---------
- `allow_offline_participation`:
If true, offline assets are allowed to participate in pricing.
- `consider_startup_costs`:
If true, the start-up costs are averaged over each unit production; otherwise the production costs stay the same.
"""
Base.@kwdef struct AELMP <: PricingMethod
allow_offline_participation::Bool = true
consider_startup_costs::Bool = true
end

219
src/market/market.jl Normal file
View File

@@ -0,0 +1,219 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
"""
solve_market(
da_path::Union{String, Vector{String}},
rt_paths::Vector{String},
settings::MarketSettings;
optimizer,
lp_optimizer = nothing,
after_build_da = nothing,
after_optimize_da = nothing,
after_build_rt = nothing,
after_optimize_rt = nothing,
)::OrderedDict
Solve the day-ahead and the real-time markets by the means of commitment status mapping.
The method firstly acquires the commitment status outcomes through the resolution of the day-ahead market;
and secondly resolves each real-time market based on the corresponding results obtained previously.
Arguments
---------
- `da_path`:
the data file path of the day-ahead market, can be stochastic.
- `rt_paths`:
the list of data file paths of the real-time markets, must be deterministic for each market.
- `settings`:
the MarketSettings which include the problem formulation, the solving method, and LMP method.
- `optimizer`:
the optimizer for solving the problem.
- `lp_optimizer`:
the linear programming optimizer for solving the LMP problem, defaults to `nothing`.
If not specified by the user, the program uses `optimizer` instead.
- `after_build_da`:
a user-defined function that allows modifying the DA model after building,
must have 2 arguments `model` and `instance` in order.
- `after_optimize_da`:
a user-defined function that allows handling additional steps after optimizing the DA model,
must have 3 arguments `solution`, `model` and `instance` in order.
- `after_build_rt`:
a user-defined function that allows modifying each RT model after building,
must have 2 arguments `model` and `instance` in order.
- `after_optimize_rt`:
a user-defined function that allows handling additional steps after optimizing each RT model,
must have 3 arguments `solution`, `model` and `instance` in order.
Examples
--------
```julia
using UnitCommitment, Cbc, HiGHS
import UnitCommitment:
MarketSettings,
XavQiuWanThi2019,
ConventionalLMP,
Formulation
solution = UnitCommitment.solve_market(
"da_instance.json",
["rt_instance_1.json", "rt_instance_2.json", "rt_instance_3.json"],
MarketSettings(
inner_method = XavQiuWanThi2019.Method(),
lmp_method = ConventionalLMP(),
formulation = Formulation(),
),
optimizer = Cbc.Optimizer,
lp_optimizer = HiGHS.Optimizer,
)
"""
function solve_market(
da_path::Union{String,Vector{String}},
rt_paths::Vector{String};
settings::MarketSettings = MarketSettings(),
optimizer,
lp_optimizer = nothing,
after_build_da = nothing,
after_optimize_da = nothing,
after_build_rt = nothing,
after_optimize_rt = nothing,
)::OrderedDict
# solve da instance as usual
@info "Solving the day-ahead market with file $da_path..."
instance_da = UnitCommitment.read(da_path)
# LP optimizer is optional: if not specified, use optimizer
lp_optimizer = lp_optimizer === nothing ? optimizer : lp_optimizer
# build and optimize the DA market
model_da, solution_da = _build_and_optimize(
instance_da,
settings,
optimizer = optimizer,
lp_optimizer = lp_optimizer,
after_build = after_build_da,
after_optimize = after_optimize_da,
)
# prepare the final solution
solution = OrderedDict()
solution["DA"] = solution_da
solution["RT"] = []
# count the time, sc.time = n-slots, sc.time_step = slot-interval
# sufficient to look at only one scenario
sc = instance_da.scenarios[1]
# max time (min) of the DA market
max_time = sc.time * sc.time_step
# current time increments through the RT market list
current_time = 0
# DA market time slots in (min)
da_time_intervals = [sc.time_step * ts for ts in 1:sc.time]
# get the uc status and set each uc fixed
solution_rt = OrderedDict()
prev_initial_status = OrderedDict()
for rt_path in rt_paths
@info "Solving the real-time market with file $rt_path..."
instance_rt = UnitCommitment.read(rt_path)
# check instance time
sc = instance_rt.scenarios[1]
# check each time slot in the RT model
for ts in 1:sc.time
slot_t_end = current_time + ts * sc.time_step
# ensure this RT's slot time ub never exceeds max time of DA
slot_t_end <= max_time || error(
"The time of the real-time market cannot exceed the time of the day-ahead market.",
)
# get the slot start time to determine commitment status
slot_t_start = slot_t_end - sc.time_step
# find the index of the first DA time slot that covers slot_t_start
da_time_slot = findfirst(ti -> slot_t_start < ti, da_time_intervals)
# update thermal unit commitment status
for g in sc.thermal_units
g.commitment_status[ts] =
value(model_da[:is_on][g.name, da_time_slot]) == 1.0
end
end
# update current time by ONE slot only
current_time += sc.time_step
# set initial status for all generators in all scenarios
if !isempty(solution_rt) && !isempty(prev_initial_status)
for g in sc.thermal_units
g.initial_power =
solution_rt["Thermal production (MW)"][g.name][1]
g.initial_status = UnitCommitment._determine_initial_status(
prev_initial_status[g.name],
[solution_rt["Is on"][g.name][1]],
)
end
end
# build and optimize the RT market
_, solution_rt = _build_and_optimize(
instance_rt,
settings,
optimizer = optimizer,
lp_optimizer = lp_optimizer,
after_build = after_build_rt,
after_optimize = after_optimize_rt,
)
prev_initial_status =
OrderedDict(g.name => g.initial_status for g in sc.thermal_units)
push!(solution["RT"], solution_rt)
end # end of for-loop that checks each RT market
return solution
end
function _build_and_optimize(
instance::UnitCommitmentInstance,
settings::MarketSettings;
optimizer,
lp_optimizer,
after_build = nothing,
after_optimize = nothing,
)::Tuple{JuMP.Model,OrderedDict}
# build model with after build
model = UnitCommitment.build_model(
instance = instance,
optimizer = optimizer,
formulation = settings.formulation,
)
if after_build !== nothing
after_build(model, instance)
end
# optimize model
UnitCommitment.optimize!(model, settings.inner_method)
solution = UnitCommitment.solution(model)
# compute lmp and add to solution
if settings.lmp_method !== nothing
lmp = UnitCommitment.compute_lmp(
model,
settings.lmp_method,
optimizer = lp_optimizer,
)
if length(instance.scenarios) == 1
solution["LMP (\$/MW)"] = lmp
else
for sc in instance.scenarios
solution[sc.name]["LMP (\$/MW)"] = OrderedDict(
key => val for (key, val) in lmp if key[1] == sc.name
)
end
end
end
# run after optimize with solution
if after_optimize !== nothing
after_optimize(solution, model, instance)
end
return model, solution
end

33
src/market/structs.jl Normal file
View File

@@ -0,0 +1,33 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import ..SolutionMethod
import ..PricingMethod
import ..Formulation
"""
struct MarketSettings
inner_method::SolutionMethod = XavQiuWanThi2019.Method()
lmp_method::Union{PricingMethod, Nothing} = ConventionalLMP()
formulation::Formulation = Formulation()
end
Market setting struct, typically used to map a day-ahead market to real-time markets.
Arguments
---------
- `inner_method`:
method to solve each marketing problem.
- `lmp_method`:
a PricingMethod method to calculate the locational marginal prices.
If it is set to `nothing`, the LMPs will not be calculated.
- `formulation`:
problem formulation.
"""
Base.@kwdef struct MarketSettings
inner_method::SolutionMethod = XavQiuWanThi2019.Method()
lmp_method::Union{PricingMethod,Nothing} = ConventionalLMP()
formulation::Formulation = Formulation()
end

View File

@@ -9,22 +9,59 @@ import JuMP: value, fix, set_name
function build_model(; function build_model(;
instance::UnitCommitmentInstance, instance::UnitCommitmentInstance,
optimizer = nothing, optimizer = nothing,
formulation = Formulation(),
variable_names::Bool = false, variable_names::Bool = false,
)::JuMP.Model )::JuMP.Model
Build the JuMP model corresponding to the given unit commitment instance. Build the JuMP model corresponding to the given unit commitment instance.
Arguments Arguments
========= ---------
- `instance`: - `instance`:
the instance. the instance.
- `optimizer`: - `optimizer`:
the optimizer factory that should be attached to this model (e.g. Cbc.Optimizer). the optimizer factory that should be attached to this model (e.g. Cbc.Optimizer).
If not provided, no optimizer will be attached. If not provided, no optimizer will be attached.
- `formulation`:
the MIP formulation to use. By default, uses a formulation that combines
modeling components from different publications that provides good
performance across a wide variety of instances. An alternative formulation
may also be provided.
- `variable_names`: - `variable_names`:
If true, set variable and constraint names. Important if the model is going if true, set variable and constraint names. Important if the model is going
to be exported to an MPS file. For large models, this can take significant to be exported to an MPS file. For large models, this can take significant
time, so it's disabled by default. time, so it's disabled by default.
Examples
--------
```julia
# Read benchmark instance
instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
# Construct model (using state-of-the-art defaults)
model = UnitCommitment.build_model(
instance = instance,
optimizer = Cbc.Optimizer,
)
# Construct model (using customized formulation)
model = UnitCommitment.build_model(
instance = instance,
optimizer = Cbc.Optimizer,
formulation = Formulation(
pwl_costs = KnuOstWat2018.PwlCosts(),
ramping = MorLatRam2013.Ramping(),
startup_costs = MorLatRam2013.StartupCosts(),
transmission = ShiftFactorsFormulation(
isf_cutoff = 0.005,
lodf_cutoff = 0.001,
),
),
)
```
""" """
function build_model(; function build_model(;
instance::UnitCommitmentInstance, instance::UnitCommitmentInstance,
@@ -40,20 +77,33 @@ function build_model(;
end end
model[:obj] = AffExpr() model[:obj] = AffExpr()
model[:instance] = instance model[:instance] = instance
_setup_transmission(model, formulation.transmission) for g in instance.scenarios[1].thermal_units
for l in instance.lines _add_unit_commitment!(model, g, formulation)
_add_transmission_line!(model, l, formulation.transmission)
end end
for b in instance.buses for sc in instance.scenarios
_add_bus!(model, b) @info "Building scenario $(sc.name) with " *
"probability $(sc.probability)"
_setup_transmission(formulation.transmission, sc)
for l in sc.lines
_add_transmission_line!(model, l, formulation.transmission, sc)
end
for b in sc.buses
_add_bus!(model, b, sc)
end
for ps in sc.price_sensitive_loads
_add_price_sensitive_load!(model, ps, sc)
end
for g in sc.thermal_units
_add_unit_dispatch!(model, g, formulation, sc)
end
for pu in sc.profiled_units
_add_profiled_unit!(model, pu, sc)
end
for su in sc.storage_units
_add_storage_unit!(model, su, sc)
end
_add_system_wide_eqs!(model, sc)
end end
for g in instance.units
_add_unit!(model, g, formulation)
end
for ps in instance.price_sensitive_loads
_add_price_sensitive_load!(model, ps)
end
_add_system_wide_eqs!(model)
@objective(model, Min, model[:obj]) @objective(model, Min, model[:obj])
end end
@info @sprintf("Built model in %.2f seconds", time_model) @info @sprintf("Built model in %.2f seconds", time_model)

View File

@@ -4,10 +4,11 @@
function _add_ramp_eqs!( function _add_ramp_eqs!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation_prod_vars::Gar1962.ProdVars, formulation_prod_vars::Gar1962.ProdVars,
formulation_ramping::ArrCon2000.Ramping, formulation_ramping::ArrCon2000.Ramping,
formulation_status_vars::Gar1962.StatusVars, formulation_status_vars::Gar1962.StatusVars,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
# TODO: Move upper case constants to model[:instance] # TODO: Move upper case constants to model[:instance]
RESERVES_WHEN_START_UP = true RESERVES_WHEN_START_UP = true
@@ -22,7 +23,7 @@ function _add_ramp_eqs!(
eq_ramp_down = _init(model, :eq_ramp_down) eq_ramp_down = _init(model, :eq_ramp_down)
eq_ramp_up = _init(model, :eq_ramp_up) eq_ramp_up = _init(model, :eq_ramp_up)
is_initially_on = (g.initial_status > 0) is_initially_on = (g.initial_status > 0)
reserve = _total_reserves(model, g) reserve = _total_reserves(model, g, sc)
# Gar1962.ProdVars # Gar1962.ProdVars
prod_above = model[:prod_above] prod_above = model[:prod_above]
@@ -37,10 +38,10 @@ function _add_ramp_eqs!(
if t == 1 if t == 1
if is_initially_on if is_initially_on
# min power is _not_ multiplied by is_on because if !is_on, then ramp up is irrelevant # min power is _not_ multiplied by is_on because if !is_on, then ramp up is irrelevant
eq_ramp_up[gn, t] = @constraint( eq_ramp_up[sc.name, gn, t] = @constraint(
model, model,
g.min_power[t] + g.min_power[t] +
prod_above[gn, t] + prod_above[sc.name, gn, t] +
(RESERVES_WHEN_RAMP_UP ? reserve[t] : 0.0) <= (RESERVES_WHEN_RAMP_UP ? reserve[t] : 0.0) <=
g.initial_power + RU g.initial_power + RU
) )
@@ -48,16 +49,16 @@ function _add_ramp_eqs!(
else else
max_prod_this_period = max_prod_this_period =
g.min_power[t] * is_on[gn, t] + g.min_power[t] * is_on[gn, t] +
prod_above[gn, t] + prod_above[sc.name, gn, t] +
( (
RESERVES_WHEN_START_UP || RESERVES_WHEN_RAMP_UP ? RESERVES_WHEN_START_UP || RESERVES_WHEN_RAMP_UP ?
reserve[t] : 0.0 reserve[t] : 0.0
) )
min_prod_last_period = min_prod_last_period =
g.min_power[t-1] * is_on[gn, t-1] + prod_above[gn, t-1] g.min_power[t-1] * is_on[gn, t-1] + prod_above[sc.name, gn, t-1]
# Equation (24) in Kneuven et al. (2020) # Equation (24) in Kneuven et al. (2020)
eq_ramp_up[gn, t] = @constraint( eq_ramp_up[sc.name, gn, t] = @constraint(
model, model,
max_prod_this_period - min_prod_last_period <= max_prod_this_period - min_prod_last_period <=
RU * is_on[gn, t-1] + SU * switch_on[gn, t] RU * is_on[gn, t-1] + SU * switch_on[gn, t]
@@ -71,24 +72,25 @@ function _add_ramp_eqs!(
# min_power + RD < initial_power < SD # min_power + RD < initial_power < SD
# then the generator should be able to shut down at time t = 1, # then the generator should be able to shut down at time t = 1,
# but the constraint below will force the unit to produce power # but the constraint below will force the unit to produce power
eq_ramp_down[gn, t] = @constraint( eq_ramp_down[sc.name, gn, t] = @constraint(
model, model,
g.initial_power - (g.min_power[t] + prod_above[gn, t]) <= RD g.initial_power -
(g.min_power[t] + prod_above[sc.name, gn, t]) <= RD
) )
end end
else else
max_prod_last_period = max_prod_last_period =
g.min_power[t-1] * is_on[gn, t-1] + g.min_power[t-1] * is_on[gn, t-1] +
prod_above[gn, t-1] + prod_above[sc.name, gn, t-1] +
( (
RESERVES_WHEN_SHUT_DOWN || RESERVES_WHEN_RAMP_DOWN ? RESERVES_WHEN_SHUT_DOWN || RESERVES_WHEN_RAMP_DOWN ?
reserve[t-1] : 0.0 reserve[t-1] : 0.0
) )
min_prod_this_period = min_prod_this_period =
g.min_power[t] * is_on[gn, t] + prod_above[gn, t] g.min_power[t] * is_on[gn, t] + prod_above[sc.name, gn, t]
# Equation (25) in Kneuven et al. (2020) # Equation (25) in Kneuven et al. (2020)
eq_ramp_down[gn, t] = @constraint( eq_ramp_down[sc.name, gn, t] = @constraint(
model, model,
max_prod_last_period - min_prod_this_period <= max_prod_last_period - min_prod_this_period <=
RD * is_on[gn, t] + SD * switch_off[gn, t] RD * is_on[gn, t] + SD * switch_off[gn, t]

View File

@@ -4,10 +4,11 @@
function _add_production_piecewise_linear_eqs!( function _add_production_piecewise_linear_eqs!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation_prod_vars::Gar1962.ProdVars, formulation_prod_vars::Gar1962.ProdVars,
formulation_pwl_costs::CarArr2006.PwlCosts, formulation_pwl_costs::CarArr2006.PwlCosts,
formulation_status_vars::StatusVarsFormulation, formulation_status_vars::StatusVarsFormulation,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
eq_prod_above_def = _init(model, :eq_prod_above_def) eq_prod_above_def = _init(model, :eq_prod_above_def)
eq_segprod_limit = _init(model, :eq_segprod_limit) eq_segprod_limit = _init(model, :eq_segprod_limit)
@@ -26,28 +27,32 @@ function _add_production_piecewise_linear_eqs!(
# difference between max power for segments k and k-1 so the # difference between max power for segments k and k-1 so the
# value of cost_segments[k].mw[t] is the max production *for # value of cost_segments[k].mw[t] is the max production *for
# that segment* # that segment*
eq_segprod_limit[gn, t, k] = @constraint( eq_segprod_limit[sc.name, gn, t, k] = @constraint(
model, model,
segprod[gn, t, k] <= g.cost_segments[k].mw[t] segprod[sc.name, gn, t, k] <= g.cost_segments[k].mw[t]
) )
# Also add this as an explicit upper bound on segprod to make the # Also add this as an explicit upper bound on segprod to make the
# solver's work a bit easier # solver's work a bit easier
set_upper_bound(segprod[gn, t, k], g.cost_segments[k].mw[t]) set_upper_bound(
segprod[sc.name, gn, t, k],
g.cost_segments[k].mw[t],
)
# Definition of production # Definition of production
# Equation (43) in Kneuven et al. (2020) # Equation (43) in Kneuven et al. (2020)
eq_prod_above_def[gn, t] = @constraint( eq_prod_above_def[sc.name, gn, t] = @constraint(
model, model,
prod_above[gn, t] == sum(segprod[gn, t, k] for k in 1:K) prod_above[sc.name, gn, t] ==
sum(segprod[sc.name, gn, t, k] for k in 1:K)
) )
# Objective function # Objective function
# Equation (44) in Kneuven et al. (2020) # Equation (44) in Kneuven et al. (2020)
add_to_expression!( add_to_expression!(
model[:obj], model[:obj],
segprod[gn, t, k], segprod[sc.name, gn, t, k],
g.cost_segments[k].cost[t], sc.probability * g.cost_segments[k].cost[t],
) )
end end
end end

View File

@@ -4,10 +4,11 @@
function _add_ramp_eqs!( function _add_ramp_eqs!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation_prod_vars::Gar1962.ProdVars, formulation_prod_vars::Gar1962.ProdVars,
formulation_ramping::DamKucRajAta2016.Ramping, formulation_ramping::DamKucRajAta2016.Ramping,
formulation_status_vars::Gar1962.StatusVars, formulation_status_vars::Gar1962.StatusVars,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
# TODO: Move upper case constants to model[:instance] # TODO: Move upper case constants to model[:instance]
RESERVES_WHEN_START_UP = true RESERVES_WHEN_START_UP = true
@@ -23,7 +24,7 @@ function _add_ramp_eqs!(
gn = g.name gn = g.name
eq_str_ramp_down = _init(model, :eq_str_ramp_down) eq_str_ramp_down = _init(model, :eq_str_ramp_down)
eq_str_ramp_up = _init(model, :eq_str_ramp_up) eq_str_ramp_up = _init(model, :eq_str_ramp_up)
reserve = _total_reserves(model, g) reserve = _total_reserves(model, g, sc)
# Gar1962.ProdVars # Gar1962.ProdVars
prod_above = model[:prod_above] prod_above = model[:prod_above]
@@ -48,15 +49,15 @@ function _add_ramp_eqs!(
# end # end
max_prod_this_period = max_prod_this_period =
prod_above[gn, t] + prod_above[sc.name, gn, t] +
(RESERVES_WHEN_START_UP || RESERVES_WHEN_RAMP_UP ? reserve[t] : 0.0) (RESERVES_WHEN_START_UP || RESERVES_WHEN_RAMP_UP ? reserve[t] : 0.0)
min_prod_last_period = 0.0 min_prod_last_period = 0.0
if t > 1 && time_invariant if t > 1 && time_invariant
min_prod_last_period = prod_above[gn, t-1] min_prod_last_period = prod_above[sc.name, gn, t-1]
# Equation (35) in Kneuven et al. (2020) # Equation (35) in Kneuven et al. (2020)
# Sparser version of (24) # Sparser version of (24)
eq_str_ramp_up[gn, t] = @constraint( eq_str_ramp_up[sc.name, gn, t] = @constraint(
model, model,
max_prod_this_period - min_prod_last_period <= max_prod_this_period - min_prod_last_period <=
(SU - g.min_power[t] - RU) * switch_on[gn, t] + (SU - g.min_power[t] - RU) * switch_on[gn, t] +
@@ -65,7 +66,8 @@ function _add_ramp_eqs!(
elseif (t == 1 && is_initially_on) || (t > 1 && !time_invariant) elseif (t == 1 && is_initially_on) || (t > 1 && !time_invariant)
if t > 1 if t > 1
min_prod_last_period = min_prod_last_period =
prod_above[gn, t-1] + g.min_power[t-1] * is_on[gn, t-1] prod_above[sc.name, gn, t-1] +
g.min_power[t-1] * is_on[gn, t-1]
else else
min_prod_last_period = max(g.initial_power, 0.0) min_prod_last_period = max(g.initial_power, 0.0)
end end
@@ -76,7 +78,7 @@ function _add_ramp_eqs!(
# Modified version of equation (35) in Kneuven et al. (2020) # Modified version of equation (35) in Kneuven et al. (2020)
# Equivalent to (24) # Equivalent to (24)
eq_str_ramp_up[gn, t] = @constraint( eq_str_ramp_up[sc.name, gn, t] = @constraint(
model, model,
max_prod_this_period - min_prod_last_period <= max_prod_this_period - min_prod_last_period <=
(SU - RU) * switch_on[gn, t] + RU * is_on[gn, t] (SU - RU) * switch_on[gn, t] + RU * is_on[gn, t]
@@ -88,7 +90,7 @@ function _add_ramp_eqs!(
t > 1 && (RESERVES_WHEN_SHUT_DOWN || RESERVES_WHEN_RAMP_DOWN) ? t > 1 && (RESERVES_WHEN_SHUT_DOWN || RESERVES_WHEN_RAMP_DOWN) ?
reserve[t-1] : 0.0 reserve[t-1] : 0.0
) )
min_prod_this_period = prod_above[gn, t] min_prod_this_period = prod_above[sc.name, gn, t]
on_last_period = 0.0 on_last_period = 0.0
if t > 1 if t > 1
on_last_period = is_on[gn, t-1] on_last_period = is_on[gn, t-1]
@@ -98,7 +100,7 @@ function _add_ramp_eqs!(
if t > 1 && time_invariant if t > 1 && time_invariant
# Equation (36) in Kneuven et al. (2020) # Equation (36) in Kneuven et al. (2020)
eq_str_ramp_down[gn, t] = @constraint( eq_str_ramp_down[sc.name, gn, t] = @constraint(
model, model,
max_prod_last_period - min_prod_this_period <= max_prod_last_period - min_prod_this_period <=
(SD - g.min_power[t] - RD) * switch_off[gn, t] + (SD - g.min_power[t] - RD) * switch_off[gn, t] +
@@ -110,7 +112,7 @@ function _add_ramp_eqs!(
# Modified version of equation (36) in Kneuven et al. (2020) # Modified version of equation (36) in Kneuven et al. (2020)
# Equivalent to (25) # Equivalent to (25)
eq_str_ramp_down[gn, t] = @constraint( eq_str_ramp_down[sc.name, gn, t] = @constraint(
model, model,
max_prod_last_period - min_prod_this_period <= max_prod_last_period - min_prod_this_period <=
(SD - RD) * switch_off[gn, t] + RD * on_last_period (SD - RD) * switch_off[gn, t] + RD * on_last_period

View File

@@ -4,34 +4,35 @@
function _add_production_vars!( function _add_production_vars!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation_prod_vars::Gar1962.ProdVars, formulation_prod_vars::Gar1962.ProdVars,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
prod_above = _init(model, :prod_above) prod_above = _init(model, :prod_above)
segprod = _init(model, :segprod) segprod = _init(model, :segprod)
for t in 1:model[:instance].time for t in 1:model[:instance].time
for k in 1:length(g.cost_segments) for k in 1:length(g.cost_segments)
segprod[g.name, t, k] = @variable(model, lower_bound = 0) segprod[sc.name, g.name, t, k] = @variable(model, lower_bound = 0)
end end
prod_above[g.name, t] = @variable(model, lower_bound = 0) prod_above[sc.name, g.name, t] = @variable(model, lower_bound = 0)
end end
return return
end end
function _add_production_limit_eqs!( function _add_production_limit_eqs!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation_prod_vars::Gar1962.ProdVars, formulation_prod_vars::Gar1962.ProdVars,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
eq_prod_limit = _init(model, :eq_prod_limit) eq_prod_limit = _init(model, :eq_prod_limit)
is_on = model[:is_on] is_on = model[:is_on]
prod_above = model[:prod_above] prod_above = model[:prod_above]
reserve = _total_reserves(model, g) reserve = _total_reserves(model, g, sc)
gn = g.name gn = g.name
for t in 1:model[:instance].time for t in 1:model[:instance].time
# Objective function terms for production costs # Objective function terms for production costs
# Part of (69) of Kneuven et al. (2020) as C^R_g * u_g(t) term # Part of (69) of Kneuven et al. (2020) as C^R_g * u_g(t) term
add_to_expression!(model[:obj], is_on[gn, t], g.min_power_cost[t])
# Production limit # Production limit
# Equation (18) in Kneuven et al. (2020) # Equation (18) in Kneuven et al. (2020)
@@ -42,9 +43,10 @@ function _add_production_limit_eqs!(
if power_diff < 1e-7 if power_diff < 1e-7
power_diff = 0.0 power_diff = 0.0
end end
eq_prod_limit[gn, t] = @constraint( eq_prod_limit[sc.name, gn, t] = @constraint(
model, model,
prod_above[gn, t] + reserve[t] <= power_diff * is_on[gn, t] prod_above[sc.name, gn, t] + reserve[t] <=
power_diff * is_on[gn, t]
) )
end end
end end

View File

@@ -4,10 +4,11 @@
function _add_production_piecewise_linear_eqs!( function _add_production_piecewise_linear_eqs!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation_prod_vars::Gar1962.ProdVars, formulation_prod_vars::Gar1962.ProdVars,
formulation_pwl_costs::Gar1962.PwlCosts, formulation_pwl_costs::Gar1962.PwlCosts,
formulation_status_vars::Gar1962.StatusVars, formulation_status_vars::Gar1962.StatusVars,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
eq_prod_above_def = _init(model, :eq_prod_above_def) eq_prod_above_def = _init(model, :eq_prod_above_def)
eq_segprod_limit = _init(model, :eq_segprod_limit) eq_segprod_limit = _init(model, :eq_segprod_limit)
@@ -24,9 +25,10 @@ function _add_production_piecewise_linear_eqs!(
for t in 1:model[:instance].time for t in 1:model[:instance].time
# Definition of production # Definition of production
# Equation (43) in Kneuven et al. (2020) # Equation (43) in Kneuven et al. (2020)
eq_prod_above_def[gn, t] = @constraint( eq_prod_above_def[sc.name, gn, t] = @constraint(
model, model,
prod_above[gn, t] == sum(segprod[gn, t, k] for k in 1:K) prod_above[sc.name, gn, t] ==
sum(segprod[sc.name, gn, t, k] for k in 1:K)
) )
for k in 1:K for k in 1:K
@@ -37,21 +39,25 @@ function _add_production_piecewise_linear_eqs!(
# difference between max power for segments k and k-1 so the # difference between max power for segments k and k-1 so the
# value of cost_segments[k].mw[t] is the max production *for # value of cost_segments[k].mw[t] is the max production *for
# that segment* # that segment*
eq_segprod_limit[gn, t, k] = @constraint( eq_segprod_limit[sc.name, gn, t, k] = @constraint(
model, model,
segprod[gn, t, k] <= g.cost_segments[k].mw[t] * is_on[gn, t] segprod[sc.name, gn, t, k] <=
g.cost_segments[k].mw[t] * is_on[gn, t]
) )
# Also add this as an explicit upper bound on segprod to make the # Also add this as an explicit upper bound on segprod to make the
# solver's work a bit easier # solver's work a bit easier
set_upper_bound(segprod[gn, t, k], g.cost_segments[k].mw[t]) set_upper_bound(
segprod[sc.name, gn, t, k],
g.cost_segments[k].mw[t],
)
# Objective function # Objective function
# Equation (44) in Kneuven et al. (2020) # Equation (44) in Kneuven et al. (2020)
add_to_expression!( add_to_expression!(
model[:obj], model[:obj],
segprod[gn, t, k], segprod[sc.name, gn, t, k],
g.cost_segments[k].cost[t], sc.probability * g.cost_segments[k].cost[t],
) )
end end
end end

View File

@@ -4,7 +4,7 @@
function _add_status_vars!( function _add_status_vars!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation_status_vars::Gar1962.StatusVars, formulation_status_vars::Gar1962.StatusVars,
)::Nothing )::Nothing
is_on = _init(model, :is_on) is_on = _init(model, :is_on)
@@ -20,13 +20,14 @@ function _add_status_vars!(
switch_on[g.name, t] = @variable(model, binary = true) switch_on[g.name, t] = @variable(model, binary = true)
switch_off[g.name, t] = @variable(model, binary = true) switch_off[g.name, t] = @variable(model, binary = true)
end end
add_to_expression!(model[:obj], is_on[g.name, t], g.min_power_cost[t])
end end
return return
end end
function _add_status_eqs!( function _add_status_eqs!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation_status_vars::Gar1962.StatusVars, formulation_status_vars::Gar1962.StatusVars,
)::Nothing )::Nothing
eq_binary_link = _init(model, :eq_binary_link) eq_binary_link = _init(model, :eq_binary_link)

View File

@@ -4,10 +4,11 @@
function _add_production_piecewise_linear_eqs!( function _add_production_piecewise_linear_eqs!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation_prod_vars::Gar1962.ProdVars, formulation_prod_vars::Gar1962.ProdVars,
formulation_pwl_costs::KnuOstWat2018.PwlCosts, formulation_pwl_costs::KnuOstWat2018.PwlCosts,
formulation_status_vars::Gar1962.StatusVars, formulation_status_vars::Gar1962.StatusVars,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
eq_prod_above_def = _init(model, :eq_prod_above_def) eq_prod_above_def = _init(model, :eq_prod_above_def)
eq_segprod_limit_a = _init(model, :eq_segprod_limit_a) eq_segprod_limit_a = _init(model, :eq_segprod_limit_a)
@@ -58,27 +59,27 @@ function _add_production_piecewise_linear_eqs!(
if g.min_uptime > 1 if g.min_uptime > 1
# Equation (46) in Kneuven et al. (2020) # Equation (46) in Kneuven et al. (2020)
eq_segprod_limit_a[gn, t, k] = @constraint( eq_segprod_limit_a[sc.name, gn, t, k] = @constraint(
model, model,
segprod[gn, t, k] <= segprod[sc.name, gn, t, k] <=
g.cost_segments[k].mw[t] * is_on[gn, t] - g.cost_segments[k].mw[t] * is_on[gn, t] -
Cv * switch_on[gn, t] - Cv * switch_on[gn, t] -
(t < T ? Cw * switch_off[gn, t+1] : 0.0) (t < T ? Cw * switch_off[gn, t+1] : 0.0)
) )
else else
# Equation (47a)/(48a) in Kneuven et al. (2020) # Equation (47a)/(48a) in Kneuven et al. (2020)
eq_segprod_limit_b[gn, t, k] = @constraint( eq_segprod_limit_b[sc.name, gn, t, k] = @constraint(
model, model,
segprod[gn, t, k] <= segprod[sc.name, gn, t, k] <=
g.cost_segments[k].mw[t] * is_on[gn, t] - g.cost_segments[k].mw[t] * is_on[gn, t] -
Cv * switch_on[gn, t] - Cv * switch_on[gn, t] -
(t < T ? max(0, Cv - Cw) * switch_off[gn, t+1] : 0.0) (t < T ? max(0, Cv - Cw) * switch_off[gn, t+1] : 0.0)
) )
# Equation (47b)/(48b) in Kneuven et al. (2020) # Equation (47b)/(48b) in Kneuven et al. (2020)
eq_segprod_limit_c[gn, t, k] = @constraint( eq_segprod_limit_c[sc.name, gn, t, k] = @constraint(
model, model,
segprod[gn, t, k] <= segprod[sc.name, gn, t, k] <=
g.cost_segments[k].mw[t] * is_on[gn, t] - g.cost_segments[k].mw[t] * is_on[gn, t] -
max(0, Cw - Cv) * switch_on[gn, t] - max(0, Cw - Cv) * switch_on[gn, t] -
(t < T ? Cw * switch_off[gn, t+1] : 0.0) (t < T ? Cw * switch_off[gn, t+1] : 0.0)
@@ -87,22 +88,26 @@ function _add_production_piecewise_linear_eqs!(
# Definition of production # Definition of production
# Equation (43) in Kneuven et al. (2020) # Equation (43) in Kneuven et al. (2020)
eq_prod_above_def[gn, t] = @constraint( eq_prod_above_def[sc.name, gn, t] = @constraint(
model, model,
prod_above[gn, t] == sum(segprod[gn, t, k] for k in 1:K) prod_above[sc.name, gn, t] ==
sum(segprod[sc.name, gn, t, k] for k in 1:K)
) )
# Objective function # Objective function
# Equation (44) in Kneuven et al. (2020) # Equation (44) in Kneuven et al. (2020)
add_to_expression!( add_to_expression!(
model[:obj], model[:obj],
segprod[gn, t, k], segprod[sc.name, gn, t, k],
g.cost_segments[k].cost[t], sc.probability * g.cost_segments[k].cost[t],
) )
# Also add an explicit upper bound on segprod to make the solver's # Also add an explicit upper bound on segprod to make the solver's
# work a bit easier # work a bit easier
set_upper_bound(segprod[gn, t, k], g.cost_segments[k].mw[t]) set_upper_bound(
segprod[sc.name, gn, t, k],
g.cost_segments[k].mw[t],
)
end end
end end
end end

View File

@@ -4,10 +4,11 @@
function _add_ramp_eqs!( function _add_ramp_eqs!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation_prod_vars::Gar1962.ProdVars, formulation_prod_vars::Gar1962.ProdVars,
formulation_ramping::MorLatRam2013.Ramping, formulation_ramping::MorLatRam2013.Ramping,
formulation_status_vars::Gar1962.StatusVars, formulation_status_vars::Gar1962.StatusVars,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
# TODO: Move upper case constants to model[:instance] # TODO: Move upper case constants to model[:instance]
RESERVES_WHEN_START_UP = true RESERVES_WHEN_START_UP = true
@@ -22,7 +23,7 @@ function _add_ramp_eqs!(
gn = g.name gn = g.name
eq_ramp_down = _init(model, :eq_ramp_down) eq_ramp_down = _init(model, :eq_ramp_down)
eq_ramp_up = _init(model, :eq_str_ramp_up) eq_ramp_up = _init(model, :eq_str_ramp_up)
reserve = _total_reserves(model, g) reserve = _total_reserves(model, g, sc)
# Gar1962.ProdVars # Gar1962.ProdVars
prod_above = model[:prod_above] prod_above = model[:prod_above]
@@ -39,10 +40,10 @@ function _add_ramp_eqs!(
# Ramp up limit # Ramp up limit
if t == 1 if t == 1
if is_initially_on if is_initially_on
eq_ramp_up[gn, t] = @constraint( eq_ramp_up[sc.name, gn, t] = @constraint(
model, model,
g.min_power[t] + g.min_power[t] +
prod_above[gn, t] + prod_above[sc.name, gn, t] +
(RESERVES_WHEN_RAMP_UP ? reserve[t] : 0.0) <= (RESERVES_WHEN_RAMP_UP ? reserve[t] : 0.0) <=
g.initial_power + RU g.initial_power + RU
) )
@@ -58,13 +59,14 @@ function _add_ramp_eqs!(
SU = g.startup_limit SU = g.startup_limit
max_prod_this_period = max_prod_this_period =
g.min_power[t] * is_on[gn, t] + g.min_power[t] * is_on[gn, t] +
prod_above[gn, t] + prod_above[sc.name, gn, t] +
( (
RESERVES_WHEN_START_UP || RESERVES_WHEN_RAMP_UP ? RESERVES_WHEN_START_UP || RESERVES_WHEN_RAMP_UP ?
reserve[t] : 0.0 reserve[t] : 0.0
) )
min_prod_last_period = min_prod_last_period =
g.min_power[t-1] * is_on[gn, t-1] + prod_above[gn, t-1] g.min_power[t-1] * is_on[gn, t-1] +
prod_above[sc.name, gn, t-1]
eq_ramp_up[gn, t] = @constraint( eq_ramp_up[gn, t] = @constraint(
model, model,
max_prod_this_period - min_prod_last_period <= max_prod_this_period - min_prod_last_period <=
@@ -74,11 +76,11 @@ function _add_ramp_eqs!(
# Equation (26) in Kneuven et al. (2020) # Equation (26) in Kneuven et al. (2020)
# TODO: what if RU < SU? Places too stringent an upper bound on # TODO: what if RU < SU? Places too stringent an upper bound on
# prod_above[gn, t] when starting up, creating a discrepancy with (24). # prod_above[gn, t] when starting up, creating a discrepancy with (24).
eq_ramp_up[gn, t] = @constraint( eq_ramp_up[sc.name, gn, t] = @constraint(
model, model,
prod_above[gn, t] + prod_above[sc.name, gn, t] +
(RESERVES_WHEN_RAMP_UP ? reserve[t] : 0.0) - (RESERVES_WHEN_RAMP_UP ? reserve[t] : 0.0) -
prod_above[gn, t-1] <= RU prod_above[sc.name, gn, t-1] <= RU
) )
end end
end end
@@ -90,9 +92,10 @@ function _add_ramp_eqs!(
# min_power + RD < initial_power < SD # min_power + RD < initial_power < SD
# then the generator should be able to shut down at time t = 1, # then the generator should be able to shut down at time t = 1,
# but the constraint below will force the unit to produce power # but the constraint below will force the unit to produce power
eq_ramp_down[gn, t] = @constraint( eq_ramp_down[sc.name, gn, t] = @constraint(
model, model,
g.initial_power - (g.min_power[t] + prod_above[gn, t]) <= RD g.initial_power -
(g.min_power[t] + prod_above[sc.name, gn, t]) <= RD
) )
end end
else else
@@ -102,13 +105,13 @@ function _add_ramp_eqs!(
SD = g.shutdown_limit SD = g.shutdown_limit
max_prod_last_period = max_prod_last_period =
g.min_power[t-1] * is_on[gn, t-1] + g.min_power[t-1] * is_on[gn, t-1] +
prod_above[gn, t-1] + prod_above[sc.name, gn, t-1] +
( (
RESERVES_WHEN_SHUT_DOWN || RESERVES_WHEN_RAMP_DOWN ? RESERVES_WHEN_SHUT_DOWN || RESERVES_WHEN_RAMP_DOWN ?
reserve[t-1] : 0.0 reserve[t-1] : 0.0
) )
min_prod_this_period = min_prod_this_period =
g.min_power[t] * is_on[gn, t] + prod_above[gn, t] g.min_power[t] * is_on[gn, t] + prod_above[sc.name, gn, t]
eq_ramp_down[gn, t] = @constraint( eq_ramp_down[gn, t] = @constraint(
model, model,
max_prod_last_period - min_prod_this_period <= max_prod_last_period - min_prod_this_period <=
@@ -118,11 +121,11 @@ function _add_ramp_eqs!(
# Equation (27) in Kneuven et al. (2020) # Equation (27) in Kneuven et al. (2020)
# TODO: Similar to above, what to do if shutting down in time t # TODO: Similar to above, what to do if shutting down in time t
# and RD < SD? There is a difference with (25). # and RD < SD? There is a difference with (25).
eq_ramp_down[gn, t] = @constraint( eq_ramp_down[sc.name, gn, t] = @constraint(
model, model,
prod_above[gn, t-1] + prod_above[sc.name, gn, t-1] +
(RESERVES_WHEN_RAMP_DOWN ? reserve[t-1] : 0.0) - (RESERVES_WHEN_RAMP_DOWN ? reserve[t-1] : 0.0) -
prod_above[gn, t] <= RD prod_above[sc.name, gn, t] <= RD
) )
end end
end end
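
In compact form, the two generic ramping constraints above (equations (26) and (27) in Kneuven et al. (2020), now indexed by the scenario s) read:

    p'_{g,t,s} + r_{g,t,s} - p'_{g,t-1,s} \le RU_g      (26)
    p'_{g,t-1,s} + r_{g,t-1,s} - p'_{g,t,s} \le RD_g    (27)

where p' is prod_above, r is the total spinning reserve returned by _total_reserves (included only when the corresponding RESERVES_WHEN_* flag is set), and RU/RD are the ramp-up/ramp-down limits. The t = 1 and startup/shutdown variants shown earlier in the hunk use the unit's initial power in place of the t-1 terms.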


@@ -4,7 +4,7 @@
function _add_startup_cost_eqs!( function _add_startup_cost_eqs!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation::MorLatRam2013.StartupCosts, formulation::MorLatRam2013.StartupCosts,
)::Nothing )::Nothing
eq_startup_choose = _init(model, :eq_startup_choose) eq_startup_choose = _init(model, :eq_startup_choose)


@@ -4,15 +4,16 @@
function _add_ramp_eqs!( function _add_ramp_eqs!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation_prod_vars::Gar1962.ProdVars, formulation_prod_vars::Gar1962.ProdVars,
formulation_ramping::PanGua2016.Ramping, formulation_ramping::PanGua2016.Ramping,
formulation_status_vars::Gar1962.StatusVars, formulation_status_vars::Gar1962.StatusVars,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
# TODO: Move upper case constants to model[:instance] # TODO: Move upper case constants to model[:instance]
RESERVES_WHEN_SHUT_DOWN = true RESERVES_WHEN_SHUT_DOWN = true
gn = g.name gn = g.name
reserve = _total_reserves(model, g) reserve = _total_reserves(model, g, sc)
eq_str_prod_limit = _init(model, :eq_str_prod_limit) eq_str_prod_limit = _init(model, :eq_str_prod_limit)
eq_prod_limit_ramp_up_extra_period = eq_prod_limit_ramp_up_extra_period =
_init(model, :eq_prod_limit_ramp_up_extra_period) _init(model, :eq_prod_limit_ramp_up_extra_period)
@@ -52,9 +53,9 @@ function _add_ramp_eqs!(
# Generalization of (20) # Generalization of (20)
# Necessary that if any of the switch_on = 1 in the sum, # Necessary that if any of the switch_on = 1 in the sum,
# then switch_off[gn, t+1] = 0 # then switch_off[gn, t+1] = 0
eq_str_prod_limit[gn, t] = @constraint( eq_str_prod_limit[sc.name, gn, t] = @constraint(
model, model,
prod_above[gn, t] + prod_above[sc.name, gn, t] +
g.min_power[t] * is_on[gn, t] + g.min_power[t] * is_on[gn, t] +
reserve[t] <= reserve[t] <=
Pbar * is_on[gn, t] - Pbar * is_on[gn, t] -
@@ -67,16 +68,17 @@ function _add_ramp_eqs!(
if UT - 2 < TRU if UT - 2 < TRU
# Equation (40) in Kneuven et al. (2020) # Equation (40) in Kneuven et al. (2020)
# Covers an additional time period of the ramp-up trajectory, compared to (38) # Covers an additional time period of the ramp-up trajectory, compared to (38)
eq_prod_limit_ramp_up_extra_period[gn, t] = @constraint( eq_prod_limit_ramp_up_extra_period[sc.name, gn, t] =
model, @constraint(
prod_above[gn, t] + model,
g.min_power[t] * is_on[gn, t] + prod_above[sc.name, gn, t] +
reserve[t] <= g.min_power[t] * is_on[gn, t] +
Pbar * is_on[gn, t] - sum( reserve[t] <=
(Pbar - (SU + i * RU)) * switch_on[gn, t-i] for Pbar * is_on[gn, t] - sum(
i in 0:min(UT - 1, TRU, t - 1) (Pbar - (SU + i * RU)) * switch_on[gn, t-i] for
i in 0:min(UT - 1, TRU, t - 1)
)
) )
)
end end
# Add in shutdown trajectory if KSD >= 0 (else this is dominated by (38)) # Add in shutdown trajectory if KSD >= 0 (else this is dominated by (38))
@@ -84,9 +86,9 @@ function _add_ramp_eqs!(
if KSD > 0 if KSD > 0
KSU = min(TRU, UT - 2 - KSD, t - 1) KSU = min(TRU, UT - 2 - KSD, t - 1)
# Equation (41) in Kneuven et al. (2020) # Equation (41) in Kneuven et al. (2020)
eq_prod_limit_shutdown_trajectory[gn, t] = @constraint( eq_prod_limit_shutdown_trajectory[sc.name, gn, t] = @constraint(
model, model,
prod_above[gn, t] + prod_above[sc.name, gn, t] +
g.min_power[t] * is_on[gn, t] + g.min_power[t] * is_on[gn, t] +
(RESERVES_WHEN_SHUT_DOWN ? reserve[t] : 0.0) <= (RESERVES_WHEN_SHUT_DOWN ? reserve[t] : 0.0) <=
Pbar * is_on[gn, t] - sum( Pbar * is_on[gn, t] - sum(


@@ -4,10 +4,11 @@
function _add_ramp_eqs!( function _add_ramp_eqs!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
::Gar1962.ProdVars, ::Gar1962.ProdVars,
::WanHob2016.Ramping, ::WanHob2016.Ramping,
::Gar1962.StatusVars, ::Gar1962.StatusVars,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
is_initially_on = (g.initial_status > 0) is_initially_on = (g.initial_status > 0)
SU = g.startup_limit SU = g.startup_limit
@@ -38,41 +39,43 @@ function _add_ramp_eqs!(
for t in 1:model[:instance].time for t in 1:model[:instance].time
@constraint( @constraint(
model, model,
prod_above[gn, t] + (is_on[gn, t] * minp[t]) <= mfg[rn, gn, t] prod_above[sc.name, gn, t] + (is_on[gn, t] * minp[t]) <=
mfg[sc.name, gn, t]
) # Eq. (19) in Wang & Hobbs (2016) ) # Eq. (19) in Wang & Hobbs (2016)
@constraint(model, mfg[rn, gn, t] <= is_on[gn, t] * maxp[t]) # Eq. (22) in Wang & Hobbs (2016) @constraint(model, mfg[sc.name, gn, t] <= is_on[gn, t] * maxp[t]) # Eq. (22) in Wang & Hobbs (2016)
if t != model[:instance].time if t != model[:instance].time
@constraint( @constraint(
model, model,
minp[t] * (is_on[gn, t+1] + is_on[gn, t] - 1) <= minp[t] * (is_on[gn, t+1] + is_on[gn, t] - 1) <=
prod_above[gn, t] - dwflexiramp[rn, gn, t] + prod_above[sc.name, gn, t] -
(is_on[gn, t] * minp[t]) dwflexiramp[sc.name, rn, gn, t] + (is_on[gn, t] * minp[t])
) # first inequality of Eq. (20) in Wang & Hobbs (2016) ) # first inequality of Eq. (20) in Wang & Hobbs (2016)
@constraint( @constraint(
model, model,
prod_above[gn, t] - dwflexiramp[rn, gn, t] + prod_above[sc.name, gn, t] -
dwflexiramp[sc.name, rn, gn, t] +
(is_on[gn, t] * minp[t]) <= (is_on[gn, t] * minp[t]) <=
mfg[rn, gn, t+1] + (maxp[t] * (1 - is_on[gn, t+1])) mfg[sc.name, gn, t+1] + (maxp[t] * (1 - is_on[gn, t+1]))
) # second inequality of Eq. (20) in Wang & Hobbs (2016) ) # second inequality of Eq. (20) in Wang & Hobbs (2016)
@constraint( @constraint(
model, model,
minp[t] * (is_on[gn, t+1] + is_on[gn, t] - 1) <= minp[t] * (is_on[gn, t+1] + is_on[gn, t] - 1) <=
prod_above[gn, t] + prod_above[sc.name, gn, t] +
upflexiramp[rn, gn, t] + upflexiramp[sc.name, rn, gn, t] +
(is_on[gn, t] * minp[t]) (is_on[gn, t] * minp[t])
) # first inequality of Eq. (21) in Wang & Hobbs (2016) ) # first inequality of Eq. (21) in Wang & Hobbs (2016)
@constraint( @constraint(
model, model,
prod_above[gn, t] + prod_above[sc.name, gn, t] +
upflexiramp[rn, gn, t] + upflexiramp[sc.name, rn, gn, t] +
(is_on[gn, t] * minp[t]) <= (is_on[gn, t] * minp[t]) <=
mfg[rn, gn, t+1] + (maxp[t] * (1 - is_on[gn, t+1])) mfg[sc.name, gn, t+1] + (maxp[t] * (1 - is_on[gn, t+1]))
) # second inequality of Eq. (21) in Wang & Hobbs (2016) ) # second inequality of Eq. (21) in Wang & Hobbs (2016)
if t != 1 if t != 1
@constraint( @constraint(
model, model,
mfg[rn, gn, t] <= mfg[sc.name, gn, t] <=
prod_above[gn, t-1] + prod_above[sc.name, gn, t-1] +
(is_on[gn, t-1] * minp[t]) + (is_on[gn, t-1] * minp[t]) +
(RU * is_on[gn, t-1]) + (RU * is_on[gn, t-1]) +
(SU * (is_on[gn, t] - is_on[gn, t-1])) + (SU * (is_on[gn, t] - is_on[gn, t-1])) +
@@ -80,8 +83,13 @@ function _add_ramp_eqs!(
) # Eq. (23) in Wang & Hobbs (2016) ) # Eq. (23) in Wang & Hobbs (2016)
@constraint( @constraint(
model, model,
(prod_above[gn, t-1] + (is_on[gn, t-1] * minp[t])) - (
(prod_above[gn, t] + (is_on[gn, t] * minp[t])) <= prod_above[sc.name, gn, t-1] +
(is_on[gn, t-1] * minp[t])
) - (
prod_above[sc.name, gn, t] +
(is_on[gn, t] * minp[t])
) <=
RD * is_on[gn, t] + RD * is_on[gn, t] +
SD * (is_on[gn, t-1] - is_on[gn, t]) + SD * (is_on[gn, t-1] - is_on[gn, t]) +
maxp[t] * (1 - is_on[gn, t-1]) maxp[t] * (1 - is_on[gn, t-1])
@@ -89,7 +97,7 @@ function _add_ramp_eqs!(
else else
@constraint( @constraint(
model, model,
mfg[rn, gn, t] <= mfg[sc.name, gn, t] <=
initial_power + initial_power +
(RU * is_initially_on) + (RU * is_initially_on) +
(SU * (is_on[gn, t] - is_initially_on)) + (SU * (is_on[gn, t] - is_initially_on)) +
@@ -97,8 +105,10 @@ function _add_ramp_eqs!(
) # Eq. (23) in Wang & Hobbs (2016) for the first time period ) # Eq. (23) in Wang & Hobbs (2016) for the first time period
@constraint( @constraint(
model, model,
initial_power - initial_power - (
(prod_above[gn, t] + (is_on[gn, t] * minp[t])) <= prod_above[sc.name, gn, t] +
(is_on[gn, t] * minp[t])
) <=
RD * is_on[gn, t] + RD * is_on[gn, t] +
SD * (is_initially_on - is_on[gn, t]) + SD * (is_initially_on - is_on[gn, t]) +
maxp[t] * (1 - is_initially_on) maxp[t] * (1 - is_initially_on)
@@ -106,7 +116,7 @@ function _add_ramp_eqs!(
end end
@constraint( @constraint(
model, model,
mfg[rn, gn, t] <= mfg[sc.name, gn, t] <=
(SD * (is_on[gn, t] - is_on[gn, t+1])) + (SD * (is_on[gn, t] - is_on[gn, t+1])) +
(maxp[t] * is_on[gn, t+1]) (maxp[t] * is_on[gn, t+1])
) # Eq. (24) in Wang & Hobbs (2016) ) # Eq. (24) in Wang & Hobbs (2016)
@@ -114,11 +124,12 @@ function _add_ramp_eqs!(
model, model,
-RD * is_on[gn, t+1] - -RD * is_on[gn, t+1] -
SD * (is_on[gn, t] - is_on[gn, t+1]) - SD * (is_on[gn, t] - is_on[gn, t+1]) -
maxp[t] * (1 - is_on[gn, t]) <= upflexiramp[rn, gn, t] maxp[t] * (1 - is_on[gn, t]) <=
upflexiramp[sc.name, rn, gn, t]
) # first inequality of Eq. (26) in Wang & Hobbs (2016) ) # first inequality of Eq. (26) in Wang & Hobbs (2016)
@constraint( @constraint(
model, model,
upflexiramp[rn, gn, t] <= upflexiramp[sc.name, rn, gn, t] <=
RU * is_on[gn, t] + RU * is_on[gn, t] +
SU * (is_on[gn, t+1] - is_on[gn, t]) + SU * (is_on[gn, t+1] - is_on[gn, t]) +
maxp[t] * (1 - is_on[gn, t+1]) maxp[t] * (1 - is_on[gn, t+1])
@@ -126,11 +137,12 @@ function _add_ramp_eqs!(
@constraint( @constraint(
model, model,
-RU * is_on[gn, t] - SU * (is_on[gn, t+1] - is_on[gn, t]) - -RU * is_on[gn, t] - SU * (is_on[gn, t+1] - is_on[gn, t]) -
maxp[t] * (1 - is_on[gn, t+1]) <= dwflexiramp[rn, gn, t] maxp[t] * (1 - is_on[gn, t+1]) <=
dwflexiramp[sc.name, rn, gn, t]
) # first inequality of Eq. (27) in Wang & Hobbs (2016) ) # first inequality of Eq. (27) in Wang & Hobbs (2016)
@constraint( @constraint(
model, model,
dwflexiramp[rn, gn, t] <= dwflexiramp[sc.name, rn, gn, t] <=
RD * is_on[gn, t+1] + RD * is_on[gn, t+1] +
SD * (is_on[gn, t] - is_on[gn, t+1]) + SD * (is_on[gn, t] - is_on[gn, t+1]) +
maxp[t] * (1 - is_on[gn, t]) maxp[t] * (1 - is_on[gn, t])
@@ -138,26 +150,27 @@ function _add_ramp_eqs!(
@constraint( @constraint(
model, model,
-maxp[t] * is_on[gn, t] + minp[t] * is_on[gn, t+1] <= -maxp[t] * is_on[gn, t] + minp[t] * is_on[gn, t+1] <=
upflexiramp[rn, gn, t] upflexiramp[sc.name, rn, gn, t]
) # first inequality of Eq. (28) in Wang & Hobbs (2016) ) # first inequality of Eq. (28) in Wang & Hobbs (2016)
@constraint( @constraint(
model, model,
upflexiramp[rn, gn, t] <= maxp[t] * is_on[gn, t+1] upflexiramp[sc.name, rn, gn, t] <= maxp[t] * is_on[gn, t+1]
) # second inequality of Eq. (28) in Wang & Hobbs (2016) ) # second inequality of Eq. (28) in Wang & Hobbs (2016)
@constraint( @constraint(
model, model,
-maxp[t] * is_on[gn, t+1] <= dwflexiramp[rn, gn, t] -maxp[t] * is_on[gn, t+1] <=
dwflexiramp[sc.name, rn, gn, t]
) # first inequality of Eq. (29) in Wang & Hobbs (2016) ) # first inequality of Eq. (29) in Wang & Hobbs (2016)
@constraint( @constraint(
model, model,
dwflexiramp[rn, gn, t] <= dwflexiramp[sc.name, rn, gn, t] <=
(maxp[t] * is_on[gn, t]) - (minp[t] * is_on[gn, t+1]) (maxp[t] * is_on[gn, t]) - (minp[t] * is_on[gn, t+1])
) # second inequality of Eq. (29) in Wang & Hobbs (2016) ) # second inequality of Eq. (29) in Wang & Hobbs (2016)
else else
@constraint( @constraint(
model, model,
mfg[rn, gn, t] <= mfg[sc.name, gn, t] <=
prod_above[gn, t-1] + prod_above[sc.name, gn, t-1] +
(is_on[gn, t-1] * minp[t]) + (is_on[gn, t-1] * minp[t]) +
(RU * is_on[gn, t-1]) + (RU * is_on[gn, t-1]) +
(SU * (is_on[gn, t] - is_on[gn, t-1])) + (SU * (is_on[gn, t] - is_on[gn, t-1])) +
@@ -165,8 +178,11 @@ function _add_ramp_eqs!(
) # Eq. (23) in Wang & Hobbs (2016) for the last time period ) # Eq. (23) in Wang & Hobbs (2016) for the last time period
@constraint( @constraint(
model, model,
(prod_above[gn, t-1] + (is_on[gn, t-1] * minp[t])) - (
(prod_above[gn, t] + (is_on[gn, t] * minp[t])) <= prod_above[sc.name, gn, t-1] +
(is_on[gn, t-1] * minp[t])
) -
(prod_above[sc.name, gn, t] + (is_on[gn, t] * minp[t])) <=
RD * is_on[gn, t] + RD * is_on[gn, t] +
SD * (is_on[gn, t-1] - is_on[gn, t]) + SD * (is_on[gn, t-1] - is_on[gn, t]) +
maxp[t] * (1 - is_on[gn, t-1]) maxp[t] * (1 - is_on[gn, t-1])
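
The maximum-feasible-generation variable mfg (written as \bar{g}_{its} in Wang & Hobbs (2016)), which the block above now indexes by scenario, is pinned down by equations (19) and (22), restated here for reference:

    p'_{g,t,s} + \underline{P}_{g,t}\, u_{g,t} \le \bar{g}_{g,t,s}    (19)
    \bar{g}_{g,t,s} \le \bar{P}_{g,t}\, u_{g,t}                        (22)

with u = is_on, \underline{P} = min_power, and \bar{P} = max_power; the remaining constraints (20)-(29) bound the up- and down-flexiramp contributions against this quantity.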


@@ -4,6 +4,7 @@
""" """
Formulation described in: Formulation described in:
B. Wang and B. F. Hobbs, "Real-Time Markets for Flexiramp: A Stochastic B. Wang and B. F. Hobbs, "Real-Time Markets for Flexiramp: A Stochastic
Unit Commitment-Based Analysis," in IEEE Transactions on Power Systems, Unit Commitment-Based Analysis," in IEEE Transactions on Power Systems,
vol. 31, no. 2, pp. 846-860, March 2016, doi: 10.1109/TPWRS.2015.2411268. vol. 31, no. 2, pp. 846-860, March 2016, doi: 10.1109/TPWRS.2015.2411268.


@@ -2,22 +2,30 @@
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved. # Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details. # Released under the modified BSD license. See COPYING.md for more details.
function _add_bus!(model::JuMP.Model, b::Bus)::Nothing function _add_bus!(
model::JuMP.Model,
b::Bus,
sc::UnitCommitmentScenario,
)::Nothing
net_injection = _init(model, :expr_net_injection) net_injection = _init(model, :expr_net_injection)
curtail = _init(model, :curtail) curtail = _init(model, :curtail)
for t in 1:model[:instance].time for t in 1:model[:instance].time
# Fixed load # Fixed load
net_injection[b.name, t] = AffExpr(-b.load[t]) net_injection[sc.name, b.name, t] = AffExpr(-b.load[t])
# Load curtailment # Load curtailment
curtail[b.name, t] = curtail[sc.name, b.name, t] =
@variable(model, lower_bound = 0, upper_bound = b.load[t]) @variable(model, lower_bound = 0, upper_bound = b.load[t])
add_to_expression!(net_injection[b.name, t], curtail[b.name, t], 1.0) add_to_expression!(
net_injection[sc.name, b.name, t],
curtail[sc.name, b.name, t],
1.0,
)
add_to_expression!( add_to_expression!(
model[:obj], model[:obj],
curtail[b.name, t], curtail[sc.name, b.name, t],
model[:instance].power_balance_penalty[t], sc.power_balance_penalty[t] * sc.probability,
) )
end end
return return
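
In summary, for every bus b, time t, and scenario s the loop above seeds the net-injection expression with the fixed load and a curtailment variable, and penalizes curtailment in the objective:

    n^{\text{inj}}_{b,t,s} = -D_{b,t} + c_{b,t,s} + \dots, \qquad 0 \le c_{b,t,s} \le D_{b,t}
    \text{obj} \mathrel{+}= \pi_s \, \sigma_{t} \, c_{b,t,s}

where D is the bus load, c the curtailment variable, \sigma the scenario's power_balance_penalty (now read from sc instead of the instance), and the ellipsis stands for the injections added later by thermal units, storage, and price-sensitive loads.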


@@ -6,43 +6,43 @@ function _add_transmission_line!(
model::JuMP.Model, model::JuMP.Model,
lm::TransmissionLine, lm::TransmissionLine,
f::ShiftFactorsFormulation, f::ShiftFactorsFormulation,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
overflow = _init(model, :overflow) overflow = _init(model, :overflow)
for t in 1:model[:instance].time for t in 1:model[:instance].time
overflow[lm.name, t] = @variable(model, lower_bound = 0) overflow[sc.name, lm.name, t] = @variable(model, lower_bound = 0)
add_to_expression!( add_to_expression!(
model[:obj], model[:obj],
overflow[lm.name, t], overflow[sc.name, lm.name, t],
lm.flow_limit_penalty[t], lm.flow_limit_penalty[t] * sc.probability,
) )
end end
return return
end end
function _setup_transmission( function _setup_transmission(
model::JuMP.Model,
formulation::ShiftFactorsFormulation, formulation::ShiftFactorsFormulation,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
instance = model[:instance]
isf = formulation.precomputed_isf isf = formulation.precomputed_isf
lodf = formulation.precomputed_lodf lodf = formulation.precomputed_lodf
if length(instance.buses) == 1 if length(sc.buses) == 1
isf = zeros(0, 0) isf = zeros(0, 0)
lodf = zeros(0, 0) lodf = zeros(0, 0)
elseif isf === nothing elseif isf === nothing
@info "Computing injection shift factors..." @info "Computing injection shift factors..."
time_isf = @elapsed begin time_isf = @elapsed begin
isf = UnitCommitment._injection_shift_factors( isf = UnitCommitment._injection_shift_factors(
lines = instance.lines, buses = sc.buses,
buses = instance.buses, lines = sc.lines,
) )
end end
@info @sprintf("Computed ISF in %.2f seconds", time_isf) @info @sprintf("Computed ISF in %.2f seconds", time_isf)
@info "Computing line outage factors..." @info "Computing line outage factors..."
time_lodf = @elapsed begin time_lodf = @elapsed begin
lodf = UnitCommitment._line_outage_factors( lodf = UnitCommitment._line_outage_factors(
lines = instance.lines, buses = sc.buses,
buses = instance.buses, lines = sc.lines,
isf = isf, isf = isf,
) )
end end
@@ -55,7 +55,7 @@ function _setup_transmission(
isf[abs.(isf).<formulation.isf_cutoff] .= 0 isf[abs.(isf).<formulation.isf_cutoff] .= 0
lodf[abs.(lodf).<formulation.lodf_cutoff] .= 0 lodf[abs.(lodf).<formulation.lodf_cutoff] .= 0
end end
model[:isf] = isf sc.isf = isf
model[:lodf] = lodf sc.lodf = lodf
return return
end end


@@ -5,21 +5,26 @@
function _add_price_sensitive_load!( function _add_price_sensitive_load!(
model::JuMP.Model, model::JuMP.Model,
ps::PriceSensitiveLoad, ps::PriceSensitiveLoad,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
loads = _init(model, :loads) loads = _init(model, :loads)
net_injection = _init(model, :expr_net_injection) net_injection = _init(model, :expr_net_injection)
for t in 1:model[:instance].time for t in 1:model[:instance].time
# Decision variable # Decision variable
loads[ps.name, t] = loads[sc.name, ps.name, t] =
@variable(model, lower_bound = 0, upper_bound = ps.demand[t]) @variable(model, lower_bound = 0, upper_bound = ps.demand[t])
# Objective function terms # Objective function terms
add_to_expression!(model[:obj], loads[ps.name, t], -ps.revenue[t]) add_to_expression!(
model[:obj],
loads[sc.name, ps.name, t],
-ps.revenue[t] * sc.probability,
)
# Net injection # Net injection
add_to_expression!( add_to_expression!(
net_injection[ps.bus.name, t], net_injection[sc.name, ps.bus.name, t],
loads[ps.name, t], loads[sc.name, ps.name, t],
-1.0, -1.0,
) )
end end


@@ -0,0 +1,35 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
function _add_profiled_unit!(
model::JuMP.Model,
pu::ProfiledUnit,
sc::UnitCommitmentScenario,
)::Nothing
punits = _init(model, :prod_profiled)
net_injection = _init(model, :expr_net_injection)
for t in 1:model[:instance].time
# Decision variable
punits[sc.name, pu.name, t] = @variable(
model,
lower_bound = pu.min_power[t],
upper_bound = pu.max_power[t]
)
# Objective function terms
add_to_expression!(
model[:obj],
punits[sc.name, pu.name, t],
pu.cost[t] * sc.probability,
)
# Net injection
add_to_expression!(
net_injection[sc.name, pu.bus.name, t],
punits[sc.name, pu.name, t],
1.0,
)
end
return
end


@@ -10,15 +10,15 @@ using SparseArrays, Base.Threads, LinearAlgebra, JuMP
Returns a (B-1)xL matrix M, where B is the number of buses and L is the number Returns a (B-1)xL matrix M, where B is the number of buses and L is the number
of transmission lines. For a given bus b and transmission line l, the entry of transmission lines. For a given bus b and transmission line l, the entry
M[l.offset, b.offset] indicates the amount of power (in MW) that flows through M[l.offset, b.offset] indicates the amount of power (in MW) that flows through
transmission line l when 1 MW of power is injected at the slack bus (the bus transmission line l when 1 MW of power is injected at b and withdrawn from the
that has offset zero) and withdrawn from b. slack bus (the bus that has offset zero).
""" """
function _injection_shift_factors(; function _injection_shift_factors(;
buses::Array{Bus}, buses::Array{Bus},
lines::Array{TransmissionLine}, lines::Array{TransmissionLine},
) )
susceptance = _susceptance_matrix(lines) susceptance = _susceptance_matrix(lines)
incidence = _reduced_incidence_matrix(lines = lines, buses = buses) incidence = _reduced_incidence_matrix(buses = buses, lines = lines)
laplacian = transpose(incidence) * susceptance * incidence laplacian = transpose(incidence) * susceptance * incidence
isf = susceptance * incidence * inv(Array(laplacian)) isf = susceptance * incidence * inv(Array(laplacian))
return isf return isf
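
With B the diagonal branch susceptance matrix and A the reduced incidence matrix (slack-bus column removed), the function above computes

    L = A^{\top} B A, \qquad \text{ISF} = B\, A\, L^{-1}

so entry ISF[l, b] gives the MW flow on line l per MW injected at bus b and withdrawn at the slack bus, which is what the corrected docstring now states.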


@@ -0,0 +1,125 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
function _add_storage_unit!(
model::JuMP.Model,
su::StorageUnit,
sc::UnitCommitmentScenario,
)::Nothing
# Initialize variables
storage_level = _init(model, :storage_level)
charge_rate = _init(model, :charge_rate)
discharge_rate = _init(model, :discharge_rate)
is_charging = _init(model, :is_charging)
is_discharging = _init(model, :is_discharging)
eq_min_charge_rate = _init(model, :eq_min_charge_rate)
eq_max_charge_rate = _init(model, :eq_max_charge_rate)
eq_min_discharge_rate = _init(model, :eq_min_discharge_rate)
eq_max_discharge_rate = _init(model, :eq_max_discharge_rate)
# Initialize constraints
net_injection = _init(model, :expr_net_injection)
eq_storage_transition = _init(model, :eq_storage_transition)
eq_ending_level = _init(model, :eq_ending_level)
# time in hours
time_step = sc.time_step / 60
for t in 1:model[:instance].time
# Decision variable
storage_level[sc.name, su.name, t] = @variable(
model,
lower_bound = su.min_level[t],
upper_bound = su.max_level[t]
)
charge_rate[sc.name, su.name, t] = @variable(model)
discharge_rate[sc.name, su.name, t] = @variable(model)
is_charging[sc.name, su.name, t] = @variable(model, binary = true)
is_discharging[sc.name, su.name, t] = @variable(model, binary = true)
# Objective function terms
add_to_expression!(
model[:obj],
charge_rate[sc.name, su.name, t],
su.charge_cost[t] * sc.probability,
)
add_to_expression!(
model[:obj],
discharge_rate[sc.name, su.name, t],
su.discharge_cost[t] * sc.probability,
)
# Net injection
add_to_expression!(
net_injection[sc.name, su.bus.name, t],
discharge_rate[sc.name, su.name, t],
1.0,
)
add_to_expression!(
net_injection[sc.name, su.bus.name, t],
charge_rate[sc.name, su.name, t],
-1.0,
)
# Simultaneous charging and discharging
if !su.simultaneous_charge_and_discharge[t]
# Initialize the model dictionary
eq_simultaneous_charge_and_discharge =
_init(model, :eq_simultaneous_charge_and_discharge)
# Constraints
eq_simultaneous_charge_and_discharge[sc.name, su.name, t] =
@constraint(
model,
is_charging[sc.name, su.name, t] +
is_discharging[sc.name, su.name, t] <= 1.0
)
end
# Charge and discharge constraints
eq_min_charge_rate[sc.name, su.name, t] = @constraint(
model,
charge_rate[sc.name, su.name, t] >=
is_charging[sc.name, su.name, t] * su.min_charge_rate[t]
)
eq_max_charge_rate[sc.name, su.name, t] = @constraint(
model,
charge_rate[sc.name, su.name, t] <=
is_charging[sc.name, su.name, t] * su.max_charge_rate[t]
)
eq_min_discharge_rate[sc.name, su.name, t] = @constraint(
model,
discharge_rate[sc.name, su.name, t] >=
is_discharging[sc.name, su.name, t] * su.min_discharge_rate[t]
)
eq_max_discharge_rate[sc.name, su.name, t] = @constraint(
model,
discharge_rate[sc.name, su.name, t] <=
is_discharging[sc.name, su.name, t] * su.max_discharge_rate[t]
)
# Storage energy transition constraint
prev_storage_level =
t == 1 ? su.initial_level : storage_level[sc.name, su.name, t-1]
eq_storage_transition[sc.name, su.name, t] = @constraint(
model,
storage_level[sc.name, su.name, t] ==
(1 - su.loss_factor[t]) * prev_storage_level +
charge_rate[sc.name, su.name, t] *
time_step *
su.charge_efficiency[t] -
discharge_rate[sc.name, su.name, t] * time_step /
su.discharge_efficiency[t]
)
# Storage ending level constraint
if t == sc.time
eq_ending_level[sc.name, su.name] = @constraint(
model,
su.min_ending_level <=
storage_level[sc.name, su.name, t] <=
su.max_ending_level
)
end
end
return
end
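
The storage transition constraint above can be written compactly as

    \ell_{t} = (1 - \phi_t)\,\ell_{t-1} + \eta^{c}_{t}\,\Delta\, c_{t} - \frac{\Delta}{\eta^{d}_{t}}\, d_{t}, \qquad \ell_{0} = \text{initial level}

where \ell is storage_level, c and d are the charge and discharge rates (MW), \phi is loss_factor, \eta^{c}/\eta^{d} are the charge/discharge efficiencies, and \Delta = sc.time_step / 60 is the step length in hours; at the final period, \ell_{T} is additionally kept within [min_ending_level, max_ending_level].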


@@ -9,6 +9,27 @@ abstract type StartupCostsFormulation end
abstract type StatusVarsFormulation end abstract type StatusVarsFormulation end
abstract type ProductionVarsFormulation end abstract type ProductionVarsFormulation end
"""
struct Formulation
prod_vars::ProductionVarsFormulation
pwl_costs::PiecewiseLinearCostsFormulation
ramping::RampingFormulation
startup_costs::StartupCostsFormulation
status_vars::StatusVarsFormulation
transmission::TransmissionFormulation
end
Struct provided to `build_model` that holds various formulation components.
# Fields
- `prod_vars`: Formulation for the production decision variables
- `pwl_costs`: Formulation for the piecewise linear costs
- `ramping`: Formulation for ramping constraints
- `startup_costs`: Formulation for time-dependent start-up costs
- `status_vars`: Formulation for the status variables (e.g. `is_on`, `is_off`)
- `transmission`: Formulation for transmission and N-1 security constraints
"""
struct Formulation struct Formulation
prod_vars::ProductionVarsFormulation prod_vars::ProductionVarsFormulation
pwl_costs::PiecewiseLinearCostsFormulation pwl_costs::PiecewiseLinearCostsFormulation
@@ -38,10 +59,10 @@ end
""" """
struct ShiftFactorsFormulation <: TransmissionFormulation struct ShiftFactorsFormulation <: TransmissionFormulation
isf_cutoff::Float64 isf_cutoff::Float64 = 0.005
lodf_cutoff::Float64 lodf_cutoff::Float64 = 0.001
precomputed_isf::Union{Nothing,Matrix{Float64}} precomputed_isf=nothing
precomputed_lodf::Union{Nothing,Matrix{Float64}} precomputed_lodf=nothing
end end
Transmission formulation based on Injection Shift Factors (ISF) and Line Transmission formulation based on Injection Shift Factors (ISF) and Line
@@ -49,15 +70,15 @@ Outage Distribution Factors (LODF). Constraints are enforced in a lazy way.
Arguments Arguments
--------- ---------
- `precomputed_isf::Union{Matrix{Float64},Nothing} = nothing`: - `precomputed_isf`:
the injection shift factors matrix. If not provided, it will be computed. the injection shift factors matrix. If not provided, it will be computed.
- `precomputed_lodf::Union{Matrix{Float64},Nothing} = nothing`: - `precomputed_lodf`:
the line outage distribution factors matrix. If not provided, it will be the line outage distribution factors matrix. If not provided, it will be
computed. computed.
- `isf_cutoff::Float64 = 0.005`: - `isf_cutoff`:
the cutoff that should be applied to the ISF matrix. Entries with magnitude the cutoff that should be applied to the ISF matrix. Entries with magnitude
smaller than this value will be set to zero. smaller than this value will be set to zero.
- `lodf_cutoff::Float64 = 0.001`: - `lodf_cutoff`:
the cutoff that should be applied to the LODF matrix. Entries with magnitude the cutoff that should be applied to the LODF matrix. Entries with magnitude
smaller than this value will be set to zero. smaller than this value will be set to zero.
""" """


@@ -2,54 +2,68 @@
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved. # Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details. # Released under the modified BSD license. See COPYING.md for more details.
function _add_system_wide_eqs!(model::JuMP.Model)::Nothing function _add_system_wide_eqs!(
_add_net_injection_eqs!(model) model::JuMP.Model,
_add_spinning_reserve_eqs!(model) sc::UnitCommitmentScenario,
_add_flexiramp_reserve_eqs!(model) )::Nothing
_add_net_injection_eqs!(model, sc)
_add_spinning_reserve_eqs!(model, sc)
_add_flexiramp_reserve_eqs!(model, sc)
return return
end end
function _add_net_injection_eqs!(model::JuMP.Model)::Nothing function _add_net_injection_eqs!(
model::JuMP.Model,
sc::UnitCommitmentScenario,
)::Nothing
T = model[:instance].time T = model[:instance].time
net_injection = _init(model, :net_injection) net_injection = _init(model, :net_injection)
eq_net_injection = _init(model, :eq_net_injection) eq_net_injection = _init(model, :eq_net_injection)
eq_power_balance = _init(model, :eq_power_balance) eq_power_balance = _init(model, :eq_power_balance)
for t in 1:T, b in model[:instance].buses for t in 1:T, b in sc.buses
n = net_injection[b.name, t] = @variable(model) n = net_injection[sc.name, b.name, t] = @variable(model)
eq_net_injection[b.name, t] = eq_net_injection[sc.name, b.name, t] = @constraint(
@constraint(model, -n + model[:expr_net_injection][b.name, t] == 0) model,
-n + model[:expr_net_injection][sc.name, b.name, t] == 0
)
end end
for t in 1:T for t in 1:T
eq_power_balance[t] = @constraint( eq_power_balance[sc.name, t] = @constraint(
model, model,
sum(net_injection[b.name, t] for b in model[:instance].buses) == 0 sum(net_injection[sc.name, b.name, t] for b in sc.buses) == 0
) )
end end
return return
end end
function _add_spinning_reserve_eqs!(model::JuMP.Model)::Nothing function _add_spinning_reserve_eqs!(
instance = model[:instance] model::JuMP.Model,
sc::UnitCommitmentScenario,
)::Nothing
T = model[:instance].time
eq_min_spinning_reserve = _init(model, :eq_min_spinning_reserve) eq_min_spinning_reserve = _init(model, :eq_min_spinning_reserve)
for r in instance.reserves for r in sc.reserves
r.type == "spinning" || continue r.type == "spinning" || continue
for t in 1:instance.time for t in 1:T
# Equation (68) in Kneuven et al. (2020) # Equation (68) in Kneuven et al. (2020)
# As in Morales-España et al. (2013a) # As in Morales-España et al. (2013a)
# Akin to the alternative formulation with max_power_avail # Akin to the alternative formulation with max_power_avail
# from Carrión and Arroyo (2006) and Ostrowski et al. (2012) # from Carrión and Arroyo (2006) and Ostrowski et al. (2012)
eq_min_spinning_reserve[r.name, t] = @constraint( eq_min_spinning_reserve[sc.name, r.name, t] = @constraint(
model, model,
sum(model[:reserve][r.name, g.name, t] for g in r.units) + sum(
model[:reserve_shortfall][r.name, t] >= r.amount[t] model[:reserve][sc.name, r.name, g.name, t] for
g in r.thermal_units
) + model[:reserve_shortfall][sc.name, r.name, t] >=
r.amount[t]
) )
# Account for shortfall contribution to objective # Account for shortfall contribution to objective
if r.shortfall_penalty >= 0 if r.shortfall_penalty >= 0
add_to_expression!( add_to_expression!(
model[:obj], model[:obj],
r.shortfall_penalty, r.shortfall_penalty * sc.probability,
model[:reserve_shortfall][r.name, t], model[:reserve_shortfall][sc.name, r.name, t],
) )
end end
end end
@@ -57,7 +71,10 @@ function _add_spinning_reserve_eqs!(model::JuMP.Model)::Nothing
return return
end end
function _add_flexiramp_reserve_eqs!(model::JuMP.Model)::Nothing function _add_flexiramp_reserve_eqs!(
model::JuMP.Model,
sc::UnitCommitmentScenario,
)::Nothing
# Note: The flexiramp requirements in Wang & Hobbs (2016) are imposed as hard constraints # Note: The flexiramp requirements in Wang & Hobbs (2016) are imposed as hard constraints
# through Eq. (17) and Eq. (18). The constraints eq_min_upflexiramp and eq_min_dwflexiramp # through Eq. (17) and Eq. (18). The constraints eq_min_upflexiramp and eq_min_dwflexiramp
# provided below are modified versions of Eq. (17) and Eq. (18), respectively, in that # provided below are modified versions of Eq. (17) and Eq. (18), respectively, in that
@@ -65,29 +82,37 @@ function _add_flexiramp_reserve_eqs!(model::JuMP.Model)::Nothing
# objective function. # objective function.
eq_min_upflexiramp = _init(model, :eq_min_upflexiramp) eq_min_upflexiramp = _init(model, :eq_min_upflexiramp)
eq_min_dwflexiramp = _init(model, :eq_min_dwflexiramp) eq_min_dwflexiramp = _init(model, :eq_min_dwflexiramp)
instance = model[:instance] T = model[:instance].time
for r in instance.reserves for r in sc.reserves
r.type == "flexiramp" || continue r.type == "flexiramp" || continue
for t in 1:instance.time for t in 1:T
# Eq. (17) in Wang & Hobbs (2016) # Eq. (17) in Wang & Hobbs (2016)
eq_min_upflexiramp[r.name, t] = @constraint( eq_min_upflexiramp[sc.name, r.name, t] = @constraint(
model, model,
sum(model[:upflexiramp][r.name, g.name, t] for g in r.units) + model[:upflexiramp_shortfall][r.name, t] >= r.amount[t] sum(
model[:upflexiramp][sc.name, r.name, g.name, t] for
g in r.thermal_units
) + model[:upflexiramp_shortfall][sc.name, r.name, t] >=
r.amount[t]
) )
# Eq. (18) in Wang & Hobbs (2016) # Eq. (18) in Wang & Hobbs (2016)
eq_min_dwflexiramp[r.name, t] = @constraint( eq_min_dwflexiramp[sc.name, r.name, t] = @constraint(
model, model,
sum(model[:dwflexiramp][r.name, g.name, t] for g in r.units) + model[:dwflexiramp_shortfall][r.name, t] >= r.amount[t] sum(
model[:dwflexiramp][sc.name, r.name, g.name, t] for
g in r.thermal_units
) + model[:dwflexiramp_shortfall][sc.name, r.name, t] >=
r.amount[t]
) )
# Account for flexiramp shortfall contribution to objective # Account for flexiramp shortfall contribution to objective
if r.shortfall_penalty >= 0 if r.shortfall_penalty >= 0
add_to_expression!( add_to_expression!(
model[:obj], model[:obj],
r.shortfall_penalty, r.shortfall_penalty * sc.probability,
( (
model[:upflexiramp_shortfall][r.name, t] + model[:upflexiramp_shortfall][sc.name, r.name, t] +
model[:dwflexiramp_shortfall][r.name, t] model[:dwflexiramp_shortfall][sc.name, r.name, t]
), ),
) )
end end
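
Restating the reserve requirements that the scenario-indexed constraints above enforce (equation (68) of Kneuven et al. (2020) for spinning reserve, and the shortfall-relaxed versions of equations (17)-(18) of Wang & Hobbs (2016) for flexiramp):

    \sum_{g \in G_r} r_{g,t,s} + \text{sf}_{r,t,s} \ge R_{r,t}                         (68)
    \sum_{g \in G_r} \text{ur}_{g,t,s} + \text{sf}^{\text{up}}_{r,t,s} \ge R_{r,t}
    \sum_{g \in G_r} \text{dr}_{g,t,s} + \text{sf}^{\text{dw}}_{r,t,s} \ge R_{r,t}

where G_r is the set of thermal units providing reserve r and R_{r,t} = r.amount[t]; whenever shortfall_penalty >= 0, each shortfall variable enters the objective weighted by shortfall_penalty * \pi_s.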


@@ -2,7 +2,13 @@
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved. # Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details. # Released under the modified BSD license. See COPYING.md for more details.
function _add_unit!(model::JuMP.Model, g::Unit, formulation::Formulation) # Function for adding variables, constraints, and objective function terms
# related to the binary commitment, startup and shutdown decisions of units
function _add_unit_commitment!(
model::JuMP.Model,
g::ThermalUnit,
formulation::Formulation,
)
if !all(g.must_run) && any(g.must_run) if !all(g.must_run) && any(g.must_run)
error("Partially must-run units are not currently supported") error("Partially must-run units are not currently supported")
end end
@@ -11,22 +17,41 @@ function _add_unit!(model::JuMP.Model, g::Unit, formulation::Formulation)
end end
# Variables # Variables
_add_production_vars!(model, g, formulation.prod_vars)
_add_spinning_reserve_vars!(model, g)
_add_flexiramp_reserve_vars!(model, g)
_add_startup_shutdown_vars!(model, g) _add_startup_shutdown_vars!(model, g)
_add_status_vars!(model, g, formulation.status_vars) _add_status_vars!(model, g, formulation.status_vars)
# Constraints and objective function # Constraints and objective function
_add_min_uptime_downtime_eqs!(model, g) _add_min_uptime_downtime_eqs!(model, g)
_add_net_injection_eqs!(model, g) _add_startup_cost_eqs!(model, g, formulation.startup_costs)
_add_production_limit_eqs!(model, g, formulation.prod_vars) _add_status_eqs!(model, g, formulation.status_vars)
_add_commitment_status_eqs!(model, g)
return
end
# Function for adding variables, constraints, and objective function terms
# related to the continuous dispatch decisions of units
function _add_unit_dispatch!(
model::JuMP.Model,
g::ThermalUnit,
formulation::Formulation,
sc::UnitCommitmentScenario,
)
# Variables
_add_production_vars!(model, g, formulation.prod_vars, sc)
_add_spinning_reserve_vars!(model, g, sc)
_add_flexiramp_reserve_vars!(model, g, sc)
# Constraints and objective function
_add_net_injection_eqs!(model, g, sc)
_add_production_limit_eqs!(model, g, formulation.prod_vars, sc)
_add_production_piecewise_linear_eqs!( _add_production_piecewise_linear_eqs!(
model, model,
g, g,
formulation.prod_vars, formulation.prod_vars,
formulation.pwl_costs, formulation.pwl_costs,
formulation.status_vars, formulation.status_vars,
sc,
) )
_add_ramp_eqs!( _add_ramp_eqs!(
model, model,
@@ -34,26 +59,31 @@ function _add_unit!(model::JuMP.Model, g::Unit, formulation::Formulation)
formulation.prod_vars, formulation.prod_vars,
formulation.ramping, formulation.ramping,
formulation.status_vars, formulation.status_vars,
sc,
) )
_add_startup_cost_eqs!(model, g, formulation.startup_costs) _add_startup_shutdown_limit_eqs!(model, g, sc)
_add_startup_shutdown_limit_eqs!(model, g)
_add_status_eqs!(model, g, formulation.status_vars)
return return
end end
_is_initially_on(g::Unit)::Float64 = (g.initial_status > 0 ? 1.0 : 0.0) _is_initially_on(g::ThermalUnit)::Float64 = (g.initial_status > 0 ? 1.0 : 0.0)
function _add_spinning_reserve_vars!(model::JuMP.Model, g::Unit)::Nothing function _add_spinning_reserve_vars!(
model::JuMP.Model,
g::ThermalUnit,
sc::UnitCommitmentScenario,
)::Nothing
reserve = _init(model, :reserve) reserve = _init(model, :reserve)
reserve_shortfall = _init(model, :reserve_shortfall) reserve_shortfall = _init(model, :reserve_shortfall)
for r in g.reserves for r in g.reserves
r.type == "spinning" || continue r.type == "spinning" || continue
for t in 1:model[:instance].time for t in 1:model[:instance].time
reserve[r.name, g.name, t] = @variable(model, lower_bound = 0) reserve[sc.name, r.name, g.name, t] =
if (r.name, t) ∉ keys(reserve_shortfall) @variable(model, lower_bound = 0)
reserve_shortfall[r.name, t] = @variable(model, lower_bound = 0) if (sc.name, r.name, t) ∉ keys(reserve_shortfall)
reserve_shortfall[sc.name, r.name, t] =
@variable(model, lower_bound = 0)
if r.shortfall_penalty < 0 if r.shortfall_penalty < 0
set_upper_bound(reserve_shortfall[r.name, t], 0.0) set_upper_bound(reserve_shortfall[sc.name, r.name, t], 0.0)
end end
end end
end end
@@ -61,27 +91,37 @@ function _add_spinning_reserve_vars!(model::JuMP.Model, g::Unit)::Nothing
return return
end end
function _add_flexiramp_reserve_vars!(model::JuMP.Model, g::Unit)::Nothing function _add_flexiramp_reserve_vars!(
model::JuMP.Model,
g::ThermalUnit,
sc::UnitCommitmentScenario,
)::Nothing
upflexiramp = _init(model, :upflexiramp) upflexiramp = _init(model, :upflexiramp)
upflexiramp_shortfall = _init(model, :upflexiramp_shortfall) upflexiramp_shortfall = _init(model, :upflexiramp_shortfall)
mfg = _init(model, :mfg) mfg = _init(model, :mfg)
dwflexiramp = _init(model, :dwflexiramp) dwflexiramp = _init(model, :dwflexiramp)
dwflexiramp_shortfall = _init(model, :dwflexiramp_shortfall) dwflexiramp_shortfall = _init(model, :dwflexiramp_shortfall)
for r in g.reserves for t in 1:model[:instance].time
r.type == "flexiramp" || continue # maximum feasible generation, \bar{g_{its}} in Wang & Hobbs (2016)
for t in 1:model[:instance].time mfg[sc.name, g.name, t] = @variable(model, lower_bound = 0)
# maximum feasible generation, \bar{g_{its}} in Wang & Hobbs (2016) for r in g.reserves
mfg[r.name, g.name, t] = @variable(model, lower_bound = 0) r.type == "flexiramp" || continue
upflexiramp[r.name, g.name, t] = @variable(model) # up-flexiramp, ur_{it} in Wang & Hobbs (2016) upflexiramp[sc.name, r.name, g.name, t] = @variable(model) # up-flexiramp, ur_{it} in Wang & Hobbs (2016)
dwflexiramp[r.name, g.name, t] = @variable(model) # down-flexiramp, dr_{it} in Wang & Hobbs (2016) dwflexiramp[sc.name, r.name, g.name, t] = @variable(model) # down-flexiramp, dr_{it} in Wang & Hobbs (2016)
if (r.name, t) ∉ keys(upflexiramp_shortfall) if (sc.name, r.name, t) ∉ keys(upflexiramp_shortfall)
upflexiramp_shortfall[r.name, t] = upflexiramp_shortfall[sc.name, r.name, t] =
@variable(model, lower_bound = 0) @variable(model, lower_bound = 0)
dwflexiramp_shortfall[r.name, t] = dwflexiramp_shortfall[sc.name, r.name, t] =
@variable(model, lower_bound = 0) @variable(model, lower_bound = 0)
if r.shortfall_penalty < 0 if r.shortfall_penalty < 0
set_upper_bound(upflexiramp_shortfall[r.name, t], 0.0) set_upper_bound(
set_upper_bound(dwflexiramp_shortfall[r.name, t], 0.0) upflexiramp_shortfall[sc.name, r.name, t],
0.0,
)
set_upper_bound(
dwflexiramp_shortfall[sc.name, r.name, t],
0.0,
)
end end
end end
end end
@@ -89,7 +129,7 @@ function _add_flexiramp_reserve_vars!(model::JuMP.Model, g::Unit)::Nothing
return return
end end
function _add_startup_shutdown_vars!(model::JuMP.Model, g::Unit)::Nothing function _add_startup_shutdown_vars!(model::JuMP.Model, g::ThermalUnit)::Nothing
startup = _init(model, :startup) startup = _init(model, :startup)
for t in 1:model[:instance].time for t in 1:model[:instance].time
for s in 1:length(g.startup_categories) for s in 1:length(g.startup_categories)
@@ -99,32 +139,36 @@ function _add_startup_shutdown_vars!(model::JuMP.Model, g::Unit)::Nothing
return return
end end
function _add_startup_shutdown_limit_eqs!(model::JuMP.Model, g::Unit)::Nothing function _add_startup_shutdown_limit_eqs!(
model::JuMP.Model,
g::ThermalUnit,
sc::UnitCommitmentScenario,
)::Nothing
eq_shutdown_limit = _init(model, :eq_shutdown_limit) eq_shutdown_limit = _init(model, :eq_shutdown_limit)
eq_startup_limit = _init(model, :eq_startup_limit) eq_startup_limit = _init(model, :eq_startup_limit)
is_on = model[:is_on] is_on = model[:is_on]
prod_above = model[:prod_above] prod_above = model[:prod_above]
reserve = _total_reserves(model, g) reserve = _total_reserves(model, g, sc)
switch_off = model[:switch_off] switch_off = model[:switch_off]
switch_on = model[:switch_on] switch_on = model[:switch_on]
T = model[:instance].time T = model[:instance].time
for t in 1:T for t in 1:T
# Startup limit # Startup limit
eq_startup_limit[g.name, t] = @constraint( eq_startup_limit[sc.name, g.name, t] = @constraint(
model, model,
prod_above[g.name, t] + reserve[t] <= prod_above[sc.name, g.name, t] + reserve[t] <=
(g.max_power[t] - g.min_power[t]) * is_on[g.name, t] - (g.max_power[t] - g.min_power[t]) * is_on[g.name, t] -
max(0, g.max_power[t] - g.startup_limit) * switch_on[g.name, t] max(0, g.max_power[t] - g.startup_limit) * switch_on[g.name, t]
) )
# Shutdown limit # Shutdown limit
if g.initial_power > g.shutdown_limit if g.initial_power > g.shutdown_limit
eq_shutdown_limit[g.name, 0] = eq_shutdown_limit[sc.name, g.name, 0] =
@constraint(model, switch_off[g.name, 1] <= 0) @constraint(model, switch_off[g.name, 1] <= 0)
end end
if t < T if t < T
eq_shutdown_limit[g.name, t] = @constraint( eq_shutdown_limit[sc.name, g.name, t] = @constraint(
model, model,
prod_above[g.name, t] <= prod_above[sc.name, g.name, t] <=
(g.max_power[t] - g.min_power[t]) * is_on[g.name, t] - (g.max_power[t] - g.min_power[t]) * is_on[g.name, t] -
max(0, g.max_power[t] - g.shutdown_limit) * max(0, g.max_power[t] - g.shutdown_limit) *
switch_off[g.name, t+1] switch_off[g.name, t+1]
@@ -136,51 +180,55 @@ end
function _add_ramp_eqs!( function _add_ramp_eqs!(
model::JuMP.Model, model::JuMP.Model,
g::Unit, g::ThermalUnit,
formulation::RampingFormulation, formulation::RampingFormulation,
sc::UnitCommitmentScenario,
)::Nothing )::Nothing
prod_above = model[:prod_above] prod_above = model[:prod_above]
reserve = _total_reserves(model, g) reserve = _total_reserves(model, g, sc)
eq_ramp_up = _init(model, :eq_ramp_up) eq_ramp_up = _init(model, :eq_ramp_up)
eq_ramp_down = _init(model, :eq_ramp_down) eq_ramp_down = _init(model, :eq_ramp_down)
for t in 1:model[:instance].time for t in 1:model[:instance].time
# Ramp up limit # Ramp up limit
if t == 1 if t == 1
if _is_initially_on(g) == 1 if _is_initially_on(g) == 1
eq_ramp_up[g.name, t] = @constraint( eq_ramp_up[sc.name, g.name, t] = @constraint(
model, model,
prod_above[g.name, t] + reserve[t] <= prod_above[sc.name, g.name, t] + reserve[t] <=
(g.initial_power - g.min_power[t]) + g.ramp_up_limit (g.initial_power - g.min_power[t]) + g.ramp_up_limit
) )
end end
else else
eq_ramp_up[g.name, t] = @constraint( eq_ramp_up[sc.name, g.name, t] = @constraint(
model, model,
prod_above[g.name, t] + reserve[t] <= prod_above[sc.name, g.name, t] + reserve[t] <=
prod_above[g.name, t-1] + g.ramp_up_limit prod_above[sc.name, g.name, t-1] + g.ramp_up_limit
) )
end end
# Ramp down limit # Ramp down limit
if t == 1 if t == 1
if _is_initially_on(g) == 1 if _is_initially_on(g) == 1
eq_ramp_down[g.name, t] = @constraint( eq_ramp_down[sc.name, g.name, t] = @constraint(
model, model,
prod_above[g.name, t] >= prod_above[sc.name, g.name, t] >=
(g.initial_power - g.min_power[t]) - g.ramp_down_limit (g.initial_power - g.min_power[t]) - g.ramp_down_limit
) )
end end
else else
eq_ramp_down[g.name, t] = @constraint( eq_ramp_down[sc.name, g.name, t] = @constraint(
model, model,
prod_above[g.name, t] >= prod_above[sc.name, g.name, t] >=
prod_above[g.name, t-1] - g.ramp_down_limit prod_above[sc.name, g.name, t-1] - g.ramp_down_limit
) )
end end
end end
end end
function _add_min_uptime_downtime_eqs!(model::JuMP.Model, g::Unit)::Nothing function _add_min_uptime_downtime_eqs!(
model::JuMP.Model,
g::ThermalUnit,
)::Nothing
is_on = model[:is_on] is_on = model[:is_on]
switch_off = model[:switch_off] switch_off = model[:switch_off]
switch_on = model[:switch_on] switch_on = model[:switch_on]
@@ -223,30 +271,52 @@ function _add_min_uptime_downtime_eqs!(model::JuMP.Model, g::Unit)::Nothing
end end
end end
function _add_net_injection_eqs!(model::JuMP.Model, g::Unit)::Nothing function _add_commitment_status_eqs!(model::JuMP.Model, g::ThermalUnit)::Nothing
is_on = model[:is_on]
T = model[:instance].time
eq_commitment_status = _init(model, :eq_commitment_status)
for t in 1:T
if g.commitment_status[t] !== nothing
eq_commitment_status[g.name, t] = @constraint(
model,
is_on[g.name, t] == (g.commitment_status[t] ? 1.0 : 0.0)
)
end
end
return
end
function _add_net_injection_eqs!(
model::JuMP.Model,
g::ThermalUnit,
sc::UnitCommitmentScenario,
)::Nothing
expr_net_injection = model[:expr_net_injection] expr_net_injection = model[:expr_net_injection]
for t in 1:model[:instance].time for t in 1:model[:instance].time
# Add to net injection expression # Add to net injection expression
add_to_expression!( add_to_expression!(
expr_net_injection[g.bus.name, t], expr_net_injection[sc.name, g.bus.name, t],
model[:prod_above][g.name, t], model[:prod_above][sc.name, g.name, t],
1.0, 1.0,
) )
add_to_expression!( add_to_expression!(
expr_net_injection[g.bus.name, t], expr_net_injection[sc.name, g.bus.name, t],
model[:is_on][g.name, t], model[:is_on][g.name, t],
g.min_power[t], g.min_power[t],
) )
end end
end end
function _total_reserves(model, g)::Vector function _total_reserves(model, g, sc)::Vector
T = model[:instance].time T = model[:instance].time
reserve = [0.0 for _ in 1:T] reserve = [0.0 for _ in 1:T]
spinning_reserves = [r for r in g.reserves if r.type == "spinning"] spinning_reserves = [r for r in g.reserves if r.type == "spinning"]
if !isempty(spinning_reserves) if !isempty(spinning_reserves)
reserve += [ reserve += [
sum(model[:reserve][r.name, g.name, t] for r in spinning_reserves) for t in 1:model[:instance].time sum(
model[:reserve][sc.name, r.name, g.name, t] for
r in spinning_reserves
) for t in 1:model[:instance].time
] ]
end end
return reserve return reserve


@@ -10,37 +10,43 @@ solution. Useful for computing LMPs.
""" """
function fix!(model::JuMP.Model, solution::AbstractDict)::Nothing function fix!(model::JuMP.Model, solution::AbstractDict)::Nothing
instance, T = model[:instance], model[:instance].time instance, T = model[:instance], model[:instance].time
"Thermal production (MW)" keys(solution) ?
solution = Dict("s1" => solution) : nothing
is_on = model[:is_on] is_on = model[:is_on]
prod_above = model[:prod_above] prod_above = model[:prod_above]
reserve = model[:reserve] reserve = model[:reserve]
for g in instance.units for sc in instance.scenarios
for t in 1:T for g in sc.thermal_units
is_on_value = round(solution["Is on"][g.name][t])
prod_value =
round(solution["Production (MW)"][g.name][t], digits = 5)
JuMP.fix(is_on[g.name, t], is_on_value, force = true)
JuMP.fix(
prod_above[g.name, t],
prod_value - is_on_value * g.min_power[t],
force = true,
)
end
end
for r in instance.reserves
r.type == "spinning" || continue
for g in r.units
for t in 1:T for t in 1:T
reserve_value = round( is_on_value = round(solution[sc.name]["Is on"][g.name][t])
solution["Spinning reserve (MW)"][r.name][g.name][t], prod_value = round(
solution[sc.name]["Thermal production (MW)"][g.name][t],
digits = 5, digits = 5,
) )
JuMP.fix(is_on[g.name, t], is_on_value, force = true)
JuMP.fix( JuMP.fix(
reserve[r.name, g.name, t], prod_above[sc.name, g.name, t],
reserve_value, prod_value - is_on_value * g.min_power[t],
force = true, force = true,
) )
end end
end end
for r in sc.reserves
r.type == "spinning" || continue
for g in r.thermal_units
for t in 1:T
reserve_value = round(
solution[sc.name]["Spinning reserve (MW)"][r.name][g.name][t],
digits = 5,
)
JuMP.fix(
reserve[sc.name, r.name, g.name, t],
reserve_value,
force = true,
)
end
end
end
end end
return return
end end


@@ -0,0 +1,230 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using MPI, Printf
using TimerOutputs
import JuMP
const to = TimerOutput()
function optimize!(model::JuMP.Model, method::ProgressiveHedging)::Nothing
mpi = MpiInfo(MPI.COMM_WORLD)
iterations = PHIterationInfo[]
consensus_vars = [var for var in all_variables(model) if is_binary(var)]
nvars = length(consensus_vars)
weights = ones(nvars)
if method.initial_weights !== nothing
weights = copy(method.initial_weights)
end
target = zeros(nvars)
if method.initial_target !== nothing
target = copy(method.initial_target)
end
params = PHSubProblemParams(
ρ = method.ρ,
λ = [method.λ for _ in 1:nvars],
target = target,
)
sp = PHSubProblem(model, model[:obj], consensus_vars, weights)
while true
iteration_time = @elapsed begin
solution = solve_subproblem(sp, params, method.inner_method)
MPI.Barrier(mpi.comm)
global_obj = compute_global_objective(mpi, solution)
target = compute_target(mpi, solution)
update_λ_and_residuals!(solution, params, target)
global_infeas = compute_global_infeasibility(solution, mpi)
global_residual = compute_global_residual(mpi, solution)
if has_numerical_issues(target)
break
end
end
total_elapsed_time =
compute_total_elapsed_time(iteration_time, iterations)
current_iteration = PHIterationInfo(
global_infeas = global_infeas,
global_obj = global_obj,
global_residual = global_residual,
iteration_number = length(iterations) + 1,
iteration_time = iteration_time,
sp_vals = solution.vals,
sp_obj = solution.obj,
target = target,
total_elapsed_time = total_elapsed_time,
)
push!(iterations, current_iteration)
print_progress(mpi, current_iteration, method.print_interval)
if should_stop(mpi, iterations, method.termination)
break
end
end
return
end
function compute_total_elapsed_time(
iteration_time::Float64,
iterations::Array{PHIterationInfo,1},
)::Float64
current_total_time =
    length(iterations) > 0 ? last(iterations).total_elapsed_time : 0.0
return current_total_time + iteration_time
end
function compute_global_objective(
mpi::MpiInfo,
s::PhSubProblemSolution,
)::Float64
global_obj = MPI.Allreduce(s.obj, MPI.SUM, mpi.comm)
global_obj /= mpi.nprocs
return global_obj
end
function compute_target(mpi::MpiInfo, s::PhSubProblemSolution)::Array{Float64,1}
sp_vals = s.vals
target = MPI.Allreduce(sp_vals, MPI.SUM, mpi.comm)
target = target / mpi.nprocs
return target
end
function compute_global_residual(mpi::MpiInfo, s::PhSubProblemSolution)::Float64
n_vars = length(s.vals)
local_residual_sum = abs.(s.residuals)
global_residual_sum = MPI.Allreduce(local_residual_sum, MPI.SUM, mpi.comm)
return sum(global_residual_sum) / n_vars
end
function compute_global_infeasibility(
solution::PhSubProblemSolution,
mpi::MpiInfo,
)::Float64
local_infeasibility = norm(solution.residuals)
global_infeas = MPI.Allreduce(local_infeasibility, MPI.SUM, mpi.comm)
return global_infeas
end
function solve_subproblem(
sp::PHSubProblem,
params::PHSubProblemParams,
method::SolutionMethod,
)::PhSubProblemSolution
G = length(sp.consensus_vars)
if norm(params.λ) < 1e-3
@objective(sp.mip, Min, sp.obj)
else
@objective(
sp.mip,
Min,
sp.obj +
sum(
sp.weights[g] *
params.λ[g] *
(sp.consensus_vars[g] - params.target[g]) for g in 1:G
) +
(params.ρ / 2) * sum(
sp.weights[g] * (sp.consensus_vars[g] - params.target[g])^2 for
g in 1:G
)
)
end
optimize!(sp.mip, method)
obj = objective_value(sp.mip)
sp_vals = value.(sp.consensus_vars)
return PhSubProblemSolution(obj = obj, vals = sp_vals, residuals = zeros(G))
end
function update_λ_and_residuals!(
solution::PhSubProblemSolution,
params::PHSubProblemParams,
target::Array{Float64,1},
)::Nothing
n_vars = length(solution.vals)
params.target = target
for n in 1:n_vars
solution.residuals[n] = solution.vals[n] - params.target[n]
params.λ[n] += params.ρ * solution.residuals[n]
end
end
function print_header(mpi::MpiInfo)::Nothing
if !mpi.root
return
end
@info "Solving via Progressive Hedging:"
@info @sprintf(
"%8s %20s %20s %14s %8s %8s",
"iter",
"obj",
"infeas",
"consensus",
"time-it",
"time"
)
end
function print_progress(
mpi::MpiInfo,
iteration::PHIterationInfo,
print_interval,
)::Nothing
if !mpi.root
return
end
if iteration.iteration_number % print_interval != 0
return
end
@info @sprintf(
"%8d %20.6e %20.6e %12.2f %% %8.2f %8.2f",
iteration.iteration_number,
iteration.global_obj,
iteration.global_infeas,
iteration.global_residual * 100,
iteration.iteration_time,
iteration.total_elapsed_time
)
end
function has_numerical_issues(target::Array{Float64,1})::Bool
if any(isnan, target)
@warn "Numerical issues detected. Stopping."
return true
end
return false
end
function should_stop(
mpi::MpiInfo,
iterations::Array{PHIterationInfo,1},
termination::PHTermination,
)::Bool
if length(iterations) >= termination.max_iterations
if mpi.root
@info "Iteration limit reached. Stopping."
end
return true
end
if length(iterations) < termination.min_iterations
return false
end
if last(iterations).total_elapsed_time > termination.max_time
if mpi.root
@info "Time limit reached. Stopping."
end
return true
end
curr_it = last(iterations)
prev_it = iterations[length(iterations)-1]
if curr_it.global_infeas < termination.min_feasibility
obj_change = abs(prev_it.global_obj - curr_it.global_obj)
if obj_change < termination.min_improvement
if mpi.root
@info "Feasibility limit reached. Stopping."
end
return true
end
end
return false
end
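
The loop above is a standard progressive-hedging scheme over the binary consensus variables: each MPI rank p solves its scenario subproblem with a penalized objective, the consensus target is the across-rank average, and the multipliers are updated against the residuals. In the notation of the code (target = \bar{x}, weights = w, params.ρ = \rho, params.λ = \lambda):

    \bar{x} = \frac{1}{N} \sum_{p=1}^{N} x_p
    \min_x \; f_p(x) + \sum_g w_g \lambda_g (x_g - \bar{x}_g) + \frac{\rho}{2} \sum_g w_g (x_g - \bar{x}_g)^2
    \lambda_g \leftarrow \lambda_g + \rho\,(x_{p,g} - \bar{x}_g)

where N = mpi.nprocs; the residuals x_p - \bar{x} drive both the infeasibility and consensus metrics reported by print_progress.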


@@ -0,0 +1,18 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
function read(
paths::Vector{String},
::ProgressiveHedging,
)::UnitCommitmentInstance
comm = MPI.COMM_WORLD
mpi = MpiInfo(comm)
(length(paths) % mpi.nprocs == 0) || error(
"Number of processes $(mpi.nprocs) is not a divisor of $(length(paths))",
)
bundled_scenarios = length(paths) ÷ mpi.nprocs
sc_num_start = (mpi.rank - 1) * bundled_scenarios + 1
sc_num_end = mpi.rank * bundled_scenarios
return read(paths[sc_num_start:sc_num_end])
end
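As a usage illustration: the scenario files are split into equal, contiguous bundles, one bundle per MPI rank, so with four scenarios and two ranks, rank 1 reads scenarios 1-2 and rank 2 reads scenarios 3-4. A minimal sketch with placeholder file names, assuming the script is launched with `mpiexec -n 2 julia script.jl`:

```julia
using MPI, UnitCommitment
MPI.Init()
# Each rank reads only its own bundle of scenario files (hypothetical names).
instance = UnitCommitment.read(
    ["s1.json", "s2.json", "s3.json", "s4.json"],
    UnitCommitment.ProgressiveHedging(),
)
```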

View File

@@ -0,0 +1,83 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using MPI, DataStructures
const FIRST_STAGE_VARS = ["Is on", "Switch on", "Switch off"]
function solution(model::JuMP.Model, method::ProgressiveHedging)::OrderedDict
comm = MPI.COMM_WORLD
mpi = MpiInfo(comm)
sp_solution = UnitCommitment.solution(model)
gather_solution = OrderedDict()
for (solution_key, dict) in sp_solution
if solution_key !== "Spinning reserve (MW)" &&
solution_key ∉ FIRST_STAGE_VARS
push!(gather_solution, solution_key => OrderedDict())
for (gen_bus_key, values) in dict
global T = length(values)
receive_values =
MPI.UBuffer(Vector{Float64}(undef, T * mpi.nprocs), T)
MPI.Gather!(float.(values), receive_values, comm)
if mpi.root
push!(
gather_solution[solution_key],
gen_bus_key => receive_values.data,
)
end
end
end
end
push!(gather_solution, "Spinning reserve (MW)" => OrderedDict())
for (reserve_type, dict) in sp_solution["Spinning reserve (MW)"]
push!(
gather_solution["Spinning reserve (MW)"],
reserve_type => OrderedDict(),
)
for (gen_key, values) in dict
receive_values =
MPI.UBuffer(Vector{Float64}(undef, T * mpi.nprocs), T)
MPI.Gather!(float.(values), receive_values, comm)
if mpi.root
push!(
gather_solution["Spinning reserve (MW)"][reserve_type],
gen_key => receive_values.data,
)
end
end
end
aggregate_solution = OrderedDict()
if mpi.root
for first_stage_var in FIRST_STAGE_VARS
aggregate_solution[first_stage_var] = OrderedDict()
for gen_key in keys(sp_solution[first_stage_var])
aggregate_solution[first_stage_var][gen_key] =
sp_solution[first_stage_var][gen_key]
end
end
for i in 1:mpi.nprocs
push!(aggregate_solution, "s$i" => OrderedDict())
for (solution_key, solution_dict) in gather_solution
push!(aggregate_solution["s$i"], solution_key => OrderedDict())
if solution_key !== "Spinning reserve (MW)"
for (gen_bus_key, values) in solution_dict
aggregate_solution["s$i"][solution_key][gen_bus_key] =
gather_solution[solution_key][gen_bus_key][(i-1)*T+1:i*T]
end
else
for (reserve_name, reserve_dict) in solution_dict
push!(
aggregate_solution["s$i"][solution_key],
reserve_name => OrderedDict(),
)
for (gen_key, values) in reserve_dict
aggregate_solution["s$i"][solution_key][reserve_name][gen_key] =
gather_solution[solution_key][reserve_name][gen_key][(i-1)*T+1:i*T]
end
end
end
end
end
end
return aggregate_solution
end

View File

@@ -0,0 +1,73 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
using JuMP, MPI, TimerOutputs
Base.@kwdef mutable struct PHTermination
max_iterations::Int = 1000
max_time::Float64 = 14400.0
min_feasibility::Float64 = 1e-3
min_improvement::Float64 = 1e-3
min_iterations::Int = 2
end
Base.@kwdef mutable struct PHIterationInfo
global_infeas::Float64
global_obj::Float64
global_residual::Float64
iteration_number::Int
iteration_time::Float64
sp_vals::Array{Float64,1}
sp_obj::Float64
target::Array{Float64,1}
total_elapsed_time::Float64
end
Base.@kwdef mutable struct ProgressiveHedging <: SolutionMethod
initial_weights::Union{Vector{Float64},Nothing} = nothing
initial_target::Union{Vector{Float64},Nothing} = nothing
ρ::Float64 = 1.0
λ::Float64 = 0.0
print_interval::Int = 1
termination::PHTermination = PHTermination()
inner_method::SolutionMethod = XavQiuWanThi2019.Method()
end
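Since these are `Base.@kwdef` structs, a configured method can be built by overriding only the fields of interest; a minimal sketch using the fields defined above:

```julia
ph = ProgressiveHedging(
    ρ = 10.0,
    print_interval = 5,
    termination = PHTermination(max_iterations = 200, max_time = 3600.0),
)
```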
struct SpResult
obj::Float64
vals::Array{Float64,1}
end
Base.@kwdef mutable struct PHSubProblem
mip::JuMP.Model
obj::AffExpr
consensus_vars::Array{VariableRef,1}
weights::Array{Float64,1}
end
Base.@kwdef struct PhSubProblemSolution
obj::Float64
vals::Array{Float64,1}
residuals::Array{Float64,1}
end
Base.@kwdef mutable struct PHSubProblemParams
ρ::Float64
λ::Array{Float64,1}
target::Array{Float64,1}
end
struct MpiInfo
comm::Any
rank::Int
root::Bool
nprocs::Int
function MpiInfo(comm)
rank = MPI.Comm_rank(comm) + 1
is_root = (rank == 1)
nprocs = MPI.Comm_size(comm)
return new(comm, rank, is_root, nprocs)
end
end
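Note that the constructor stores a 1-based rank (MPI itself is 0-based), so `root` is true exactly on MPI rank 0. A minimal usage sketch:

```julia
using MPI
MPI.Init()
mpi = MpiInfo(MPI.COMM_WORLD)
@info "rank $(mpi.rank) of $(mpi.nprocs)" root = mpi.root
```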

View File

@@ -0,0 +1,259 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
"""
optimize!(
instance::UnitCommitmentInstance,
method::TimeDecomposition;
optimizer,
after_build = nothing,
after_optimize = nothing,
)::OrderedDict
Solve the given unit commitment instance with time decomposition.
Each sub-problem covers a time window of length `method.time_window`, and the
window advances by `method.time_increment` time periods between consecutive sub-problems.
Arguments
---------
- `instance`:
the UnitCommitment instance.
- `method`:
the `TimeDecomposition` method.
- `optimizer`:
the optimizer for solving the problem.
- `after_build`:
a user-defined function called after the model is built; it must accept the two
arguments `model` and `instance`, in that order.
- `after_optimize`:
a user-defined function called after each sub-problem is optimized; it must accept
the three arguments `solution`, `model` and `instance`, in that order.
Examples
--------
```julia
using UnitCommitment, JuMP, Cbc, HiGHS
import UnitCommitment:
TimeDecomposition,
ConventionalLMP,
XavQiuWanThi2019,
Formulation
# specifying the after_build and after_optimize functions
function after_build(model, instance)
@constraint(
model,
model[:is_on]["g3", 1] + model[:is_on]["g4", 1] <= 1,
)
end
lmps = []
function after_optimize(solution, model, instance)
lmp = UnitCommitment.compute_lmp(
model,
ConventionalLMP(),
optimizer = HiGHS.Optimizer,
)
return push!(lmps, lmp)
end
# assume the instance is given as a 120h problem
instance = UnitCommitment.read("instance.json")
solution = UnitCommitment.optimize!(
instance,
TimeDecomposition(
time_window = 36, # solve 36h problems
time_increment = 24, # advance by 24h each time
inner_method = XavQiuWanThi2019.Method(),
formulation = Formulation(),
),
optimizer = Cbc.Optimizer,
after_build = after_build,
after_optimize = after_optimize,
)
"""
function optimize!(
instance::UnitCommitmentInstance,
method::TimeDecomposition;
optimizer,
after_build = nothing,
after_optimize = nothing,
)::OrderedDict
# get instance total length
T = instance.time
solution = OrderedDict()
if length(instance.scenarios) > 1
for sc in instance.scenarios
solution[sc.name] = OrderedDict()
end
end
# for each iteration, time increment by method.time_increment
for t_start in 1:method.time_increment:T
t_end = t_start + method.time_window - 1
# clamp t_end so it does not exceed the total horizon T
t_end = t_end > T ? T : t_end
# slice the model
@info "Solving the sub-problem of time $t_start to $t_end..."
sub_instance = UnitCommitment.slice(instance, t_start:t_end)
# build and optimize the model
sub_model = UnitCommitment.build_model(
instance = sub_instance,
optimizer = optimizer,
formulation = method.formulation,
)
if after_build !== nothing
@info "Calling after build..."
after_build(sub_model, sub_instance)
end
UnitCommitment.optimize!(sub_model, method.inner_method)
# get the result of each time period
sub_solution = UnitCommitment.solution(sub_model)
if after_optimize !== nothing
@info "Calling after optimize..."
after_optimize(sub_solution, sub_model, sub_instance)
end
# merge solution
if length(instance.scenarios) == 1
_update_solution!(solution, sub_solution, method.time_increment)
else
for sc in instance.scenarios
_update_solution!(
solution[sc.name],
sub_solution[sc.name],
method.time_increment,
)
end
end
# set the initial status for the next sub-problem
_set_initial_status!(instance, solution, method.time_increment)
end
return solution
end
"""
_set_initial_status!(
instance::UnitCommitmentInstance,
solution::OrderedDict,
time_increment::Int,
)
Set the thermal units' initial power levels and statuses based on the last
`time_increment` time slots of the solution dictionary.
"""
function _set_initial_status!(
instance::UnitCommitmentInstance,
solution::OrderedDict,
time_increment::Int,
)
for sc in instance.scenarios
for thermal_unit in sc.thermal_units
if length(instance.scenarios) == 1
prod = solution["Thermal production (MW)"][thermal_unit.name]
is_on = solution["Is on"][thermal_unit.name]
else
prod =
solution[sc.name]["Thermal production (MW)"][thermal_unit.name]
is_on = solution[sc.name]["Is on"][thermal_unit.name]
end
thermal_unit.initial_power = prod[end]
thermal_unit.initial_status = _determine_initial_status(
thermal_unit.initial_status,
is_on[end-time_increment+1:end],
)
end
end
end
"""
_determine_initial_status(
prev_initial_status::Union{Float64,Int},
status_sequence::Vector{Float64},
)::Union{Float64,Int}
Determine a thermal unit's initial status based on its previous initial status and
the on/off statuses over the most recent time slots.
"""
function _determine_initial_status(
prev_initial_status::Union{Float64,Int},
status_sequence::Vector{Float64},
)::Union{Float64,Int}
# initialize the two flags
on_status = prev_initial_status
off_status = prev_initial_status
# Scan the status sequence:
# - if the unit is on at time t, reset off_status and increment on_status,
#   restarting at 1.0 if on_status was negative;
# - if the unit is off, reset on_status and decrement off_status,
#   restarting at -1.0 if off_status was positive.
for status in status_sequence
if status == 1.0
on_status = on_status < 0.0 ? 1.0 : on_status + 1.0
off_status = 0.0
else
on_status = 0.0
off_status = off_status > 0.0 ? -1.0 : off_status - 1.0
end
end
# only one of them has non-zero value
return on_status + off_status
end
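As a small worked example of the flag logic above (hypothetical inputs, calling the internal helper directly): a unit that entered the window after two hours offline, stays off for the first two slots and is on for the last two, ends up with an initial status of "+2 hours online"; the symmetric shutdown case yields a negative status.

```julia
# prev_initial_status = -2 (offline for 2 h); statuses: off, off, on, on
@assert _determine_initial_status(-2, [0.0, 0.0, 1.0, 1.0]) == 2.0
# prev_initial_status = 3 (online for 3 h); statuses: on, off, off, off
@assert _determine_initial_status(3, [1.0, 0.0, 0.0, 0.0]) == -3.0
```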
"""
_update_solution!(
solution::OrderedDict,
sub_solution::OrderedDict,
time_increment::Int,
)
Update the solution (of each scenario) by appending the first `time_increment`
time slots of the newly generated sub-solution to the end of the final solution dictionary.
The function traverses the dictionary keys until it reaches a vector and then performs
the concatenation; it is currently hardcoded to traverse at most three levels of nesting.
"""
function _update_solution!(
solution::OrderedDict,
sub_solution::OrderedDict,
time_increment::Int,
)
# the solution has at most 3 layers
for (l1_k, l1_v) in sub_solution
for (l2_k, l2_v) in l1_v
if l2_v isa Array
# slice the sub_solution
values_of_interest = l2_v[1:time_increment]
sub_solution[l1_k][l2_k] = values_of_interest
# append to the solution
if !isempty(solution)
append!(solution[l1_k][l2_k], values_of_interest)
end
elseif l2_v isa OrderedDict
for (l3_k, l3_v) in l2_v
# slice the sub_solution
values_of_interest = l3_v[1:time_increment]
sub_solution[l1_k][l2_k][l3_k] = values_of_interest
# append to the solution
if !isempty(solution)
append!(solution[l1_k][l2_k][l3_k], values_of_interest)
end
end
end
end
end
# if solution is never initialized, deep copy the sliced sub_solution
if isempty(solution)
merge!(solution, sub_solution)
end
end
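To illustrate the append behavior on a toy dictionary (hypothetical data; the helper is internal and mutates both arguments):

```julia
using DataStructures: OrderedDict
sol = OrderedDict()
sub1 = OrderedDict("Is on" => OrderedDict("g1" => [1.0, 1.0, 0.0]))
_update_solution!(sol, sub1, 2)   # sol is empty, so it is initialized with the first 2 slots
sub2 = OrderedDict("Is on" => OrderedDict("g1" => [0.0, 1.0, 1.0]))
_update_solution!(sol, sub2, 2)   # the first 2 slots of sub2 are appended
@assert sol["Is on"]["g1"] == [1.0, 1.0, 0.0, 1.0]
```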

View File

@@ -0,0 +1,35 @@
# UnitCommitment.jl: Optimization Package for Security-Constrained Unit Commitment
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details.
import ..SolutionMethod
import ..Formulation
"""
mutable struct TimeDecomposition <: SolutionMethod
time_window::Int
time_increment::Int
inner_method::SolutionMethod = XavQiuWanThi2019.Method()
formulation::Formulation = Formulation()
end
Time decomposition method to solve a problem with moving time window.
Fields
------
- `time_window`:
the time window of each sub-problem during the entire optimization procedure.
- `time_increment`:
the time incremented to the next sub-problem.
- `inner_method`:
method to solve each sub-problem.
- `formulation`:
problem formulation.
"""
Base.@kwdef mutable struct TimeDecomposition <: SolutionMethod
time_window::Int
time_increment::Int
inner_method::SolutionMethod = XavQiuWanThi2019.Method()
formulation::Formulation = Formulation()
end

View File

@@ -5,13 +5,15 @@
 function _enforce_transmission(
 model::JuMP.Model,
 violations::Vector{_Violation},
+sc::UnitCommitmentScenario,
 )::Nothing
 for v in violations
 _enforce_transmission(
 model = model,
+sc = sc,
 violation = v,
-isf = model[:isf],
-lodf = model[:lodf],
+isf = sc.isf,
+lodf = sc.lodf,
 )
 end
 return
@@ -19,6 +21,7 @@ end
 function _enforce_transmission(;
 model::JuMP.Model,
+sc::UnitCommitmentScenario,
 violation::_Violation,
 isf::Matrix{Float64},
 lodf::Matrix{Float64},
@@ -31,19 +34,21 @@ function _enforce_transmission(;
 if violation.outage_line === nothing
 limit = violation.monitored_line.normal_flow_limit[violation.time]
 @info @sprintf(
-" %8.3f MW overflow in %-5s time %3d (pre-contingency)",
+" %8.3f MW overflow in %-5s time %3d (pre-contingency, scenario %s)",
 violation.amount,
 violation.monitored_line.name,
 violation.time,
+sc.name,
 )
 else
 limit = violation.monitored_line.emergency_flow_limit[violation.time]
 @info @sprintf(
-" %8.3f MW overflow in %-5s time %3d (outage: line %s)",
+" %8.3f MW overflow in %-5s time %3d (outage: line %s, scenario %s)",
 violation.amount,
 violation.monitored_line.name,
 violation.time,
 violation.outage_line.name,
+sc.name,
 )
 end
@@ -51,7 +56,7 @@
 t = violation.time
 flow = @variable(model, base_name = "flow[$fm,$t]")
-v = overflow[violation.monitored_line.name, violation.time]
+v = overflow[sc.name, violation.monitored_line.name, violation.time]
 @constraint(model, flow <= limit + v)
 @constraint(model, -flow <= limit + v)
@@ -59,23 +64,23 @@
 @constraint(
 model,
 flow == sum(
-net_injection[b.name, violation.time] *
+net_injection[sc.name, b.name, violation.time] *
 isf[violation.monitored_line.offset, b.offset] for
-b in instance.buses if b.offset > 0
+b in sc.buses if b.offset > 0
 )
 )
 else
 @constraint(
 model,
 flow == sum(
-net_injection[b.name, violation.time] * (
+net_injection[sc.name, b.name, violation.time] * (
 isf[violation.monitored_line.offset, b.offset] + (
 lodf[
 violation.monitored_line.offset,
 violation.outage_line.offset,
 ] * isf[violation.outage_line.offset, b.offset]
 )
 )
-) for b in instance.buses if b.offset > 0
+) for b in sc.buses if b.offset > 0
 )
 )

View File

@@ -2,42 +2,38 @@
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved. # Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details. # Released under the modified BSD license. See COPYING.md for more details.
import Base.Threads: @threads import Base.Threads: @threads, maxthreadid
function _find_violations( function _find_violations(
model::JuMP.Model; model::JuMP.Model,
sc::UnitCommitmentScenario;
max_per_line::Int, max_per_line::Int,
max_per_period::Int, max_per_period::Int,
) )
instance = model[:instance] instance = model[:instance]
net_injection = model[:net_injection] net_injection = model[:net_injection]
overflow = model[:overflow] overflow = model[:overflow]
length(instance.buses) > 1 || return [] length(sc.buses) > 1 || return []
violations = [] violations = []
@info "Verifying transmission limits..."
time_screening = @elapsed begin non_slack_buses = [b for b in sc.buses if b.offset > 0]
non_slack_buses = [b for b in instance.buses if b.offset > 0] net_injection_values = [
net_injection_values = [ value(net_injection[sc.name, b.name, t]) for b in non_slack_buses,
value(net_injection[b.name, t]) for b in non_slack_buses, t in 1:instance.time
t in 1:instance.time ]
] overflow_values = [
overflow_values = [ value(overflow[sc.name, lm.name, t]) for lm in sc.lines,
value(overflow[lm.name, t]) for lm in instance.lines, t in 1:instance.time
t in 1:instance.time ]
] violations = UnitCommitment._find_violations(
violations = UnitCommitment._find_violations( instance = instance,
instance = instance, sc = sc,
net_injections = net_injection_values, net_injections = net_injection_values,
overflow = overflow_values, overflow = overflow_values,
isf = model[:isf], isf = sc.isf,
lodf = model[:lodf], lodf = sc.lodf,
max_per_line = max_per_line, max_per_line = max_per_line,
max_per_period = max_per_period, max_per_period = max_per_period,
)
end
@info @sprintf(
"Verified transmission limits in %.2f seconds",
time_screening
) )
return violations return violations
end end
@@ -64,6 +60,7 @@ matrix, where L is the number of transmission lines.
""" """
function _find_violations(; function _find_violations(;
instance::UnitCommitmentInstance, instance::UnitCommitmentInstance,
sc::UnitCommitmentScenario,
net_injections::Array{Float64,2}, net_injections::Array{Float64,2},
overflow::Array{Float64,2}, overflow::Array{Float64,2},
isf::Array{Float64,2}, isf::Array{Float64,2},
@@ -71,10 +68,10 @@ function _find_violations(;
max_per_line::Int, max_per_line::Int,
max_per_period::Int, max_per_period::Int,
)::Array{_Violation,1} )::Array{_Violation,1}
B = length(instance.buses) - 1 B = length(sc.buses) - 1
L = length(instance.lines) L = length(sc.lines)
T = instance.time T = instance.time
K = nthreads() K = maxthreadid()
size(net_injections) == (B, T) || error("net_injections has incorrect size") size(net_injections) == (B, T) || error("net_injections has incorrect size")
size(isf) == (L, B) || error("isf has incorrect size") size(isf) == (L, B) || error("isf has incorrect size")
@@ -93,21 +90,21 @@ function _find_violations(;
post_v::Array{Float64} = zeros(L, L, K) # post_v[lm, lc, thread] post_v::Array{Float64} = zeros(L, L, K) # post_v[lm, lc, thread]
normal_limits::Array{Float64,2} = [ normal_limits::Array{Float64,2} = [
l.normal_flow_limit[t] + overflow[l.offset, t] for l.normal_flow_limit[t] + overflow[l.offset, t] for l in sc.lines,
l in instance.lines, t in 1:T t in 1:T
] ]
emergency_limits::Array{Float64,2} = [ emergency_limits::Array{Float64,2} = [
l.emergency_flow_limit[t] + overflow[l.offset, t] for l.emergency_flow_limit[t] + overflow[l.offset, t] for l in sc.lines,
l in instance.lines, t in 1:T t in 1:T
] ]
is_vulnerable::Array{Bool} = zeros(Bool, L) is_vulnerable::Array{Bool} = zeros(Bool, L)
for c in instance.contingencies for c in sc.contingencies
is_vulnerable[c.lines[1].offset] = true is_vulnerable[c.lines[1].offset] = true
end end
@threads for t in 1:T @threads :static for t in 1:T
k = threadid() k = threadid()
# Pre-contingency flows # Pre-contingency flows
@@ -144,7 +141,7 @@ function _find_violations(;
filters[t], filters[t],
_Violation( _Violation(
time = t, time = t,
monitored_line = instance.lines[lm], monitored_line = sc.lines[lm],
outage_line = nothing, outage_line = nothing,
amount = pre_v[lm, k], amount = pre_v[lm, k],
), ),
@@ -159,8 +156,8 @@ function _find_violations(;
filters[t], filters[t],
_Violation( _Violation(
time = t, time = t,
monitored_line = instance.lines[lm], monitored_line = sc.lines[lm],
outage_line = instance.lines[lc], outage_line = sc.lines[lc],
amount = post_v[lm, lc, k], amount = post_v[lm, lc, k],
), ),
) )

View File

@@ -12,10 +12,15 @@ function optimize!(model::JuMP.Model, method::XavQiuWanThi2019.Method)::Nothing
 end
 initial_time = time()
 large_gap = false
-has_transmission = (length(model[:isf]) > 0)
-if has_transmission && method.two_phase_gap
-set_gap(1e-2)
-large_gap = true
+has_transmission = false
+for sc in model[:instance].scenarios
+if length(sc.isf) > 0
+has_transmission = true
+end
+if has_transmission && method.two_phase_gap
+set_gap(1e-2)
+large_gap = true
+end
 end
 while true
 time_elapsed = time() - initial_time
@@ -31,13 +36,41 @@ function optimize!(model::JuMP.Model, method::XavQiuWanThi2019.Method)::Nothing
 JuMP.set_time_limit_sec(model, time_remaining)
 @info "Solving MILP..."
 JuMP.optimize!(model)
 has_transmission || break
-violations = _find_violations(
-model,
-max_per_line = method.max_violations_per_line,
-max_per_period = method.max_violations_per_period,
-)
-if isempty(violations)
+@info "Verifying transmission limits..."
+time_screening = @elapsed begin
+violations = []
+for sc in model[:instance].scenarios
+push!(
+violations,
+_find_violations(
+model,
+sc,
+max_per_line = method.max_violations_per_line,
+max_per_period = method.max_violations_per_period,
+),
+)
+end
+end
+@info @sprintf(
+"Verified transmission limits in %.2f seconds",
+time_screening
+)
+violations_found = false
+for v in violations
+if !isempty(v)
+violations_found = true
+end
+end
+if violations_found
+for (i, v) in enumerate(violations)
+_enforce_transmission(model, v, model[:instance].scenarios[i])
+end
+else
 @info "No violations found"
 if large_gap
 large_gap = false
@@ -45,8 +78,6 @@ function optimize!(model::JuMP.Model, method::XavQiuWanThi2019.Method)::Nothing
 else
 break
 end
-else
-_enforce_transmission(model, violations)
 end
 end
 return

View File

@@ -2,14 +2,6 @@
 # Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
 # Released under the modified BSD license. See COPYING.md for more details.
-"""
-Lazy constraint solution method described in:
-Xavier, A. S., Qiu, F., Wang, F., & Thimmapuram, P. R. (2019). Transmission
-constraint filtering in large-scale security-constrained unit commitment.
-IEEE Transactions on Power Systems, 34(3), 2457-2460.
-DOI: https://doi.org/10.1109/TPWRS.2019.2892620
-"""
 module XavQiuWanThi2019
 import ..SolutionMethod
 """
@@ -21,6 +13,13 @@ import ..SolutionMethod
 max_violations_per_period::Int
 end
+Lazy constraint solution method described in:
+Xavier, A. S., Qiu, F., Wang, F., & Thimmapuram, P. R. (2019). Transmission
+constraint filtering in large-scale security-constrained unit commitment.
+IEEE Transactions on Power Systems, 34(3), 2457-2460.
+DOI: https://doi.org/10.1109/TPWRS.2019.2892620
 Fields
 ------

View File

@@ -3,9 +3,9 @@
 # Released under the modified BSD license. See COPYING.md for more details.
 """
-function optimize!(model::JuMP.Model)::Nothing
-Solve the given unit commitment model. Unlike JuMP.optimize!, this uses more
+optimize!(model::JuMP.Model)::Nothing
+Solve the given unit commitment model. Unlike `JuMP.optimize!`, this uses more
 advanced methods to accelerate the solution process and to enforce transmission
 and N-1 security constraints.
 """

View File

@@ -2,36 +2,58 @@
# Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved. # Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details. # Released under the modified BSD license. See COPYING.md for more details.
"""
solution(model::JuMP.Model)::OrderedDict
Extracts the optimal solution from the UC.jl model. The model must be solved beforehand.
# Example
```julia
UnitCommitment.optimize!(model)
solution = UnitCommitment.solution(model)
```
"""
function solution(model::JuMP.Model)::OrderedDict function solution(model::JuMP.Model)::OrderedDict
instance, T = model[:instance], model[:instance].time instance, T = model[:instance], model[:instance].time
function timeseries(vars, collection) function timeseries(vars, collection; sc = nothing)
return OrderedDict( if sc === nothing
b.name => [round(value(vars[b.name, t]), digits = 5) for t in 1:T] return OrderedDict(
for b in collection b.name =>
) [round(value(vars[b.name, t]), digits = 5) for t in 1:T] for
b in collection
)
else
return OrderedDict(
b.name => [
round(value(vars[sc.name, b.name, t]), digits = 5) for
t in 1:T
] for b in collection
)
end
end end
function production_cost(g) function production_cost(g, sc)
return [ return [
value(model[:is_on][g.name, t]) * g.min_power_cost[t] + sum( value(model[:is_on][g.name, t]) * g.min_power_cost[t] + sum(
Float64[ Float64[
value(model[:segprod][g.name, t, k]) * value(model[:segprod][sc.name, g.name, t, k]) *
g.cost_segments[k].cost[t] for g.cost_segments[k].cost[t] for
k in 1:length(g.cost_segments) k in 1:length(g.cost_segments)
], ],
) for t in 1:T ) for t in 1:T
] ]
end end
function production(g) function production(g, sc)
return [ return [
value(model[:is_on][g.name, t]) * g.min_power[t] + sum( value(model[:is_on][g.name, t]) * g.min_power[t] + sum(
Float64[ Float64[
value(model[:segprod][g.name, t, k]) for value(model[:segprod][sc.name, g.name, t, k]) for
k in 1:length(g.cost_segments) k in 1:length(g.cost_segments)
], ],
) for t in 1:T ) for t in 1:T
] ]
end end
function startup_cost(g) function startup_cost(g, sc)
S = length(g.startup_categories) S = length(g.startup_categories)
return [ return [
sum( sum(
@@ -41,66 +63,111 @@ function solution(model::JuMP.Model)::OrderedDict
] ]
end end
sol = OrderedDict() sol = OrderedDict()
sol["Production (MW)"] = for sc in instance.scenarios
OrderedDict(g.name => production(g) for g in instance.units) sol[sc.name] = OrderedDict()
sol["Production cost (\$)"] = if !isempty(sc.thermal_units)
OrderedDict(g.name => production_cost(g) for g in instance.units) sol[sc.name]["Thermal production (MW)"] = OrderedDict(
sol["Startup cost (\$)"] = g.name => production(g, sc) for g in sc.thermal_units
OrderedDict(g.name => startup_cost(g) for g in instance.units) )
sol["Is on"] = timeseries(model[:is_on], instance.units) sol[sc.name]["Thermal production cost (\$)"] = OrderedDict(
sol["Switch on"] = timeseries(model[:switch_on], instance.units) g.name => production_cost(g, sc) for g in sc.thermal_units
sol["Switch off"] = timeseries(model[:switch_off], instance.units) )
sol["Net injection (MW)"] = sol[sc.name]["Startup cost (\$)"] = OrderedDict(
timeseries(model[:net_injection], instance.buses) g.name => startup_cost(g, sc) for g in sc.thermal_units
sol["Load curtail (MW)"] = timeseries(model[:curtail], instance.buses) )
if !isempty(instance.lines) sol[sc.name]["Is on"] = timeseries(model[:is_on], sc.thermal_units)
sol["Line overflow (MW)"] = timeseries(model[:overflow], instance.lines) sol[sc.name]["Switch on"] =
timeseries(model[:switch_on], sc.thermal_units)
sol[sc.name]["Switch off"] =
timeseries(model[:switch_off], sc.thermal_units)
sol[sc.name]["Net injection (MW)"] =
timeseries(model[:net_injection], sc.buses, sc = sc)
sol[sc.name]["Load curtail (MW)"] =
timeseries(model[:curtail], sc.buses, sc = sc)
end
if !isempty(sc.lines)
sol[sc.name]["Line overflow (MW)"] =
timeseries(model[:overflow], sc.lines, sc = sc)
end
if !isempty(sc.price_sensitive_loads)
sol[sc.name]["Price-sensitive loads (MW)"] =
timeseries(model[:loads], sc.price_sensitive_loads, sc = sc)
end
if !isempty(sc.profiled_units)
sol[sc.name]["Profiled production (MW)"] =
timeseries(model[:prod_profiled], sc.profiled_units, sc = sc)
sol[sc.name]["Profiled production cost (\$)"] = OrderedDict(
pu.name => [
value(model[:prod_profiled][sc.name, pu.name, t]) *
pu.cost[t] for t in 1:instance.time
] for pu in sc.profiled_units
)
end
if !isempty(sc.storage_units)
sol[sc.name]["Storage level (MWh)"] =
timeseries(model[:storage_level], sc.storage_units, sc = sc)
sol[sc.name]["Is charging"] =
timeseries(model[:is_charging], sc.storage_units, sc = sc)
sol[sc.name]["Storage charging rates (MW)"] =
timeseries(model[:charge_rate], sc.storage_units, sc = sc)
sol[sc.name]["Storage charging cost (\$)"] = OrderedDict(
su.name => [
value(model[:charge_rate][sc.name, su.name, t]) *
su.charge_cost[t] for t in 1:instance.time
] for su in sc.storage_units
)
sol[sc.name]["Is discharging"] =
timeseries(model[:is_discharging], sc.storage_units, sc = sc)
sol[sc.name]["Storage discharging rates (MW)"] =
timeseries(model[:discharge_rate], sc.storage_units, sc = sc)
sol[sc.name]["Storage discharging cost (\$)"] = OrderedDict(
su.name => [
value(model[:discharge_rate][sc.name, su.name, t]) *
su.discharge_cost[t] for t in 1:instance.time
] for su in sc.storage_units
)
end
sol[sc.name]["Spinning reserve (MW)"] = OrderedDict(
r.name => OrderedDict(
g.name => [
value(model[:reserve][sc.name, r.name, g.name, t]) for t in 1:instance.time
] for g in r.thermal_units
) for r in sc.reserves if r.type == "spinning"
)
sol[sc.name]["Spinning reserve shortfall (MW)"] = OrderedDict(
r.name => [
value(model[:reserve_shortfall][sc.name, r.name, t]) for
t in 1:instance.time
] for r in sc.reserves if r.type == "spinning"
)
sol[sc.name]["Up-flexiramp (MW)"] = OrderedDict(
r.name => OrderedDict(
g.name => [
value(model[:upflexiramp][sc.name, r.name, g.name, t]) for t in 1:instance.time
] for g in r.thermal_units
) for r in sc.reserves if r.type == "flexiramp"
)
sol[sc.name]["Up-flexiramp shortfall (MW)"] = OrderedDict(
r.name => [
value(model[:upflexiramp_shortfall][sc.name, r.name, t]) for t in 1:instance.time
] for r in sc.reserves if r.type == "flexiramp"
)
sol[sc.name]["Down-flexiramp (MW)"] = OrderedDict(
r.name => OrderedDict(
g.name => [
value(model[:dwflexiramp][sc.name, r.name, g.name, t]) for t in 1:instance.time
] for g in r.thermal_units
) for r in sc.reserves if r.type == "flexiramp"
)
sol[sc.name]["Down-flexiramp shortfall (MW)"] = OrderedDict(
r.name => [
value(model[:dwflexiramp_shortfall][sc.name, r.name, t]) for t in 1:instance.time
] for r in sc.reserves if r.type == "flexiramp"
)
end end
if !isempty(instance.price_sensitive_loads) if length(instance.scenarios) == 1
sol["Price-sensitive loads (MW)"] = return first(values(sol))
timeseries(model[:loads], instance.price_sensitive_loads) else
return sol
end end
sol["Spinning reserve (MW)"] = OrderedDict(
r.name => OrderedDict(
g.name => [
value(model[:reserve][r.name, g.name, t]) for
t in 1:instance.time
] for g in r.units
) for r in instance.reserves if r.type == "spinning"
)
sol["Spinning reserve shortfall (MW)"] = OrderedDict(
r.name => [
value(model[:reserve_shortfall][r.name, t]) for
t in 1:instance.time
] for r in instance.reserves if r.type == "spinning"
)
sol["Up-flexiramp (MW)"] = OrderedDict(
r.name => OrderedDict(
g.name => [
value(model[:upflexiramp][r.name, g.name, t]) for
t in 1:instance.time
] for g in r.units
) for r in instance.reserves if r.type == "flexiramp"
)
sol["Up-flexiramp shortfall (MW)"] = OrderedDict(
r.name => [
value(model[:upflexiramp_shortfall][r.name, t]) for
t in 1:instance.time
] for r in instance.reserves if r.type == "flexiramp"
)
sol["Down-flexiramp (MW)"] = OrderedDict(
r.name => OrderedDict(
g.name => [
value(model[:dwflexiramp][r.name, g.name, t]) for
t in 1:instance.time
] for g in r.units
) for r in instance.reserves if r.type == "flexiramp"
)
sol["Down-flexiramp shortfall (MW)"] = OrderedDict(
r.name => [
value(model[:upflexiramp_shortfall][r.name, t]) for
t in 1:instance.time
] for r in instance.reserves if r.type == "flexiramp"
)
return sol
end end

View File

@@ -5,7 +5,7 @@
 function set_warm_start!(model::JuMP.Model, solution::AbstractDict)::Nothing
 instance, T = model[:instance], model[:instance].time
 is_on = model[:is_on]
-for g in instance.units
+for g in instance.thermal_units
 for t in 1:T
 JuMP.set_start_value(is_on[g.name, t], solution["Is on"][g.name][t])
 JuMP.set_start_value(

View File

@@ -2,6 +2,18 @@
 # Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
 # Released under the modified BSD license. See COPYING.md for more details.
+"""
+write(filename::AbstractString, solution::AbstractDict)::Nothing
+Write the given solution to a JSON file.
+# Example
+```julia
+solution = UnitCommitment.solution(model)
+UnitCommitment.write("/tmp/output.json", solution)
+```
+"""
 function write(filename::AbstractString, solution::AbstractDict)::Nothing
 open(filename, "w") do file
 return JSON.print(file, solution, 2)

View File

@@ -15,26 +15,49 @@ function generate_initial_conditions!(
instance::UnitCommitmentInstance, instance::UnitCommitmentInstance,
optimizer, optimizer,
)::Nothing )::Nothing
G = instance.units # Process first scenario
B = instance.buses _generate_initial_conditions!(instance.scenarios[1], optimizer)
# Copy initial conditions to remaining scenarios
for (si, sc) in enumerate(instance.scenarios)
si > 1 || continue
for (gi, g) in sc.thermal_units
g_ref = instance.scenarios[1].thermal_units[gi]
g.initial_power = g_ref.initial_power
g.initial_status = g_ref.initial_status
end
end
end
function _generate_initial_conditions!(
sc::UnitCommitmentScenario,
optimizer,
)::Nothing
G = sc.thermal_units
B = sc.buses
PU = sc.profiled_units
t = 1 t = 1
mip = JuMP.Model(optimizer) mip = JuMP.Model(optimizer)
# Decision variables # Decision variables
@variable(mip, x[G], Bin) @variable(mip, x[G], Bin)
@variable(mip, p[G] >= 0) @variable(mip, p[G] >= 0)
@variable(mip, pu[PU])
# Constraint: Minimum power # Constraint: Minimum power
@constraint(mip, min_power[g in G], p[g] >= g.min_power[t] * x[g]) @constraint(mip, min_power[g in G], p[g] >= g.min_power[t] * x[g])
@constraint(mip, pu_min_power[k in PU], pu[k] >= k.min_power[t])
# Constraint: Maximum power # Constraint: Maximum power
@constraint(mip, max_power[g in G], p[g] <= g.max_power[t] * x[g]) @constraint(mip, max_power[g in G], p[g] <= g.max_power[t] * x[g])
@constraint(mip, pu_max_power[k in PU], pu[k] <= k.max_power[t])
# Constraint: Production equals demand # Constraint: Production equals demand
@constraint( @constraint(
mip, mip,
power_balance, power_balance,
sum(b.load[t] for b in B) == sum(p[g] for g in G) sum(b.load[t] for b in B) ==
sum(p[g] for g in G) + sum(pu[k] for k in PU)
) )
# Constraint: Must run # Constraint: Must run
@@ -58,7 +81,12 @@ function generate_initial_conditions!(
return c / mw return c / mw
end end
end end
@objective(mip, Min, sum(p[g] * cost_slope(g) for g in G)) @objective(
mip,
Min,
sum(p[g] * cost_slope(g) for g in G) +
sum(pu[k] * k.cost[t] for k in PU)
)
JuMP.optimize!(mip) JuMP.optimize!(mip)

View File

@@ -2,17 +2,11 @@
# Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved. # Copyright (C) 2020-2021, UChicago Argonne, LLC. All rights reserved.
# Released under the modified BSD license. See COPYING.md for more details. # Released under the modified BSD license. See COPYING.md for more details.
"""
Methods described in:
Xavier, Álinson S., Feng Qiu, and Shabbir Ahmed. "Learning to solve
large-scale security-constrained unit commitment problems." INFORMS
Journal on Computing 33.2 (2021): 739-756. DOI: 10.1287/ijoc.2020.0976
"""
module XavQiuAhm2021 module XavQiuAhm2021
using Distributions using Distributions
import ..UnitCommitmentInstance import ..UnitCommitmentInstance
import ..UnitCommitmentScenario
""" """
struct Randomization struct Randomization
@@ -55,6 +49,13 @@ load profile, as follows:
The default parameters were obtained based on an analysis of publicly available The default parameters were obtained based on an analysis of publicly available
bid and hourly data from PJM, corresponding to the month of January, 2017. For bid and hourly data from PJM, corresponding to the month of January, 2017. For
more details, see Section 4.2 of the paper. more details, see Section 4.2 of the paper.
# References
- **Xavier, Álinson S., Feng Qiu, and Shabbir Ahmed.** *"Learning to solve
large-scale security-constrained unit commitment problems."* INFORMS Journal
on Computing 33.2 (2021): 739-756. DOI: 10.1287/ijoc.2020.0976
""" """
Base.@kwdef struct Randomization Base.@kwdef struct Randomization
cost = Uniform(0.95, 1.05) cost = Uniform(0.95, 1.05)
@@ -119,10 +120,10 @@ end
function _randomize_costs( function _randomize_costs(
rng, rng,
instance::UnitCommitmentInstance, sc::UnitCommitmentScenario,
distribution, distribution,
)::Nothing )::Nothing
for unit in instance.units for unit in sc.thermal_units
α = rand(rng, distribution) α = rand(rng, distribution)
unit.min_power_cost *= α unit.min_power_cost *= α
for k in unit.cost_segments for k in unit.cost_segments
@@ -132,22 +133,29 @@ function _randomize_costs(
s.cost *= α s.cost *= α
end end
end end
for pu in sc.profiled_units
α = rand(rng, distribution)
pu.cost *= α
end
for su in sc.storage_units
α = rand(rng, distribution)
su.charge_cost *= α
su.discharge_cost *= α
end
return return
end end
function _randomize_load_share( function _randomize_load_share(
rng, rng,
instance::UnitCommitmentInstance, sc::UnitCommitmentScenario,
distribution, distribution,
)::Nothing )::Nothing
α = rand(rng, distribution, length(instance.buses)) α = rand(rng, distribution, length(sc.buses))
for t in 1:instance.time for t in 1:sc.time
total = sum(bus.load[t] for bus in instance.buses) total = sum(bus.load[t] for bus in sc.buses)
den = sum( den =
bus.load[t] / total * α[i] for sum(bus.load[t] / total * α[i] for (i, bus) in enumerate(sc.buses))
(i, bus) in enumerate(instance.buses) for (i, bus) in enumerate(sc.buses)
)
for (i, bus) in enumerate(instance.buses)
bus.load[t] *= α[i] / den bus.load[t] *= α[i] / den
end end
end end
@@ -156,12 +164,12 @@ end
function _randomize_load_profile( function _randomize_load_profile(
rng, rng,
instance::UnitCommitmentInstance, sc::UnitCommitmentScenario,
params::Randomization, params::Randomization,
)::Nothing )::Nothing
# Generate new system load # Generate new system load
system_load = [1.0] system_load = [1.0]
for t in 2:instance.time for t in 2:sc.time
idx = (t - 1) % length(params.load_profile_mu) + 1 idx = (t - 1) % length(params.load_profile_mu) + 1
gamma = rand( gamma = rand(
rng, rng,
@@ -169,14 +177,14 @@ function _randomize_load_profile(
) )
push!(system_load, system_load[t-1] * gamma) push!(system_load, system_load[t-1] * gamma)
end end
capacity = sum(maximum(u.max_power) for u in instance.units) capacity = sum(maximum(u.max_power) for u in sc.thermal_units)
peak_load = rand(rng, params.peak_load) * capacity peak_load = rand(rng, params.peak_load) * capacity
system_load = system_load ./ maximum(system_load) .* peak_load system_load = system_load ./ maximum(system_load) .* peak_load
# Scale bus loads to match the new system load # Scale bus loads to match the new system load
prev_system_load = sum(b.load for b in instance.buses) prev_system_load = sum(b.load for b in sc.buses)
for b in instance.buses for b in sc.buses
for t in 1:instance.time for t in 1:sc.time
b.load[t] *= system_load[t] / prev_system_load[t] b.load[t] *= system_load[t] / prev_system_load[t]
end end
end end
@@ -200,16 +208,54 @@ function randomize!(
method::XavQiuAhm2021.Randomization; method::XavQiuAhm2021.Randomization;
rng = MersenneTwister(), rng = MersenneTwister(),
)::Nothing )::Nothing
if method.randomize_costs for sc in instance.scenarios
XavQiuAhm2021._randomize_costs(rng, instance, method.cost) randomize!(sc, method; rng)
end
if method.randomize_load_share
XavQiuAhm2021._randomize_load_share(rng, instance, method.load_share)
end
if method.randomize_load_profile
XavQiuAhm2021._randomize_load_profile(rng, instance, method)
end end
return return
end end
function randomize!(
sc::UnitCommitment.UnitCommitmentScenario,
method::XavQiuAhm2021.Randomization;
rng = MersenneTwister(),
)::Nothing
if method.randomize_costs
XavQiuAhm2021._randomize_costs(rng, sc, method.cost)
end
if method.randomize_load_share
XavQiuAhm2021._randomize_load_share(rng, sc, method.load_share)
end
if method.randomize_load_profile
XavQiuAhm2021._randomize_load_profile(rng, sc, method)
end
return
end
"""
function randomize!(
instance::UnitCommitmentInstance;
method = UnitCommitment.XavQiuAhm2021.Randomization();
rng = MersenneTwister(),
)::Nothing
Randomizes instance parameters according to the provided randomization method.
# Example
```julia
instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
UnitCommitment.randomize!(instance)
model = UnitCommitment.build_model(; instance)
```
"""
function randomize!(
instance::UnitCommitment.UnitCommitmentInstance;
method = XavQiuAhm2021.Randomization(),
rng = MersenneTwister(),
)::Nothing
randomize!(instance, method; rng)
return
end
export randomize! export randomize!

View File

@@ -12,10 +12,11 @@ conditions are also not modified.
 Example
 -------
-# Build a 2-hour UC instance
-instance = UnitCommitment.read_benchmark("test/case14")
-modified = UnitCommitment.slice(instance, 1:2)
+```julia
+# Build a 2-hour UC instance
+instance = UnitCommitment.read_benchmark("matpower/case118/2017-02-01")
+modified = UnitCommitment.slice(instance, 1:2)
+```
""" """
function slice( function slice(
instance::UnitCommitmentInstance, instance::UnitCommitmentInstance,
@@ -23,31 +24,53 @@ function slice(
)::UnitCommitmentInstance )::UnitCommitmentInstance
modified = deepcopy(instance) modified = deepcopy(instance)
modified.time = length(range) modified.time = length(range)
modified.power_balance_penalty = modified.power_balance_penalty[range] for sc in modified.scenarios
for r in modified.reserves sc.power_balance_penalty = sc.power_balance_penalty[range]
r.amount = r.amount[range] for r in sc.reserves
end r.amount = r.amount[range]
for u in modified.units end
u.max_power = u.max_power[range] for u in sc.thermal_units
u.min_power = u.min_power[range] u.max_power = u.max_power[range]
u.must_run = u.must_run[range] u.min_power = u.min_power[range]
u.min_power_cost = u.min_power_cost[range] u.must_run = u.must_run[range]
for s in u.cost_segments u.min_power_cost = u.min_power_cost[range]
s.mw = s.mw[range] for s in u.cost_segments
s.cost = s.cost[range] s.mw = s.mw[range]
s.cost = s.cost[range]
end
end
for pu in sc.profiled_units
pu.max_power = pu.max_power[range]
pu.min_power = pu.min_power[range]
pu.cost = pu.cost[range]
end
for b in sc.buses
b.load = b.load[range]
end
for l in sc.lines
l.normal_flow_limit = l.normal_flow_limit[range]
l.emergency_flow_limit = l.emergency_flow_limit[range]
l.flow_limit_penalty = l.flow_limit_penalty[range]
end
for ps in sc.price_sensitive_loads
ps.demand = ps.demand[range]
ps.revenue = ps.revenue[range]
end
for su in sc.storage_units
su.min_level = su.min_level[range]
su.max_level = su.max_level[range]
su.simultaneous_charge_and_discharge =
su.simultaneous_charge_and_discharge[range]
su.charge_cost = su.charge_cost[range]
su.discharge_cost = su.discharge_cost[range]
su.charge_efficiency = su.charge_efficiency[range]
su.discharge_efficiency = su.discharge_efficiency[range]
su.loss_factor = su.loss_factor[range]
su.min_charge_rate = su.min_charge_rate[range]
su.max_charge_rate = su.max_charge_rate[range]
su.min_discharge_rate = su.min_discharge_rate[range]
su.max_discharge_rate = su.max_discharge_rate[range]
end end
end
for b in modified.buses
b.load = b.load[range]
end
for l in modified.lines
l.normal_flow_limit = l.normal_flow_limit[range]
l.emergency_flow_limit = l.emergency_flow_limit[range]
l.flow_limit_penalty = l.flow_limit_penalty[range]
end
for ps in modified.price_sensitive_loads
ps.demand = ps.demand[range]
ps.revenue = ps.revenue[range]
end end
return modified return modified
end end

View File

@@ -3,19 +3,19 @@
 # Released under the modified BSD license. See COPYING.md for more details.
 """
-repair!(instance)
-Verifies that the given unit commitment instance is valid and automatically
+repair!(sc)
+Verifies that the given unit commitment scenario is valid and automatically
 fixes some validation errors if possible, issuing a warning for each error
 found. If a validation error cannot be automatically fixed, issues an
 exception.
 Returns the number of validation errors found.
 """
-function repair!(instance::UnitCommitmentInstance)::Int
+function repair!(sc::UnitCommitmentScenario)::Int
 n_errors = 0
-for g in instance.units
+for g in sc.thermal_units
 # Startup costs and delays must be increasing
 for s in 2:length(g.startup_categories)
@@ -38,7 +38,7 @@ function repair!(instance::UnitCommitmentInstance)::Int
 end
 end
-for t in 1:instance.time
+for t in 1:sc.time
 # Production cost curve should be convex
 for k in 2:length(g.cost_segments)
 cost = g.cost_segments[k].cost[t]

View File

@@ -28,6 +28,8 @@ function validate(
instance::UnitCommitmentInstance, instance::UnitCommitmentInstance,
solution::Union{Dict,OrderedDict}, solution::Union{Dict,OrderedDict},
)::Bool )::Bool
"Thermal production (MW)" keys(solution) ?
solution = Dict("s1" => solution) : nothing
err_count = 0 err_count = 0
err_count += _validate_units(instance, solution) err_count += _validate_units(instance, solution)
err_count += _validate_reserve_and_demand(instance, solution) err_count += _validate_reserve_and_demand(instance, solution)
@@ -42,358 +44,613 @@ end
function _validate_units(instance::UnitCommitmentInstance, solution; tol = 0.01) function _validate_units(instance::UnitCommitmentInstance, solution; tol = 0.01)
err_count = 0 err_count = 0
for sc in instance.scenarios
for unit in instance.units for unit in sc.thermal_units
production = solution["Production (MW)"][unit.name] production = solution[sc.name]["Thermal production (MW)"][unit.name]
reserve = [0.0 for _ in 1:instance.time] reserve = [0.0 for _ in 1:instance.time]
spinning_reserves = [r for r in unit.reserves if r.type == "spinning"] spinning_reserves =
if !isempty(spinning_reserves) [r for r in unit.reserves if r.type == "spinning"]
reserve += sum( if !isempty(spinning_reserves)
solution["Spinning reserve (MW)"][r.name][unit.name] for reserve += sum(
r in spinning_reserves solution[sc.name]["Spinning reserve (MW)"][r.name][unit.name]
) for r in spinning_reserves
end )
actual_production_cost = solution["Production cost (\$)"][unit.name]
actual_startup_cost = solution["Startup cost (\$)"][unit.name]
is_on = bin(solution["Is on"][unit.name])
for t in 1:instance.time
# Auxiliary variables
if t == 1
is_starting_up = (unit.initial_status < 0) && is_on[t]
is_shutting_down = (unit.initial_status > 0) && !is_on[t]
ramp_up =
max(0, production[t] + reserve[t] - unit.initial_power)
ramp_down = max(0, unit.initial_power - production[t])
else
is_starting_up = !is_on[t-1] && is_on[t]
is_shutting_down = is_on[t-1] && !is_on[t]
ramp_up = max(0, production[t] + reserve[t] - production[t-1])
ramp_down = max(0, production[t-1] - production[t])
end end
actual_production_cost =
solution[sc.name]["Thermal production cost (\$)"][unit.name]
actual_startup_cost =
solution[sc.name]["Startup cost (\$)"][unit.name]
is_on = bin(solution[sc.name]["Is on"][unit.name])
# Compute production costs for t in 1:instance.time
production_cost, startup_cost = 0, 0 # Auxiliary variables
if is_on[t] if t == 1
production_cost += unit.min_power_cost[t] is_starting_up = (unit.initial_status < 0) && is_on[t]
residual = max(0, production[t] - unit.min_power[t]) is_shutting_down = (unit.initial_status > 0) && !is_on[t]
for s in unit.cost_segments ramp_up =
cleared = min(residual, s.mw[t]) max(0, production[t] + reserve[t] - unit.initial_power)
production_cost += cleared * s.cost[t] ramp_down = max(0, unit.initial_power - production[t])
residual = max(0, residual - s.mw[t]) else
is_starting_up = !is_on[t-1] && is_on[t]
is_shutting_down = is_on[t-1] && !is_on[t]
ramp_up =
max(0, production[t] + reserve[t] - production[t-1])
ramp_down = max(0, production[t-1] - production[t])
end end
end
# Production should be non-negative # Compute production costs
if production[t] < -tol production_cost, startup_cost = 0, 0
@error @sprintf( if is_on[t]
"Unit %s produces negative amount of power at time %d (%.2f)", production_cost += unit.min_power_cost[t]
unit.name, residual = max(0, production[t] - unit.min_power[t])
t, for s in unit.cost_segments
production[t] cleared = min(residual, s.mw[t])
) production_cost += cleared * s.cost[t]
err_count += 1 residual = max(0, residual - s.mw[t])
end end
end
# Verify must-run # Production should be non-negative
if !is_on[t] && unit.must_run[t] if production[t] < -tol
@error @sprintf( @error @sprintf(
"Must-run unit %s is offline at time %d", "Unit %s produces negative amount of power at time %d (%.2f)",
unit.name, unit.name,
t t,
) production[t]
err_count += 1 )
end err_count += 1
end
# Verify reserve eligibility # Verify must-run
for r in instance.reserves if !is_on[t] && unit.must_run[t]
if r.type == "spinning" @error @sprintf(
if unit r.units && "Must-run unit %s is offline at time %d",
(unit in keys(solution["Spinning reserve (MW)"][r.name])) unit.name,
t
)
err_count += 1
end
# Verify reserve eligibility
for r in sc.reserves
if r.type == "spinning"
if unit r.thermal_units && (
unit in keys(
solution[sc.name]["Spinning reserve (MW)"][r.name],
)
)
@error @sprintf(
"Unit %s is not eligible to provide reserve %s",
unit.name,
r.name,
)
err_count += 1
end
end
end
# If unit is on, must produce at least its minimum power
if is_on[t] && (production[t] < unit.min_power[t] - tol)
@error @sprintf(
"Unit %s produces below its minimum limit at time %d (%.2f < %.2f)",
unit.name,
t,
production[t],
unit.min_power[t]
)
err_count += 1
end
# If unit is on, must produce at most its maximum power
if is_on[t] &&
(production[t] + reserve[t] > unit.max_power[t] + tol)
@error @sprintf(
"Unit %s produces above its maximum limit at time %d (%.2f + %.2f> %.2f)",
unit.name,
t,
production[t],
reserve[t],
unit.max_power[t]
)
err_count += 1
end
# If unit is off, must produce zero
if !is_on[t] && production[t] + reserve[t] > tol
@error @sprintf(
"Unit %s produces power at time %d while off (%.2f + %.2f > 0)",
unit.name,
t,
production[t],
reserve[t],
)
err_count += 1
end
# Startup limit
if is_starting_up && (ramp_up > unit.startup_limit + tol)
@error @sprintf(
"Unit %s exceeds startup limit at time %d (%.2f > %.2f)",
unit.name,
t,
ramp_up,
unit.startup_limit
)
err_count += 1
end
# Shutdown limit
if is_shutting_down && (ramp_down > unit.shutdown_limit + tol)
@error @sprintf(
"Unit %s exceeds shutdown limit at time %d (%.2f > %.2f)",
unit.name,
t,
ramp_down,
unit.shutdown_limit
)
err_count += 1
end
# Ramp-up limit
if !is_starting_up &&
!is_shutting_down &&
(ramp_up > unit.ramp_up_limit + tol)
@error @sprintf(
"Unit %s exceeds ramp up limit at time %d (%.2f > %.2f)",
unit.name,
t,
ramp_up,
unit.ramp_up_limit
)
err_count += 1
end
# Ramp-down limit
if !is_starting_up &&
!is_shutting_down &&
(ramp_down > unit.ramp_down_limit + tol)
@error @sprintf(
"Unit %s exceeds ramp down limit at time %d (%.2f > %.2f)",
unit.name,
t,
ramp_down,
unit.ramp_down_limit
)
err_count += 1
end
# Verify startup costs & minimum downtime
if is_starting_up
# Calculate how much time the unit has been offline
time_down = 0
for k in 1:(t-1)
if !is_on[t-k]
time_down += 1
else
break
end
end
if (t == time_down + 1) && (unit.initial_status < 0)
time_down -= unit.initial_status
end
# Calculate startup costs
for c in unit.startup_categories
if time_down >= c.delay
startup_cost = c.cost
end
end
# Check minimum downtime
if time_down < unit.min_downtime
@error @sprintf( @error @sprintf(
"Unit %s is not eligible to provide reserve %s", "Unit %s violates minimum downtime at time %d",
unit.name, unit.name,
r.name, t
) )
err_count += 1 err_count += 1
end end
end end
end
# If unit is on, must produce at least its minimum power # Verify minimum uptime
if is_on[t] && (production[t] < unit.min_power[t] - tol) if is_shutting_down
@error @sprintf(
"Unit %s produces below its minimum limit at time %d (%.2f < %.2f)",
unit.name,
t,
production[t],
unit.min_power[t]
)
err_count += 1
end
# If unit is on, must produce at most its maximum power # Calculate how much time the unit has been online
if is_on[t] && time_up = 0
(production[t] + reserve[t] > unit.max_power[t] + tol) for k in 1:(t-1)
@error @sprintf( if is_on[t-k]
"Unit %s produces above its maximum limit at time %d (%.2f + %.2f> %.2f)", time_up += 1
unit.name, else
t, break
production[t], end
reserve[t], end
unit.max_power[t] if (t == time_up + 1) && (unit.initial_status > 0)
) time_up += unit.initial_status
err_count += 1
end
# If unit is off, must produce zero
if !is_on[t] && production[t] + reserve[t] > tol
@error @sprintf(
"Unit %s produces power at time %d while off (%.2f + %.2f > 0)",
unit.name,
t,
production[t],
reserve[t],
)
err_count += 1
end
# Startup limit
if is_starting_up && (ramp_up > unit.startup_limit + tol)
@error @sprintf(
"Unit %s exceeds startup limit at time %d (%.2f > %.2f)",
unit.name,
t,
ramp_up,
unit.startup_limit
)
err_count += 1
end
# Shutdown limit
if is_shutting_down && (ramp_down > unit.shutdown_limit + tol)
@error @sprintf(
"Unit %s exceeds shutdown limit at time %d (%.2f > %.2f)",
unit.name,
t,
ramp_down,
unit.shutdown_limit
)
err_count += 1
end
# Ramp-up limit
if !is_starting_up &&
!is_shutting_down &&
(ramp_up > unit.ramp_up_limit + tol)
@error @sprintf(
"Unit %s exceeds ramp up limit at time %d (%.2f > %.2f)",
unit.name,
t,
ramp_up,
unit.ramp_up_limit
)
err_count += 1
end
# Ramp-down limit
if !is_starting_up &&
!is_shutting_down &&
(ramp_down > unit.ramp_down_limit + tol)
@error @sprintf(
"Unit %s exceeds ramp down limit at time %d (%.2f > %.2f)",
unit.name,
t,
ramp_down,
unit.ramp_down_limit
)
err_count += 1
end
# Verify startup costs & minimum downtime
if is_starting_up
# Calculate how much time the unit has been offline
time_down = 0
for k in 1:(t-1)
if !is_on[t-k]
time_down += 1
else
break
end end
end
if (t == time_down + 1) && (unit.initial_status < 0)
time_down -= unit.initial_status
end
# Calculate startup costs # Check minimum uptime
for c in unit.startup_categories if time_up < unit.min_uptime
if time_down >= c.delay @error @sprintf(
startup_cost = c.cost "Unit %s violates minimum uptime at time %d",
unit.name,
t
)
err_count += 1
end end
end end
# Check minimum downtime # Verify production costs
if time_down < unit.min_downtime if abs(actual_production_cost[t] - production_cost) > 1.00
@error @sprintf( @error @sprintf(
"Unit %s violates minimum downtime at time %d", "Unit %s has unexpected production cost at time %d (%.2f should be %.2f)",
unit.name, unit.name,
t t,
actual_production_cost[t],
production_cost
)
err_count += 1
end
# Verify startup costs
if abs(actual_startup_cost[t] - startup_cost) > 1.00
@error @sprintf(
"Unit %s has unexpected startup cost at time %d (%.2f should be %.2f)",
unit.name,
t,
actual_startup_cost[t],
startup_cost
) )
err_count += 1 err_count += 1
end end
end end
end
for pu in sc.profiled_units
production = solution[sc.name]["Profiled production (MW)"][pu.name]
for t in 1:instance.time
# Unit must produce at least its minimum power
if production[t] < pu.min_power[t] - tol
@error @sprintf(
"Profiled unit %s produces below its minimum limit at time %d (%.2f < %.2f)",
pu.name,
t,
production[t],
pu.min_power[t]
)
err_count += 1
end
# Unit must produce at most its maximum power
if production[t] > pu.max_power[t] + tol
@error @sprintf(
"Profiled unit %s produces above its maximum limit at time %d (%.2f > %.2f)",
pu.name,
t,
production[t],
pu.max_power[t]
)
err_count += 1
end
end
end
for su in sc.storage_units
storage_level = solution[sc.name]["Storage level (MWh)"][su.name]
charge_rate =
solution[sc.name]["Storage charging rates (MW)"][su.name]
discharge_rate =
solution[sc.name]["Storage discharging rates (MW)"][su.name]
actual_charge_cost =
solution[sc.name]["Storage charging cost (\$)"][su.name]
actual_discharge_cost =
solution[sc.name]["Storage discharging cost (\$)"][su.name]
is_charging = bin(solution[sc.name]["Is charging"][su.name])
is_discharging = bin(solution[sc.name]["Is discharging"][su.name])
# Convert the time step from minutes to hours
time_step = sc.time_step / 60
for t in 1:instance.time
# Unit must store at least its minimum level
if storage_level[t] < su.min_level[t] - tol
@error @sprintf(
"Storage unit %s stores below its minimum level at time %d (%.2f < %.2f)",
su.name,
t,
storage_level[t],
su.min_level[t]
)
err_count += 1
end
# Unit must store at most its maximum level
if storage_level[t] > su.max_level[t] + tol
@error @sprintf(
"Storage unit %s stores above its maximum level at time %d (%.2f > %.2f)",
su.name,
t,
storage_level[t],
su.max_level[t]
)
err_count += 1
end
if t == instance.time
# Unit must store at least its minimum level at last time period
if storage_level[t] < su.min_ending_level - tol
@error @sprintf(
"Storage unit %s stores below its minimum ending level (%.2f < %.2f)",
su.name,
storage_level[t],
su.min_ending_level
)
err_count += 1
end
# Unit must store at most its maximum level at last time period
if storage_level[t] > su.max_ending_level + tol
@error @sprintf(
"Storage unit %s stores above its maximum ending level (%.2f > %.2f)",
su.name,
storage_level[t],
su.max_ending_level
)
err_count += 1
end
end
# Unit must follow the energy transition constraint
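# That is: level[t] = (1 - loss_factor[t]) * level[t-1]
#          + Δt * (charge[t] * charge_efficiency[t] - discharge[t] / discharge_efficiency[t]),
# with level[0] given by su.initial_level and Δt = time_step (hours), as computed below.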
prev_level = t == 1 ? su.initial_level : storage_level[t-1]
current_level =
(1 - su.loss_factor[t]) * prev_level +
time_step * (
charge_rate[t] * su.charge_efficiency[t] -
discharge_rate[t] / su.discharge_efficiency[t]
)
if abs(storage_level[t] - current_level) > tol
@error @sprintf(
"Storage unit %s has unexpected level at time %d (%.2f should be %.2f)",
su.name,
t,
storage_level[t],
current_level
)
err_count += 1
end
# Unit cannot charge and discharge simultaneously unless this is allowed
if !su.simultaneous_charge_and_discharge[t] &&
is_charging[t] &&
is_discharging[t]
@error @sprintf(
"Storage unit %s is charging and discharging simultaneous at time %d",
su.name,
t
)
err_count += 1
end
# Unit must charge at least its minimum rate
if is_charging[t] &&
(charge_rate[t] < su.min_charge_rate[t] - tol)
@error @sprintf(
"Storage unit %s charges below its minimum limit at time %d (%.2f < %.2f)",
su.name,
t,
charge_rate[t],
su.min_charge_rate[t]
)
err_count += 1
end
# Unit must charge at most its maximum rate
if is_charging[t] &&
(charge_rate[t] > su.max_charge_rate[t] + tol)
@error @sprintf(
"Storage unit %s charges above its maximum limit at time %d (%.2f > %.2f)",
unit.name,
t,
charge_rate[t],
su.max_charge_rate[t]
)
err_count += 1
end
# Unit must have zero charge when it is not charging
if !is_charging[t] && (charge_rate[t] > tol)
@error @sprintf(
"Storage unit %s charges power at time %d while not charging (%.2f > 0)",
su.name,
t,
charge_rate[t]
)
err_count += 1
end
# Unit must discharge at least its minimum rate
if is_discharging[t] &&
(discharge_rate[t] < su.min_discharge_rate[t] - tol)
@error @sprintf(
"Storage unit %s discharges below its minimum limit at time %d (%.2f < %.2f)",
unit.name,
t,
discharge_rate[t],
su.min_discharge_rate[t]
)
err_count += 1
end
# Unit must discharge at most its maximum rate
if is_discharging[t] &&
(discharge_rate[t] > su.max_discharge_rate[t] + tol)
@error @sprintf(
"Storage unit %s discharges above its maximum limit at time %d (%.2f > %.2f)",
unit.name,
t,
discharge_rate[t],
su.max_discharge_rate[t]
)
err_count += 1
end
# Unit must have zero discharge when it is not discharging
if !is_discharging[t] && (discharge_rate[t] > tol)
@error @sprintf(
"Storage unit %s discharges power at time %d while not discharging (%.2f > 0)",
unit.name,
t,
discharge_rate[t]
)
err_count += 1
end
# Compute storage costs
charge_cost = su.charge_cost[t] * charge_rate[t]
discharge_cost = su.discharge_cost[t] * discharge_rate[t]
# Compare costs
if abs(actual_charge_cost[t] - charge_cost) > tol
@error @sprintf(
"Storage unit %s has unexpected charge cost at time %d (%.2f should be %.2f)",
su.name,
t,
actual_charge_cost[t],
charge_cost
)
err_count += 1
end
if abs(actual_discharge_cost[t] - discharge_cost) > tol
@error @sprintf(
"Storage unit %s has unexpected discharge cost at time %d (%.2f should be %.2f)",
su.name,
t,
actual_discharge_cost[t],
discharge_cost
)
err_count += 1
end
end
end
end
return err_count
end
function _validate_reserve_and_demand(instance, solution, tol = 0.01)
err_count = 0
for sc in instance.scenarios
for t in 1:instance.time
load_curtail = 0
fixed_load = sum(b.load[t] for b in sc.buses)
ps_load = 0
production = 0
storage_charge = 0
storage_discharge = 0
if length(sc.price_sensitive_loads) > 0
ps_load = sum(
solution[sc.name]["Price-sensitive loads (MW)"][ps.name][t]
for ps in sc.price_sensitive_loads
)
end
if length(sc.thermal_units) > 0
production = sum(
solution[sc.name]["Thermal production (MW)"][g.name][t]
for g in sc.thermal_units
)
end
if length(sc.profiled_units) > 0
production += sum(
solution[sc.name]["Profiled production (MW)"][pu.name][t]
for pu in sc.profiled_units
)
end
if length(sc.storage_units) > 0
storage_charge += sum(
solution[sc.name]["Storage charging rates (MW)"][su.name][t]
for su in sc.storage_units
)
storage_discharge += sum(
solution[sc.name]["Storage discharging rates (MW)"][su.name][t]
for su in sc.storage_units
)
end
if "Load curtail (MW)" in keys(solution)
load_curtail = sum(
solution[sc.name]["Load curtail (MW)"][b.name][t] for
b in sc.buses
)
end
balance = fixed_load - load_curtail - production + ps_load + storage_charge - storage_discharge
# Verify that production equals demand
if abs(balance) > tol
@error @sprintf(
"Non-zero power balance at time %d (%.2f + %.2f - %.2f - %.2f + %.2f - %.2f != 0)",
t,
fixed_load,
ps_load,
load_curtail,
production,
storage_charge,
storage_discharge,
)
err_count += 1
end
# Verify reserves
for r in sc.reserves
if r.type == "spinning"
provided = sum(
solution[sc.name]["Spinning reserve (MW)"][r.name][g.name][t]
for g in r.thermal_units
)
shortfall = solution[sc.name]["Spinning reserve shortfall (MW)"][r.name][t]
required = r.amount[t]
if provided + shortfall < required - tol
@error @sprintf(
"Insufficient reserve %s at time %d (%.2f + %.2f < %.2f)",
r.name,
t,
provided,
shortfall,
required,
)
end
elseif r.type == "flexiramp"
upflexiramp = sum(
solution[sc.name]["Up-flexiramp (MW)"][r.name][g.name][t]
for g in r.thermal_units
)
upflexiramp_shortfall = solution[sc.name]["Up-flexiramp shortfall (MW)"][r.name][t]
if upflexiramp + upflexiramp_shortfall < r.amount[t] - tol
@error @sprintf(
"Insufficient up-flexiramp at time %d (%.2f + %.2f < %.2f)",
t,
upflexiramp,
upflexiramp_shortfall,
r.amount[t],
)
err_count += 1
end
dwflexiramp = sum(
solution[sc.name]["Down-flexiramp (MW)"][r.name][g.name][t]
for g in r.thermal_units
)
dwflexiramp_shortfall = solution[sc.name]["Down-flexiramp shortfall (MW)"][r.name][t]
if dwflexiramp + dwflexiramp_shortfall < r.amount[t] - tol
@error @sprintf(
"Insufficient down-flexiramp at time %d (%.2f + %.2f < %.2f)",
t,
dwflexiramp,
dwflexiramp_shortfall,
r.amount[t],
)
err_count += 1
end
else
error("Unknown reserve type: $(r.type)")
end
end
end
end


@@ -1,25 +1,21 @@
name = "UnitCommitmentT"
uuid = "a3b7a17a-ab64-45e4-a924-cd5ae7dc644e"
authors = ["Alinson S. Xavier <git@axavier.org>"]
version = "0.1.0"
[deps] [deps]
Cbc = "9961bab8-2fa3-5c5a-9d89-47fab24efd76" Cbc = "9961bab8-2fa3-5c5a-9d89-47fab24efd76"
DataStructures = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8" DataStructures = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f" Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
GZip = "92fee26a-97fe-5a0c-ad85-20a5f3185b63" GZip = "92fee26a-97fe-5a0c-ad85-20a5f3185b63"
HiGHS = "87dc4568-4c63-4d18-b0c0-bb2238e4078b"
JSON = "682c06a0-de6a-54ab-a142-c8b1cf79cde6" JSON = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
JuMP = "4076af6c-e467-56ae-b986-b466b2749572" JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
JuliaFormatter = "98e50ef6-434e-11e9-1051-2b60c6c9e899"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e" LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
Logging = "56ddb016-857b-54e1-b83d-db4d58db5568" MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"
MathOptInterface = "b8f27783-ece8-5eb3-8dc8-9495eed66fee" MathOptInterface = "b8f27783-ece8-5eb3-8dc8-9495eed66fee"
PackageCompiler = "9b87118b-4619-50d2-8e1e-99f35a4d4d9d"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c" Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf" Revise = "295af30f-e4ad-537b-8983-00126c2a3abe"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40" Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
UnitCommitment = "64606440-39ea-11e9-0f29-3303a1d3d877"
[compat]
DataStructures = "0.18"
Distributions = "0.25"
GZip = "0.5"
JSON = "0.21"
JuMP = "1"
MathOptInterface = "1"
PackageCompiler = "1"
julia = "1"

BIN  test/fixtures/aelmp_simple.json.gz (vendored, new binary file; not shown)
BIN  test/fixtures/case14-flex.json.gz (vendored, new binary file; not shown)
BIN  test/fixtures/case14-profiled.json.gz (vendored, new binary file; not shown)
BIN  test/fixtures/case14-storage.json.gz (vendored, new binary file; not shown)
BIN  test/fixtures/case14-sub-hourly.json.gz (vendored, new binary file; not shown)
BIN  test/fixtures/case14.json.gz (vendored, new binary file; not shown)
BIN  test/fixtures/lmp_simple_test_1.json.gz (vendored, new binary file; not shown)
Some files were not shown because too many files have changed in this diff.