docs/Criteria_and_instructions.md
## Goals for the benchmark set
We encourage submission of benchmarks that help the project meet the following overall targets:
1. A set of benchmarks that are diverse in terms of the modelling frameworks that generated them, problem structure, and model features. For instance, we would like models that consider innovative technologies (e.g., electrolyzers, CO2 capture) or policy-driven constraints (e.g., on CO2 emissions). By "features" we mean the different kinds of energy planning problems that can be modelled by the framework (e.g., capacity expansion, power system operations, resource adequacy).
1. Benchmarks that use model features implemented via MILP constraints, especially features other than unit commitment.
1. Benchmarks that help open-source solver developers improve their solvers: benchmarks that can be solved rapidly (< 5 minutes) by Gurobi but are slow (~1 hour or higher) or fail when solved by an open-source solver.
## Criteria for the selection of benchmarks
The Solver Benchmark project is open and encourages the community to submit benchmark problems. Please ensure that submissions adhere to the following criteria:
1. Benchmarks must be in the `.lp` or `.mps` file formats, which are suitable for providing directly to the solver as input (i.e., no further pre-processing must be necessary). An advantage of these formats is that they preserve the [confidentiality of the model's input data](https://www.gams.com/48/docs/S_CONVERT.html?search=confidential), as they contain only the mathematical equations and it is nearly impossible to reconstruct the underlying energy specification and technological data from them.
1. Benchmarks must be Linear Programming (LP) or Mixed Integer Linear Programming (MILP) problems. We do not currently accept other kinds of problems, such as non-linear or multi-objective problems.
1. Benchmarks must be problems generated by bottom-up energy system models (see *Target modelling frameworks* below).
1. Benchmarks must be solvable in one of the following time limits, depending on the size category:
   - Small: under 10 minutes of HiGHS solving time
   - Medium: under 1 hour of HiGHS solving time
   - Large / Real: under 10 hours of Gurobi solving time

   where HiGHS runtimes are measured with the latest solver versions on a machine with [TBD] 2 vCPUs and 8 GB memory (e.g. an `e2-standard-2` VM on Google Cloud), and Gurobi solving time is measured on a [TBD -- reasonable machine?].
Whenever possible, we prefer benchmarks that can be generated in multiple "sizes" by varying the time scale (single-stage / multi-stage planning horizons), temporal resolution (hourly, daily, etc.), or spatial resolution (number of regions / nodes).
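As an illustration of the accepted file formats (this is a hand-written toy problem, not a file from the project; real submissions are generated by a modelling framework), a minimal problem in the `.lp` format might look like:

```
\ Illustrative toy example of the .lp format
Minimize
 obj: 2 x1 + 3 x2
Subject To
 c1: x1 + x2 >= 10
 c2: x1 - x2 <= 4
Bounds
 0 <= x1 <= 20
 0 <= x2 <= 20
End
```

Note that the file exposes only variables, coefficients, and constraints, which is what makes it possible to share benchmarks without revealing the underlying input data.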
## Instructions for submitting benchmarks
The preferred and recommended approach for submission is to open a pull request to this repository that adds the following to the `benchmarks/<framework>/` folder:
- Metadata (name, description, etc.; see below) added to a YAML file `benchmarks/<framework>/metadata.yaml` (create this file if it doesn't exist already)
- A configuration file that is used as an input to the modelling framework
- A dockerfile that specifies the modelling framework version (preferably a commit hash), pinned versions of all dependencies, and a script to run the modelling framework and obtain the LP/MPS file given to the solver.
  - For example, see the benchmarks in the `benchmarks/pypsa/` folder.
- For modelling frameworks that are not fully open source, where LP/MPS files cannot be reproduced automatically as above, we will accept LP/MPS files hosted on a public immutable file storage service such as Zenodo. In such cases, the metadata file containing a URL to download the benchmark (preferably via a permalink) is sufficient.
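The dockerfile item above might look like the following rough sketch. The base image, package versions, and file names (`config.yaml`, `generate_lp.py`) are illustrative assumptions, not the project's actual layout; pin whatever versions your framework needs.

```dockerfile
# Illustrative sketch only -- pin your framework's actual version (ideally a commit hash)
FROM python:3.11-slim

# Pinned modelling framework and dependencies (versions here are examples)
RUN pip install --no-cache-dir pypsa==0.27.0 highspy==1.7.2

# The configuration file used as input to the modelling framework
COPY config.yaml /benchmark/config.yaml

# Script that runs the framework and writes the LP/MPS file given to the solver
COPY generate_lp.py /benchmark/generate_lp.py
WORKDIR /benchmark
CMD ["python", "generate_lp.py", "config.yaml"]
```

Pinning everything in the image is what makes the LP/MPS file reproducible by anyone reviewing the submission.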
### Benchmark metadata
Please include, along with each benchmark submission, the following metadata. Fur…
|**Technique**| LP | MILP |
|**Kind of problem**| Infrastructure (capacity expansion) | Operational (dispatch only) | Other (please indicate) |
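As a hypothetical sketch of how these fields might appear in `benchmarks/<framework>/metadata.yaml` (the key names and the benchmark name below are guesses based on the table above, not a confirmed schema):

```yaml
# Illustrative only: key names are assumptions, not the project's confirmed schema
example-benchmark-10-nodes:
  description: Capacity expansion model of a hypothetical 10-node network
  technique: MILP                                  # LP or MILP
  kind_of_problem: Infrastructure (capacity expansion)
  size: Medium                                     # Small / Medium / Large
```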