Website: Add benchmark set summary slides and table #128
siddharth-krishna merged 22 commits into main from
Conversation
…l' of https://github.com/open-energy-transition/solver-benchmark into 115-Benchmark-details-add-relative-performance-plot-from-Matthias
Thanks, Jacek! A few requests, please:
- Can we have the text columns in the table aligned left and the numerical columns aligned right?
- "Total n. of different problems" -> "Total number of benchmark problems", "Multiple size instances" -> "Total number of benchmark size instances", "MILP Feature" -> "MILP Features"
- Could we add a column to the end called Total that sums up the numbers across the columns for each model framework? This way we can see e.g. the total number of LPs vs MILPs
- Can we merge table cells vertically so that e.g. the cell saying "Technique" spans the two rows containing "LP" and "MILP"?
- And can you please double-check the numbers against the screenshot in #119? Right now the "Multiple size instances" row has the same numbers as the row above it, for instance.
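The requested Total column is just a row-wise sum over the framework columns. A minimal pandas sketch, with hypothetical column names and numbers, could look like:

```python
import pandas as pd

# Hypothetical summary table: one row per metric, one column per model framework.
# (Names and values are illustrative, not the real benchmark counts.)
df = pd.DataFrame(
    {
        "Metric": [
            "Total number of benchmark problems",
            "Total number of benchmark size instances",
        ],
        "PyPSA": [10, 25],
        "Tulipa": [1, 6],
        "PowerModels": [5, 5],
    }
)

# Add a "Total" column summing the numeric framework columns for each row,
# so e.g. the total number of LPs vs MILPs is visible at a glance.
df["Total"] = df.select_dtypes("number").sum(axis=1)
print(df)
```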
@danielelerede-oet could you please also take a look at the table this PR adds to the "Benchmark details" dashboard? Just to check if it looks okay, but also to see if any of the data would be better presented by a graph. For instance, would it be better to have a stacked bar chart for "MILP Features" with one bar per feature on the x-axis, and different colors in the bar for each model framework?
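The stacked-bar idea above boils down to a per-feature, per-framework count. A sketch of the underlying table, using hypothetical metadata rows (the real data would come from the benchmark metadata file), might be:

```python
import pandas as pd

# Hypothetical per-benchmark metadata: one row per benchmark,
# with its model framework and MILP feature.
benchmarks = pd.DataFrame(
    {
        "framework": ["PyPSA", "PyPSA", "TEMOA", "Tulipa"],
        "milp_feature": [
            "unit commitment",
            "discrete capacity",
            "unit commitment",
            "unit commitment",
        ],
    }
)

# Counts per (feature, framework): one stacked bar per feature on the x-axis,
# one color segment per framework within each bar.
counts = pd.crosstab(benchmarks["milp_feature"], benchmarks["framework"])
print(counts)
```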
I checked, e.g.:
- Model: Tulipa. Total number of benchmark problems: 1. Total number of benchmark size instances: 6.
- Model: Power Models. Total number of benchmark size instances: 5.
…er-benchmark into 119-Benchmark-summary-table-from-slides-(or-nice-graphs)
Hi @siddharth-krishna @jacek-oet. The logic behind the computation of "Total number of benchmark problems" and "Total number of benchmark size instances" seems to work well, but I'm not sure that the definition of "Total number of benchmark size instances" is straightforward. Could it be better to have something like "N. of different size instances developed from the same benchmark"? In that case, e.g. for the pglib ones it would be "Total number of benchmark problems": 5 and "N. of different size instances developed from the same benchmark": 0, because all the instances are separate ones (though the way Jacek developed it now is in agreement with the current definition).

Concerning representation, it's definitely worth having a summary bar plot with the different MILP features as Sid was suggesting, and the same would be valid for "Time horizon" and "Kind of problem". For sizes, it could be useful to develop something more complex, comparing the number of constraints and variables with spatial and time resolution (though the latter may be a varying concept depending on the modelling platform: for instance, nodes in PyPSA vs. regions in TIMES, and solving a model for a single region in TIMES can be far more complex than solving a model for a single node in PyPSA, as the two concepts do not overlap).
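One possible reading of the two definitions discussed above, sketched in plain Python over a hypothetical list of (benchmark problem, size instance) pairs (the names and the "extra sizes" interpretation are assumptions, not the repo's actual logic):

```python
from collections import Counter

# Hypothetical (benchmark_problem, size_instance) pairs.
instances = [
    ("pglib_case_a", "s1"),
    ("pglib_case_b", "s1"),
    ("tulipa_model", "s1"),
    ("tulipa_model", "s2"),
    ("tulipa_model", "s3"),
]

sizes_per_problem = Counter(problem for problem, _ in instances)

# Current definition: every size instance counts.
total_problems = len(sizes_per_problem)
total_instances = len(instances)

# Proposed alternative: only count the *extra* size instances derived
# from the same benchmark problem (a problem with one size contributes 0).
extra_sizes = sum(n - 1 for n in sizes_per_problem.values() if n > 1)
print(total_problems, total_instances, extra_sizes)
```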
Hi @siddharth-krishna @jacek-oet here's my proposal for the table to use for the website (please ignore the numbers, it's just to show the structure I would adopt). A row has been removed ("Kind of problem", as it doesn't add much about the specific modelling platform: users know they can perform both power and sector-coupled modelling on PyPSA, while they can't do power sector analyses on TEMOA). I would also reduce the detail on MILP features: now we have both single features and combinations, but I'd suggest we avoid this on the website and double-count the benchmarks in each of the two cells corresponding to the adopted MILP feature.
@danielelerede-oet @jacek-oet I propose a row of plots instead of Daniele's amended table from last week, something like this (but with the legends showing on each subplot instead of all together on the right side -- I couldn't figure this out in plotly).

Note: I removed the MILP features plot for now, because it's hard to extract the categories Daniele proposed automatically. I'll make an issue for this; I think we may have to modify the metadata file to have comma-separated values in the

Note also: the size plot will change soon, as we will shift to using num vars to determine size, and have 'real'ness as a separate category. So let's not spend too much time on it in this PR.
Hi @siddharth-krishna, the graph looks great, especially with your proposed amendments.
Ah yes, you're right, I forgot to mention that in my previous comment. Indeed, there was some confusion between the number of benchmarks and the number of benchmark instances, which is why in my current proposal I'm suggesting we always use the number of benchmark instances (including sizes, so the larger number) in all the plots, to simplify things. We can have a sentence above/below the row of plots that explains this. Do you think that's okay? People can still use the filters and the table of all benchmarks below to answer queries like "how many benchmark problems (not instances) do we have that use PyPSA and MILP".
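The instances-vs-problems distinction in such a query is just the difference between counting rows and counting distinct benchmarks. A small pandas sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical table of benchmark instances: one row per size instance.
instances = pd.DataFrame(
    {
        "benchmark": ["a", "a", "a", "b", "c"],
        "framework": ["PyPSA", "PyPSA", "PyPSA", "PyPSA", "TEMOA"],
        "technique": ["MILP", "MILP", "MILP", "LP", "MILP"],
    }
)

subset = instances[
    (instances["framework"] == "PyPSA") & (instances["technique"] == "MILP")
]
n_instances = len(subset)                   # benchmark instances (incl. sizes)
n_problems = subset["benchmark"].nunique()  # distinct benchmark problems
print(n_instances, n_problems)
```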
@jacek-oet I had a call with Daniele and we aligned on the following proposal: let's replace the current table (titled Model Distribution Matrix) with a row of plots that look like the above, but displaying the following info:
For all plots, let's use the number of benchmark instances (including sizes) on the y-axis for simplicity. If this makes sense, can you please amend the 'Benchmark details' page as follows:
…er-benchmark into 119-Benchmark-summary-table-from-slides-(or-nice-graphs)
…p comments in chart type definitions for clarity
Thanks so much Jacek! It looks good to me on a high level. @danielelerede-oet can I request you run this branch on your laptop and take a look at the 'Benchmark details' page and the new page you get to when you click the 'See more details' button?
Jacek a few minor requests from me please:
- Can we remove the `DetailSection` top bar from `benchmark-details.tsx` and `benchmark-summary.tsx`?
- In benchmark-details, can we move the filters to be below the `List of All Benchmarks` header?
- In benchmark-summary, can we have a H1 header at the top of the page saying `Distribution of Model Features in Benchmark Set`? (Daniele, please suggest a better title for that page if you have one!)
- Can we have the breadcrumbs in benchmark-summary look like `Benchmark Details > Feature Distribution`? (again, open to better suggestions here)
Thanks!




closes #119