|
13 | 13 | "id": "dffd5e98", |
14 | 14 | "metadata": {}, |
15 | 15 | "source": [ |
16 | | - "# Case study 4: Calibrate Marrmot M14 using GRDC data" |
| 16 | + "# Case study 5: Calibrate Marrmot M14 using GRDC data" |
17 | 17 | ] |
18 | 18 | }, |
19 | 19 | { |
|
25 | 25 | "The model used in this notebook is: [MARRMoT](https://github.com/wknoben/MARRMoT) M14 (TOPMODEL)\n", |
26 | 26 | "We use [CMA Evolution Strategy (CMA-ES) algorithm](https://github.com/CMA-ES/pycma) package for finding the best parameters for the model. The procedure is explained in the cells below.\n", |
27 | 27 | "\n", |
| 28 | + "This notebook is written in a different style of programming than the other notebooks in this repo. In this notebook functions for the most important parts of the analyses are written first, and subsequently called, instead of having a more script like style of programming. This is done for two reasons.\n", |
| 29 | + "\n", |
| 30 | + "1. Since many optimization algorithms, CMA-ES included, require functions to be optimized to be passed to the algorithm. This notebooks shows that the eWaterCycle platform support this type of workflow.\n", |
| 31 | + "2. This notebook shows that the script style of programming used in the other notebooks of this repository is not a requirement of the platform but rather a choice of the user and other styles of programming are equally supported.\n", |
28 | 32 | "\n", |
29 | 33 | "***NOTE: this notebooks is computationally expensive. Although it might possible to execute it on the demo machine (be sure to adapt the POPSIZE and MAXITER settings below), this is not recommended. This notebook has been executed on a machine with 24 cpu cores and took about 4 hours to complete. It is included here to demonstrate what is possible with ewatercycle, not as a performance benchmark. Please use it with care.***" |
30 | 34 | ] |
31 | 35 | }, |
32 | 36 | { |
33 | 37 | "cell_type": "code", |
34 | | - "execution_count": 2, |
| 38 | + "execution_count": 1, |
35 | 39 | "id": "21bb49ea", |
36 | 40 | "metadata": {}, |
37 | 41 | "outputs": [ |
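
The added markdown above explains that CMA-ES needs the function to be optimized passed to it, which motivates the function-first layout of the notebook. Below is a minimal, hypothetical sketch of that pattern with pycma's ask-and-tell interface; the `objective` function is a stand-in, not the notebook's MARRMoT/GRDC objective, and the initial values are made up for illustration.

```python
import cma

def objective(parameters):
    # Hypothetical stand-in for the notebook's objective: in the real notebook
    # this would run MARRMoT M14 for the calibration period and score the
    # simulated discharge against GRDC observations.
    return sum((p - 0.5) ** 2 for p in parameters)

# Illustrative initial guess and step size; popsize/maxiter mirror the options
# discussed further down in the notebook.
es = cma.CMAEvolutionStrategy([0.5, 0.5, 0.5], 0.25, {"popsize": 18, "maxiter": 50})

while not es.stop():
    candidates = es.ask()  # propose one generation of candidate parameter sets
    es.tell(candidates, [objective(c) for c in candidates])  # report their scores

print(es.result.xbest)  # best parameter set found
```
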
|
54 | 58 | }, |
55 | 59 | { |
56 | 60 | "cell_type": "code", |
57 | | - "execution_count": null, |
| 61 | + "execution_count": 2, |
58 | 62 | "id": "2c8216fe", |
59 | 63 | "metadata": { |
60 | 64 | "tags": [] |
|
106 | 110 | }, |
107 | 111 | { |
108 | 112 | "cell_type": "code", |
109 | | - "execution_count": 2, |
| 113 | + "execution_count": 3, |
110 | 114 | "id": "9bebaff3", |
111 | 115 | "metadata": { |
112 | 116 | "tags": [] |
|
129 | 133 | "## 1. Loading forcing data" |
130 | 134 | ] |
131 | 135 | }, |
| 136 | + { |
| 137 | + "cell_type": "markdown", |
| 138 | + "id": "7b0f49af-39f9-4427-bc10-b7fb4bac66e1", |
| 139 | + "metadata": {}, |
| 140 | + "source": [ |
| 141 | + "Note that while this notebook was run on the Cartesius supercomputer, the file system of the SURF research cloud has (on purpose) a similar structure to make sure that calls like the one below work on both machines." |
| 142 | + ] |
| 143 | + }, |
132 | 144 | { |
133 | 145 | "cell_type": "code", |
134 | | - "execution_count": 3, |
| 146 | + "execution_count": 4, |
135 | 147 | "id": "863ad1b9", |
136 | 148 | "metadata": {}, |
137 | 149 | "outputs": [], |
|
141 | 153 | }, |
142 | 154 | { |
143 | 155 | "cell_type": "code", |
144 | | - "execution_count": 4, |
| 156 | + "execution_count": 5, |
145 | 157 | "id": "e8f244c9", |
146 | 158 | "metadata": {}, |
147 | 159 | "outputs": [ |
|
191 | 203 | "and about `ask-and-tell interface` see an example at https://pypi.org/project/cma/.\n", |
192 | 204 | "\n", |
193 | 205 | "### Practical hints\n", |
194 | | - "The `run_model()` function uses two CPU cores (model-octave, and bmi-server)\n", |
| 206 | + "The `run_model()` function uses 1.33 CPU cores (1 for model-octave, and 1/3 for communication)\n", |
195 | 207 | "and takes about 4 min to run the model for the CALIBRATION period.\n", |
196 | 208 | "There are 24 cores available in a `normal` partition on Cartesius.\n", |
197 | 209 | "If you want to keep the running time to 4 min for e.g. CALIBRATION, you can\n", |
198 | | - "have 12 runs simultaneously. Therefore, the `popsize:12` in the cma-es option.\n", |
| 210 | + "have 18 runs simultaneously. Therefore, the `popsize:18` in the cma-es option.\n", |
199 | 211 | "If you want to get the results of the whole notebook after a\n", |
200 | 212 | "reasenable amount of time, you can change `maxiter` in the cma-es option." |
201 | 213 | ] |
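
The practical hints above tie `popsize` to the core budget: each `run_model()` call needs about 1.33 cores, so a 24-core node can evaluate roughly 18 candidates at once and one generation then costs about one 4-minute model run of wall-clock time. As a hedged sketch of evaluating a generation in parallel with the standard library (the `objective` is the hypothetical stand-in from the earlier sketch, not the notebook's code, and `max_workers=18` assumes the 24-core setup described above):

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_generation(candidates, objective, max_workers=18):
    # With max_workers equal to popsize, the wall-clock time of one generation
    # is roughly that of a single model run (~4 min for the calibration period).
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(objective, candidates))
```
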
|
210 | 222 | }, |
211 | 223 | { |
212 | 224 | "cell_type": "code", |
213 | | - "execution_count": 5, |
| 225 | + "execution_count": 6, |
214 | 226 | "id": "fec6e64d", |
215 | 227 | "metadata": {}, |
216 | 228 | "outputs": [], |
217 | 229 | "source": [ |
218 | | - "POPSIZE = 18 # it can be equal to number of available cores/2\n", |
| 230 | + "POPSIZE = 18 # it can be equal to number of available cores * 0.75\n", |
219 | 231 | "MAXITER = 50 # with this, the notebook takes ~4 hours (maximum)" |
220 | 232 | ] |
221 | 233 | }, |
222 | 234 | { |
223 | 235 | "cell_type": "code", |
224 | | - "execution_count": 6, |
| 236 | + "execution_count": 7, |
225 | 237 | "id": "feda0536", |
226 | 238 | "metadata": {}, |
227 | 239 | "outputs": [], |
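
The `POPSIZE` comment above ("number of available cores * 0.75") matches the ~1.33 cores per run from the practical hints: 24 / 1.33 ≈ 24 * 0.75 = 18. A hypothetical way to derive it from the machine at hand (not part of the original notebook):

```python
import os

CORES_PER_RUN = 4 / 3  # ~1.33 cores per run_model() call (see the practical hints)
POPSIZE = int(os.cpu_count() // CORES_PER_RUN)  # e.g. 24 cores -> 18
MAXITER = 50  # as in the notebook (~4 hours maximum)
```
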
|
392 | 404 | }, |
393 | 405 | { |
394 | 406 | "cell_type": "code", |
395 | | - "execution_count": 7, |
| 407 | + "execution_count": 8, |
396 | 408 | "id": "e19135a5", |
397 | 409 | "metadata": { |
398 | 410 | "execution": { |
|
453 | 465 | }, |
454 | 466 | { |
455 | 467 | "cell_type": "code", |
456 | | - "execution_count": 8, |
| 468 | + "execution_count": 9, |
457 | 469 | "id": "e19fbdeb", |
458 | 470 | "metadata": { |
459 | 471 | "execution": { |
|
505 | 517 | }, |
506 | 518 | { |
507 | 519 | "cell_type": "code", |
508 | | - "execution_count": 9, |
| 520 | + "execution_count": 10, |
509 | 521 | "id": "8d96f4b3", |
510 | 522 | "metadata": { |
511 | 523 | "execution": { |
|
581 | 593 | " return fig" |
582 | 594 | ] |
583 | 595 | }, |
| 596 | + { |
| 597 | + "cell_type": "markdown", |
| 598 | + "id": "1cc3fbe3-e522-4b26-bdfe-f053b4806d17", |
| 599 | + "metadata": {}, |
| 600 | + "source": [ |
| 601 | + "Since this run is done on the Cartesius supercomputer the output figure is written into the output folder as set in the configuration file. For this repo, this file has been manually copied into the figures sub-directory." |
| 602 | + ] |
| 603 | + }, |
584 | 604 | { |
585 | 605 | "cell_type": "code", |
586 | | - "execution_count": 10, |
| 606 | + "execution_count": 11, |
587 | 607 | "id": "ebf65971", |
588 | 608 | "metadata": { |
589 | 609 | "execution": { |
|
616 | 636 | "fig = plot_parameters(calibration_results)\n", |
617 | 637 | "fig.savefig(f\"{filename}.png\", bbox_inches=\"tight\", dpi=300)" |
618 | 638 | ] |
619 | | - }, |
620 | | - { |
621 | | - "cell_type": "code", |
622 | | - "execution_count": null, |
623 | | - "id": "41172288", |
624 | | - "metadata": {}, |
625 | | - "outputs": [], |
626 | | - "source": [] |
627 | 639 | } |
628 | 640 | ], |
629 | 641 | "metadata": { |
630 | 642 | "kernelspec": { |
631 | | - "display_name": "Python 3", |
| 643 | + "display_name": "Python 3 (ipykernel)", |
632 | 644 | "language": "python", |
633 | 645 | "name": "python3" |
634 | 646 | }, |
|
642 | 654 | "name": "python", |
643 | 655 | "nbconvert_exporter": "python", |
644 | 656 | "pygments_lexer": "ipython3", |
645 | | - "version": "3.7.7" |
| 657 | + "version": "3.9.7" |
646 | 658 | } |
647 | 659 | }, |
648 | 660 | "nbformat": 4, |
|