|
12 | 12 | "cell_type": "markdown", |
13 | 13 | "metadata": {}, |
14 | 14 | "source": [ |
15 | | - "This is a tutorial for the unsequa module in CLIMADA. A detailled description can be found in [Kropf (2021)](https://eartharxiv.org/repository/view/3123/)." |
| 15 | + "This is a tutorial for the unsequa module in CLIMADA. A detailed description can be found in [Kropf et al. (2022)](https://doi.org/10.5194/gmd-15-7177-2022)." |
16 | 16 | ] |
17 | 17 | }, |
18 | 18 | { |
|
31 | 31 | "\n", |
32 | 32 | "In this module, it is possible to perform global uncertainty analysis as well as sensitivity analysis. The word 'global' is meant in opposition to the 'one-factor-at-a-time' (OAT) strategy. The OAT strategy, which consists of analyzing the effect of varying one model input factor at a time while keeping all others fixed, is popular among modellers, but has major shortcomings ([Saltelli (2010)](https://www.sciencedirect.com/science/article/abs/pii/S1364815210001180), [Saltelli (2019)](http://www.sciencedirect.com/science/article/pii/S1364815218302822)) and should not be used.\n", |
33 | 33 | "\n", |
34 | | - "A rough schemata of how to perform uncertainty and sensitivity analysis (taken from [Kropf(2021)](https://eartharxiv.org/repository/view/3123/))" |
| 34 | + "A rough schematic of how to perform uncertainty and sensitivity analysis (taken from [Kropf et al. (2022)](https://doi.org/10.5194/gmd-15-7177-2022))." |
35 | 35 | ] |
36 | 36 | }, |
37 | 37 | { |
|
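The OAT shortcoming mentioned in the cell above can be illustrated with a toy model independent of CLIMADA (a minimal sketch using only NumPy, not part of the tutorial): for the purely interactive model f(x, y) = x·y, varying one factor at a time around the base point (0, 0) suggests neither input matters, while a global sample over the full input space reveals the variance driven by the interaction.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x, y):
    # purely interactive toy model: each factor alone has no effect at (0, 0)
    return x * y

# OAT around the base point (0, 0): vary one factor, keep the other fixed
oat_x = model(np.linspace(-1, 1, 11), 0.0)  # vary x only
oat_y = model(0.0, np.linspace(-1, 1, 11))  # vary y only
print(np.ptp(oat_x), np.ptp(oat_y))  # both 0.0: OAT reports no sensitivity at all

# Global sampling over the full input space [-1, 1]^2
x, y = rng.uniform(-1, 1, size=(2, 10_000))
print(model(x, y).var())  # clearly non-zero: the interaction drives the uncertainty
```

The global sample variance is close to the analytical value Var(xy) = 1/9 for independent uniform inputs on [-1, 1], which OAT misses entirely.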
50 | 50 | "cell_type": "markdown", |
51 | 51 | "metadata": {}, |
52 | 52 | "source": [ |
53 | | - "1. [Kropf, C.M. et al. Uncertainty and sensitivity analysis for global probabilistic weather and climate risk modelling: an implementation in the CLIMADA platform (2021)](https://eartharxiv.org/repository/view/3123/)\n", |
| 53 | + "1. [Kropf, C.M. et al. Uncertainty and sensitivity analysis for probabilistic weather and climate-risk modelling: an implementation in CLIMADA v.3.1.0. Geoscientific Model Development, 15, 7177–7201 (2022)](https://doi.org/10.5194/gmd-15-7177-2022).\n", |
54 | 54 | "2. [Pianosi, F. et al. Sensitivity analysis of environmental models: A systematic review with practical workflow. Environmental Modelling & Software 79, 214–232 (2016)](https://www.sciencedirect.com/science/article/pii/S1364815216300287).\n", |
55 | 55 | "3. [Douglas-Smith, D., Iwanaga, T., Croke, B. F. W. & Jakeman, A. J. Certain trends in uncertainty and sensitivity analysis: An overview of software tools and techniques. Environmental Modelling & Software 124, 104588 (2020)](https://doi.org/10.1016/j.envsoft.2019.104588).\n", |
56 | 56 | "4. [Knüsel, B. Epistemological Issues in Data-Driven Modeling in Climate Research. (ETH Zurich, 2020)](https://www.research-collection.ethz.ch/handle/20.500.11850/399735).\n", |
|
542 | 542 | "source": [ |
543 | 543 | "| Attribute | Type | Description |\n", |
544 | 544 | "| --- | --- | --- |\n", |
545 | | - "| sampling_method | str | The sampling method as defined in [SALib](https://salib.readthedocs.io/en/latest/api.html). Possible choices: 'saltelli', 'fast_sampler', 'latin', 'morris', 'dgsm', 'ff'|\n", |
| 545 | + "| sampling_method | str | The sampling method as defined in [SALib](https://salib.readthedocs.io/en/latest/api.html). Possible choices: 'saltelli', 'fast_sampler', 'latin', 'morris', 'dgsm', 'ff', 'finite_diff'|\n", |
546 | 546 | "| sampling_kwargs | dict | Keyword arguments for the sampling_method. |\n", |
547 | 547 | "| n_samples | int | Effective number of samples (number of rows of samples_df)|\n", |
548 | 548 | "| param_labels | list(str) | Name of all the uncertainty input parameters|\n", |
549 | 549 | "| problem_sa | dict | The description of the uncertainty variables and their distribution as used in [SALib](https://salib.readthedocs.io/en/latest/basics.html). |\n", |
550 | | - "| sensitivity_method | str | Sensitivity analysis method from [SALib.analyse](https://salib.readthedocs.io/en/latest/api.html) Possible choices: 'fast', 'rbd_fact', 'morris', 'sobol', 'delta', 'ff'. Note that in Salib, sampling methods and sensitivity analysis methods should be used in specific pairs.|\n", |
| 550 | + "| sensitivity_method | str | Sensitivity analysis method from [SALib.analyse](https://salib.readthedocs.io/en/latest/api.html). Possible choices: 'sobol', 'fast', 'rbd_fast', 'morris', 'dgsm', 'ff', 'pawn', 'rhdm', 'rsa', 'discrepancy', 'hdmr'. Note that in SALib, sampling methods and sensitivity analysis methods should be used in specific pairs.|\n", |
551 | 551 | "| sensitivity_kwargs | dict | Keyword arguments for sensitivity_method. |\n", |
552 | 552 | "| unit | str | Unit of the exposures value |" |
553 | 553 | ] |
|
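The pairing constraint noted in the table row above can be made concrete with a small sketch. The mapping below lists commonly documented sampling/analysis pairs from SALib; it is illustrative, not exhaustive, and the available methods depend on the SALib version, so consult the SALib documentation for the release you use.

```python
# Commonly documented sampling/analysis pairs in SALib (illustrative, not exhaustive;
# check the SALib docs for the version you have installed)
SALIB_PAIRS = {
    "saltelli": "sobol",     # Sobol' variance-based indices
    "fast_sampler": "fast",  # Fourier amplitude sensitivity test
    "latin": "rbd_fast",     # Latin hypercube sampling + RBD-FAST
    "morris": "morris",      # Morris elementary effects
    "finite_diff": "dgsm",   # derivative-based global sensitivity measures
    "ff": "ff",              # fractional factorial
}

def check_pair(sampling_method: str, sensitivity_method: str) -> bool:
    """Return True if the pair is a known-consistent SALib combination."""
    return SALIB_PAIRS.get(sampling_method) == sensitivity_method

print(check_pair("saltelli", "sobol"))   # True
print(check_pair("saltelli", "morris"))  # False
```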
2466 | 2466 | }, |
2467 | 2467 | { |
2468 | 2468 | "cell_type": "code", |
2469 | | - "execution_count": 51, |
| 2469 | + "execution_count": null, |
2470 | 2470 | "metadata": {}, |
2471 | 2471 | "outputs": [], |
2472 | 2472 | "source": [ |
|
2475 | 2475 | "haz.basin = [\"NA\"] * haz.size\n", |
2476 | 2476 | "\n", |
2477 | 2477 | "# apply climate change factors\n", |
2478 | | - "haz_26 = haz.apply_climate_scenario_knu(ref_year=2050, rcp_scenario=26)\n", |
2479 | | - "haz_45 = haz.apply_climate_scenario_knu(ref_year=2050, rcp_scenario=45)\n", |
2480 | | - "haz_60 = haz.apply_climate_scenario_knu(ref_year=2050, rcp_scenario=60)\n", |
2481 | | - "haz_85 = haz.apply_climate_scenario_knu(ref_year=2050, rcp_scenario=85)\n", |
| 2478 | + "haz_26 = haz.apply_climate_scenario_knu(target_year=2050, scenario=\"2.6\")\n", |
| 2479 | + "haz_45 = haz.apply_climate_scenario_knu(target_year=2050, scenario=\"4.5\")\n", |
| 2480 | + "haz_60 = haz.apply_climate_scenario_knu(target_year=2050, scenario=\"6.0\")\n", |
| 2481 | + "haz_85 = haz.apply_climate_scenario_knu(target_year=2050, scenario=\"8.5\")\n", |
2482 | 2482 | "\n", |
2483 | 2483 | "# pack future hazard sets into dictionary - we want to sample from this dictionary later\n", |
2484 | 2484 | "haz_fut_list = [haz_26, haz_45, haz_60, haz_85]\n", |
|
2489 | 2489 | }, |
2490 | 2490 | { |
2491 | 2491 | "cell_type": "code", |
2492 | | - "execution_count": 52, |
| 2492 | + "execution_count": null, |
2493 | 2493 | "metadata": {}, |
2494 | 2494 | "outputs": [], |
2495 | 2495 | "source": [ |
|
2501 | 2501 | "\n", |
2502 | 2502 | "def exp_base_func(x_exp, exp_base):\n", |
2503 | 2503 | " exp = exp_base.copy()\n", |
2504 | | - " exp.gdf[\"value\"] *= x_exp\n", |
| 2504 | + " exp.data[\"value\"] *= x_exp\n", |
2505 | 2505 | " return exp\n", |
2506 | 2506 | "\n", |
2507 | 2507 | "\n", |
|
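The pattern in `exp_base_func` above — copy the base object, then scale its value column by the sampled factor `x_exp` — can be sketched without CLIMADA, using a plain pandas DataFrame as a stand-in for the exposures data (the column name `value` matches the tutorial; everything else here is synthetic):

```python
import pandas as pd

# synthetic stand-in for an Exposures data table
exp_base = pd.DataFrame({"value": [100.0, 250.0, 40.0]})

def exp_base_func(x_exp: float, exp_base: pd.DataFrame) -> pd.DataFrame:
    # copy first, so sampled scenarios never mutate the shared base object
    exp = exp_base.copy()
    exp["value"] *= x_exp
    return exp

scaled = exp_base_func(1.5, exp_base)
print(scaled["value"].tolist())    # [150.0, 375.0, 60.0]
print(exp_base["value"].tolist())  # base unchanged: [100.0, 250.0, 40.0]
```

The `.copy()` is the important detail: without it, each sampled scenario would compound the scaling on the shared base exposures.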
2821 | 2821 | }, |
2822 | 2822 | { |
2823 | 2823 | "cell_type": "code", |
2824 | | - "execution_count": 61, |
| 2824 | + "execution_count": null, |
2825 | 2825 | "metadata": { |
2826 | 2826 | "ExecuteTime": { |
2827 | 2827 | "end_time": "2023-08-03T12:00:12.180767Z", |
|
2844 | 2844 | "\n", |
2845 | 2845 | " entity = Entity.from_excel(ENT_DEMO_TODAY)\n", |
2846 | 2846 | " entity.exposures.ref_year = 2018\n", |
2847 | | - " entity.exposures.gdf[\"value\"] *= x_ent\n", |
| 2847 | + " entity.exposures.data[\"value\"] *= x_ent\n", |
2848 | 2848 | " return entity\n", |
2849 | 2849 | "\n", |
2850 | 2850 | "\n", |
|
2954 | 2954 | }, |
2955 | 2955 | { |
2956 | 2956 | "cell_type": "code", |
2957 | | - "execution_count": 64, |
| 2957 | + "execution_count": null, |
2958 | 2958 | "metadata": { |
2959 | 2959 | "ExecuteTime": { |
2960 | 2960 | "end_time": "2023-08-03T12:00:12.959984Z", |
|
3070 | 3070 | ], |
3071 | 3071 | "source": [ |
3072 | 3072 | "ent_avg = ent_today_iv.evaluate()\n", |
3073 | | - "ent_avg.exposures.gdf.head()" |
| 3073 | + "ent_avg.exposures.data.head()" |
3074 | 3074 | ] |
3075 | 3075 | }, |
3076 | 3076 | { |
|
5320 | 5320 | }, |
5321 | 5321 | { |
5322 | 5322 | "cell_type": "code", |
5323 | | - "execution_count": 77, |
| 5323 | + "execution_count": null, |
5324 | 5324 | "metadata": {}, |
5325 | 5325 | "outputs": [], |
5326 | 5326 | "source": [ |
|
5335 | 5335 | "\n", |
5336 | 5336 | "def exp_func(cnt, x_exp, exp_list=exp_list):\n", |
5337 | 5337 | " exp = exp_list[int(cnt)].copy()\n", |
5338 | | - " exp.gdf[\"value\"] *= x_exp\n", |
| 5338 | + " exp.data[\"value\"] *= x_exp\n", |
5339 | 5339 | " return exp\n", |
5340 | 5340 | "\n", |
5341 | 5341 | "\n", |
|
5523 | 5523 | "source": [ |
5524 | 5524 | "Loading Hazards or Exposures from file is a rather lengthy operation. We therefore want to minimize reading operations, ideally reading each file only once. At the same time, Hazard and Exposures objects can be large in memory, so we would like to have at most one of each loaded at any time. Hence, we do not want to use the list capability of the helper methods InputVar.exposures and InputVar.hazard.\n", |
5525 | 5525 | "\n", |
5526 | | - "For demonstration purposes, we will use below as exposures files the litpop for three countries, and for tha hazard files the winter storms for the same three countries. Note that this does not make a lot of sense for an uncertainty analysis. For your use case, please replace the set of exposures and/or hazard files with meaningful sets, for instance sets of exposures for different resolutions or hazards for different model runs.\n" |
| 5526 | + "For demonstration purposes, we will use the LitPop exposures for three countries as the exposures files, and the winter storms for the same three countries as the hazard files. Note that this does not make a lot of sense for an uncertainty analysis. For your use case, please replace the set of exposures and/or hazard files with meaningful sets, for instance sets of exposures for different resolutions or hazards for different model runs.\n", |
5527 | 5527 | ] |
5528 | 5528 | }, |
5529 | 5529 | { |
|
5600 | 5600 | "def exp_func(f_exp, x_exp, filename_list=f_exp_list):\n", |
5601 | 5601 | " filename = filename_list[int(f_exp)]\n", |
5602 | 5602 | " global exp_base\n", |
5603 | | - " if \"exp_base\" in globals():\n", |
5604 | | - " if isinstance(exp_base, Exposures):\n", |
5605 | | - " if exp_base.gdf[\"filename\"] != str(filename):\n", |
5606 | | - " exp_base = Exposures.from_hdf5(filename)\n", |
5607 | | - " exp_base.gdf[\"filename\"] = str(filename)\n", |
| 5603 | + " if (\n", |
| 5604 | + " \"exp_base\" in globals()\n", |
| 5605 | + " and isinstance(exp_base, Exposures)\n", |
| 5606 | + " and exp_base.description == str(filename)\n", |
| 5607 | + " ):\n", |
| 5608 | + " pass # if correct file is already loaded in memory, we do not need to reload it\n", |
5608 | 5609 | " else:\n", |
5609 | 5610 | " exp_base = Exposures.from_hdf5(filename)\n", |
5610 | | - " exp_base.gdf[\"filename\"] = str(filename)\n", |
| 5611 | + " exp_base.description = str(filename)\n", |
5611 | 5612 | "\n", |
5612 | 5613 | " exp = exp_base.copy()\n", |
5613 | | - " exp.gdf[\"value\"] *= x_exp\n", |
| 5614 | + " exp.data[\"value\"] *= x_exp\n", |
5614 | 5615 | " return exp\n", |
5615 | 5616 | "\n", |
5616 | 5617 | "\n", |
|
5624 | 5625 | "def haz_func(f_haz, i_haz, filename_list=f_haz_list):\n", |
5625 | 5626 | " filename = filename_list[int(f_haz)]\n", |
5626 | 5627 | " global haz_base\n", |
5627 | | - " if \"haz_base\" in globals():\n", |
5628 | | - " if isinstance(haz_base, Hazard):\n", |
5629 | | - " if haz_base.filename != str(filename):\n", |
5630 | | - " haz_base = Hazard.from_hdf5(filename)\n", |
5631 | | - " haz_base.filename = str(filename)\n", |
| 5628 | + " if (\n", |
| 5629 | + " \"haz_base\" in globals()\n", |
| 5630 | + " and isinstance(haz_base, Hazard)\n", |
| 5631 | + " and hasattr(haz_base, \"description\")\n", |
| 5632 | + " and haz_base.description == str(filename)\n", |
| 5633 | + " ):\n", |
| 5634 | + " pass\n", |
5632 | 5635 | " else:\n", |
5633 | 5636 | " haz_base = Hazard.from_hdf5(filename)\n", |
5634 | | - " haz_base.filename = str(filename)\n", |
| 5637 | + " setattr(haz_base, \"description\", str(filename))\n", |
5635 | 5638 | "\n", |
5636 | 5639 | " haz = copy.deepcopy(haz_base)\n", |
5637 | 5640 | " haz.intensity *= i_haz\n", |
|
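The load-once caching idea used in `exp_func` and `haz_func` above can be reduced to a small language-level sketch (pure Python, with a call counter standing in for the expensive HDF5 read; all names here are hypothetical):

```python
n_loads = 0  # counts how many times the "expensive" load actually runs

def expensive_load(filename: str) -> dict:
    # stand-in for a costly file read such as Exposures.from_hdf5
    global n_loads
    n_loads += 1
    return {"filename": filename, "data": [1, 2, 3]}

def get_base(filename: str) -> dict:
    """Reload only when the requested file differs from the cached one."""
    global base
    if "base" in globals() and base["filename"] == filename:
        pass  # correct file already in memory, skip the costly read
    else:
        base = expensive_load(filename)
    return base

get_base("exp_A.hdf5")
get_base("exp_A.hdf5")  # cache hit, no reload
get_base("exp_B.hdf5")  # different file, reload
print(n_loads)  # 2
```

As in the tutorial functions, the cached object tracks which file it came from, so the cache is invalidated exactly when the sampled file index changes.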
5707 | 5710 | "source": [ |
5708 | 5711 | "# Ordering of the samples by hazard first and exposures second\n", |
5709 | 5712 | "output_imp = calc_imp.make_sample(N=2**2, sampling_kwargs={\"skip_values\": 2**3})\n", |
5710 | | - "output_imp.order_samples(by=[\"f_haz\", \"f_exp\"])" |
| 5713 | + "output_imp.order_samples(by_parameters=[\"f_haz\", \"f_exp\"])" |
5711 | 5714 | ] |
5712 | 5715 | }, |
5713 | 5716 | { |
|
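The effect of ordering the samples can be mimicked on a plain DataFrame: sorting the sample rows by the file-index parameters groups identical files together, so caching functions like `exp_func` and `haz_func` reload each file as few times as possible. The column names `f_haz` and `f_exp` follow the tutorial; the DataFrame itself is synthetic.

```python
import pandas as pd

# synthetic samples: two file-index parameters and one continuous parameter
samples_df = pd.DataFrame({
    "f_haz": [1, 0, 1, 0],
    "f_exp": [0, 1, 1, 0],
    "x_exp": [0.9, 1.1, 1.0, 1.2],
})

# sort by hazard file first, exposures file second
# (mirrors by_parameters=["f_haz", "f_exp"])
ordered = samples_df.sort_values(by=["f_haz", "f_exp"], kind="stable").reset_index(drop=True)
print(ordered["f_haz"].tolist())  # [0, 0, 1, 1]

# count file switches: each change of f_haz along the rows is one potential reload
switches = (ordered["f_haz"].diff().fillna(0) != 0).sum()
print(switches)  # 1: the hazard file changes only once over the whole sample
```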