1,613 changes: 1,613 additions & 0 deletions 202403_winterstorms_projections_Europe/1_load_process_data.ipynb

Large diffs are not rendered by default.

2,295 changes: 2,295 additions & 0 deletions 202403_winterstorms_projections_Europe/2_damage_calc.ipynb

Large diffs are not rendered by default.

49 changes: 49 additions & 0 deletions 202403_winterstorms_projections_Europe/README.md
@@ -0,0 +1,49 @@
# Projections and uncertainties of winter windstorm damage in Europe in a changing climate
These scripts reproduce the main results of the paper:
Projections and uncertainties of winter windstorm damage in Europe in a changing climate

Luca G. Severino (1), Chahan M. Kropf (2,3), Hilla Afargan-Gerstman (1), Christopher Fairless (2), Andries Jan de Vries (4), Daniela I.V. Domeisen (4,1), and David N. Bresch (2,3)
1 Institute for Atmospheric and Climate Science, ETH Zürich, Zürich, Switzerland
2 Institute for Environmental Decisions, ETH Zürich, Zürich, Switzerland
3 Federal Office of Meteorology and Climatology MeteoSwiss, Zürich, Switzerland
4 Institute of Earth Surface Dynamics, University of Lausanne, Lausanne, Switzerland
Correspondence: Luca Severino ([email protected])

## Content:
This folder contains material to reproduce the main results from the publication Projections and uncertainties of winter windstorm damage in Europe in a changing climate. The Python Jupyter notebook 1_load_process_data.ipynb loads and processes the hazard data from a data archive containing CMIP6 NetCDF files; the notebook 2_damage_calc.ipynb computes the damage projections; and the notebook 3_uncertainty-sensitivity_analyses.ipynb reproduces the results of the uncertainty and sensitivity analyses. The script SL_bias_corr.py bias-corrects the hazard data and is adapted from https://github.com/samluethi/HeatMortality2021/blob/main/Data_preparation/bias_correction.py, and constants.py defines the constants required for the calculations.

## Notes on hazard file generation
The section "Damage Calculation" of the notebook 2_damage_calc.ipynb was used to generate the hazard data available on the CLIMADA API: https://climada.ethz.ch/data-api/v2/docs.
The basic steps are described in the publication manuscript, currently available as a preprint at https://doi.org/10.5194/egusphere-2023-205. The daily surface wind maxima are first bias-corrected using the ERA5 reanalysis as reference. Windstorm days are then detected with the following criterion:

$\text{Stormy day}_t \iff \sum_{i \,:\, (v_{i,t} \geq v_{i,98}) \,\wedge\, (v_{i,t} \geq 15\ \mathrm{m\,s^{-1}})} a_i \geq A_{min}$

where $a_i$ is the area of grid cell $i$, $v_{i,t}$ is the daily sfcWindmax intensity at grid cell $i$ on day $t$, $v_{i,98}$ is the 98th percentile of the daily sfcWindmax at grid cell $i$ computed over the winter periods of the historical period, and $A_{min}$ is the area threshold parameter.
Days that do not fulfill this condition are ignored. Grid cells for which the daily surface wind maxima fall below the absolute ($15\ \mathrm{m\,s^{-1}}$) or relative ($v_{i,98}$) threshold are set to 0.
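
A minimal sketch of this criterion on synthetic data is shown below (this is not the notebooks' exact implementation; the array shapes, the grid-cell areas, and the value of $A_{min}$ are illustrative assumptions):

```python
import numpy as np

def stormy_days(wind, v98, area, a_min, v_abs=15.0):
    """Boolean mask over the time axis marking stormy days.

    wind : (time, lat, lon) daily sfcWindmax in m/s
    v98  : (lat, lon) 98th percentile per grid cell (historical winters)
    area : (lat, lon) grid-cell areas, e.g. in km^2
    """
    exceed = (wind >= v98) & (wind >= v_abs)        # relative & absolute thresholds
    stormy_area = np.where(exceed, area, 0.0).sum(axis=(1, 2))
    return stormy_area >= a_min

# synthetic demo: 10 days on a 4x4 grid of ~625 km^2 cells (hypothetical values)
rng = np.random.default_rng(0)
wind = rng.weibull(2.0, size=(10, 4, 4)) * 10.0
v98 = np.quantile(wind, 0.98, axis=0)
area = np.full((4, 4), 625.0)
print(stormy_days(wind, v98, area, a_min=1500.0))
```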


## Requirements

Requires access to CMIP6 model outputs as NetCDF files (see https://wcrp-cmip.org/cmip-data-access/#esgf for more information), as well as LitPop metadata (see the tutorial at https://github.com/CLIMADA-project/climada_python/blob/main/doc/tutorial/climada_entity_LitPop.ipynb).

Requires Python 3.8+ and CLIMADA v4.1.1 (or later):
https://github.com/CLIMADA-project/climada_python/
Documentation: https://github.com/CLIMADA-project/climada_python/blob/master/doc/source/install.rst
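
A quick way to verify the installed environment (a sketch; it assumes CLIMADA is installed as the distribution named `climada`):

```python
import sys
from importlib.metadata import version  # available from Python 3.8

assert sys.version_info >= (3, 8), "Python 3.8+ is required"
print("CLIMADA version:", version("climada"))  # expect 4.1.1 or later
```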

## Documentation:

* The publication is available as a preprint at https://doi.org/10.5194/egusphere-2023-205

Documentation for CLIMADA is available on Read the Docs:

* [online (recommended)](https://climada-python.readthedocs.io/en/stable/)
* [PDF file](https://buildmedia.readthedocs.org/media/pdf/climada-python/stable/climada-python.pdf)

-----
## History
created 28 March 2024

-----

www.wcr.ethz.ch
229 changes: 229 additions & 0 deletions 202403_winterstorms_projections_Europe/SL_bias_corr.py
@@ -0,0 +1,229 @@
"""
@author: Samuel Luethi WCR ETHZ
@date: 05.07.2021

BiasCorrection

"""

import numpy as np
import pandas as pd


"""Contains functions for bias correction.

Bias correction is performed using a quantile mapping approach.
This code is a slightly simplified version of the method published by
Rajczak et al. (2016). doi:10.1002/joc.4417 and is available in R under
https://github.com/SvenKotlarski/qmCH2018

"""

def bias_correct_time_series(dat_mod, dat_obs, dat_mod_all,
minq=0.001, maxq=1.000, incq=0.001):
""" Wrapper function for bias correction. Calculates quantiles for mapping,
estimates cumulative distribution function (CDF) of observational
(dat_obs) and modelled (dat_mod) data to estimate quantile specific
correction function. CDF of these two timeseries need to correspond to
the same time period. The estimated correction function is then applied
to dat_mod_all which is model output data the same model than dat_mod
but can cover any time range.

Parameters
----------
dat_mod : pd.Series
Model data series as reference for bias adjustment
dat_obs : pd.Series
Observational data series as ground truth
dat_mod_all : pd.Series
Data series to be bias adjusted
minq : float
Minimum quantile for correction function
maxq: float
Maximum quantile for correction function
incq : float
Quantile increment for correction function (bin size)

Returns
-------
dat_mod_all_corrected : pd.Series
bias corrected dat_mod_all

"""
# define quantiles used for mapping
q = np.arange(minq, maxq, incq)
# (1) calc cdf of observational and modeled data
cdf_obs = _calc_cdf(dat_obs, q)
cdf_mod = _calc_cdf(dat_mod, q)

# (2) estimate correction function
cdf_dif = cdf_mod - cdf_obs

# (3) perform quantile mapping to data
dat_mod_all_corrected = _map_quantile(dat_mod_all, cdf_mod, cdf_dif, q)

return dat_mod_all_corrected

def bias_correct_ensemble(dat_mod, dat_obs, dat_mod_all,
minq=0.001, maxq=1.000, incq=0.001):
""" Wrapper function for bias correction of a large ensemble.
Calculates quantiles for mapping, estimates cumulative distribution
function (CDF) of observational (dat_obs) and modelled (dat_mod) data
to estimate one single quantile specific correction function for the
whole ensemble. CDF of the observational data and the ensemble
DataFrame need to correspond to the same time period. The estimated
correction function is then applied to dat_mod_all which is ensemble
model output data of the same model than dat_mod but can cover
any time range.

Parameters
----------
dat_mod : pd.DataFrame
DataFrame with climate data from large ensemble. Needs to cover
same range as dat_obs
dat_obs : pd.Series
Observational data series as ground truth. Needs to cover same
range as dat_mod
dat_mod_all : pd.DataFrame
DataFrame to be bias adjusted
minq : float
Minimum quantile for correction function
maxq: float
Maximum quantile for correction function
incq : float
Quantile increment for correction function (bin size)

Returns
-------
dat_mod_all_corrected : pd.DataFrame
bias corrected dat_mod_all

"""
# define quantiles used for mapping
q = np.arange(minq, maxq, incq)
# (1) calc cdf of observational and modeled data
cdf_obs = _calc_cdf(dat_obs, q)
cdf_mod = _calc_cdf_ens(dat_mod, q)

# (2) estimate correction function
cdf_dif = cdf_mod - cdf_obs

# (3) perform quantile mapping to data
dat_mod_corrected = _map_quantile_ens(dat_mod_all, cdf_mod, cdf_dif, q)

return dat_mod_corrected

def _calc_cdf(data_series, q):
""" Calculates cumulative distribution function (CDF) of any time series.
Takes no assumption on distribution.

Parameters
----------
data_series : pd.Series
Data series
q : np.array
quantiles of cdf to be calculated

Returns
-------
cdf : np.array
cdf of data_series on quantiles q

"""

# sort data
dat_sorted = np.sort(data_series.values)
# calculate the proportional values of samples
p = 1. * np.arange(len(data_series)) / (len(data_series) - 1)
# map to percentiles
cdf = np.interp(q, p, dat_sorted)

return cdf

def _map_quantile(dat_mod_all, cdf_mod_orig, cdf_dif, q):
""" Performs bias correction using quantile mapping

Parameters
----------
dat_mod_all : pd.Series
Data series to be bias adjusted
cdf_mod_orig : np.array
original cdf of model used for bias correction
cdf_dif : np.array
cdf correction function
q : np.array
quantiles of cdf to be calculated

Returns
-------
dat_mod_adj : pd.Series
bias corrected data series

"""
    # calc percentile of each value in the modelled time series
    perc_mod = np.interp(dat_mod_all, cdf_mod_orig, q)
    # correction term for each value in the modelled time series
cor_term = np.interp(perc_mod, q, cdf_dif)
# adjust for bias
dat_mod_adj = dat_mod_all-cor_term

return pd.Series(dat_mod_adj)

def _calc_cdf_ens(dat_mod, q):
""" Calculates cumulative distribution function (CDF) an ensemble.
Ensemble CDF is calculated as the mean over all CDF of the ensemble
members.
Takes no assumption on distribution.

Parameters
----------
dat_mod : pd.DataFrame
DataFrame with climate data from large ensemble
q : np.array
quantiles of cdf to be calculated

Returns
-------
cdf : np.array
mean cdf over all ensemble members on quantiles q

"""

# array to store cdfs
cdf_array = np.zeros((dat_mod.shape[1], len(q)))
for i in range(dat_mod.shape[1]):
# calc cdf per member
cdf_array[i,:] = _calc_cdf(dat_mod.iloc[:,i], q)

# average cdf
cdf_ens = np.mean(cdf_array, axis=0)

return cdf_ens

def _map_quantile_ens(dat_mod_all, cdf_mod, cdf_dif, q):
""" Performs bias correction for each ensemble member.

Parameters
----------
dat_mod_all : pd.DataFrame
DataFrame to be bias adjusted
    cdf_mod : np.array
        original cdf of model used for bias correction
cdf_dif : np.array
cdf correction function
q : np.array
quantiles of cdf to be calculated

Returns
-------
dat_mod_adj : pd.DataFrame
        bias corrected DataFrame

"""
ens_array = np.zeros(dat_mod_all.shape)
for i in range(dat_mod_all.shape[1]):
ens_array[:,i] = _map_quantile(dat_mod_all.iloc[:,i], cdf_mod, cdf_dif, q)
dat_mod_corrected = pd.DataFrame(ens_array)
dat_mod_corrected.columns = dat_mod_all.columns

return dat_mod_corrected
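
# ---------------------------------------------------------------------------
# Usage sketch (an illustrative addition, not part of the original module):
# bias-correct a synthetic model series and a small synthetic ensemble. All
# numbers below are made up; in this repository the inputs are CMIP6
# sfcWindmax series with ERA5 as the observational reference.
if __name__ == "__main__":
    rng = np.random.default_rng(42)
    obs = pd.Series(rng.normal(10.0, 3.0, 5000))         # "observed" reference
    mod_hist = pd.Series(rng.normal(12.0, 4.0, 5000))    # biased model, obs period
    mod_future = pd.Series(rng.normal(13.0, 4.0, 5000))  # model data to correct

    corrected = bias_correct_time_series(mod_hist, obs, mod_future)
    print(corrected.describe())

    # hypothetical three-member ensemble, corrected with one shared function
    ens = pd.DataFrame(rng.normal(12.0, 4.0, (5000, 3)),
                       columns=["m1", "m2", "m3"])
    print(bias_correct_ensemble(ens, obs, ens).mean())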
67 changes: 67 additions & 0 deletions 202403_winterstorms_projections_Europe/constants.py
@@ -0,0 +1,67 @@
#### define constants to be used in the analysis

##imports
#from func_preproc_climada import *
### naming
##variables:
#day:surface wind max = SWM, surface wind (mean) = SW
#Amon: near surface air temperature: TAS, zonal wind: UA, air temperature: TA
#Omon: sea-surface temperature: TOS
#processing: 98th quantile = q98, gust factor 1.67 = gst1-67, NA20 = "threshold=20" in dropna(), cal1 = type 1 calibration
#space resolution: original model resolution = br, interpolated 2.5deg = itp2_5
#time res
#domain: Europe = EU
#season: extended winter = winE
#base name save file = metvar + processings + time res + space res + domain + season
## climada
#CLIMADA filetypes: hazard = haz, exposures = exp, impacts = imp
#impact function: see shortnames

### constants
#aai_agg_emdat = 1000*2165734.375
aai_agg_emdat = 1000*3073582.5
CubEOT_corr_fact = 0.0015956710965959711

## model sets
modlist_allscen = ['CanESM5','CNRM-CM6-1','CNRM-ESM2-1','EC-Earth3-Veg','EC-Earth3-Veg-LR','IPSL-CM6A-LR','MIROC-ES2L',
'UKESM1-0-LL','MRI-ESM2-0','FGOALS-g3','ACCESS-ESM1-5','MIROC6','MPI-ESM1-2-LR','KACE-1-0-G'] #complete model list

modlist_ssp585 = ['AWI-CM-1-1-MR','BCC-CSM2-MR','CNRM-CM6-1-HR','EC-Earth3','EC-Earth3-CC','HadGEM3-GC31-LL','GISS-E2-1-G','GFDL-CM4','CMCC-CM2-SR5','CMCC-ESM2','HadGEM3-GC31-MM','NESM3','MPI-ESM1-2-HR','INM-CM4-8','INM-CM5-0','ACCESS-CM2']

modlist_1cen = ['CanESM5','CNRM-CM6-1','EC-Earth3-Veg','IPSL-CM6A-LR','UKESM1-0-LL','MRI-ESM2-0','FGOALS-g3','ACCESS-ESM1-5','MIROC6','MPI-ESM1-2-LR','AWI-CM-1-1-MR','BCC-CSM2-MR','CNRM-CM6-1-HR','GISS-E2-1-G','GFDL-CM4','CMCC-ESM2','HadGEM3-GC31-MM','NESM3','INM-CM5-0']

#scenarios
scenlist= ['historical','ssp126','ssp245','ssp370','ssp585']

#region names
reglist3 = ['BI','IP','FR','WEU','MED','SC','EEU']


#impact function
impflist = ["CubEOT","Em2011","Wk2021"]

## paths
#in
pathcmip6 = '/home/lseverino/MT/cmip6/' #path of the cmip6 data
pathera5 = '/home/lseverino/MT/era5/'
pathcirc = '/home/lseverino/MT/circulation/'
pathin = '/home/lseverino/MT/inputdata/'
#out
#pathcal = '/home/lseverino/MT/scripts/calibration/'
#pathfig = '/home/lseverino/MT/scripts/results/figures'
#pathhaz = "/home/lseverino/MT/scripts/results/hazards/"
#pathimp = "/home/lseverino/MT/scripts/results/impacts/"

pathcal = './calibration/'
pathfig = './results/figures'
pathhaz = "./results/hazards/"
pathimp = "./results/impacts/"

#dict for abbreviations of the cmip6 variables names
cmip6vars = {'sfcWindmax':'SWM','sfcWind':'SW','psl':'SLP','tas':'TAS','ua':'UA','ta':'TA','tos':'TOS'}

#dic for impf shortnames
impf_sht_names = {'Cubic excess-over-threshold':'CubEOT','Scaled sigmoid':'ScSig','Emanuel 2011':'Em2011','Welker 2021':'Wk2021','Schwierz 2010' : 'Sw2010'}


