Commit b6e093d

Merge branch 'develop-white' into feature/cyclostrophic-as-parameter
2 parents: 0579026 + aa0615f

File tree: 74 files changed, +3185 additions, −1972 deletions


.github/workflows/ci.yml

Lines changed: 1 addition & 0 deletions

```diff
@@ -12,6 +12,7 @@ jobs:
   build-and-test:
     name: 'Core / Unit Test Pipeline'
     runs-on: ubuntu-latest
+    timeout-minutes: 20
     permissions:
       # For publishing results
       checks: write
```

.gitignore

Lines changed: 7 additions & 0 deletions

```diff
@@ -176,3 +176,10 @@ data/ISIMIP_crop/
 
 # climada data results folder:
 results/
+
+# Hidden files we want to track
+!.gitignore
+!.pre-commit-config.yaml
+!.pylintrc
+!.readthedocs.yml
+!.github
```
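The `!` prefix re-includes paths that an earlier ignore pattern would otherwise exclude. A minimal sketch of the mechanism with `git check-ignore`, assuming a hypothetical repository where hidden files are broadly ignored via a `.*` pattern (the actual CLIMADA ignore rules may differ):

```shell
# Throwaway repo demonstrating `!` re-include patterns.
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q .
printf '.*\n!.gitignore\n!.github\n' > .gitignore
# Hidden files match `.*` and are ignored ...
git check-ignore -q .envrc && echo ".envrc: ignored"
# ... but the negated patterns win for the listed paths.
git check-ignore -q .gitignore || echo ".gitignore: tracked"
```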

.pre-commit-config.yaml

Lines changed: 23 additions & 0 deletions

```diff
@@ -0,0 +1,23 @@
+# See https://pre-commit.com for more information
+default_language_version:
+  python: python3
+
+# See https://pre-commit.com/hooks.html for more hooks
+repos:
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v3.2.0
+    hooks:
+      - id: end-of-file-fixer
+      - id: trailing-whitespace
+
+  - repo: https://github.com/pycqa/isort
+    rev: '5.13.2'
+    hooks:
+      - id: isort
+        args: ["--profile", "black", "--filter-files"]
+
+  # Using this mirror lets us use mypyc-compiled black, which is about 2x faster
+  - repo: https://github.com/psf/black-pre-commit-mirror
+    rev: '24.4.2'
+    hooks:
+      - id: black-jupyter
```

CHANGELOG.md

Lines changed: 22 additions & 15 deletions

```diff
@@ -12,10 +12,19 @@ Code freeze date: YYYY-MM-DD
 
 ### Added
 
+- `climada.util.interpolation` module for inter- and extrapolation util functions used in local exceedance intensity and return period functions [#930](https://github.com/CLIMADA-project/climada_python/pull/930)
+
 ### Changed
 
+- Improved scaling factors implemented in `climada.hazard.trop_cyclone.apply_climate_scenario_knu` to model the impact of climate changes to tropical cyclones [#734](https://github.com/CLIMADA-project/climada_python/pull/734)
+- In `climada.util.plot.geo_im_from_array`, NaNs are plotted in gray while cells with no centroid are not plotted [#929](https://github.com/CLIMADA-project/climada_python/pull/929)
+- Renamed `climada.util.plot.subplots_from_gdf` to `climada.util.plot.plot_from_gdf` [#929](https://github.com/CLIMADA-project/climada_python/pull/929)
+
 ### Fixed
 
+- File handles are being closed after reading netcdf files with `climada.hazard` modules [#953](https://github.com/CLIMADA-project/climada_python/pull/953)
+- Avoids a ValueError in the impact calculation for cases with a single exposure point and MDR values of 0, by explicitly removing zeros in `climada.hazard.Hazard.get_mdr` [#933](https://github.com/CLIMADA-project/climada_python/pull/948)
+
 ### Deprecated
 
 ### Removed
@@ -52,6 +61,19 @@ Updated:
 
 - GitHub actions workflow for CLIMADA Petals compatibility tests [#855](https://github.com/CLIMADA-project/climada_python/pull/855)
 - `climada.util.calibrate` module for calibrating impact functions [#692](https://github.com/CLIMADA-project/climada_python/pull/692)
+- Method `Hazard.check_matrices` for bringing the stored CSR matrices into "canonical format" [#893](https://github.com/CLIMADA-project/climada_python/pull/893)
+- Generic s-shaped impact function via `ImpactFunc.from_poly_s_shape` [#878](https://github.com/CLIMADA-project/climada_python/pull/878)
+- climada.hazard.centroids.centr.Centroids.get_area_pixel
+- climada.hazard.centroids.centr.Centroids.get_dist_coast
+- climada.hazard.centroids.centr.Centroids.get_elevation
+- climada.hazard.centroids.centr.Centroids.get_meta
+- climada.hazard.centroids.centr.Centroids.get_pixel_shapes
+- climada.hazard.centroids.centr.Centroids.to_crs
+- climada.hazard.centroids.centr.Centroids.to_default_crs
+- climada.hazard.centroids.centr.Centroids.write_csv
+- climada.hazard.centroids.centr.Centroids.write_excel
+- climada.hazard.local_return_period [#898](https://github.com/CLIMADA-project/climada_python/pull/898)
+- climada.util.plot.subplots_from_gdf [#898](https://github.com/CLIMADA-project/climada_python/pull/898)
 
 ### Changed
 
@@ -76,20 +98,6 @@ CLIMADA tutorials. [#872](https://github.com/CLIMADA-project/climada_python/pull
 - Fix broken links in `CONTRIBUTING.md` [#900](https://github.com/CLIMADA-project/climada_python/pull/900)
 - When writing `TCTracks` to NetCDF, only apply compression to `float` or `int` data types. This fixes a downstream issue, see [climada_petals#135](https://github.com/CLIMADA-project/climada_petals/issues/135) [#911](https://github.com/CLIMADA-project/climada_python/pull/911)
 
-### Added
-
-- Method `Hazard.check_matrices` for bringing the stored CSR matrices into "canonical format" [#893](https://github.com/CLIMADA-project/climada_python/pull/893)
-- Generic s-shaped impact function via `ImpactFunc.from_poly_s_shape` [#878](https://github.com/CLIMADA-project/climada_python/pull/878)
-- climada.hazard.centroids.centr.Centroids.get_area_pixel
-- climada.hazard.centroids.centr.Centroids.get_dist_coast
-- climada.hazard.centroids.centr.Centroids.get_elevation
-- climada.hazard.centroids.centr.Centroids.get_meta
-- climada.hazard.centroids.centr.Centroids.get_pixel_shapes
-- climada.hazard.centroids.centr.Centroids.to_crs
-- climada.hazard.centroids.centr.Centroids.to_default_crs
-- climada.hazard.centroids.centr.Centroids.write_csv
-- climada.hazard.centroids.centr.Centroids.write_excel
-
 ### Deprecated
 
 - climada.hazard.centroids.centr.Centroids.from_lat_lon
@@ -465,4 +473,3 @@ updated:
 
 - `climada.enginge.impact.Impact.calc()` and `climada.enginge.impact.Impact.calc_impact_yearset()`
 [#436](https://github.com/CLIMADA-project/climada_python/pull/436).
-
```

climada/engine/cost_benefit.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -32,7 +32,7 @@
 from tabulate import tabulate
 
 from climada.engine.impact_calc import ImpactCalc
-from climada.engine import Impact, ImpactFreqCurve
+from climada.engine.impact import Impact, ImpactFreqCurve
 
 LOGGER = logging.getLogger(__name__)
 
```
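Both this file and `impact_calc.py` stop importing `Impact` through the `climada.engine` package and import it from the submodule instead. A plausible motivation (an assumption on my part, not stated in the commit) is avoiding circular imports through the package `__init__`. A self-contained sketch with a hypothetical package `pkg`, not the real CLIMADA layout:

```python
import os
import sys
import tempfile

# Hypothetical package: __init__.py imports impact_calc, which needs Impact.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "pkg")
os.makedirs(pkg_dir)

def write(name, src):
    with open(os.path.join(pkg_dir, name), "w") as fh:
        fh.write(src)

write("__init__.py", "from pkg.impact_calc import ImpactCalc\n"
                     "from pkg.impact import Impact\n")
write("impact.py", "class Impact:\n    pass\n")

# Importing the name *through the package* while __init__ is still running
# creates a cycle: pkg is only partially initialized at that point.
write("impact_calc.py", "from pkg import Impact\nclass ImpactCalc:\n    pass\n")
sys.path.insert(0, root)
try:
    import pkg
except ImportError as err:
    print("circular import:", err)

# The style used in this commit: import from the submodule directly.
write("impact_calc.py", "from pkg.impact import Impact\nclass ImpactCalc:\n    pass\n")
for mod in [m for m in sys.modules if m == "pkg" or m.startswith("pkg.")]:
    del sys.modules[mod]
import pkg
print(pkg.ImpactCalc.__name__)  # ImpactCalc
```

The submodule import succeeds because `pkg.impact` is loaded as a module in its own right rather than looked up as an attribute of the half-built package.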

climada/engine/forecast.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -186,7 +186,7 @@ def __init__(
         if exposure_name is None:
             try:
                 self.exposure_name = u_coord.country_to_iso(
-                    exposure.gdf.region_id.unique()[0], "name"
+                    exposure.gdf["region_id"].unique()[0], "name"
                 )
             except (KeyError, AttributeError):
                 self.exposure_name = "custom"
```
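The recurring change in this commit, `gdf.region_id` becoming `gdf["region_id"]`, swaps pandas attribute access for bracket indexing. A minimal sketch (with a made-up DataFrame, not CLIMADA data) of why bracket indexing is the safer idiom:

```python
import pandas as pd

# Toy exposure-like table; column names chosen to show the pitfall.
df = pd.DataFrame({"region_id": [756, 756, 276], "size": [1, 2, 4]})

# Attribute access resolves DataFrame attributes first: `df.size` is the
# element count (3 rows x 2 cols = 6), NOT the "size" column.
print(df.size)           # 6
# Bracket indexing always returns the column, whatever its name.
print(df["size"].sum())  # 7
```

Attribute access also fails outright for column names that are not valid identifiers, so bracket indexing is the uniformly applicable form.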

climada/engine/impact.py

Lines changed: 31 additions & 31 deletions

```diff
@@ -243,8 +243,8 @@ def from_eih(cls, exposures, hazard, at_event, eai_exp, aai_agg, imp_mat=None):
             date = hazard.date,
             frequency = hazard.frequency,
             frequency_unit = hazard.frequency_unit,
-            coord_exp = np.stack([exposures.gdf.latitude.values,
-                                  exposures.gdf.longitude.values],
+            coord_exp = np.stack([exposures.gdf['latitude'].values,
+                                  exposures.gdf['longitude'].values],
                                  axis=1),
             crs = exposures.crs,
             unit = exposures.value_unit,
@@ -1081,25 +1081,25 @@ def from_csv(cls, file_name):
         # pylint: disable=no-member
         LOGGER.info('Reading %s', file_name)
         imp_df = pd.read_csv(file_name)
-        imp = cls(haz_type=imp_df.haz_type[0])
-        imp.unit = imp_df.unit[0]
-        imp.tot_value = imp_df.tot_value[0]
-        imp.aai_agg = imp_df.aai_agg[0]
-        imp.event_id = imp_df.event_id[~np.isnan(imp_df.event_id)].values
+        imp = cls(haz_type=imp_df['haz_type'][0])
+        imp.unit = imp_df['unit'][0]
+        imp.tot_value = imp_df['tot_value'][0]
+        imp.aai_agg = imp_df['aai_agg'][0]
+        imp.event_id = imp_df['event_id'][~np.isnan(imp_df['event_id'])].values
         num_ev = imp.event_id.size
-        imp.event_name = imp_df.event_name[:num_ev].values.tolist()
-        imp.date = imp_df.event_date[:num_ev].values
-        imp.at_event = imp_df.at_event[:num_ev].values
-        imp.frequency = imp_df.event_frequency[:num_ev].values
-        imp.frequency_unit = imp_df.frequency_unit[0] if 'frequency_unit' in imp_df \
+        imp.event_name = imp_df['event_name'][:num_ev].values.tolist()
+        imp.date = imp_df['event_date'][:num_ev].values
+        imp.at_event = imp_df['at_event'][:num_ev].values
+        imp.frequency = imp_df['event_frequency'][:num_ev].values
+        imp.frequency_unit = imp_df['frequency_unit'][0] if 'frequency_unit' in imp_df \
             else DEF_FREQ_UNIT
-        imp.eai_exp = imp_df.eai_exp[~np.isnan(imp_df.eai_exp)].values
+        imp.eai_exp = imp_df['eai_exp'][~np.isnan(imp_df['eai_exp'])].values
         num_exp = imp.eai_exp.size
         imp.coord_exp = np.zeros((num_exp, 2))
-        imp.coord_exp[:, 0] = imp_df.exp_lat[:num_exp]
-        imp.coord_exp[:, 1] = imp_df.exp_lon[:num_exp]
+        imp.coord_exp[:, 0] = imp_df['exp_lat'][:num_exp]
+        imp.coord_exp[:, 1] = imp_df['exp_lon'][:num_exp]
         try:
-            imp.crs = u_coord.to_crs_user_input(imp_df.exp_crs.values[0])
+            imp.crs = u_coord.to_crs_user_input(imp_df['exp_crs'].values[0])
         except AttributeError:
             imp.crs = DEF_CRS
 
@@ -1129,23 +1129,23 @@ def from_excel(cls, file_name):
         dfr = pd.read_excel(file_name)
         imp = cls(haz_type=str(dfr['haz_type'][0]))
 
-        imp.unit = dfr.unit[0]
-        imp.tot_value = dfr.tot_value[0]
-        imp.aai_agg = dfr.aai_agg[0]
+        imp.unit = dfr['unit'][0]
+        imp.tot_value = dfr['tot_value'][0]
+        imp.aai_agg = dfr['aai_agg'][0]
 
-        imp.event_id = dfr.event_id[~np.isnan(dfr.event_id.values)].values
-        imp.event_name = dfr.event_name[:imp.event_id.size].values
-        imp.date = dfr.event_date[:imp.event_id.size].values
-        imp.frequency = dfr.event_frequency[:imp.event_id.size].values
-        imp.frequency_unit = dfr.frequency_unit[0] if 'frequency_unit' in dfr else DEF_FREQ_UNIT
-        imp.at_event = dfr.at_event[:imp.event_id.size].values
+        imp.event_id = dfr['event_id'][~np.isnan(dfr['event_id'].values)].values
+        imp.event_name = dfr['event_name'][:imp.event_id.size].values
+        imp.date = dfr['event_date'][:imp.event_id.size].values
+        imp.frequency = dfr['event_frequency'][:imp.event_id.size].values
+        imp.frequency_unit = dfr['frequency_unit'][0] if 'frequency_unit' in dfr else DEF_FREQ_UNIT
+        imp.at_event = dfr['at_event'][:imp.event_id.size].values
 
-        imp.eai_exp = dfr.eai_exp[~np.isnan(dfr.eai_exp.values)].values
+        imp.eai_exp = dfr['eai_exp'][~np.isnan(dfr['eai_exp'].values)].values
         imp.coord_exp = np.zeros((imp.eai_exp.size, 2))
-        imp.coord_exp[:, 0] = dfr.exp_lat.values[:imp.eai_exp.size]
-        imp.coord_exp[:, 1] = dfr.exp_lon.values[:imp.eai_exp.size]
+        imp.coord_exp[:, 0] = dfr['exp_lat'].values[:imp.eai_exp.size]
+        imp.coord_exp[:, 1] = dfr['exp_lon'].values[:imp.eai_exp.size]
         try:
-            imp.crs = u_coord.to_csr_user_input(dfr.exp_crs.values[0])
+            imp.crs = u_coord.to_csr_user_input(dfr['exp_crs'].values[0])
         except AttributeError:
             imp.crs = DEF_CRS
 
@@ -1324,14 +1324,14 @@ def video_direct_impact(exp, impf_set, haz_list, file_name='',
                     np.array([haz.intensity.max() for haz in haz_list]).max()]
 
     if 'vmin' not in args_exp:
-        args_exp['vmin'] = exp.gdf.value.values.min()
+        args_exp['vmin'] = exp.gdf['value'].values.min()
 
     if 'vmin' not in args_imp:
         args_imp['vmin'] = np.array([imp.eai_exp.min() for imp in imp_list
                                      if imp.eai_exp.size]).min()
 
     if 'vmax' not in args_exp:
-        args_exp['vmax'] = exp.gdf.value.values.max()
+        args_exp['vmax'] = exp.gdf['value'].values.max()
 
     if 'vmax' not in args_imp:
         args_imp['vmax'] = np.array([imp.eai_exp.max() for imp in imp_list
```

climada/engine/impact_calc.py

Lines changed: 8 additions & 8 deletions

```diff
@@ -27,7 +27,7 @@
 import geopandas as gpd
 
 from climada import CONFIG
-from climada.engine import Impact
+from climada.engine.impact import Impact
 
 LOGGER = logging.getLogger(__name__)
 
@@ -154,8 +154,8 @@ def impact(self, save_mat=True, assign_centroids=True,
                     exp_gdf.size, self.n_events)
         imp_mat_gen = self.imp_mat_gen(exp_gdf, impf_col)
 
-        insured = ('cover' in exp_gdf and exp_gdf.cover.max() >= 0) \
-            or ('deductible' in exp_gdf and exp_gdf.deductible.max() > 0)
+        insured = ('cover' in exp_gdf and exp_gdf['cover'].max() >= 0) \
+            or ('deductible' in exp_gdf and exp_gdf['deductible'].max() > 0)
         if insured:
             LOGGER.info("cover and/or deductible columns detected,"
                         " going to calculate insured impact")
@@ -253,8 +253,8 @@ def minimal_exp_gdf(self, impf_col, assign_centroids, ignore_cover, ignore_deduc
                 " Run 'exposures.assign_centroids()' beforehand or set"
                 " 'assign_centroids' to 'True'")
         mask = (
-            (self.exposures.gdf.value.values == self.exposures.gdf.value.values)  # value != NaN
-            & (self.exposures.gdf.value.values != 0)  # value != 0
+            (self.exposures.gdf['value'].values == self.exposures.gdf['value'].values)# value != NaN
+            & (self.exposures.gdf['value'].values != 0)  # value != 0
             & (self.exposures.gdf[self.hazard.centr_exp_col].values >= 0)  # centroid assigned
         )
 
@@ -320,7 +320,7 @@ def _chunk_exp_idx(haz_size, idx_exp_impf):
             )
             idx_exp_impf = (exp_gdf[impf_col].values == impf_id).nonzero()[0]
             for exp_idx in _chunk_exp_idx(self.hazard.size, idx_exp_impf):
-                exp_values = exp_gdf.value.values[exp_idx]
+                exp_values = exp_gdf['value'].values[exp_idx]
                 cent_idx = exp_gdf[self.hazard.centr_exp_col].values[exp_idx]
                 yield (
                     self.impact_matrix(exp_values, cent_idx, impf),
@@ -363,10 +363,10 @@ def insured_mat_gen(self, imp_mat_gen, exp_gdf, impf_col):
                 haz_type=self.hazard.haz_type,
                 fun_id=impf_id)
             if 'deductible' in exp_gdf:
-                deductible = exp_gdf.deductible.values[exp_idx]
+                deductible = exp_gdf['deductible'].values[exp_idx]
                 mat = self.apply_deductible_to_mat(mat, deductible, self.hazard, cent_idx, impf)
             if 'cover' in exp_gdf:
-                cover = exp_gdf.cover.values[exp_idx]
+                cover = exp_gdf['cover'].values[exp_idx]
                 mat = self.apply_cover_to_mat(mat, cover)
             yield (mat, exp_idx)
```