
Commit 7d466a1

Merge remote-tracking branch 'origin/develop' into feature/documentation-restructuring

2 parents: fccc4a0 + 2995735

28 files changed: +523 additions, -118 deletions

.github/CODEOWNERS

Lines changed: 7 additions & 0 deletions

@@ -0,0 +1,7 @@
+# This is a comment.
+# Each line is a file pattern followed by one or more owners.
+
+# These owners will be the default owners for everything in
+# the repo. Unless a later match takes precedence, they will
+# be requested for review when someone opens a pull request.
+* @emanuel-schmid @chahank @peanutfun

.github/workflows/ci.yml

Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@ jobs:
       # Do not abort other tests if only a single one fails
       fail-fast: false
       matrix:
-        python-version: ["3.10", "3.11"]
+        python-version: ["3.10", "3.11", "3.12"]

     steps:
       -

.github/workflows/pull-request.yml

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ jobs:
           ref: ${{ github.event.pull_request.head.sha }}
       -
         name: Checkout target commit
-        run: git -c protocol.version=2 fetch --no-tags --prune --no-recurse-submodules --depth=1 origin ${{ github.event.pull_request.base.ref }}
+        run: git -c protocol.version=2 fetch --no-tags --prune --no-recurse-submodules --depth=50 origin ${{ github.event.pull_request.base.ref }}
       -
         name: Set up Python 3.11
         uses: actions/setup-python@v5

CHANGELOG.md

Lines changed: 18 additions & 0 deletions

@@ -10,15 +10,32 @@ Code freeze date: YYYY-MM-DD

 ### Dependency Changes

+Removed:
+
+- `pandas-datareader`
+
 ### Added
+- Added optional parameter to `geo_im_from_array`, `plot_from_gdf`, `plot_rp_imp`, `plot_rp_intensity`,
+  `plot_intensity`, `plot_fraction`, `_event_plot` to mask plotting when regions are too far from data points [#1047](https://github.com/CLIMADA-project/climada_python/pull/1047). To recreate previous plots (no masking), the parameter can be set to None.
+- Added instructions to install Climada petals on Euler cluster in `doc.guide.Guide_Euler.ipynb` [#1029](https://github.com/CLIMADA-project/climada_python/pull/1029)
+
+- `ImpactFunc` and `ImpactFuncSet` now support equality comparisons via `==` [#1027](https://github.com/CLIMADA-project/climada_python/pull/1027)

 ### Changed
+
 - `Hazard.local_exceedance_intensity`, `Hazard.local_return_period` and `Impact.local_exceedance_impact`, `Impact.local_return_period`, using the `climada.util.interpolation` module: New default (no binning), binning on decimals, and faster implementation [#1012](https://github.com/CLIMADA-project/climada_python/pull/1012)
+- World Bank indicator data is now downloaded directly from their API via the function `download_world_bank_indicator`, instead of relying on the `pandas-datareader` package [#1033](https://github.com/CLIMADA-project/climada_python/pull/1033)
+- `Exposures.write_hdf5` pickles geometry data in WKB format, which is faster and more sustainable. [#1051](https://github.com/CLIMADA-project/climada_python/pull/1051)
+
 ### Fixed

+- NaN plotting issues in `geo_im_from_array` [#1038](https://github.com/CLIMADA-project/climada_python/pull/1038)
+- Broken ECMWF links in pydoc of `climada.hazard.storm_europe` relocated. [#944](https://github.com/CLIMADA-project/climada_python/pull/944)
+
 ### Deprecated

 ### Removed
+
 - `climada.util.interpolation.round_to_sig_digits` [#1012](https://github.com/CLIMADA-project/climada_python/pull/1012)

 ## 6.0.1
@@ -188,6 +205,7 @@ CLIMADA tutorials. [#872](https://github.com/CLIMADA-project/climada_python/pull
 - `Impact.from_hdf5` now calls `str` on `event_name` data that is not strings, and issue a warning then [#894](https://github.com/CLIMADA-project/climada_python/pull/894)
 - `Impact.write_hdf5` now throws an error if `event_name` is does not contain strings exclusively [#894](https://github.com/CLIMADA-project/climada_python/pull/894)
 - Split `climada.hazard.trop_cyclone` module into smaller submodules without affecting module usage [#911](https://github.com/CLIMADA-project/climada_python/pull/911)
+- `yearly_steps` parameter of `TropCyclone.apply_climate_scenario_knu` has been made explicit [#991](https://github.com/CLIMADA-project/climada_python/pull/991)

 ### Fixed

README.md

Lines changed: 2 additions & 2 deletions

@@ -16,9 +16,9 @@ CLIMADA is divided into two parts (two repositories):
 1. the core [climada_python](https://github.com/CLIMADA-project/climada_python) contains all the modules necessary for the probabilistic impact, the averted damage, uncertainty and forecast calculations. Data for hazard, exposures and impact functions can be obtained from the [data API](https://github.com/CLIMADA-project/climada_python/blob/main/doc/tutorial/climada_util_api_client.ipynb). [Litpop](https://github.com/CLIMADA-project/climada_python/blob/main/doc/tutorial/climada_entity_LitPop.ipynb) is included as demo Exposures module, and [Tropical cyclones](https://github.com/CLIMADA-project/climada_python/blob/main/doc/tutorial/climada_hazard_TropCyclone.ipynb) is included as a demo Hazard module.
 2. the petals [climada_petals](https://github.com/CLIMADA-project/climada_petals) contains all the modules for generating data (e.g., TC_Surge, WildFire, OpenStreeMap, ...). Most development is done here. The petals builds-upon the core and does not work as a stand-alone.

-It is recommend for new users to begin with the core (1) and the [tutorials](https://github.com/CLIMADA-project/climada_python/tree/main/doc/tutorial) therein.
+For new users, we recommend to begin with the core (1) and the [tutorials](https://github.com/CLIMADA-project/climada_python/tree/main/doc/tutorial) therein.

-This is the Python (3.9+) version of CLIMADA - please see [here](https://github.com/davidnbresch/climada) for backward compatibility with the MATLAB version.
+This is the Python version of CLIMADA - please see [here](https://github.com/davidnbresch/climada) for backward compatibility with the MATLAB version.

 ## Getting started

climada/engine/impact.py

Lines changed: 12 additions & 1 deletion

@@ -1178,6 +1178,7 @@ def plot_rp_imp(
         return_periods=(25, 50, 100, 250),
         log10_scale=True,
         axis=None,
+        mask_distance=0.01,
         kwargs_local_exceedance_impact=None,
         **kwargs,
     ):
@@ -1194,6 +1195,11 @@
             plot impact as log10(impact). Default: True
         smooth : bool, optional
             smooth plot to plot.RESOLUTIONxplot.RESOLUTION. Default: True
+        mask_distance: float, optional
+            Only regions are plotted that are closer to any of the data points than this distance,
+            relative to overall plot size. For instance, to only plot values
+            at the centroids, use mask_distance=0.01. If None, the plot is not masked.
+            Default is 0.01.
         kwargs_local_exceedance_impact: dict
             Dictionary of keyword arguments for the method impact.local_exceedance_impact.
         kwargs : dict, optional
@@ -1242,7 +1248,12 @@
         )

         axis = u_plot.plot_from_gdf(
-            impacts_stats, title, column_labels, axis=axis, **kwargs
+            impacts_stats,
+            title,
+            column_labels,
+            axis=axis,
+            mask_distance=mask_distance,
+            **kwargs,
         )
         return axis, impacts_stats_vals
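
The new `mask_distance` keyword only changes how `plot_rp_imp` renders the map. A hedged usage sketch, assuming an already computed `climada.engine.Impact` object `imp` (not part of this diff):

```python
# Sketch only: `imp` is assumed to be an existing climada.engine.Impact instance.
import matplotlib.pyplot as plt

# Default after this change: regions farther from any data point than 1% of the
# plot extent are masked out.
axis, stats = imp.plot_rp_imp(return_periods=(25, 50, 100, 250))

# Passing None disables the mask and reproduces the previous, unmasked plots.
axis_unmasked, _ = imp.plot_rp_imp(
    return_periods=(25, 50, 100, 250), mask_distance=None
)
plt.show()
```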

climada/entity/exposures/base.py

Lines changed: 13 additions & 5 deletions

@@ -29,6 +29,7 @@

 import cartopy.crs as ccrs
 import contextily as ctx
+import geopandas as gpd
 import matplotlib.pyplot as plt
 import numpy as np
 import pandas as pd
@@ -1131,10 +1132,8 @@ def write_hdf5(self, file_name):
         """
         LOGGER.info("Writing %s", file_name)
         store = pd.HDFStore(file_name, mode="w")
-        pandas_df = pd.DataFrame(self.gdf)
-        for col in pandas_df.columns:
-            if str(pandas_df[col].dtype) == "geometry":
-                pandas_df[col] = np.asarray(self.gdf[col])
+        geocols = self.data.columns[self.data.dtypes == "geometry"].to_list()
+        pandas_df = self.data.to_wkb()

         # Avoid pandas PerformanceWarning when writing HDF5 data
         with warnings.catch_warnings():
@@ -1146,6 +1145,7 @@ def write_hdf5(self, file_name):
             for var in type(self)._metadata:
                 var_meta[var] = getattr(self, var)
             var_meta["crs"] = self.crs
+            var_meta["wkb_columns"] = geocols
             store.get_storer("exposures").attrs.metadata = var_meta

         store.close()
@@ -1184,7 +1184,15 @@ def from_hdf5(cls, file_name):
         crs = metadata.get("crs", metadata.get("_crs"))
         if crs is None and metadata.get("meta"):
             crs = metadata["meta"].get("crs")
-        exp = cls(store["exposures"], crs=crs)
+        data = pd.DataFrame(store["exposures"])
+
+        wkb_columns = (
+            metadata.pop("wkb_columns") if "wkb_columns" in metadata else []
+        )
+        for col in wkb_columns:
+            data[col] = gpd.GeoSeries.from_wkb(data[col])
+
+        exp = cls(data, crs=crs)
         for key, val in metadata.items():
             if key in type(exp)._metadata:  # pylint: disable=protected-access
                 setattr(exp, key, val)
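
For context, the WKB round-trip that the new `write_hdf5`/`from_hdf5` code relies on can be shown with geopandas alone. A minimal sketch, using an illustrative GeoDataFrame rather than the CLIMADA `Exposures` API:

```python
# Minimal sketch of the WKB encode/decode cycle (geopandas only; the example
# data and column names are illustrative, not part of the CLIMADA API).
import geopandas as gpd
from shapely.geometry import Point

gdf = gpd.GeoDataFrame(
    {"value": [1.0, 2.0]},
    geometry=[Point(8.55, 47.37), Point(7.45, 46.95)],
    crs="EPSG:4326",
)

# Encode: geometry columns become plain bytes columns (WKB), which a pandas
# HDFStore can persist without pickling shapely objects one by one.
geocols = gdf.columns[gdf.dtypes == "geometry"].to_list()  # -> ["geometry"]
wkb_df = gdf.to_wkb()

# Decode: rebuild each geometry column from the stored bytes.
for col in geocols:
    wkb_df[col] = gpd.GeoSeries.from_wkb(wkb_df[col])
restored = gpd.GeoDataFrame(wkb_df, geometry="geometry", crs="EPSG:4326")

assert restored.geometry.geom_equals(gdf.geometry).all()
```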

climada/entity/exposures/test/test_base.py

Lines changed: 30 additions & 22 deletions

@@ -378,11 +378,14 @@ def test_read_template_pass(self):

     def test_io_hdf5_pass(self):
         """write and read hdf5"""
-        exp_df = Exposures(pd.read_excel(ENT_TEMPLATE_XLS), crs="epsg:32632")
-        exp_df.check()
+        exp = Exposures(pd.read_excel(ENT_TEMPLATE_XLS), crs="epsg:32632")
+
         # set metadata
-        exp_df.ref_year = 2020
-        exp_df.value_unit = "XSD"
+        exp.ref_year = 2020
+        exp.value_unit = "XSD"
+
+        # add another geometry column
+        exp.data["geocol2"] = exp.data.geometry.copy(deep=True)

         file_name = DATA_DIR.joinpath("test_hdf5_exp.h5")

@@ -392,46 +395,51 @@ def test_io_hdf5_pass(self):

         with warnings.catch_warnings():
             warnings.simplefilter("error", category=pd.errors.PerformanceWarning)
-            exp_df.write_hdf5(file_name)
+            exp.write_hdf5(file_name=file_name)

         exp_read = Exposures.from_hdf5(file_name)

-        self.assertEqual(exp_df.ref_year, exp_read.ref_year)
-        self.assertEqual(exp_df.value_unit, exp_read.value_unit)
-        self.assertEqual(exp_df.description, exp_read.description)
-        np.testing.assert_array_equal(exp_df.latitude, exp_read.latitude)
-        np.testing.assert_array_equal(exp_df.longitude, exp_read.longitude)
-        np.testing.assert_array_equal(exp_df.value, exp_read.value)
+        self.assertEqual(exp.ref_year, exp_read.ref_year)
+        self.assertEqual(exp.value_unit, exp_read.value_unit)
+        self.assertEqual(exp.description, exp_read.description)
+        np.testing.assert_array_equal(exp.latitude, exp_read.latitude)
+        np.testing.assert_array_equal(exp.longitude, exp_read.longitude)
+        np.testing.assert_array_equal(exp.value, exp_read.value)
         np.testing.assert_array_equal(
-            exp_df.data["deductible"].values, exp_read.data["deductible"].values
+            exp.data["deductible"].values, exp_read.data["deductible"].values
         )
         np.testing.assert_array_equal(
-            exp_df.data["cover"].values, exp_read.data["cover"].values
+            exp.data["cover"].values, exp_read.data["cover"].values
         )
         np.testing.assert_array_equal(
-            exp_df.data["region_id"].values, exp_read.data["region_id"].values
+            exp.data["region_id"].values, exp_read.data["region_id"].values
         )
         np.testing.assert_array_equal(
-            exp_df.data["category_id"].values, exp_read.data["category_id"].values
+            exp.data["category_id"].values, exp_read.data["category_id"].values
         )
         np.testing.assert_array_equal(
-            exp_df.data["impf_TC"].values, exp_read.data["impf_TC"].values
+            exp.data["impf_TC"].values, exp_read.data["impf_TC"].values
         )
         np.testing.assert_array_equal(
-            exp_df.data["centr_TC"].values, exp_read.data["centr_TC"].values
+            exp.data["centr_TC"].values, exp_read.data["centr_TC"].values
         )
         np.testing.assert_array_equal(
-            exp_df.data["impf_FL"].values, exp_read.data["impf_FL"].values
+            exp.data["impf_FL"].values, exp_read.data["impf_FL"].values
         )
         np.testing.assert_array_equal(
-            exp_df.data["centr_FL"].values, exp_read.data["centr_FL"].values
+            exp.data["centr_FL"].values, exp_read.data["centr_FL"].values
         )

         self.assertTrue(
-            u_coord.equal_crs(exp_df.crs, exp_read.crs),
-            f"{exp_df.crs} and {exp_read.crs} are different",
+            u_coord.equal_crs(exp.crs, exp_read.crs),
+            f"{exp.crs} and {exp_read.crs} are different",
+        )
+        self.assertTrue(u_coord.equal_crs(exp.data.crs, exp_read.data.crs))
+
+        self.assertTrue(exp_read.data["geocol2"].dtype == "geometry")
+        np.testing.assert_array_equal(
+            exp.data["geocol2"].geometry, exp_read.data["geocol2"].values
         )
-        self.assertTrue(u_coord.equal_crs(exp_df.gdf.crs, exp_read.gdf.crs))


 class TestAddSea(unittest.TestCase):

climada/entity/impact_funcs/base.py

Lines changed: 18 additions & 5 deletions

@@ -97,6 +97,19 @@ def __init__(
         self.mdd = mdd if mdd is not None else np.array([])
         self.paa = paa if paa is not None else np.array([])

+    def __eq__(self, value: object, /) -> bool:
+        if isinstance(value, ImpactFunc):
+            return (
+                self.haz_type == value.haz_type
+                and self.id == value.id
+                and self.name == value.name
+                and self.intensity_unit == value.intensity_unit
+                and np.array_equal(self.intensity, value.intensity)
+                and np.array_equal(self.mdd, value.mdd)
+                and np.array_equal(self.paa, value.paa)
+            )
+        return False
+
     def calc_mdr(self, inten: Union[float, np.ndarray]) -> np.ndarray:
         """Interpolate impact function to a given intensity.

@@ -177,7 +190,7 @@ def from_step_impf(
         mdd: tuple[float, float] = (0, 1),
         paa: tuple[float, float] = (1, 1),
         impf_id: int = 1,
-        **kwargs
+        **kwargs,
     ):
         """Step function type impact function.

@@ -218,7 +231,7 @@ def from_step_impf(
             intensity=intensity,
             mdd=mdd,
             paa=paa,
-            **kwargs
+            **kwargs,
         )

     def set_step_impf(self, *args, **kwargs):
@@ -238,7 +251,7 @@ def from_sigmoid_impf(
         x0: float,
         haz_type: str,
         impf_id: int = 1,
-        **kwargs
+        **kwargs,
     ):
         r"""Sigmoid type impact function hinging on three parameter.

@@ -287,7 +300,7 @@ def from_sigmoid_impf(
             intensity=intensity,
             paa=paa,
             mdd=mdd,
-            **kwargs
+            **kwargs,
         )

     def set_sigmoid_impf(self, *args, **kwargs):
@@ -308,7 +321,7 @@ def from_poly_s_shape(
         exponent: float,
         haz_type: str,
         impf_id: int = 1,
-        **kwargs
+        **kwargs,
     ):
         r"""S-shape polynomial impact function hinging on four parameter.
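
A hedged sketch of what the new value-based `__eq__` enables. The attribute names come from the implementation above; the `ImpactFunc` constructor keywords are assumed to mirror those attributes and are not part of this diff:

```python
# Illustrative only: constructor keyword names are assumed, not taken from this diff.
import numpy as np

from climada.entity import ImpactFunc


def build_impf() -> ImpactFunc:
    return ImpactFunc(
        haz_type="TC",
        id=1,
        intensity=np.array([0.0, 10.0, 20.0]),
        mdd=np.array([0.0, 0.5, 1.0]),
        paa=np.ones(3),
        intensity_unit="m/s",
        name="demo",
    )


# Two independently built functions with identical attribute values compare equal;
# comparison against any other type falls through to `return False`.
assert build_impf() == build_impf()
assert build_impf() != "not an impact function"
```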

climada/entity/impact_funcs/impact_func_set.py

Lines changed: 6 additions & 0 deletions

@@ -109,6 +109,12 @@ def __init__(self, impact_funcs: Optional[Iterable[ImpactFunc]] = None):
             for impf in impact_funcs:
                 self.append(impf)

+    def __eq__(self, value: object, /) -> bool:
+        if isinstance(value, ImpactFuncSet):
+            return self._data == value._data
+
+        return False
+
     def clear(self):
         """Reinitialize attributes."""
         self._data = dict()  # {hazard_type : {id:ImpactFunc}}
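
Set-level equality simply compares the internal `{haz_type: {id: ImpactFunc}}` dictionaries, so it reuses the element-wise `ImpactFunc.__eq__` added in this commit. A small hedged sketch, again assuming the `ImpactFunc` constructor keywords used in the previous example:

```python
# Illustrative sketch; ImpactFunc keyword names are assumed, not taken from this diff.
import numpy as np

from climada.entity import ImpactFunc, ImpactFuncSet


def build_impf() -> ImpactFunc:
    return ImpactFunc(
        haz_type="TC",
        id=3,
        intensity=np.array([0.0, 50.0]),
        mdd=np.array([0.0, 1.0]),
        paa=np.array([1.0, 1.0]),
        intensity_unit="m/s",
    )


# Equal contents -> equal sets; an empty set differs from a populated one.
assert ImpactFuncSet([build_impf()]) == ImpactFuncSet([build_impf()])
assert ImpactFuncSet([build_impf()]) != ImpactFuncSet()
```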
