## Table of Contents

- [Introduction](#introduction)
- [Repository structure](#repository-structure)
- [Environment setup](#environment-setup)
- [Use case components](#use-case-components)
- [HydroMT](#hydromt)
- [Wflow](#wflow)
- [Surrogate model based on ItwinAI](#surrogate-model-based-on-itwinai)
- [OSCAR](#oscar)
- [Running the use case using openEO and OSCAR](#running-the-use-case-using-openeo-and-oscar)
- [openEO OSCAR integration](#openeo-oscar-integration)
- [Tests](#tests)
- [License](#license)
- [Project framework](#project-framework)

## Introduction
HyDroForM stands for "Hydrological Drought Forecasting Model with HydroMT and Wflow". It is a Digital Twin for Drought Early Warning in the Alps developed as a use case for the [InterTwin project](https://www.intertwin.eu/). The details of the use case are also available online [here](https://www.intertwin.eu/intertwin-use-case-a-digital-twin-for-drought-early-warning-in-the-alps).

## Use case components

There are **three main components** in the HyDroForM use case:

### HydroMT

HydroMT (Hydro Model Tools) is an open-source Python package that facilitates the process of building and analyzing spatial geoscientific models with a focus on water system models. It does so by automating the workflow to go from raw data to a complete model instance which is ready to run and to analyse model results once the simulation has finished. HydroMT builds on the latest packages in the scientific and geospatial python eco-system including xarray, rasterio, rioxarray, geopandas, scipy and pyflwdir. Source: [Deltares HydroMT](https://deltares.github.io/hydromt/latest/)

#### Running HydroMT

To run HydroMT from start to finish, use the `validation` script located at `/docker/hydromt/validation.sh`. This script runs the HydroMT validation test, which includes the following steps (a sketch of the final step follows the list):

1. Update the configuration file of HydroMT
2. Run HydroMT using the configuration file
3. Convert the output Wflow configuration file to lowercase letters
4. Wrap the outputs into STAC collections
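
The wrapping in step 4 is done inside the container scripts. Purely as an illustration, here is a minimal `pystac` sketch of what wrapping a model output into a STAC collection can look like; the file names, IDs, and extents below are made up, and the actual scripts may structure their collections differently:

```python
from datetime import datetime, timezone

import pystac

# Illustrative values only; the real asset names and extents come from HydroMT.
ASSET_HREF = "staticmaps.nc"
BBOX = [10.0, 45.5, 12.5, 47.5]  # lon/lat bounding box in the Alps

item = pystac.Item(
    id="hydromt-output",
    geometry={
        "type": "Polygon",
        "coordinates": [[
            [BBOX[0], BBOX[1]], [BBOX[2], BBOX[1]],
            [BBOX[2], BBOX[3]], [BBOX[0], BBOX[3]],
            [BBOX[0], BBOX[1]],
        ]],
    },
    bbox=BBOX,
    datetime=datetime.now(timezone.utc),
    properties={},
)
item.add_asset(
    "staticmaps",
    pystac.Asset(href=ASSET_HREF, media_type="application/x-netcdf"),
)

collection = pystac.Collection(
    id="hydromt-outputs",
    description="Wflow model inputs built by HydroMT",
    extent=pystac.Extent(
        spatial=pystac.SpatialExtent([BBOX]),
        temporal=pystac.TemporalExtent([[datetime.now(timezone.utc), None]]),
    ),
)
collection.add_item(item)

# Write a self-contained STAC collection next to the model outputs.
collection.normalize_and_save("stac", catalog_type=pystac.CatalogType.SELF_CONTAINED)
```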

### Wflow

Wflow is Deltares’ solution for modelling hydrological processes, allowing users to account for precipitation, interception, snow accumulation and melt, evapotranspiration, soil water, surface water and groundwater recharge in a fully distributed environment. Successfully applied worldwide for analyzing flood hazards, drought, climate change impacts and land use changes, wflow is growing to be a leader in hydrology solutions. Wflow is conceived as a framework, within which multiple distributed model concepts are available, which maximizes the use of open earth observation data, making it the hydrological model of choice for data scarce environments. Based on gridded topography, soil, land use and climate data, wflow calculates all hydrological fluxes at any given grid cell in the model at a given time step. Source: [Deltares Wflow](https://deltares.github.io/Wflow.jl/stable/)

### Surrogate model based on ItwinAI

`itwinai` is a Python toolkit designed to help scientists and researchers streamline AI and machine learning workflows, specifically for digital twin applications. It provides easy-to-use tools for distributed training, hyper-parameter optimization on HPC systems, and integrated ML logging, reducing engineering overhead and accelerating research. Developed primarily by CERN, in collaboration with Forschungszentrum Jülich (FZJ), itwinai supports modular and reusable ML workflows, with the flexibility to be extended through third-party plugins, empowering AI-driven scientific research in digital twins.

### OSCAR

OSCAR is an open-source platform that supports the event-driven serverless computing model for data-processing applications. It can be deployed automatically on multi-clouds, and even on low-powered devices, to create highly parallel, event-driven, data-processing serverless applications along the computing continuum. These applications execute in customized runtime environments provided by Docker containers running on elastic Kubernetes clusters. OSCAR also integrates with the SCAR framework, which supports a High Throughput Computing programming model for the same kind of applications executing on AWS Lambda and AWS Batch. Source: [OSCAR](https://github.com/grycap/oscar)
## Running the use case using openEO and OSCAR
The `OSCAR` directory contains the files necessary to deploy the use case on the OSCAR platform. Deploying each component requires two files: a bash script and a YAML service definition file.
These can be found in the respective subdirectories:
`OSCAR/oscar_hydromt`, `OSCAR/oscar_wflow`, and `OSCAR/oscar_surrogate`
To run the use case, we provide a sample Jupyter notebook, `example/usecase.ipynb`, which drives the workflow through the openEO API.
The example shows how the three components are linked together to create a drought forecasting workflow.
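
In outline, the notebook does something like the sketch below. The backend URL and the `run_oscar` parameter names are placeholders, not the exact schema used by `example/usecase.ipynb`:

```python
import openeo

# Placeholder URL for the openEO backend that exposes the run_oscar process.
connection = openeo.connect("https://openeo.example.eu").authenticate_oidc()

# Chain the three components; the keyword arguments are illustrative
# parameters, not the backend's actual run_oscar signature.
hydromt = connection.datacube_from_process(
    "run_oscar", service="oscar_hydromt", region="alps"
)
wflow = connection.datacube_from_process(
    "run_oscar", service="oscar_wflow", model=hydromt
)
surrogate = connection.datacube_from_process(
    "run_oscar", service="oscar_surrogate", forcing=wflow
)

# Execute the chained process graph synchronously; per the integration
# described below, each step returns a URL to a STAC collection.
result = surrogate.execute()
print(result)
```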
## openEO OSCAR integration
The integration of openEO with OSCAR is implemented in `openeo-processes-dask`, the Dask/xarray implementation of the openEO processes. The openEO backend is the main orchestration component of the use case: it manages the execution of the different components on OSCAR.
The backend uses the `oscar_python` library to submit tasks to OSCAR from the process graph.
When the process graph is executed, the `run_oscar` process authenticates with the OSCAR platform, validates the service definition file, and submits the job to OSCAR. The process then monitors the job status and retrieves the results once the job is completed. If the service definition refers to a service not yet registered in OSCAR, it is created on the fly. The process parameters are passed as environment variables to the container where the scripts are executed. The results are stored as STAC collections and returned to openEO as a string URL to the collection.
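
Under the hood, the interaction with OSCAR through `oscar_python` looks roughly like the following sketch. The credentials, service name, and FDL path are placeholders, and the exact calls made by the backend may differ:

```python
from oscar_python.client import Client

# Placeholder credentials for an OSCAR cluster.
client = Client(options={
    "cluster_id": "oscar-cluster",
    "endpoint": "https://oscar.example.eu",
    "user": "username",
    "password": "password",
    "ssl": "True",
})

# Register the service from its FDL definition file; in the backend this
# happens on the fly when the service is not registered yet (the existence
# check and error handling are omitted here).
client.create_service("OSCAR/oscar_hydromt/service.yaml")

# Submit a job; the input reaches the container script, and process
# parameters are exposed as environment variables defined in the service
# definition.
response = client.run_service("oscar-hydromt", input='{"region": "alps"}')
if response.status_code == 200:
    print(response.text)  # e.g. the URL of the resulting STAC collection
```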
## Tests
The components of the use case are set up in Docker containers. A set of scripts to build and run the base images is available in the `/tests` directory; run them from the root directory of the repository.

For example:

```sh
./tests/test_hydromt.sh
```

## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
## Project framework
interTwin is an EU-funded project with the goal to co-design and implement the prototype of an interdisciplinary Digital Twin Engine – an open source platform based on open standards that offers the capability to integrate with application-specific Digital Twins.
interTwin is funded by the European Union under Grant Agreement Number 101058386.