README.md

## 📄 Description:

model2SWOT is an approach designed to interpolate model Sea Surface Height (SSH) data onto Surface Water and Ocean Topography (SWOT) grids. Given daily model output, the script interpolates the SSH field onto a given SWOT swath.

### Required Inputs:

- Model netCDF files or Zarr stores containing SSH data.
- Model domain / mask netCDF file or Zarr store containing the longitude and latitude coordinates.
- SWOT grid netCDF file containing the target longitude/latitude grid. It is recommended to use the "Expert" version of these files.
- Interpolator: the script offers two interpolation methods, one based on SciPy and one based on pyinterp. Only one method can be chosen at a time, depending on user preference.

### Optional Inputs:

- `latitude_var_name`: The name of the latitude variable in the model dataset (e.g., "lat" or "latitude").
- `longitude_var_name`: The name of the longitude variable in the model dataset (e.g., "lon" or "longitude").
- `time_name`: The name of the time variable in the model dataset (e.g., "time" or "time_counter").
- `model_ssh_var`: The name of the SSH variable in the model dataset (e.g., "ssh" or "sossheig").

By default, these variables are assumed to be named:

- `latitude` for `latitude_var_name`
- `longitude` for `longitude_var_name`
- `time_counter` for `time_name`
- `ssh` for `model_ssh_var`

Users should make sure to provide the correct variable names if their dataset uses different names.
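The fallback behaviour can be illustrated with shell parameter expansion. This is a hypothetical sketch only, not the package's actual implementation; the `MODEL_*_VAR` variables are invented placeholders, not real environment variables read by `model2swot`:

```bash
# Hypothetical sketch: each name falls back to its documented default
# when the user does not supply one. Start from a clean slate:
unset MODEL_LAT_VAR MODEL_LON_VAR MODEL_TIME_VAR MODEL_SSH_VAR

lat_var="${MODEL_LAT_VAR:-latitude}"
lon_var="${MODEL_LON_VAR:-longitude}"
time_var="${MODEL_TIME_VAR:-time_counter}"
ssh_var="${MODEL_SSH_VAR:-ssh}"

# With nothing set, this prints the four defaults:
echo "${lat_var} ${lon_var} ${time_var} ${ssh_var}"
```

Supplying a value (e.g. `MODEL_SSH_VAR="sossheig"`) overrides the corresponding default, which is what the CLI flags below do for the real tool.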

## 🚀 Getting Started:

### Installation:

To install the **synthocean** package, users should first clone this GitHub repository as follows:

```bash
./model2SWOT.py -m path_to_your_model_file -k path_to_your_model_mask_file -s path_to_swot_data_file -o path_to_output_file -i interpolator --model-lat-var latitude_var_name --model-lon-var longitude_var_name --model-time-var time_name --model_ssh_var the_model_ssh_variable_name --model_timestep_index time_index
git clone git@github.com:Amine-ouhechou/synthocean.git
```

Once the repository has been cloned, users need to install the required dependencies using conda:

```bash
conda install numpy xarray netcdf4 inpoly scipy pyinterp
```

Finally, the `model2swot` command line interface (CLI) can be installed using pip (editable install) as follows:

```bash
cd synthocean
pip install -e .
```

### Usage:

To use the tool, run the `model2swot` command with the required arguments to interpolate model outputs onto a given SWOT swath:

```bash
model2swot -m path_to_your_model_file -k path_to_your_model_mask_file -s path_to_swot_data_file -o path_to_output_file -i interpolator --model_lat_var latitude_var_name --model_lon_var longitude_var_name --model_time_var time_name --model_ssh_var the_model_ssh_variable_name
```
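For instance, a hypothetical invocation for a NEMO-style model file might look as follows. All file paths are placeholders, and the `nav_lat`/`nav_lon` coordinate names are illustrative assumptions about the user's dataset, not values fixed by the tool:

```bash
# Hypothetical example: interpolate one daily-mean SSH file onto one SWOT
# swath using the pyinterp method. Every path below is a placeholder.
model2swot \
    -m /data/model/eNATL60_1d_ssh.nc \
    -k /data/model/eNATL60_domain_cfg.nc \
    -s /data/swot/SWOT_GRID_L3_LR_example.nc \
    -o /data/output/eNATL60_on_swot.nc \
    -i pyinterp_interpolation \
    --model_lat_var nav_lat \
    --model_lon_var nav_lon \
    --model_time_var time_counter \
    --model_ssh_var sossheig
```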

**Note:** When specifying a model SSH file containing multiple time slices (e.g., daily-mean SSH fields stored in a monthly netCDF file), the time slice nearest to the average time of the given SWOT swath will be used. The selected model time slice is reported to users in the log.

#### Required Arguments:

| Flag | Name | Description |
|---|---|---|
| `-m` | Model SSH file | Path to the model file containing the SSH data. |
| `-k` | Model mask file | Path to the model mask / domain file containing longitude-latitude data. |
| `-s` | SWOT grid file | Path to the SWOT swath file containing the longitude-latitude grid to interpolate onto. |
| `-o` | Output file | Path to the output file where results will be saved as a netCDF file. |
| `-i` | Interpolation | Interpolation method (choose between "scipy_interpolation" or "pyinterp_interpolation"). |

#### Optional Arguments:

| Flag | Description |
|---|---|
| `--model_lat_var` | Model latitude variable name |
| `--model_lon_var` | Model longitude variable name |
| `--model_time_var` | Model time variable name |
| `--model_ssh_var` | Model SSH variable name |

## ⚡ Notes
- If the model mask and data are in the same file, users should provide the path to this file twice — once for the data and once for the mask argument. The file will only be read once.
- Model input files **must be netCDF files or Zarr stores** (`.nc` or `.zarr`).
- The output file **must be a netCDF file** (`.nc`).
- SWOT gridded datasets are provided [here](https://ige-meom-opendap.univ-grenoble-alpes.fr/thredds/catalog/meomopendap/extract/MEOM/SWOT-geometry/catalog.html).

The tool is currently tested with **eORCA25** and **eNATL60** data in **NetCDF** and **Zarr** format.

## 🐛 Issues and Contributions

If you encounter any issues or would like to contribute, please feel free to [open an issue here](https://github.com/Amine-ouhechou/synthocean/issues).
examples/run_eorca12_era5v1_model2swot.slurm
#!/bin/bash
#SBATCH --job-name=swot_eorca12_npd_
#SBATCH --time=06:00:00
#SBATCH --ntasks=1
#SBATCH --mem=10GB

#SBATCH --account=my_account
#SBATCH --partition=serial
#SBATCH --qos=serial

# ==============================================================
# run_eorca12_era5v1_model2swot.slurm
#
# Description: SLURM script to perform SWOT OMIP interpolation
# using NOC eORCA12 ERA-5v1 2024 Near-Present Day outputs.
#
# Created By: Ollie Tooth (oliver.tooth@noc.ac.uk)
# Created On: 2025-08-11
#
# ==============================================================
# -- Input arguments to model2SWOT -- #
# Define path to eORCA12 monthly mean output directory:
filedir_mdl=/path/to/my/eORCA12/model/output/2024/
# Define filepath to eORCA12 domain_cfg file:
filepath_domain=/path/to/my/domain_cfg.nc

# Define path to SWOT global grid directory:
filedir_swot=/path/to/my/global_swot_grid_2024
# Define output directory:
filedir_out=/path/to/my/npd_eorca12_era5v1_global_swot_grid_2024

# Define chosen interpolation method:
interpolator="pyinterp_interpolation"

# Define core variable names:
lon_name="glamt"
lat_name="gphit"
time_name="time_counter"
ssh_name="zos"

# -- Python Environment -- #
# Activate Python virtual environment:
source /path/to/my/miniforge3/bin/activate
conda activate env_swot

# -- Interpolate eORCA12 1-day mean outputs onto SWOT grid -- #
# Iterating over cycles:
for cycle in {008..026}
do
# Create output directory for the cycle if it doesn't exist:
if [ ! -d ${filedir_out}/cycle_${cycle} ]; then
mkdir -p "${filedir_out}/cycle_${cycle}"
fi

# Define path to SWOT cycle directory:
filedir_swot_cycle=${filedir_swot}/cycle_${cycle}/
for file in $(ls ${filedir_swot_cycle})
do
echo "Interpolating eORCA12 ERA-5 v1 SSH onto SWOT grid ${file}"

# Determine YYYYMM date-string of SWOT grid input:
date=$(echo ${file} | grep -Eo '[[:digit:]]{4}[[:digit:]]{2}' | head -1)
# Define path to nearest eORCA12 T1d monthly model file:
fpath_mdl=$(echo ${filedir_mdl}/eORCA12_1d_grid_T_${date}-${date}.nc)
# Define path to interpolated output file:
fpath_out="${filedir_out}/cycle_${cycle}/${file/SWOT_GRID_L3_LR/eORCA12_ERA5v1_SWOT}"
# Run model2swot interpolator:
model2swot -m ${fpath_mdl} -k ${filepath_domain} -s ${filedir_swot_cycle}${file} -o ${fpath_out} -i ${interpolator} --model_lat_var ${lat_name} --model_lon_var ${lon_name} --model_time_var ${time_name} --model_ssh_var ${ssh_name}

echo "Completed: Interpolated eORCA12 ERA-5 v1 SSH onto SWOT grid."
done
done
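The filename handling inside the loop above can be checked in isolation. A minimal sketch, assuming a typical SWOT L3 grid filename of the form `SWOT_GRID_L3_LR_..._<YYYYMMDD>T<hhmmss>_....nc` (the exact name below is hypothetical):

```bash
# Hypothetical SWOT grid filename following the general L3 naming pattern:
file="SWOT_GRID_L3_LR_SSH_Expert_008_356_20240115T000000_20240116T000000.nc"

# Extract the first 6-digit run as the YYYYMM date string, as in the loop
# above. The 3-digit cycle/pass numbers (008, 356) are skipped because they
# are shorter than six consecutive digits.
date=$(echo "${file}" | grep -Eo '[[:digit:]]{4}[[:digit:]]{2}' | head -1)
echo "${date}"    # → 202401

# Build the output filename by prefix substitution, as done for fpath_out:
out="${file/SWOT_GRID_L3_LR/eORCA12_ERA5v1_SWOT}"
echo "${out}"     # → eORCA12_ERA5v1_SWOT_SSH_Expert_008_356_20240115T000000_20240116T000000.nc
```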