2 changes: 1 addition & 1 deletion Source/Ingest/docker/Dockerfile.ingest
@@ -4,7 +4,7 @@ FROM python:3.12-slim
RUN apt-get update && apt-get install -y curl

# Install UV properly by copying from the official image
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
COPY --from=ghcr.io/astral-sh/uv:0.8.14 /uv /uvx /bin/

# Set the working directory
WORKDIR /app
87 changes: 0 additions & 87 deletions Source/RnR/dist/README.md

This file was deleted.

Binary file not shown.
Binary file not shown.
30 changes: 0 additions & 30 deletions Source/RnR/docker/Dockerfile.process_flows

This file was deleted.

31 changes: 9 additions & 22 deletions Source/RnR/docker/Dockerfile.troute
@@ -7,43 +7,30 @@ RUN yum install -y gcc
RUN yum install -y netcdf netcdf-fortran netcdf-fortran-devel netcdf-openmpi
RUN yum install -y git cmake


# Install UV by copying the binaries directly from the official image
COPY --from=ghcr.io/astral-sh/uv:0.7.8 /uv /uvx /bin/

# Use UV to install Python 3.11
COPY --from=ghcr.io/astral-sh/uv:0.8.14 /uv /uvx /bin/
RUN uv python install 3.11

# Clone the repository
RUN git clone https://github.com/NGWPC/t-route.git
WORKDIR "/t-route/"
RUN git checkout pi_6
RUN git checkout pi-7-NGWPC-6258

# Create netcdf symlink with error handling
# # Create netcdf symlink with error handling
RUN ln -s /usr/lib64/gfortran/modules/netcdf.mod /usr/include/openmpi-x86_64/netcdf.mod || echo "NetCDF module link creation failed but continuing"

# Create venv and set environment to use it
# # Create venv and set environment to use it
RUN uv venv --python 3.11
ENV PATH="/app/.venv/bin:$PATH"

# Create a directory for local packages
RUN mkdir -p /app/wheels

# Copy the pre-built wheel from the build context
COPY dist/icefabric_tools-*.whl /app/wheels/
COPY dist/icefabric_manage-*whl /app/wheels/

# Install the main package in development mode
# # Install the main package in development mode
RUN uv pip install -e .

# Run the compiler script
RUN ./compiler_uv.sh no-e
# # Run the compiler script
RUN ./compiler.sh no-e --uv

# Install the troute-rnr package specifically
# # Install the troute-rnr package specifically
WORKDIR "/t-route/src/troute-rnr"
RUN uv pip install /app/wheels/icefabric_tools-*.whl
RUN uv pip install /app/wheels/icefabric_manage-*.whl
RUN uv pip install -e .

# Increase max open files soft limit with error handling
# # Increase max open files soft limit with error handling
RUN ulimit -n 10000
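
For reference, a hedged sketch of building this image by hand; the build context (`Source/RnR`) and the image tag are assumptions inferred from the compose file's use of `context: ../RnR`, not commands taken from this repo:

```sh
# Sketch only: build the T-Route image manually. Run from Source/RnR so that
# everything the Dockerfile COPYs is present in the build context.
docker build -f docker/Dockerfile.troute -t troute-rnr:local .
```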
Empty file removed Source/data/warehouse/.gitkeep
Empty file.
48 changes: 38 additions & 10 deletions Source/docker/README.md
@@ -1,20 +1,48 @@
# Docker scripts
# Replace and Route (v2025.6.0)

The provided compose files are meant to spin up replace and route as a container for developmental testing
Replace and Route is a service that routes streamflow forecasts from the [NWPS API](https://api.water.noaa.gov/nwps/v1/docs/#/) through T-Route to propagate flow through a river segment. Outputs are shown at [water.noaa.gov](https://water.noaa.gov). This `Source/` directory contains the code, Docker management scripts, and IaC to run the full collection of services.

## How to run:
## Overview

Run:
There are two versions of Replace and Route contained in `hydrovis/`:
- The Docker development version
  - This version can be spun up locally in a user's environment after cloning the repo.
- The IaC version
  - This is the production version, which is designed to scale efficiently to handle many T-Route containers running in parallel.
  - This code is contained in `Source/terraform`.

```sh
docker compose -f docker/compose.yaml up
```
Both versions run the same code, but the Terraform IaC is designed to scale; the Docker version is meant for localized testing.

## Requirements

The services require a data directory. By default, it uses the `data` directory at the root of this repository.
The following is required to run RnR locally:
- The v2.2 hydrofabric layers stored as parquet files
  - These are located at `s3://hydrofabric-data/icefabric` in the Raytheon private S3 bucket (see the sketch after this list). Please contact @taddyb for these files if you do not have access.
- `docker compose` installed on your system
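
Assuming you have the AWS CLI and credentials for that bucket, a minimal sketch for pulling the layers into a local data directory (the destination path below is an assumption, not a documented location) could look like:

```sh
# Sketch only: sync the v2.2 hydrofabric parquet layers from the private bucket.
# Point the destination at whatever data directory your compose setup mounts.
aws s3 sync s3://hydrofabric-data/icefabric ./data/icefabric
```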

You can override this by setting the `RNR_DATA_PATH` environment variable before running docker-compose:
## Parts
![RnR Workflow](rnr_workflow.png)

There are three parts to Replace and Route (a quick spot-check sketch follows this list):
1. The HML Ingestion
   - This code reads HML files from the public [weather API](https://api.weather.gov/) and queues them into RabbitMQ. Redis is used to cache previously read forecasts. This code is located in `Source/Ingest`.
   - To run this, run `./run_ingest.sh` after starting the containers.
2. T-Route
   - This is the routing code that propagates forecasted flow downstream. The code lives in the `src/troute-rnr` directory, and `Source/RnR` contains the Docker scripts to run T-Route.
   - To run this, run `./run_rnr.sh` after starting the containers.
3. Post-processing
   - This code reads T-Route outputs and creates an `output-inundation.csv` file.
   - To run this, run `./run_post_process.sh` after starting the containers.
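
As a rough spot-check sketch after running each step, you can confirm the stages did something; the `docker-<service>-1` container names are assumptions based on the naming used in the run scripts:

```sh
# Sketch only: spot-check each stage. Container names follow the
# docker-<service>-1 convention and are assumptions, not documented names.
docker exec docker-rabbitmq-1 rabbitmqctl list_queues   # forecasts queued by ingestion
docker exec docker-redis-1 redis-cli dbsize             # forecasts cached by Redis
./logs.sh                                               # view service logs for T-Route and post-processing
```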

### How to run
To run RnR, run the following scripts:
```sh
RNR_DATA_PATH=/path/to/your/data
cd Source/docker
./start.sh
./run_ingest.sh
./run_rnr.sh
./run_post_process.sh
./stop.sh
```
You can view the logs via `./logs.sh`.

15 changes: 0 additions & 15 deletions Source/docker/compose.yaml
@@ -57,21 +57,6 @@ services:
condition: service_healthy
command: ["tail", "-f", "/dev/null"]

process_flows:
build:
context: ../RnR
dockerfile: docker/Dockerfile.process_flows
volumes:
- ../data:/t-route/data
networks:
- app-network
depends_on:
redis:
condition: service_healthy
rabbitmq:
condition: service_healthy
command: ["tail", "-f", "/dev/null"]

networks:
app-network:
driver: bridge
File renamed without changes
4 changes: 2 additions & 2 deletions Source/docker/run_post_process.sh
@@ -1,5 +1,5 @@
#!/bin/bash
# run_rnr.sh - Formats the .nc files to create output csvs
# run_post_process.sh - Formats the .nc files to create output csvs

# Goes into the container, activates the .venv/, runs the read script
docker exec docker-process_flows-1 bash -c "source ../../.venv/bin/activate && python post_process.py"
docker exec docker-rnr-1 bash -c "source ../../.venv/bin/activate && python post_process.py"
2 changes: 1 addition & 1 deletion Source/docker/start.sh
@@ -54,7 +54,7 @@ check_service_health "Redis" 6379 10 2 || { echo "Redis failed to start properly"

# Step 5: Start the rest of the services in detached mode
echo "Starting application services..."
docker compose up -d process_flows rnr ingest
docker compose up -d rnr ingest

echo "All services have been started successfully!"
echo "Use './run_ingest.sh', './run_rnr.sh', or './run_post_process.sh' to run specific services individually."
48 changes: 0 additions & 48 deletions Source/docs/workflow.md

This file was deleted.
