95 changes: 95 additions & 0 deletions examples/cfd/isotropic_eddyformer/README.md
@@ -0,0 +1,95 @@
# EddyFormer for 3D Isotropic Turbulence

This example demonstrates how to use the EddyFormer model to simulate
three-dimensional isotropic turbulence. It runs on a single GPU.

## Problem Overview

This example focuses on **three-dimensional homogeneous isotropic turbulence (HIT)** sustained by large-scale forcing. The flow is governed by the incompressible Navier–Stokes equations with an external forcing term:

\[
\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u}
= \nu \nabla^2 \mathbf{u} + \mathbf{f}(\mathbf{x})
\]

where:

- **\(\mathbf{u}(\mathbf{x}, t)\)** — velocity field in a 3D periodic domain
- **\(\nu = 0.01\)** — kinematic viscosity
- **\(\mathbf{f}(\mathbf{x})\)** — isotropic forcing applied at the largest scales

### Forcing Mechanism

To maintain statistically steady turbulence, a **constant-power forcing** is applied to the lowest Fourier modes (\(|\mathbf{k}| \le 1\)). The forcing injects energy into the system at a prescribed rate \(P_{\text{in}} = 1.0\):

\[
\mathbf{f}(\mathbf{x}) =
\frac{P_{\text{in}}}{E_1}
\sum_{\substack{|\mathbf{k}| \le 1 \\ \mathbf{k} \neq 0}}
\hat{\mathbf{u}}_{\mathbf{k}} e^{i \mathbf{k} \cdot \mathbf{x}}
\]

where:

\[
E_1 = \frac{1}{2}
\sum_{|\mathbf{k}| \le 1}
\hat{\mathbf{u}}_{\mathbf{k}} \cdot \hat{\mathbf{u}}_{\mathbf{k}}^{*}
\]

is the kinetic energy contained in the forced low-wavenumber modes.

Under this forcing, the flow reaches a **statistically steady state** with a Taylor-scale Reynolds number of:

**\(\mathrm{Re}_\lambda \approx 94\)**
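
For illustration, here is a minimal PyTorch sketch of the forcing term defined above. It is not taken from the training code; the function name, array layout `(3, N, N, N)`, and FFT normalization are assumptions:

```python
import torch

def constant_power_forcing(u: torch.Tensor, p_in: float = 1.0) -> torch.Tensor:
    """Constant-power forcing on the |k| <= 1 shell; u has shape (3, N, N, N)."""
    n = u.shape[-1]
    u_hat = torch.fft.fftn(u, dim=(-3, -2, -1), norm="forward")

    k = torch.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers 0, 1, ..., -1
    kx, ky, kz = torch.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    mask = (k2 <= 1.0) & (k2 > 0.0)  # forced modes, excluding the mean

    u_low = torch.where(mask, u_hat, torch.zeros_like(u_hat))
    e1 = 0.5 * (u_low.abs() ** 2).sum()  # energy E_1 in the forced modes
    f_hat = (p_in / e1) * u_low          # prefactor P_in / E_1 from the formula above
    return torch.fft.ifftn(f_hat, dim=(-3, -2, -1), norm="forward").real
```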

### Task Description

The objective of this example is to **predict the future velocity field** of the turbulent flow. Given \(\mathbf{u}(\mathbf{x}, t)\), the task is:

> **Predict the velocity field \(\mathbf{u}(\mathbf{x}, t + \Delta t)\) with \(\Delta t = 0.5\).**

This requires modeling nonlinear, chaotic, multi-scale turbulent dynamics, including:

- energy injection at large scales
- nonlinear transfer across the inertial range
- dissipation at the smallest scales
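
A standard diagnostic for all three regimes is the shell-averaged energy spectrum \(E(k)\). A minimal sketch, under the same assumed array layout as above:

```python
import torch

def energy_spectrum(u: torch.Tensor) -> torch.Tensor:
    """Shell-averaged kinetic energy spectrum E(k); u has shape (3, N, N, N)."""
    n = u.shape[-1]
    u_hat = torch.fft.fftn(u, dim=(-3, -2, -1), norm="forward")
    e3d = 0.5 * (u_hat.abs() ** 2).sum(dim=0)  # spectral energy density

    k = torch.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers
    kx, ky, kz = torch.meshgrid(k, k, k, indexing="ij")
    shell = (kx**2 + ky**2 + kz**2).sqrt().round().long()  # nearest-integer |k|

    spectrum = torch.zeros(n, dtype=e3d.dtype)
    return spectrum.scatter_add_(0, shell.flatten(), e3d.flatten())
```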

### Dataset Summary

- **DNS resolution:** \(384^3\) (used to generate the dataset)
- **Stored dataset resolution:** \(96^3\)
- **Kolmogorov scale resolution:** ~0.5 η
- **Forcing:** applied to modes with \(|\mathbf{k}| \le 1\)
- **Viscosity:** \(\nu = 0.01\)
- **Input power:** \(P_{\text{in}} = 1.0\)
- **Flow regime:** statistically steady HIT at \(\mathrm{Re}_\lambda \approx 94\)
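
Based on how the training script reads the data, each file stores a Python dict with 50 velocity snapshots under the key `"u"`, saved 0.1 time units apart. A pair of snapshots separated by \(\Delta t = 0.5\) can then be extracted as follows (the file name below is a placeholder):

```python
import numpy as np

# Load one trajectory file (placeholder name; see download_dataset.sh).
data = np.load("data/ns3d-re94/train_000.npy", allow_pickle=True).item()
u = data["u"]              # 50 snapshots of the velocity field, dt = 0.1 apart

stride = round(0.5 / 0.1)  # dt-steps spanned by the prediction horizon
x, y = u[0], u[stride]     # one (input, target) training pair
```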

## Prerequisites

Install the required dependencies by running:

```bash
pip install -r requirements.txt
```

## Download the Dataset

The dataset is publicly available on [Hugging Face](https://huggingface.co/datasets/ydu11/re94).
To download it, run (you may need to install the Hugging Face CLI first, e.g. `pip install -U huggingface_hub`):

```bash
bash download_dataset.sh
```

## Getting Started

To train the model, run:

```bash
python train_ef_isotropic.py
```
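
Hyperparameters are read from `config.yaml` and can be overridden on the command line through Hydra, for example:

```bash
python train_ef_isotropic.py training.batch_size=2 training.num_epochs=10
```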

## References

- [EddyFormer: Accelerated Neural Simulations of Three-Dimensional Turbulence at Scale](https://arxiv.org/abs/2510.24173)
23 changes: 23 additions & 0 deletions examples/cfd/isotropic_eddyformer/config.yaml
@@ -0,0 +1,23 @@
model:
  idim: 3
  odim: 3
  hdim: 32
  num_layers: 4
  layer_config:
    basis: legendre
    mesh: [8, 8, 8]
    mode: [10, 10, 10]
    mode_les: [5, 5, 5]
    kernel_size: [2, 2, 2]
    kernel_size_les: [2, 2, 2]
    ffn_dim: 128
    activation: GELU
    num_heads: 4
    heads_dim: 32

training:
  dataset: data/ns3d-re94
  t: 0.5
  batch_size: 4
  num_epochs: 100
  learning_rate: 1.0e-3  # decimal point keeps YAML parsing this as a float
1 change: 1 addition & 0 deletions examples/cfd/isotropic_eddyformer/download_dataset.sh
@@ -0,0 +1 @@
hf download --repo-type dataset ydu11/re94 --local-dir "${1:-data/ns3d-re94}"
2 changes: 2 additions & 0 deletions examples/cfd/isotropic_eddyformer/requirements.txt
@@ -0,0 +1,2 @@
hydra-core>=1.2.0
termcolor>=2.1.1
110 changes: 110 additions & 0 deletions examples/cfd/isotropic_eddyformer/train_ef_isotropic.py
@@ -0,0 +1,110 @@
import hydra
from typing import Tuple
from torch import Tensor
from omegaconf import DictConfig

import os
import numpy as np

import torch
from torch.nn import MSELoss
from torch.optim import Adam
from torch.utils.data import Dataset, DataLoader

from physicsnemo.models.eddyformer import EddyFormer, EddyFormerConfig
from physicsnemo.distributed import DistributedManager
from physicsnemo.utils import StaticCaptureTraining
from physicsnemo.launch.logging import PythonLogger, LaunchLogger


class Re94(Dataset):
    """Pairs of velocity snapshots (u(t), u(t + self.t)) from the Re94 HIT dataset."""

    root: str
    t: float

    n: int = 50      # snapshots per file
    dt: float = 0.1  # time between stored snapshots

    def __init__(self, root: str, split: str, *, t: float = 0.5) -> None:
        """Index all files in `root` whose names start with `split`."""
        super().__init__()
        self.root = root
        self.t = t

        self.file = []
        for fname in sorted(os.listdir(root)):
            if fname.startswith(split):
                self.file.append(fname)

    @property
    def stride(self) -> int:
        # number of stored snapshots spanned by the prediction horizon `t`
        k = round(self.t / self.dt)
        assert abs(self.dt * k - self.t) < 1e-9, "t must be a multiple of dt"
        return k

    @property
    def samples_per_file(self) -> int:
        return self.n - self.stride + 1

    def __len__(self) -> int:
        return len(self.file) * self.samples_per_file

    def __getitem__(self, idx: int) -> Tuple[Tensor, Tensor]:
        file_idx, time_idx = divmod(idx, self.samples_per_file)

        data = np.load(f"{self.root}/{self.file[file_idx]}", allow_pickle=True).item()
        return (
            torch.from_numpy(data["u"][time_idx]),
            torch.from_numpy(data["u"][time_idx + self.stride]),
        )


@hydra.main(version_base="1.3", config_path=".", config_name="config.yaml")
def isotropic_trainer(cfg: DictConfig) -> None:
    """Train EddyFormer to advance isotropic turbulence by a fixed time horizon."""
    DistributedManager.initialize()  # Only call this once in the entire script!
    dist = DistributedManager()  # call if required elsewhere

    # initialize monitoring
    log = PythonLogger(name="re94_ef")
    log.file_logging()
    LaunchLogger.initialize()  # PhysicsNeMo launch logger

    # define model, loss, optimizer, and data loader
    model = EddyFormer(
        idim=cfg.model.idim,
        odim=cfg.model.odim,
        hdim=cfg.model.hdim,
        num_layers=cfg.model.num_layers,
        cfg=EddyFormerConfig(**cfg.model.layer_config),
    ).to(dist.device)
    loss_fun = MSELoss(reduction="mean")
    optimizer = Adam(model.parameters(), lr=cfg.training.learning_rate)
    dataset = Re94(root=cfg.training.dataset, split="train", t=cfg.training.t)
    dataloader = DataLoader(dataset, cfg.training.batch_size, shuffle=True)

    # define the forward pass for training
    @StaticCaptureTraining(
        model=model, optim=optimizer, logger=log, use_amp=False, use_graphs=False
    )
    def training_step(input, target):
        pred = torch.vmap(model)(input)
        loss = loss_fun(pred, target)
        return loss

    for epoch in range(cfg.training.num_epochs):
        with LaunchLogger("train", epoch=epoch) as logger:
            for input, target in dataloader:
                input = input.to(dist.device)
                target = target.to(dist.device)
                loss = training_step(input, target)
                logger.log_minibatch({"Training loss": loss.item()})

    log.success("Training completed")


if __name__ == "__main__":
    isotropic_trainer()
5 changes: 5 additions & 0 deletions physicsnemo/models/eddyformer/__init__.py
@@ -0,0 +1,5 @@
from ._basis import Legendre
from ._datatype import SEM
from .eddyformer import EddyFormer, EddyFormerLayer

EddyFormerConfig = EddyFormerLayer.Config
112 changes: 112 additions & 0 deletions physicsnemo/models/eddyformer/_basis.py
@@ -0,0 +1,112 @@
from typing import Protocol
from torch import Tensor

import torch
import torch.nn as nn

import numpy as np
import functools


class Basis(Protocol):
    """Spectral basis on the reference interval [0, 1]."""

    grid: Tensor  # collocation points
    quad: Tensor  # quadrature weights at `grid`

    m: int        # number of modes
    f: Tensor     # basis functions sampled at `grid`, shape (points, modes)

    def fn(self, xs: Tensor) -> Tensor:
        """
        Evaluate basis functions at given points.
        """

    def at(self, coef: Tensor, xs: Tensor) -> Tensor:
        """
        Evaluate basis expansion at given points.
        """
        return torch.tensordot(self.fn(xs), coef, dims=1)

    def modal(self, vals: Tensor) -> Tensor:
        """
        Convert nodal values to modal coefficients.
        """

    def nodal(self, coef: Tensor) -> Tensor:
        """
        Convert modal coefficients to nodal values.
        """


class Element(Basis):

    def __init__(self, base: Basis):
        """
        Wrap a base basis into an element-local basis (stub).
        """


# ---------------------------------------------------------------------------- #
#                                   LEGENDRE                                   #
# ---------------------------------------------------------------------------- #

from numpy.polynomial import legendre


@functools.cache
class Legendre(nn.Module, Basis):
    """
    Shifted Legendre polynomials:
    - `(1 - x^2) Pn''(x) - 2 x Pn'(x) + n (n + 1) Pn(x) = 0`
    - `Pn~(x) = Pn(2 x - 1)`
    """

    def extra_repr(self) -> str:
        return f"m={self.m}"

    def __init__(self, m: int, endpoint: bool = False):
        """
        Build Gauss-Legendre (or, if `endpoint`, Gauss-Lobatto) nodes and
        quadrature weights on [0, 1] for `m` modes.
        """
        super().__init__()
        self.m = m

        if endpoint: m -= 1
        c = (0, ) * m + (1, )    # Legendre-series coefficients of P_m
        dc = legendre.legder(c)  # ... and of its derivative P_m'

        x = legendre.legroots(dc if endpoint else c)
        y = legendre.legval(x, c if endpoint else dc)

        if endpoint:
            x = np.concatenate([[-1], x, [1]])
            y = np.concatenate([[1], y, [1]])

        w = 1 / y ** 2
        if endpoint: w /= m * (m + 1)
        else: w /= 1 - x ** 2

        self.register_buffer("grid", torch.tensor((1 + x) / 2, dtype=torch.float))
        self.register_buffer("quad", torch.tensor(w, dtype=torch.float))

        self.register_buffer("f", self.fn(self.grid))

    def fn(self, xs: Tensor) -> Tensor:
        """
        Evaluate the first `m` shifted Legendre polynomials at `xs` using the
        three-term recurrence.
        """
        P = torch.ones_like(xs), 2 * xs - 1

        for i in range(2, self.m):
            a, b = (i * 2 - 1) / i, (i - 1) / i
            P += a * P[-1] * P[1] - b * P[-2],

        return torch.stack(P[:self.m], dim=-1)  # slice guards the m == 1 case

    # --------------------------------- TRANSFORM -------------------------------- #

    def modal(self, vals: Tensor) -> Tensor:
        """
        Convert nodal values to modal coefficients by Gauss quadrature.
        """
        norm = 2 * torch.arange(self.m, device=vals.device) + 1
        coef = self.f * norm * self.quad[:, None]
        return torch.tensordot(coef.T, vals, dims=1)

    def nodal(self, coef: Tensor) -> Tensor:
        """
        Convert modal coefficients to nodal values on the collocation grid.
        """
        return self.at(coef, self.grid)
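
A quick sanity check of the `Legendre` transform pair is a nodal → modal → nodal round trip, which is exact to floating-point precision on the Gauss grid (a minimal sketch, assuming the class is importable as below):

```python
import torch
from physicsnemo.models.eddyformer import Legendre

basis = Legendre(8)           # 8 Gauss-Legendre nodes/modes on [0, 1]
vals = torch.sin(basis.grid)  # sample a smooth function at the nodes
coef = basis.modal(vals)      # nodal values -> modal coefficients
recon = basis.nodal(coef)     # ... and back
assert torch.allclose(recon, vals, atol=1e-5)
```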