An End-to-End Composable Multi-Agent Framework for Automating CFD Simulation in OpenFOAM
You can visit https://deepwiki.com/csml-rpi/Foam-Agent for a comprehensive introduction and to ask any questions interactively.
Foam-Agent is a multi-agent framework that automates the entire OpenFOAM-based CFD simulation workflow from a single natural language prompt. By managing the full pipeline—from meshing and case setup to execution and post-processing—Foam-Agent dramatically lowers the expertise barrier for Computational Fluid Dynamics. Evaluated on the FoamBench suite of 110 simulation tasks, our framework achieves an 88.2% success rate, demonstrating how specialized multi-agent systems can democratize complex scientific computing.
Our framework introduces three key innovations:
- End-to-End Simulation Automation: Foam-Agent manages the full simulation pipeline, including advanced pre-processing with a versatile Meshing Agent that handles external mesh files and generates new geometries via Gmsh, automatic generation of HPC submission scripts, and post-simulation visualization via ParaView/PyVista.
- High-Fidelity Configuration: We use a Retrieval-Augmented Generation (RAG) system based on a hierarchical index of case metadata. Generation proceeds in a dependency-aware order, ensuring consistency and accuracy across all configuration files.
- Composable Service Architecture: The framework exposes its core functions as discrete, callable tools using a Model Context Protocol (MCP). This allows for flexible integration with other agentic systems for more complex or exploratory workflows.
- Hierarchical retrieval covering case files, directory structures, and dependencies
- Specialized vector index architecture for improved information retrieval
- Context-specific knowledge retrieval at different simulation stages
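The stage-specific retrieval above can be pictured as one index per pipeline stage. The sketch below is purely illustrative: `StageIndex` and `HierarchicalRetriever` are hypothetical names, and naive keyword overlap stands in for real vector similarity:

```python
# Illustrative sketch of stage-specific hierarchical retrieval.
# StageIndex and HierarchicalRetriever are hypothetical names, and
# keyword overlap stands in for real embedding similarity.
from dataclasses import dataclass, field

@dataclass
class StageIndex:
    """One index per simulation stage (e.g. meshing, case setup)."""
    documents: list = field(default_factory=list)  # (text, metadata) pairs

    def search(self, query: str, k: int = 2):
        # Score by naive keyword overlap; a real system would use embeddings.
        q = set(query.lower().split())
        scored = sorted(
            self.documents,
            key=lambda doc: -len(q & set(doc[0].lower().split())),
        )
        return scored[:k]

class HierarchicalRetriever:
    """Route a query to the index matching the current pipeline stage."""
    def __init__(self):
        self.indices = {}  # stage name -> StageIndex

    def add(self, stage: str, text: str, meta: dict):
        self.indices.setdefault(stage, StageIndex()).documents.append((text, meta))

    def retrieve(self, stage: str, query: str):
        index = self.indices.get(stage)
        return index.search(query) if index else []

retriever = HierarchicalRetriever()
retriever.add("meshing", "snappyHexMesh settings for external aerodynamics", {"case": "motorBike"})
retriever.add("case_setup", "pimpleFoam controlDict for transient channel flow", {"case": "pitzDaily"})
hits = retriever.retrieve("case_setup", "transient channel flow controlDict")
print(hits[0][1]["case"])  # the pitzDaily entry matches best
```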
- Architect Agent interprets requirements and plans file structures
- Input Writer Agent generates configuration files with consistency management
- Runner Agent executes simulations and captures outputs
- Reviewer Agent analyzes errors and proposes corrections
- Error pattern recognition for common simulation failures
- Automatic diagnosis and resolution of configuration issues
- Iterative refinement process that progressively improves simulation configurations
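The run → review → rewrite loop above can be sketched as follows. All names here (`refine`, `run_case`, `diagnose`, `apply_fix`) are hypothetical stand-ins for the Runner and Reviewer agents, with toy logic so the sketch runs:

```python
# Hypothetical sketch of the iterative run -> review -> rewrite loop.
# run_case, diagnose, and apply_fix stand in for the Runner and
# Reviewer agents; none of these names come from the Foam-Agent API.
def refine(case, max_iters=3):
    for attempt in range(1, max_iters + 1):
        ok, log = run_case(case)          # Runner: execute and capture output
        if ok:
            return case, attempt
        error = diagnose(log)             # Reviewer: match known error patterns
        case = apply_fix(case, error)     # rewrite the offending config entries
    raise RuntimeError("simulation still failing after max_iters attempts")

# Toy stand-ins so the sketch is runnable: the case converges once the
# missing 'nu' entry has been added.
def run_case(case):
    return ("nu" in case, "FOAM FATAL ERROR: cannot find entry 'nu'")

def diagnose(log):
    return "missing_nu" if "'nu'" in log else "unknown"

def apply_fix(case, error):
    return {**case, "nu": 1e-5} if error == "missing_nu" else case

fixed, attempts = refine({"solver": "pimpleFoam"})
print(attempts, fixed["nu"])  # converges on the second attempt
```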
- Custom mesh integration with GMSH `.msh` files
- Boundary condition specification through natural language requirements
- Currently supports GMSH ASCII 2.2 format mesh files
- Seamless workflow from mesh import to simulation execution
Example Usage:
python foambench_main.py --output ./output --prompt_path ./user_req_tandem_wing.txt --custom_mesh_path ./tandem_wing.msh

Example Mesh File: The tandem_wing.msh file in this repository is taken from the tandem wing tutorial and demonstrates a 3D tandem wing simulation with NACA 0012 airfoils.
Requirements Format: In your user_req_tandem_wing.txt, describe the boundary conditions and physical parameters for your custom mesh. The agent will automatically detect the mesh type and generate appropriate OpenFOAM configuration files.
Foam-Agent supports two generation modes for the Input Writer Agent (case file creation). You can set the mode in src/config.py:

- input_writer_generation_mode = "sequential_dependency"
  - Generates files sequentially (ordered: system → constant → 0 → others).
  - When enabled by the planner, later files may include previously generated files as context to enforce consistency.
  - Recommended when case evaluation is expensive (e.g., HPC runs / long simulations), because it tends to reduce the number of fail → review → rewrite iterations.
- input_writer_generation_mode = "parallel_no_context" (default)
  - Generates files in parallel with no cross-file context (faster and cheaper prompts).
  - Recommended when case evaluation is cheap (e.g., quick local test runs), where you can rely on the Reviewer/Rewrite loop to fix small inconsistencies.

Rule of thumb:

- Expensive-to-run cases → choose sequential_dependency.
- Cheap-to-run cases (fast iteration) → choose parallel_no_context.
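The rule of thumb could be expressed as a tiny helper. Only the two mode strings come from src/config.py; `pick_mode` and the cost threshold are illustrative:

```python
# Illustrative helper for the rule of thumb above; pick_mode and the
# 10-minute threshold are hypothetical. Only the two mode strings
# come from src/config.py.
def pick_mode(eval_minutes: float, threshold: float = 10.0) -> str:
    """Expensive cases favor up-front consistency; cheap cases favor speed."""
    if eval_minutes >= threshold:
        return "sequential_dependency"   # fewer fail -> review -> rewrite loops
    return "parallel_no_context"         # faster, cheaper prompts

print(pick_mode(120.0))  # long HPC run -> sequential_dependency
print(pick_mode(0.5))    # quick local test -> parallel_no_context
```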
Foam-Agent is fully pre-installed in the Docker image leoyue123/foamagent. This is the easiest way to get an end-to-end OpenFOAM + Foam-Agent environment.
docker pull leoyue123/foamagent

If you want a specific (stable) release, pull a tagged image instead of latest (recommended):

docker pull leoyue123/foamagent:<tag>

For example (replace with the tag you want):

docker pull leoyue123/foamagent:v2.0.0

Note: git checkout <tag> applies to the source code repository (manual install / building from source), not to docker pull.
docker run -it \
-e OPENAI_API_KEY=your-key-here \
-p 7860:7860 \
--name foamagent \
leoyue123/foamagent

Inside the container you automatically get:
- OpenFOAM v10 installed and sourced
- Conda initialized and the FoamAgent environment activated
- Working directory set to /home/openfoam/Foam-Agent
- Database files pre-initialized and ready to use
- Default location inside Docker: /home/openfoam/Foam-Agent/user_requirement.txt
- Edit directly in the container (example):

  nano user_requirement.txt

- Or mount a prompt file from the host:

  docker run -it \
    -e OPENAI_API_KEY=your-key-here \
    -p 7860:7860 \
    -v /absolute/path/to/my_requirement.txt:/home/openfoam/Foam-Agent/user_requirement.txt \
    --name foamagent \
    leoyue123/foamagent
Example content of user_requirement.txt:
Do a Reynolds-Averaged Simulation (RAS) pitzDaily simulation. Use the PIMPLE algorithm. The domain is a 2D millimeter-scale channel geometry. Boundary conditions specify a fixed velocity of 10 m/s at the inlet (left), zero-gradient pressure at the outlet (right), and no-slip conditions for walls. Use a timestep of 0.0001 and output every 0.01. Final time is 0.3. Use a nu value of 1e-5.
If you have a Gmsh .msh file on the host, mount it into the container and point Foam-Agent to it:
docker run -it \
-e OPENAI_API_KEY=your-key-here \
-p 7860:7860 \
-v /absolute/path/to/my_mesh.msh:/home/openfoam/Foam-Agent/my_mesh.msh \
--name foamagent \
leoyue123/foamagent

Then, inside the container, call:
python foambench_main.py \
--output ./output \
--prompt_path ./user_requirement.txt \
--custom_mesh_path ./my_mesh.msh

From /home/openfoam/Foam-Agent in the container:
# Basic run
python foambench_main.py \
--output ./output \
--prompt_path ./user_requirement.txt
# With a custom mesh (if provided)
python foambench_main.py \
--output ./output \
--prompt_path ./user_requirement.txt \
--custom_mesh_path ./my_mesh.msh

To restart and reuse the same container later:
docker start -i foamagent

If you prefer to build the Docker image yourself from this repository:
git clone https://github.com/csml-rpi/Foam-Agent.git
cd Foam-Agent
docker build -f docker/Dockerfile -t foamagent:latest .

Run the locally built image:
docker run -it \
-e OPENAI_API_KEY=your-key-here \
-p 7860:7860 \
--name foamagent \
foamagent:latest

Foam-Agent selects the LLM backend and model from src/config.py. Inside the container, this file is at /home/openfoam/Foam-Agent/src/config.py.
Recommended (no API key): ChatGPT/Codex OAuth sign-in
If you have a ChatGPT/Codex subscription, you can run Foam-Agent via the Codex OAuth backend instead of a usage-based API key.
This avoids setting OPENAI_API_KEY and reduces the risk of accidentally leaking API keys in shell history, screenshots, or logs.
In plain words:
- OAuth sign-in = you sign in to your ChatGPT/Codex account once on your host machine, and a local token file is created.
- Foam-Agent can then reuse that token file (via a read-only mount) to call the model.
Step-by-step (host → Docker):
- Install the Codex CLI on your host machine
  - Follow the official Codex CLI installation instructions for your OS.
- Login with ChatGPT (creates a local OAuth cache)
codex login

Choose “Sign in with ChatGPT” when prompted.
- Verify the token cache file exists (typical location)
ls -lah ~/.codex/auth.json

If you do not see the file, your Codex CLI may be using an OS keychain instead of a file cache.
In that case, configure the Codex CLI to use file-based storage so that ~/.codex/auth.json is created.
- Run the Foam-Agent Docker container and mount the token file read-only
Docker example (recommended):
docker run -it \
-e FOAMAGENT_MODEL_PROVIDER=openai-codex \
-e FOAMAGENT_MODEL_VERSION=gpt-5.3-codex \
-v ~/.codex/auth.json:/root/.codex/auth.json:ro \
-p 7860:7860 \
--name foamagent \
leoyue123/foamagent

If your Codex OAuth cache lives elsewhere, mount that path instead:

- $CODEX_HOME/auth.json
- ~/.codex/auth.json
- ~/.clawdbot/agents/main/agent/auth-profiles.json
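The first-match-wins lookup over these paths can be sketched in a few lines; this is an illustration of the search order, not the code Foam-Agent actually ships:

```python
# Sketch of the first-match-wins OAuth cache lookup described above;
# not the actual Foam-Agent implementation.
import os
from pathlib import Path
from typing import Optional

def find_codex_auth() -> Optional[Path]:
    candidates = []
    codex_home = os.environ.get("CODEX_HOME")
    if codex_home:
        candidates.append(Path(codex_home) / "auth.json")
    candidates.append(Path.home() / ".codex" / "auth.json")
    candidates.append(Path.home() / ".clawdbot" / "agents" / "main" / "agent" / "auth-profiles.json")
    for path in candidates:
        if path.is_file():
            return path  # first match wins
    return None

print(find_codex_auth())  # None if no cache exists on this machine
```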
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Config:
    ...
    # ["openai", "openai-codex", "ollama", "bedrock", "anthropic"]
    model_provider: str = "openai"
    model_version: str = "gpt-5-mini"
    temperature: float = 1.0

You can also override these values via environment variables (recommended for Docker / CI):

- FOAMAGENT_MODEL_PROVIDER (e.g., openai, openai-codex, anthropic, ollama, bedrock)
- FOAMAGENT_MODEL_VERSION (e.g., gpt-5-mini, gpt-5.3-codex, claude-3-5-sonnet-latest, ...)
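A minimal sketch of how such environment-variable overrides typically fall back to the config defaults (`resolve` is a hypothetical helper; only the variable names come from the list above):

```python
# Minimal sketch of env-var overrides falling back to config defaults.
# The variable names come from the text above; resolve() is a
# hypothetical helper, not Foam-Agent code.
import os

def resolve(env_name: str, default: str) -> str:
    return os.environ.get(env_name, default)

os.environ["FOAMAGENT_MODEL_PROVIDER"] = "openai-codex"   # e.g. set via docker -e
provider = resolve("FOAMAGENT_MODEL_PROVIDER", "openai")
version = resolve("FOAMAGENT_MODEL_VERSION", "gpt-5-mini")
print(provider, version)
```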
Example:
docker run -it \
-e FOAMAGENT_MODEL_PROVIDER=openai \
-e FOAMAGENT_MODEL_VERSION=gpt-5-mini \
-e OPENAI_API_KEY=your-key-here \
-p 7860:7860 \
--name foamagent \
foamagent:latest

- model_provider: "openai"
  - Set environment variable: OPENAI_API_KEY=sk-...
- model_provider: "anthropic"
  - Set environment variable: ANTHROPIC_API_KEY=...
  - model_version: e.g. "claude-3-5-sonnet-latest"
If you already signed in with ChatGPT for Codex (same flow as the Codex CLI / IDE extension), Foam-Agent can load that token from your local Codex auth cache and run inference via the Codex subscription backend.
- model_provider: "openai-codex"
- model_version: a Codex model you have access to, e.g. "gpt-5.3-codex"

Foam-Agent looks for a Codex/ChatGPT OAuth cache (first match wins):

- $CODEX_HOME/auth.json
- ~/.codex/auth.json
- ~/.clawdbot/agents/main/agent/auth-profiles.json (if you already logged in via Clawdbot OpenAI-Codex OAuth)
How to create ~/.codex/auth.json:
- Install and run the Codex CLI (or sign in via the Codex IDE extension).
- Run
codex loginand choose Sign in with ChatGPT. - Ensure Codex stores credentials in a file (some setups use OS keyring by default). If needed, configure
Codex to use file-based storage so that
~/.codex/auth.jsonexists.
Security note: ~/.codex/auth.json contains access tokens. Treat it like a password.
To change the LLM configuration inside Docker:
docker exec -it foamagent bash
cd /home/openfoam/Foam-Agent
nano src/config.py

- OpenAI (via OPENAI_API_KEY):
  - model_provider: "openai"
  - model_version: e.g. "gpt-5-mini" or another supported OpenAI-compatible model name
- Anthropic (via ANTHROPIC_API_KEY):
  - model_provider: "anthropic"
  - model_version: e.g. "claude-3-5-sonnet-latest"
- AWS Bedrock:
  - model_provider: "bedrock"
  - model_version: your Bedrock application ARN
- Ollama (local models):
  - model_provider: "ollama"
  - model_version: the local model name, e.g. "qwen2.5:32b-instruct"
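The provider/credential pairings above can be collapsed into a lookup table. The table contents summarize the sections above, while `required_env` itself is a hypothetical helper, not Foam-Agent code:

```python
# Sketch of provider -> required credential, summarizing the provider
# sections above; required_env() is a hypothetical helper.
REQUIRED_ENV = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "openai-codex": None,   # uses the mounted OAuth cache instead
    "ollama": None,         # local models need no API key
    "bedrock": None,        # uses AWS credentials from the environment
}

def required_env(provider: str):
    if provider not in REQUIRED_ENV:
        raise ValueError(f"unknown provider: {provider}")
    return REQUIRED_ENV[provider]

print(required_env("anthropic"))   # ANTHROPIC_API_KEY
print(required_env("ollama"))      # None
```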
Foam-Agent exposes its capabilities as an MCP server. The recommended workflow is:
- Run Foam-Agent in Docker
- Start the MCP server inside the container
- Point Claude Code or Cursor to that server
Make sure the container is running:
docker start -i foamagent

In a separate terminal, attach and start the MCP server:
docker exec -it foamagent bash
cd /home/openfoam/Foam-Agent
# HTTP mode (if your MCP client supports HTTP transport)
python -m src.mcp.fastmcp_server --transport http --host 0.0.0.0 --port 7860

If you are running Docker on a remote server, make sure port 7860 is reachable from your local machine (for example, by using SSH port forwarding or a proper port mapping such as -p 7860:7860 when starting the container).
In your MCP configuration file, use a simple HTTP-based entry like:
{
"mcpServers": {
"foam-agent": {
"url": "http://localhost:7860"
}
}
}

Adjust localhost and the port if your server is running on a different host or port.
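If you prefer to generate this entry programmatically, a minimal sketch (where to write the resulting file depends on your MCP client):

```python
# Sketch: build the MCP client entry shown above programmatically.
# The destination file depends on your MCP client; this only builds
# and prints the JSON.
import json

config = {"mcpServers": {"foam-agent": {"url": "http://localhost:7860"}}}
text = json.dumps(config, indent=2)
print(text)
```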
If your MCP client prefers stdio instead of HTTP, you can still use the original docker exec style configuration.
Refer to the Foam-Agent repository documentation for the stdio example.
- Open Cursor settings (Cmd/Ctrl + ,)
- Search for "MCP" or navigate to Settings → Features → MCP
- Click "Edit MCP Settings" or open the MCP configuration file
- Paste the JSON configuration from section 3.2
- Save and restart Cursor
Once configured, you can call Foam-Agent tools directly from Claude Code or Cursor to plan cases, write input files, run simulations, and visualize results through natural-language commands.
If you prefer not to use Docker, you can install Foam-Agent and its dependencies manually.
git clone https://github.com/csml-rpi/Foam-Agent.git
cd Foam-Agent
conda env create -n FoamAgent -f environment.yml
conda activate FoamAgent

Foam-Agent requires OpenFOAM v10. Please follow the official installation guide for your operating system:
- Official installation: https://openfoam.org/version/10/
Verify your installation with:
echo $WM_PROJECT_DIR

The result should be something like:
/opt/openfoam10
WM_PROJECT_DIR is an environment variable that comes with your OpenFOAM installation, indicating the location of OpenFOAM on your computer.
From the repository root:
python foambench_main.py --output ./output --prompt_path ./user_requirement.txt

You can also specify a custom mesh:
python foambench_main.py \
--output ./output \
--prompt_path ./user_req_tandem_wing.txt \
--custom_mesh_path ./tandem_wing.msh

- Default configuration (including LLM provider and model) is in src/config.py.
- You must set the OPENAI_API_KEY environment variable if using OpenAI/Bedrock-style models.
- For AWS Bedrock or other cloud providers, ensure their credentials are configured in your environment.
- OpenFOAM environment not found: Ensure you have sourced the OpenFOAM bashrc and restarted your terminal (for manual installations), or use the provided Docker image where this is pre-configured.
- Database files missing: Database files are included in the repository (and in the Docker image). If they are missing, ensure you have cloned the complete repository including the database/ directory.
- Missing dependencies: Recreate the environment with conda env update -n FoamAgent -f environment.yml --prune, or conda env remove -n FoamAgent && conda env create -n FoamAgent -f environment.yml.
- API key errors: Ensure OPENAI_API_KEY is set in your environment or in the MCP configuration.
- MCP connection errors: Verify that the Docker container is running, the MCP command in your configuration matches your setup, and that all dependencies are installed.
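Several of these checks can be bundled into a quick preflight script. This is an illustrative sketch, not a script shipped with Foam-Agent:

```python
# Illustrative preflight checks for the troubleshooting list above;
# not a script shipped with Foam-Agent.
import os
from pathlib import Path

def preflight(repo_root: str = ".") -> list:
    problems = []
    if not os.environ.get("WM_PROJECT_DIR"):
        problems.append("OpenFOAM not sourced (WM_PROJECT_DIR is unset)")
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set")
    if not (Path(repo_root) / "database").is_dir():
        problems.append("database/ directory missing (incomplete clone?)")
    return problems

for problem in preflight():
    print("WARN:", problem)
```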
If you use Foam-Agent in your research, please cite our paper:
@article{yue2026foam,
title = {Automating Computational Fluid Dynamics with LLM-based Multi-Agent Systems},
author = {Yue, Ling and Somasekharan, Nithin and Zhang, Tingwen and Cao, Yadi and Chen, Zhangze and Di, Shimin},
year = {2026},
month = feb,
howpublished = {Research Square},
note = {Preprint (Version 1)},
doi = {10.21203/rs.3.rs-8629022/v1},
url = {https://doi.org/10.21203/rs.3.rs-8629022/v1}
}
@article{somasekharan2026cfdllmbench,
title={CFDLLMBench: A Benchmark Suite for Evaluating Large Language Models in Computational Fluid Dynamics},
author={Somasekharan, Nithin and Yue, Ling and Cao, Yadi and Li, Weichao and Emami, Patrick and Bhargav, Pochinapeddi Sai and Acharya, Anurag and Xie, Xingyu and Pan, Shaowu},
journal={Journal of Data-centric Machine Learning Research},
year={2026},
url={https://openreview.net/forum?id=kTcH1MnkjY},
}
@article{yue2025foamagent,
title={Foam-Agent 2.0: An End-to-End Composable Multi-Agent Framework for Automating CFD Simulation in OpenFOAM},
author={Yue, Ling and Somasekharan, Nithin and Zhang, Tingwen and Cao, Yadi and Pan, Shaowu},
journal={arXiv preprint arXiv:2509.18178},
year={2025}
}
@article{yue2025foam,
title={Foam-Agent: Towards Automated Intelligent CFD Workflows},
author={Yue, Ling and Somasekharan, Nithin and Cao, Yadi and Pan, Shaowu},
journal={arXiv preprint arXiv:2505.04997},
year={2025}
}
