
Commit 6ee36b7

docs: Update README to match latest upstream version
Synced the README with the documentation-updates branch, which contains the most current version from upstream/master.

1 parent e34711d · commit 6ee36b7

File tree: 1 file changed (+123, -54)

README.md

Lines changed: 123 additions & 54 deletions
@@ -5,17 +5,17 @@
 </p>
 
 <p align="center">
-<a href="http://dx.doi.org/10.1016/j.cpc.2020.107396" target="_blank">
-<img src="https://zenodo.org/badge/doi/10.1016/j.cpc.2020.107396.svg" />
-</a>
 <a href="https://github.com/MFlowCode/MFC/actions">
-<img src="https://github.com/MFlowCode/MFC/actions/workflows/test.yml/badge.svg" />
+<img src="https://img.shields.io/github/actions/workflow/status/mflowcode/mfc/test.yml?style=flat&label=Tests&color=slateblue%09"/>
+</a>
+<a href="https://github.com/MFlowCode/MFC/blob/master/.github/CONTRIBUTING.md">
+<img src="https://img.shields.io/github/contributors-anon/mflowcode/mfc?style=flat&color=darkslategrey%09" />
 </a>
 <a href="https://join.slack.com/t/mflowcode/shared_invite/zt-y75wibvk-g~zztjknjYkK1hFgCuJxVw">
 <img src="https://img.shields.io/badge/slack-MFC-purple.svg?logo=slack" />
 </a>
 <a href="https://lbesson.mit-license.org/">
-<img src="https://img.shields.io/badge/License-MIT-blue.svg" />
+<img src="https://img.shields.io/badge/License-MIT-crimson.svg" />
 </a>
 <a href="https://codecov.io/github/MFlowCode/MFC" target="_blank">
 <img src="https://codecov.io/github/MFlowCode/MFC/graph/badge.svg?token=8SY043QND4">
@@ -25,12 +25,76 @@
 </a>
 </p>
 
-Welcome to the home of MFC!
-MFC simulates compressible multi-component and multi-phase flows, [amongst other things](#what-else-can-this-thing-do).
-MFC is written in Fortran and uses metaprogramming to keep the code short (about 20K lines).
+<p align="center">
+<a href="https://mflowcode.github.io/">
+<img src="https://img.shields.io/badge/docs-mflowcode.github.io-blue" />
+</a>
+<a href="https://github.com/MFlowCode/MFC/discussions">
+<img src="https://img.shields.io/badge/discussions-join-brightgreen" />
+</a>
+<a href="https://github.com/codespaces/new?hide_repo_select=true&ref=master&repo=MFlowCode%2FMFC">
+<img src="https://img.shields.io/badge/Codespaces-Open%20in%201%20click-2ea44f?logo=github" />
+</a>
+<a href="https://github.com/MFlowCode/MFC/releases">
+<img src="https://img.shields.io/github/v/release/MFlowCode/MFC?display_name=release&sort=semver" />
+</a>
+</p>
+
+<p align="center">
+<a href="https://star-history.com/#MFlowCode/MFC&Date">
+<img src="https://api.star-history.com/svg?repos=MFlowCode/MFC&type=Date" alt="Star History Chart" width="600"/>
+</a>
+</p>
+
+> **If MFC helps your work, please ⭐ the repo and cite it!**
+
+### Who uses MFC
+
+MFC runs at exascale on the world's fastest supercomputers:
+- **OLCF Frontier** (>33K AMD MI250X GPUs)
+- **LLNL El Capitan** (>43K AMD MI300A APUs)
+- **LLNL Tuolumne**, **CSCS Alps**, and many others
+
+### Try MFC
+
+| Path | Command |
+| --- | --- |
+| **Codespaces** (fastest) | Click the "Codespaces" badge above to launch in 1 click |
+| **Local build** | `./mfc.sh build -j $(nproc) && ./mfc.sh test -j $(nproc)` |
+
+**Welcome!**
+MFC simulates compressible multi-phase flows, [among other things](#what-else-can-this-thing-do).
+It uses metaprogramming to stay short (about 20K lines) and portable.
+MFC conducted the largest known CFD simulation at <a href="https://arxiv.org/abs/2505.07392" target="_blank">200 trillion grid points</a> and 1 quadrillion degrees of freedom (as of September 2025).
+MFC is a 2025 Gordon Bell Prize Finalist.
+
+<p align="center">
+<a href="https://doi.org/10.48550/arXiv.2503.07953" target="_blank">
+<img src="https://img.shields.io/badge/DOI-10.48550/arXiv.2503.07953-thistle.svg"/>
+</a>
+<a href="https://doi.org/10.5281/zenodo.17049757" target="_blank">
+<img src="https://zenodo.org/badge/DOI/10.5281/zenodo.17049757.svg"/>
+</a>
+<a href="https://github.com/MFlowCode/MFC/stargazers" target="_blank">
+<img src="https://img.shields.io/github/stars/MFlowCode/MFC?style=flat&color=maroon"/>
+</a>
+
+<br/>
+Is MFC useful for you? Consider citing it or giving it a star!
+</p>
+
+```bibtex
+@article{Wilfong_2025,
+  author = {Wilfong, Benjamin and {Le Berre}, Henry and Radhakrishnan, Anand and Gupta, Ansh and Vaca-Revelo, Diego and Adam, Dimitrios and Yu, Haocheng and Lee, Hyeoksu and Chreim, Jose Rodolfo and {Carcana Barbosa}, Mirelys and Zhang, Yanjun and Cisneros-Garibay, Esteban and Gnanaskandan, Aswin and {Rodriguez Jr.}, Mauro and Budiardja, Reuben D. and Abbott, Stephen and Colonius, Tim and Bryngelson, Spencer H.},
+  title = {{MFC 5.0: A}n exascale many-physics flow solver},
+  journal = {arXiv preprint arXiv:2503.07953},
+  year = {2025},
+  doi = {10.48550/arXiv.2503.07953}
+}
+```
 
 MFC is used on the latest leadership-class supercomputers.
-It scales <b>ideally to exascale</b>; [tens of thousands of GPUs on NVIDIA- and AMD-GPU machines](#is-this-really-exascale) on Oak Ridge Summit and Frontier.
+It scales <b>ideally to exascale</b>; [tens of thousands of GPUs on NVIDIA- and AMD-GPU machines](#is-this-really-exascale) on Oak Ridge Frontier, LLNL El Capitan, and CSCS Alps, among others.
 MFC is a SPEChpc benchmark candidate, part of the JSC JUPITER Early Access Program, and used the OLCF Frontier and LLNL El Capitan early access systems.
 
 Get in touch with <a href="mailto:[email protected]">Spencer</a> if you have questions!
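
The "Local build" row in the Try MFC table above condenses the whole local path into one line. A minimal sketch of the same steps, assuming the dependencies listed under Getting started (further down in this diff) are already installed:

```shell
# Local-build path from the "Try MFC" table; assumes the Getting started
# dependencies (compilers, CMake, MPI, etc.) are already installed.
git clone https://github.com/MFlowCode/MFC
cd MFC
./mfc.sh build -j $(nproc)   # build with all available cores
./mfc.sh test -j $(nproc)    # run the regression suite
```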
@@ -53,7 +117,7 @@ This one simulates high-Mach flow over an airfoil:
 <img src="docs/res/airfoil.png" alt="Airfoil Example" width="700"/><br/>
 </p>
 
-And here is a high amplitude acoustic wave reflecting and emerging through a circular orifice:
+And here is a high-amplitude acoustic wave reflecting and emerging through a circular orifice:
 
 <p align="center">
 <img src="docs/res/orifice.png" alt="Orifice Example" width="700"/><br/>
@@ -62,15 +126,23 @@ And here is a high amplitude acoustic wave reflecting and emerging through a cir
 
 ## Getting started
 
-You can navigate [to this webpage](https://mflowcode.github.io/documentation/md_getting-started.html) to get started using MFC!
+For a _very_ quick start, open a GitHub Codespace to load a pre-configured Docker container and familiarize yourself with MFC commands.
+Click <kbd> <> Code</kbd> (green button at top right) → <kbd>Codespaces</kbd> (right tab) → <kbd>+</kbd> (create a codespace).
+
+> **Note:** Codespaces is a free service with a monthly quota of compute time and storage usage.
+> It is recommended for testing commands, troubleshooting, and running simple case files without installing dependencies or building MFC on your device.
+> Don't conduct any critical work here!
+> To learn more, please see [how Docker & Containers work](https://mflowcode.github.io/documentation/md_docker.html).
+
+You can navigate [to this webpage](https://mflowcode.github.io/documentation/md_getting-started.html) to get started using MFC on your local machine, cluster, or supercomputer!
 It's rather straightforward.
-We'll give a brief intro. here for MacOS.
+We'll give a brief introduction for macOS below.
 Using [brew](https://brew.sh), install MFC's dependencies:
 ```shell
-brew install coreutils python cmake fftw hdf5 gcc boost open-mpi
+brew install coreutils python cmake fftw hdf5 gcc boost open-mpi lapack
 ```
 You're now ready to build and test MFC!
-Put it to a convenient directory via
+Put it in a local directory via
 ```shell
 git clone https://github.com/MFlowCode/MFC
 cd MFC
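
Once the clone above is built and tested (see the sketch earlier in this diff), a first case can be run. A minimal sketch; the `./mfc.sh run` invocation and the `case.py` filename are assumptions for illustration, while the `silo_hdf5` output directory is the one referenced in the next hunk:

```shell
# Run the 3D shock-droplet example (the `run` subcommand and case filename
# are assumptions; see the Getting started documentation linked above for
# the exact interface).
./mfc.sh run examples/3d_shockdroplet/case.py
# Post-processed output is expected under examples/3d_shockdroplet/silo_hdf5,
# which can be opened in ParaView.
```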
@@ -100,17 +172,14 @@ You can visualize the output data in `examples/3d_shockdroplet/silo_hdf5` via Pa
 ## Is this _really_ exascale?
 
 [OLCF Frontier](https://www.olcf.ornl.gov/frontier/) is the first exascale supercomputer.
-The weak scaling of MFC on this machine shows near-ideal utilization.
+The weak scaling of MFC on this machine shows near-ideal utilization.
+We also scale ideally to >98% of LLNL El Capitan.
 
 <p align="center">
 <img src="docs/res/scaling.png" alt="Scaling" width="400"/>
 </p>
 
-
-## What else can this thing do
-
-MFC has many features.
-They are organized below.
+## What else can this thing do?
 
 ### Physics
 
@@ -137,13 +206,14 @@ They are organized below.
 * Acoustic wave generation (one- and two-way sound sources)
 * Magnetohydrodynamics (MHD)
 * Relativistic Magnetohydrodynamics (RMHD)
-</details>
 
 ### Numerics
 
 * Shock and interface capturing schemes
 * First-order upwinding
-* WENO reconstructions of order 3, 5, and 7
+* MUSCL (order 2)
+* Slope limiters: minmod, monotonized central, Van Albada, Van Leer, superbee
+* WENO reconstructions (orders 3, 5, and 7)
 * WENO variants: WENO-JS, WENO-M, WENO-Z, TENO
 * Monotonicity-preserving reconstructions
 * Reliable handling of large density ratios
@@ -156,15 +226,16 @@ They are organized below.
 * Runge-Kutta orders 1-3 (SSP TVD), adaptive time stepping
 * RK4-5 operator splitting for Euler-Lagrange modeling
 * Interface sharpening (THINC-like)
-
+* Information geometric regularization (IGR)
+* Shock capturing without WENO and Riemann solvers
 
 ### Large-scale and accelerated simulation
 
 * GPU compatible on NVIDIA ([P/V/A/H]100, GH200, etc.) and AMD (MI[1/2/3]00+) GPU and APU hardware
 * Ideal weak scaling to 100% of the largest GPU and superchip supercomputers
-* \>36K AMD APUs (MI300A) on [LLNL El Capitan](https://hpc.llnl.gov/hardware/compute-platforms/el-capitan)
+* \>43K AMD APUs (MI300A) on [LLNL El Capitan](https://hpc.llnl.gov/hardware/compute-platforms/el-capitan)
 * \>3K AMD APUs (MI300A) on [LLNL Tuolumne](https://hpc.llnl.gov/hardware/compute-platforms/tuolumne)
-* \>33K AMD GPUs (MI250X) on the first exascale computer, [OLCF Frontier](https://www.olcf.ornl.gov/frontier/)
+* \>33K AMD GPUs (MI250X) on [OLCF Frontier](https://www.olcf.ornl.gov/frontier/)
 * \>10K NVIDIA GPUs (V100) on [OLCF Summit](https://www.olcf.ornl.gov/summit/)
 * Near compute roofline behavior
 * RDMA (remote direct memory access; GPU-GPU direct communication) via GPU-aware MPI on NVIDIA (CUDA-aware MPI) and AMD GPU systems
@@ -174,35 +245,28 @@ They are organized below.
 
 * [Fypp](https://fypp.readthedocs.io/en/stable/fypp.html) metaprogramming for code readability, performance, and portability
 * Continuous Integration (CI)
-* \>300 Regression tests with each PR.
+* \>500 regression tests with each PR.
 * Performed with GNU (GCC), Intel (oneAPI), Cray (CCE), and NVIDIA (NVHPC) compilers on NVIDIA and AMD GPUs.
 * Line-level test coverage reports via [Codecov](https://app.codecov.io/gh/MFlowCode/MFC) and `gcov`
 * Benchmarking to avoid performance regressions and identify speed-ups
 * Continuous Deployment (CD) of [website](https://mflowcode.github.io) and [API documentation](https://mflowcode.github.io/documentation/index.html)
 
 ## Citation
 
-If you use MFC, consider citing it as:
-
-<p align="center">
-<a href="https://doi.org/10.1016/j.cpc.2020.107396">
-S. H. Bryngelson, K. Schmidmayer, V. Coralic, K. Maeda, J. Meng, T. Colonius (2021) Computer Physics Communications <b>266</b>, 107396
-</a>
-</p>
+If you use MFC, consider citing it as below.
+Ref. 1 includes all modern MFC features, including GPU acceleration and many new physics features.
+If referencing MFC's (GPU) performance, consider citing refs. 1 and 2, which describe the solver and its design.
+The original open-source release of MFC is ref. 3, which should be cited for provenance as appropriate.
 
 ```bibtex
-@article{Bryngelson_2021,
-  title = {{MFC: A}n open-source high-order multi-component, multi-phase, and multi-scale compressible flow solver},
-  author = {S. H. Bryngelson and K. Schmidmayer and V. Coralic and J. C. Meng and K. Maeda and T. Colonius},
-  journal = {Computer Physics Communications},
-  year = {2021},
-  volume = {266},
-  pages = {107396},
-  doi = {10.1016/j.cpc.2020.107396}
+@article{Wilfong_2025,
+  author = {Wilfong, Benjamin and {Le Berre}, Henry and Radhakrishnan, Anand and Gupta, Ansh and Vaca-Revelo, Diego and Adam, Dimitrios and Yu, Haocheng and Lee, Hyeoksu and Chreim, Jose Rodolfo and {Carcana Barbosa}, Mirelys and Zhang, Yanjun and Cisneros-Garibay, Esteban and Gnanaskandan, Aswin and {Rodriguez Jr.}, Mauro and Budiardja, Reuben D. and Abbott, Stephen and Colonius, Tim and Bryngelson, Spencer H.},
+  title = {{MFC 5.0: A}n exascale many-physics flow solver},
+  journal = {arXiv preprint arXiv:2503.07953},
+  year = {2025},
+  doi = {10.48550/arXiv.2503.07953}
 }
-```
 
-```bibtex
 @article{Radhakrishnan_2024,
   title = {Method for portable, scalable, and performant {GPU}-accelerated simulation of multiphase compressible flow},
   author = {A. Radhakrishnan and H. {Le Berre} and B. Wilfong and J.-S. Spratt and M. {Rodriguez Jr.} and T. Colonius and S. H. Bryngelson},
@@ -212,6 +276,16 @@ If you use MFC, consider citing it as:
   pages = {109238},
   doi = {10.1016/j.cpc.2024.109238}
 }
+
+@article{Bryngelson_2021,
+  title = {{MFC: A}n open-source high-order multi-component, multi-phase, and multi-scale compressible flow solver},
+  author = {S. H. Bryngelson and K. Schmidmayer and V. Coralic and J. C. Meng and K. Maeda and T. Colonius},
+  journal = {Computer Physics Communications},
+  year = {2021},
+  volume = {266},
+  pages = {107396},
+  doi = {10.1016/j.cpc.2020.107396}
+}
 ```
 
 ## License
@@ -221,16 +295,11 @@ MFC is under the MIT license (see [LICENSE](LICENSE) for full text).
 
 ## Acknowledgements
 
-Federal sponsors have supported MFC development, including the US Department of Defense (DOD), the National Institutes of Health (NIH), the Department of Energy (DOE), and the National Science Foundation (NSF).
+Federal sponsors have supported MFC development, including the US Department of Defense (DOD), the National Institutes of Health (NIH), the Department of Energy (DOE) and National Nuclear Security Administration (NNSA), and the National Science Foundation (NSF).
 
 MFC computations have used many supercomputing systems. A partial list is below:
-* OLCF Frontier and Summit, and testbeds Wombat, Crusher, and Spock (allocation CFD154, PI Bryngelson)
-* LLNL Tuolumne and Lassen, El Capitan early access system Tioga
-* PSC Bridges(1/2), NCSA Delta, SDSC Comet and Expanse, Purdue Anvil, TACC Stampede(1-3), and TAMU ACES via ACCESS-CI allocations from Bryngelson, Colonius, Rodriguez, and more.
-* DOD systems Onyx, Carpenter, Nautilus, and Narwhal via the DOD HPCMP program
-* Sandia National Labs systems Doom and Attaway and testbed systems Weaver and Vortex
-
-
-## Contributors
-
-[![Contributors](https://contributors-img.web.app/image?repo=mflowcode/mfc)](https://github.com/mflowcode/mfc/graphs/contributors)
+* OLCF Frontier and Summit, and testbeds Wombat, Crusher, and Spock (allocation CFD154, PI Bryngelson).
+* LLNL El Capitan, Tuolumne, and Lassen; El Capitan early access system Tioga.
+* NCSA Delta and DeltaAI, PSC Bridges(1/2), SDSC Comet and Expanse, Purdue Anvil, TACC Stampede(1-3), and TAMU ACES via ACCESS-CI allocations from Bryngelson, Colonius, Rodriguez, and more.
+* DOD systems Blueback, Onyx, Carpenter, Nautilus, and Narwhal via the DOD HPCMP program.
+* Sandia National Labs systems Doom and Attaway, and testbed systems Weaver and Vortex.
