<img src="https://api.star-history.com/svg?repos=MFlowCode/MFC&type=Date" alt="Star History Chart" width="600"/>
</a>
</p>

> **If MFC helps your work, please ⭐ the repo and cite it!**

### Who uses MFC
MFC runs at exascale on the world's fastest supercomputers:

- **OLCF Frontier** (>33K AMD MI250X GPUs)
- **LLNL El Capitan** (>43K AMD MI300A APUs)
- **LLNL Tuolumne**, **CSCS Alps**, and many others

### Try MFC
| Path | Command |
| --- | --- |
| **Codespaces** (fastest) | Click the "Codespaces" badge above to launch in 1 click |
| **Local build** | `./mfc.sh build -j $(nproc) && ./mfc.sh test -j $(nproc)` |
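> For a quick sanity check after a local build, you can run one of the bundled example cases with `./mfc.sh run`; the case path below is only illustrative, so substitute any `case.py` under `examples/`.

```shell
# Run a bundled example case on 2 MPI ranks once the build and tests have finished.
# NOTE: the path examples/2D_shockbubble/case.py is illustrative; any examples/<case>/case.py works.
./mfc.sh run examples/2D_shockbubble/case.py -n 2
```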
**Welcome!**
MFC simulates compressible multi-phase flows, [among other things](#what-else-can-this-thing-do).
It uses metaprogramming to keep the code short (about 20K lines) and portable.
MFC conducted the largest known CFD simulation, reaching <a href="https://arxiv.org/abs/2505.07392" target="_blank">200 trillion grid points</a> and 1 quadrillion degrees of freedom (as of September 2025).
Is MFC useful for you? Consider citing it or giving it a star!
</p>

```bibtex
@article{Wilfong_2025,
  author  = {Wilfong, Benjamin and {Le Berre}, Henry and Radhakrishnan, Anand and Gupta, Ansh and Vaca-Revelo, Diego and Adam, Dimitrios and Yu, Haocheng and Lee, Hyeoksu and Chreim, Jose Rodolfo and {Carcana Barbosa}, Mirelys and Zhang, Yanjun and Cisneros-Garibay, Esteban and Gnanaskandan, Aswin and {Rodriguez Jr.}, Mauro and Budiardja, Reuben D. and Abbott, Stephen and Colonius, Tim and Bryngelson, Spencer H.},
  title   = {{MFC 5.0: A}n exascale many-physics flow solver},
  journal = {arXiv preprint arXiv:2503.07953},
  year    = {2025},
  doi     = {10.48550/arXiv.2503.07953}
}
```
MFC is used on the latest leadership-class supercomputers.
It scales <b>ideally to exascale</b>, running on [tens of thousands of GPUs on NVIDIA- and AMD-GPU machines](#is-this-really-exascale), including Oak Ridge Frontier, LLNL El Capitan, and CSCS Alps.
MFC is a SPEChpc benchmark candidate, part of the JSC JUPITER Early Access Program, and used the OLCF Frontier and LLNL El Capitan early access systems.

Get in touch with <a href="mailto:[email protected]">Spencer</a> if you have questions!
## Getting started
For a _very_ quick start, open a GitHub Codespace to load a pre-configured Docker container and familiarize yourself with MFC commands.
Click <kbd><> Code</kbd> (green button at top right) → <kbd>Codespaces</kbd> (right tab) → <kbd>+</kbd> (create a codespace).

> **Note:** Codespaces is a free service with a monthly quota of compute time and storage usage.
> It is recommended for testing commands, troubleshooting, and running simple case files without installing dependencies or building MFC on your device.
> Don't conduct any critical work here!
> To learn more, please see [how Docker & Containers work](https://mflowcode.github.io/documentation/md_docker.html).

You can navigate [to this webpage](https://mflowcode.github.io/documentation/md_getting-started.html) to get started using MFC on your local machine, cluster, or supercomputer!
It's rather straightforward.
We'll give a brief introduction for macOS below.
Using [brew](https://brew.sh), install MFC's dependencies:
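A sketch of that step follows; the package list here is an assumption based on MFC's usual dependencies, so defer to the getting-started page if it differs.

```shell
# Install MFC's typical build dependencies with Homebrew.
# NOTE: this package list is an assumption; check the getting-started docs for the current set.
brew install coreutils python cmake fftw hdf5 gcc boost open-mpi
```

After that, `./mfc.sh build -j $(nproc)` and `./mfc.sh test -j $(nproc)` (as in the table above) finish the setup.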
S. H. Bryngelson, K. Schmidmayer, V. Coralic, K. Maeda, J. Meng, T. Colonius (2021) Computer Physics Communications <b>266</b>, 107396

If you use MFC, consider citing it as below.
Ref. 1 includes all modern MFC features, including GPU acceleration and many new physics features.
If referencing MFC's (GPU) performance, consider citing refs. 1 and 2, which describe the solver and its design.
The original open-source release of MFC is ref. 3, which should be cited for provenance as appropriate.
```bibtex
@article{Wilfong_2025,
  author  = {Wilfong, Benjamin and {Le Berre}, Henry and Radhakrishnan, Anand and Gupta, Ansh and Vaca-Revelo, Diego and Adam, Dimitrios and Yu, Haocheng and Lee, Hyeoksu and Chreim, Jose Rodolfo and {Carcana Barbosa}, Mirelys and Zhang, Yanjun and Cisneros-Garibay, Esteban and Gnanaskandan, Aswin and {Rodriguez Jr.}, Mauro and Budiardja, Reuben D. and Abbott, Stephen and Colonius, Tim and Bryngelson, Spencer H.},
  title   = {{MFC 5.0: A}n exascale many-physics flow solver},
  journal = {arXiv preprint arXiv:2503.07953},
  year    = {2025},
  doi     = {10.48550/arXiv.2503.07953}
}

@article{Radhakrishnan_2024,
  title   = {Method for portable, scalable, and performant {GPU}-accelerated simulation of multiphase compressible flow},
  author  = {A. Radhakrishnan and H. {Le Berre} and B. Wilfong and J.-S. Spratt and M. {Rodriguez Jr.} and T. Colonius and S. H. Bryngelson},
  journal = {Computer Physics Communications},
  year    = {2024},
  pages   = {109238},
  doi     = {10.1016/j.cpc.2024.109238}
}

@article{Bryngelson_2021,
  title   = {{MFC: A}n open-source high-order multi-component, multi-phase, and multi-scale compressible flow solver},
  author  = {S. H. Bryngelson and K. Schmidmayer and V. Coralic and J. C. Meng and K. Maeda and T. Colonius},
  journal = {Computer Physics Communications},
  year    = {2021},
  volume  = {266},
  pages   = {107396},
  doi     = {10.1016/j.cpc.2020.107396}
}
```
## License
MFC is under the MIT license (see [LICENSE](LICENSE) for full text).
## Acknowledgements
Federal sponsors have supported MFC development, including the US Department of Defense (DOD), the National Institutes of Health (NIH), the Department of Energy (DOE) and National Nuclear Security Administration (NNSA), and the National Science Foundation (NSF).

MFC computations have used many supercomputing systems. A partial list is below:
* OLCF Frontier and Summit, and testbeds Wombat, Crusher, and Spock (allocation CFD154, PI Bryngelson).
* LLNL El Capitan, Tuolumne, and Lassen; El Capitan early access system Tioga.
* NCSA Delta and DeltaAI, PSC Bridges(1/2), SDSC Comet and Expanse, Purdue Anvil, TACC Stampede(1-3), and TAMU ACES via ACCESS-CI allocations from Bryngelson, Colonius, Rodriguez, and more.
* DOD systems Blueback, Onyx, Carpenter, Nautilus, and Narwhal via the DOD HPCMP program.
* Sandia National Labs systems Doom and Attaway, and testbed systems Weaver and Vortex.