@@ -155,8 +155,8 @@ They are organized below. Just click the drop-downs!

### Large-scale and accelerated simulation

- * GPU compatible on NVIDIA (P/V/A/H100, GH200, etc.) and AMD (MI200+) hardware
- * Ideal weak scaling to 100% of the largest GPU supercomputers
+ * GPU compatible on NVIDIA ([P/V/A/H]100, GH200, etc.) and AMD (MI[1/2/3]00+) GPU and APU hardware
+ * Ideal weak scaling to 100% of the largest GPU and superchip supercomputers
  * \>10K NVIDIA GPUs on [OLCF Summit](https://www.olcf.ornl.gov/summit/) (NV V100-based)
  * \>66K AMD GPUs on the first exascale computer, [OLCF Frontier](https://www.olcf.ornl.gov/frontier/) (AMD MI250X-based)
* Near compute roofline behavior
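
The hunk above advertises GPU and APU portability across NVIDIA and AMD hardware. As a purely illustrative aside (not part of this diff, and not taken from the MFC sources): one common directive-based route to that kind of portability in Fortran is OpenACC, sketched below with invented names and a toy update loop.

```fortran
! Hedged sketch only: a generic OpenACC-annotated kernel, not MFC code.
! With an OpenACC-aware compiler (e.g., nvfortran -acc, or CCE's ftn -hacc)
! the loop body is offloaded to an NVIDIA or AMD GPU.
program acc_demo
  implicit none
  integer, parameter :: n = 10000   ! toy problem size (assumption)
  real(8) :: q(n), rhs(n), dt
  integer :: i

  q = 1.0d0; rhs = 0.5d0; dt = 1.0d-3

  !$acc parallel loop copy(q) copyin(rhs)
  do i = 1, n
     q(i) = q(i) + dt*rhs(i)        ! explicit update, one GPU thread per cell
  end do

  print *, 'q(1) =', q(1)
end program acc_demo
```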
@@ -167,8 +167,8 @@ They are organized below. Just click the drop-downs!

* [Fypp](https://fypp.readthedocs.io/en/stable/fypp.html) metaprogramming for code readability, performance, and portability
* Continuous Integration (CI)
-  * \>250 Regression tests with each PR.
-  * Performed with GNU (GCC), Intel, Cray (CCE), and NVIDIA (NVHPC) compilers on NVIDIA and AMD GPUs.
+  * \>300 Regression tests with each PR.
+  * Performed with GNU (GCC), Intel (oneAPI), Cray (CCE), and NVIDIA (NVHPC) compilers on NVIDIA and AMD GPUs.
  * Line-level test coverage reports via [Codecov](https://app.codecov.io/gh/MFlowCode/MFC) and `gcov`
  * Benchmarking to avoid performance regressions and identify speed-ups
* Continuous Deployment (CD) of [website](https://mflowcode.github.io) and [API documentation](https://mflowcode.github.io/documentation/index.html)
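
The Fypp bullet in the hunk above refers to metaprogramming the Fortran sources before compilation. The sketch below is a hedged illustration of that idea only; the ASSERT macro and demo program are invented for this example and are not taken from the MFC sources.

```fortran
#! Hedged sketch only: a Fypp macro that expands to inline Fortran at
#! preprocessing time, keeping the numerical code free of boilerplate.
#:def ASSERT(cond)
if (.not. (${cond}$)) then
   print *, "Assertion failed: ${cond}$"
   error stop
end if
#:enddef

program fypp_demo
  implicit none
  real(8) :: dt
  dt = 1.0d-3
  ! The next line expands to the if-block defined above
  @:ASSERT(dt > 0.0d0)
  print *, 'dt =', dt
end program fypp_demo
```

Running `fypp` on such a file emits plain Fortran, which any of the CI compilers listed above can then build.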
@@ -214,11 +214,11 @@ MFC is under the MIT license (see [LICENSE](LICENSE) for full text).

## Acknowledgements

- Multiple federal sponsors have supported MFC development, including the US Department of Defense (DOD), National Institutes of Health (NIH), Department of Energy (DOE), and National Science Foundation (NSF).
+ Federal sponsors have supported MFC development, including the US Department of Defense (DOD), the National Institutes of Health (NIH), the Department of Energy (DOE), and the National Science Foundation (NSF).

MFC computations have used many supercomputing systems. A partial list is below
- * OLCF Frontier and Summit, and testbed systems Wombat, Crusher, and Spock (allocation CFD154, PI Bryngelson)
- * LLNL Lassen and El Capitan testbed system, Tioga
- * PSC Bridges(1/2), NCSA Delta, SDSC Comet and Expanse, Purdue Anvil, TACC Stampede(1-3), and TAMU ACES via ACCESS-CI (allocations TG-CTS120005 (PI Colonius) and TG-PHY210084 (PI Bryngelson))
- * DOD systems Onyx, Carpenter, and Nautilus via the DOD HPCMP program
+ * OLCF Frontier and Summit, and testbeds Wombat, Crusher, and Spock (allocation CFD154, PI Bryngelson)
+ * LLNL Tuolumne and Lassen, El Capitan early access system Tioga
+ * PSC Bridges(1/2), NCSA Delta, SDSC Comet and Expanse, Purdue Anvil, TACC Stampede(1-3), and TAMU ACES via ACCESS-CI allocations from Bryngelson, Colonius, Rodriguez, and more.
+ * DOD systems Onyx, Carpenter, Nautilus, and Narwhal via the DOD HPCMP program
* Sandia National Labs systems Doom and Attaway and testbed systems Weaver and Vortex