
Commit f032ad8

Add/fix MacBooks performance (MFlowCode#634)
Parent: bf37331

1 file changed (+6, -5 lines)


docs/documentation/expectedPerformance.md

Lines changed: 6 additions & 5 deletions
```diff
@@ -9,7 +9,8 @@ The following table outlines observed performance as nanoseconds per grid point
 We solve an example 3D, inviscid, 5-equation model problem with two advected species (8 PDEs) and 8M grid points (158-cubed uniform grid).
 The numerics are WENO5 finite volume reconstruction and HLLC approximate Riemann solver.
 This case is located in `examples/3D_performance_test`.
-You can run it via `./mfc.sh run -n <num_processors> -j $(nproc) ./examples/3D_performance_test/case.py -t pre_process simulation --case-optimization`, which will build an optimized version of the code for this case then execute it.
+You can run it via `./mfc.sh run -n <num_processors> -j $(nproc) ./examples/3D_performance_test/case.py -t pre_process simulation --case-optimization` for CPU cases right after building MFC, which will build an optimized version of the code for this case then execute it.
+For benchmarking GPU devices, you will likely want to use `-n <num_gpus>` where `<num_gpus>` should likely be `1`.
 If the above does not work on your machine, see the rest of this documentation for other ways to use the `./mfc.sh run` command.

 Results are for MFC v4.9.3 (July 2024 release), though numbers have not changed meaningfully since then.
```
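For reference, the command described in this hunk can be invoked concretely as below. This is a minimal sketch: the processor count of `8` and running from the MFC repository root are illustrative assumptions, not values fixed by the commit.

```bash
# CPU benchmark: build a case-optimized binary for the 3D performance test, then run it.
# "-n 8" is an illustrative processor count; substitute the core count you want to test.
./mfc.sh run -n 8 -j $(nproc) ./examples/3D_performance_test/case.py \
    -t pre_process simulation --case-optimization

# GPU benchmark: the commit suggests using a single device, i.e. "-n 1".
./mfc.sh run -n 1 -j $(nproc) ./examples/3D_performance_test/case.py \
    -t pre_process simulation --case-optimization
```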
```diff
@@ -18,13 +19,13 @@ All results are for the compiler that gave the best performance.
 Note:
 * CPU results may be performed on CPUs with more cores than reported in the table; we report results for the best performance given the full processor die by checking the performance for different core counts on that device. CPU results are the best performance we achieved using a single socket (or die).
 These are reported as (X/Y cores), where X is the used cores, and Y is the total on the die.
-* GPU results are for a single GPU device. For single-precision (SP) GPUs, we performed computation in double-precision via conversion in compiler/software; these numbers are _not_ for single-precision computation. AMD MI250X and MI300A GPUs have multiple graphics compute dies (GCDs) per device; we report results for one _GCD_*, though one can quickly estimate full device runtime by dividing the grind time number by the number of GCDs on the device (the MI250X has 2 GCDs). We gratefully acknowledge the permission of LLNL, HPE/Cray, and AMD for permission to release MI300A performance numbers.
+* GPU results are for a single GPU device. For single-precision (SP) GPUs, we performed computation in double-precision via conversion in compiler/software; these numbers are _not_ for single-precision computation. AMD MI250X and MI300A GPUs have multiple compute dies per socket; we report results for one _GCD_* for the MI250X and the entire APU (6 XCDs) for MI300A, though one can quickly estimate full device runtime by dividing the grind time number by the number of GCDs on the device (the MI250X has 2 GCDs). We gratefully acknowledge the permission of LLNL, HPE/Cray, and AMD for permission to release MI300A performance numbers.

 | Hardware | Details | Type | Usage | Grind Time [ns] | Compiler | Computer |
 | ---: | ----: | ----: | ----: | ----: | :--- | :--- |
 | NVIDIA GH200 | GPU only | APU | 1 GPU | 0.32 | NVHPC 24.1 | GT Rogues Gallery |
 | NVIDIA H100 | | GPU | 1 GPU | 0.45 | NVHPC 24.5 | GT Rogues Gallery |
-| AMD MI300A | | APU | 1 _GCD_* | 0.60 | CCE 18.0.0 | LLNL Tioga |
+| AMD MI300A | | APU | 1 APU | 0.60 | CCE 18.0.0 | LLNL Tioga |
 | NVIDIA A100 | | GPU | 1 GPU | 0.62 | NVHPC 22.11 | GT Phoenix |
 | NVIDIA V100 | | GPU | 1 GPU | 0.99 | NVHPC 22.11 | GT Phoenix |
 | NVIDIA A30 | | GPU | 1 GPU | 1.1 | NVHPC 24.1 | GT Rogues Gallery |
```
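The GCD note above amounts to a simple division. A minimal sketch of the estimate follows; the 1.2 ns per-GCD grind time is a hypothetical placeholder for illustration, not a value taken from the table.

```bash
# Estimate full-device grind time from a per-GCD measurement.
# The MI250X has 2 GCDs per device; the per-GCD value below is hypothetical.
awk -v per_gcd_ns=1.2 -v gcds=2 \
    'BEGIN { printf "%.2f ns per full device (estimated)\n", per_gcd_ns / gcds }'
```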
```diff
@@ -50,13 +51,13 @@ These are reported as (X/Y cores), where X is the used cores, and Y is the total
 | AMD EPYC 7452 | Rome | CPU | 32 cores | 8.4 | GNU 12.3.0 | GT ICE |
 | IBM Power10 | | CPU | 24 cores | 10 | GNU 13.3.1 | GT Rogues Gallery |
 | AMD EPYC 7401 | Naples | CPU | 24 cores | 10 | GNU 10.3.1 | LLNL Corona |
-| Apple M1 Pro | | CPU | 8 cores | 14 | GNU 13.2.0 | N/A |
 | Intel Xeon 6226 | Cascade Lake | CPU | 12 cores | 17 | GNU 12.3.0 | GT ICE |
-| Apple M1 Max | | CPU | 8 cores | 18 | GNU 14.1.0 | N/A |
+| Apple M1 Max | | CPU | 10 cores | 20 | GNU 14.1.0 | N/A |
 | IBM Power9 | | CPU | 20 cores | 21 | GNU 9.1.0 | OLCF Summit |
 | Cavium ThunderX2 | Arm | CPU | 32 cores | 21 | GNU 13.2.0 | SBU Ookami |
 | Arm Cortex-A78AE | Arm, BlueField3 | CPU | 16 cores | 25 | NVHPC 24.5 | GT Rogues Gallery |
 | Intel Xeon E5-2650V4 | Broadwell | CPU | 12 cores | 27 | NVHPC 23.5 | GT CSE Internal |
+| Apple M2 | | CPU | 8 cores | 32 | GNU 14.1.0 | N/A |
 | Intel Xeon E7-4850V3 | Haswell | CPU | 14 cores | 34 | GNU 9.4.0 | GT CSE Internal |
 | Fujitsu A64FX | Arm | CPU | 48 cores | 63 | GNU 13.2.0 | SBU Ookami |

```