docs/software/communication/mpich.md (+2 -2 lines changed)
@@ -8,15 +8,15 @@ It can be installed inside containers directly from the source code manually, bu
MPICH can be built inside containers, however for native Slingshot performance special care has to be taken to ensure that communication is optimal for all cases:

* Intra-node communication (this is via shared memory, especially `xpmem`)
-* Inter-node communication (this should go through the openfabrics interface OFI)
+* Inter-node communication (this should go through the OpenFabrics Interface - OFI)
* Host-to-Host memory communication
* Device-to-Device memory communication

To achieve native performance, MPICH has to be built with `libfabric` and `xpmem` support.
Additionally, when building for GH200 nodes, `libfabric` and `mpich` have to be built with `CUDA` support.

At container runtime the [CXI hook][ref-ce-cxi-hook] will replace the libraries `xpmem` and `libfabric` inside the container with the libraries on the host system.
-This will ensure native peformance when doing MPI communication.
+This will ensure native performance when doing MPI communication.

These are example Dockerfiles that can be used on `Eiger` and `Daint` to build a container image with MPICH and the best communication performance.
They are quite explicit and build the necessary packages manually; for real-life use one should fall back to Spack to do the building.
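
As a rough companion to the build description above, the following is a minimal Dockerfile sketch for a CUDA-enabled (GH200-style) image. The base image, the version numbers, the install prefixes, and the exact configure flags are illustrative assumptions and are not taken from this page; the real example Dockerfiles referenced above should be treated as authoritative.

```Dockerfile
# Sketch only: versions, prefixes and flags are illustrative assumptions.
FROM docker.io/nvidia/cuda:12.4.1-devel-ubuntu22.04

ARG LIBFABRIC_VERSION=1.22.0
ARG MPICH_VERSION=4.2.3

RUN apt-get update \
 && apt-get install -y --no-install-recommends \
        autoconf automake libtool make gcc g++ gfortran \
        git wget ca-certificates bzip2 \
 && rm -rf /var/lib/apt/lists/*

# xpmem user-space library and headers. The kernel module comes from the host,
# and the library itself is swapped in by the CXI hook at container runtime.
# (--disable-kernel-module is an assumption; check the xpmem build options.)
RUN git clone https://github.com/hpc/xpmem.git /tmp/xpmem \
 && cd /tmp/xpmem \
 && ./autogen.sh \
 && ./configure --prefix=/usr --disable-kernel-module \
 && make -j"$(nproc)" install \
 && rm -rf /tmp/xpmem

# libfabric with CUDA support. This copy is also replaced by the host's
# Slingshot-enabled libfabric at container runtime by the CXI hook.
RUN wget -q https://github.com/ofiwg/libfabric/releases/download/v${LIBFABRIC_VERSION}/libfabric-${LIBFABRIC_VERSION}.tar.bz2 \
 && tar xf libfabric-${LIBFABRIC_VERSION}.tar.bz2 \
 && cd libfabric-${LIBFABRIC_VERSION} \
 && ./configure --prefix=/usr --with-cuda=/usr/local/cuda \
 && make -j"$(nproc)" install \
 && cd .. && rm -rf libfabric-${LIBFABRIC_VERSION}*

# MPICH on the ch4:ofi device, pointed at the libfabric and xpmem installed
# above, with CUDA awareness for Device-to-Device communication on GH200.
RUN wget -q https://www.mpich.org/static/downloads/${MPICH_VERSION}/mpich-${MPICH_VERSION}.tar.gz \
 && tar xf mpich-${MPICH_VERSION}.tar.gz \
 && cd mpich-${MPICH_VERSION} \
 && ./configure --prefix=/usr \
        --with-device=ch4:ofi \
        --with-libfabric=/usr \
        --with-xpmem=/usr \
        --with-cuda=/usr/local/cuda \
 && make -j"$(nproc)" install \
 && cd .. && rm -rf mpich-${MPICH_VERSION}*
```

An image built along these lines can then be run on the vClusters, where the CXI hook substitutes the host's `libfabric` and `xpmem` as described above, so the versions baked into the image only need to be ABI-compatible placeholders.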