1. Time format: `HH:MM:SS`.
2. For LAMMPS+GPU it is often beneficial to use more than one MPI rank per GPU. To enable oversubscription of MPI ranks per GPU, use the `mps-wrapper.sh` script described in the following section: [multiple ranks per GPU][ref-slurm-gh200-multi-rank-per-gpu]; a sketch of how it fits into the sbatch script follows this list.
3. Change `<ACCOUNT>` to your project account name.
4. Change `<LAMMPS_UENV>` to the name (or path) of the LAMMPS uenv you want to use.
5. Enable the `gpu` uenv view.
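
The annotations above map onto an sbatch script roughly as sketched below. This is a minimal, illustrative example, not a copy-paste recipe: the job name, node and rank counts, input file name, and the relative path to `mps-wrapper.sh` are assumptions, and `--uenv`/`--view` are the options provided by the CSCS uenv Slurm plugin.

```bash
#!/bin/bash
#SBATCH --job-name=lammps-gpu
#SBATCH --time=00:30:00        # (1) wall time in HH:MM:SS
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16   # (2) 4 GPUs per node, so 4 MPI ranks share each GPU
#SBATCH --gpus-per-node=4
#SBATCH --account=<ACCOUNT>    # (3) your project account name
#SBATCH --uenv=<LAMMPS_UENV>   # (4) name or path of the LAMMPS uenv
#SBATCH --view=gpu             # (5) enable the gpu uenv view

# (2) mps-wrapper.sh starts CUDA MPS so that several MPI ranks can share one GPU;
#     the wrapper simply prefixes the application command.
srun ./mps-wrapper.sh lmp -sf gpu -in in.lj
```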
#### LAMMPS + GPU input file
??? example "LAMMPS+GPU input file, defining a 3d Lennard-Jones melt."
```
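# Representative input only: the standard LAMMPS 3d Lennard-Jones melt benchmark
# (bench/in.lj), with a fixed 20x20x20 lattice; adjust box size and run length as needed.
units           lj
atom_style      atomic

lattice         fcc 0.8442
region          box block 0 20 0 20 0 20
create_box      1 box
create_atoms    1 box
mass            1 1.0

velocity        all create 1.44 87287 loop geom

pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5

neighbor        0.3 bin
neigh_modify    delay 0 every 20 check no

fix             1 all nve

run             100
```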