The following sections will provide detailed guidance on how to use SLURM to request and manage CPU cores, memory, and GPUs in jobs. These instructions will help users optimize their workload execution and ensure efficient use of CSCS computing resources.
[](){#ref-slurm-gh200}
## NVIDIA GH200 GPU Nodes
The [GH200 nodes on Alps][ref-alps-gh200-node] have four GPUs per node, and SLURM job submissions must be configured appropriately to best make use of the resources.
Applications that can saturate the GPUs with a single process per GPU should generally prefer this mode.
If the variable is unset or empty, all GPUs are visible to the rank, and in most cases the rank will only use the first GPU.
[](){#ref-slurm-gh200-single-rank-per-gpu}
### One rank per GPU
Configuring SLURM to use one GH200 GPU per rank is most easily done with the `--ntasks-per-node=4` and `--gpus-per-task=1` SLURM flags.
For advanced users, using `--gpus-per-task` is equivalent to setting `CUDA_VISIBLE_DEVICES` to `SLURM_LOCALID`, assuming the job uses four ranks per node.
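A minimal batch-script sketch of this configuration (the application name is a placeholder, and partition, account, and time directives are omitted):

```shell
#!/bin/bash
# One rank per GPU on a GH200 node: four ranks per node,
# each allocated exactly one GPU by SLURM.
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1

# With --gpus-per-task=1, SLURM sets CUDA_VISIBLE_DEVICES so that
# each rank sees only its own GPU.
srun <application>
```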
Omitting `--gpus-per-task` results in `CUDA_VISIBLE_DEVICES` being unset, which leads most applications to use the first GPU on all ranks.
[](){#ref-slurm-gh200-multi-rank-per-gpu}
### Multiple ranks per GPU
Using multiple ranks per GPU can improve performance, for example for applications that do not generate enough work for a GPU from a single rank, or that scale badly across all 72 cores of the Grace CPU.
In these cases SLURM jobs must be configured to assign multiple ranks to a single GPU.
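One way to sketch this (the wrapper name and the round-robin mapping are illustrative assumptions, not an official script) is to launch the application through a small wrapper that derives `CUDA_VISIBLE_DEVICES` from `SLURM_LOCALID`:

```shell
#!/bin/bash
# select_gpu.sh (hypothetical name): map node-local ranks onto the
# node's four GPUs round-robin. For example, with --ntasks-per-node=8,
# ranks 0 and 4 share GPU 0, ranks 1 and 5 share GPU 1, and so on.
export CUDA_VISIBLE_DEVICES=$((SLURM_LOCALID % 4))
exec "$@"
```

The job would then be launched as, for example, `srun --ntasks-per-node=8 ./select_gpu.sh <application>`, so that every rank sees exactly one GPU and two ranks share each GPU.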