CSCS uses the [Slurm](https://slurm.schedmd.com/documentation.html) workload manager to efficiently schedule and manage jobs on Alps vClusters.
Slurm is an open-source, highly scalable job scheduler that allocates computing resources, queues user jobs, and optimizes workload distribution across the cluster.
It supports advanced scheduling policies, job dependencies, resource reservations, and accounting, making it well-suited for high-performance computing environments.

Refer to the [Quick Start User Guide](https://slurm.schedmd.com/quickstart.html) for commonly used terminology and commands.
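For example, a few commands you will use frequently (a generic sketch; `job.sh` and `<jobid>` are placeholders):

```bash title="Commonly used Slurm commands (generic sketch)"
sinfo                        # list partitions and node availability
sbatch job.sh                # submit a batch script
squeue -u $USER              # show your queued and running jobs
scancel <jobid>              # cancel a job by its job ID
scontrol show job <jobid>    # show detailed information about a job
```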
[:octicons-arrow-right-24: Multiple MPI jobs per node][ref-slurm-exclusive]
</div>

## Accounts and resources

[](){#ref-slurm-partitions}
## Partitions
At CSCS, Slurm is configured to accommodate the diverse range of node types available in our HPC clusters.
These nodes vary in architecture, including CPU-only nodes and nodes equipped with different types of GPUs.
Because of this heterogeneity, Slurm must be tailored to ensure efficient resource allocation, job scheduling, and workload management specific to each node type.

Each type of node has different resource constraints and capabilities, which Slurm takes into account when scheduling jobs.
For example, CPU-only nodes may have configurations optimized for multi-threaded CPU workloads, while GPU nodes require additional parameters to allocate GPU resources efficiently.
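For instance, a generic `sinfo` query (the output format here is chosen purely for illustration) lists each partition together with its node count, CPUs per node, generic resources such as GPUs, and time limit:

```bash title="Listing partitions and their resources (illustrative query)"
# %P = partition, %D = node count, %c = CPUs per node, %G = generic resources (GPUs), %l = time limit
sinfo -o "%P %D %c %G %l"
```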
[](){#ref-slurm-partition-debug}
### Debug partition
The Slurm `debug` partition is useful for quick-turnaround workflows. The partition has a short maximum time (the time limit can be seen with `sinfo -p debug`) and a low maximum node count (`MaxNodes` can be seen with `scontrol show partition=debug`).
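For example, a short test job can be sent to this partition with `--partition=debug` (a minimal sketch; `./my_test_app` and the time and node values are placeholders):

```bash title="Submitting to the debug partition (sketch)"
#!/usr/bin/env bash
#SBATCH --partition=debug
#SBATCH --nodes=1
#SBATCH --time=00:15:00    # must stay within the partition's time limit

srun ./my_test_app
```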
[](){#ref-slurm-partition-normal}
### Normal partition
This is the default partition, and will be used when you do not explicitly set a partition.
This is the correct choice for standard jobs. The maximum time is usually set to 24 hours (check the time limit with `sinfo -p normal`), and the maximum number of nodes is limited only by the number of nodes available.
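To check the current limits of the `normal` partition on a given vCluster:

```bash title="Inspecting the normal partition"
sinfo -p normal                    # time limit and node states
scontrol show partition=normal     # full configuration, including MaxNodes
```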
The following sections will provide detailed guidance on how to use Slurm to request and manage CPU cores, memory, and GPUs in jobs.
These instructions will help users optimize their workload execution and ensure efficient use of CSCS computing resources.
## Affinity
[](){#ref-slurm-gh200}
## NVIDIA GH200 GPU Nodes
The [GH200 nodes on Alps][ref-alps-gh200-node] have four GPUs per node, and Slurm job submissions must be configured appropriately to best make use of the resources.
Applications that can saturate the GPUs with a single process per GPU should generally prefer this mode.
[Configuring Slurm jobs to use a single GPU per rank][ref-slurm-gh200-single-rank-per-gpu] is also the most straightforward setup.
Some applications perform badly with a single rank per GPU, and require use of [NVIDIA's Multi-Process Service (MPS)] to oversubscribe GPUs with multiple ranks per GPU.
The best Slurm configuration is application- and workload-specific, so it is worth testing which works best in your particular case.
See [Scientific Applications][ref-software-sciapps] for information about recommended application-specific Slurm configurations.

!!! warning
    The GH200 nodes have their GPUs configured in ["default" compute mode](https://docs.nvidia.com/deploy/mps/index.html#gpu-compute-modes).
    Some applications benefit from using multiple ranks per GPU. However, [MPS should be used][ref-slurm-gh200-multi-rank-per-gpu] in these cases.

    If you are unsure about which GPU is being used for a particular rank, print the `CUDA_VISIBLE_DEVICES` variable, along with e.g. `SLURM_LOCALID`, `SLURM_PROCID`, and `SLURM_NODEID` variables, in your job script.
    If the variable is unset or empty, all GPUs are visible to the rank, and the rank will in most cases only use the first GPU.
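    For example, a quick check of the mapping (a minimal sketch; the flags match the one-rank-per-GPU setup described below) could be:

    ```bash title="Printing the GPU visible to each rank (sketch)"
    srun --ntasks-per-node=4 --gpus-per-task=1 \
        bash -c 'echo "rank $SLURM_PROCID (local $SLURM_LOCALID, node $SLURM_NODEID): CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"'
    ```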
[](){#ref-slurm-gh200-single-rank-per-gpu}
### One rank per GPU
Configuring Slurm to use one GH200 GPU per rank is most easily done using the `--ntasks-per-node=4` and `--gpus-per-task=1` Slurm flags.
For advanced users, using `--gpus-per-task` is equivalent to setting `CUDA_VISIBLE_DEVICES` to `SLURM_LOCALID`, assuming the job is using four ranks per node.
The examples below launch jobs on two nodes with four ranks per node using `sbatch` and `srun`:
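A minimal sketch of such a submission (the application name `./my_gpu_app` is a placeholder):

```bash title="Sketch: one rank per GPU on two GH200 nodes"
#!/usr/bin/env bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1

# each of the eight ranks sees exactly one GPU through CUDA_VISIBLE_DEVICES
srun ./my_gpu_app

# equivalent interactive launch:
#   srun --nodes=2 --ntasks-per-node=4 --gpus-per-task=1 ./my_gpu_app
```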
### Multiple ranks per GPU
Using multiple ranks per GPU can improve performance, for example for applications that don't generate enough work for a GPU with a single rank, or for ones that scale badly to all 72 cores of the Grace CPU.
In these cases Slurm jobs must be configured to assign multiple ranks to a single GPU.
This is best done using [NVIDIA's Multi-Process Service (MPS)].
To use MPS, launch your application using the following wrapper script, which will start MPS on one rank per node and assign GPUs to ranks according to the CPU mask of a rank, ensuring the closest GPU is used:
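The full wrapper is not reproduced here; a simplified sketch of the idea (this version assigns GPUs by local rank ID for brevity, rather than by CPU mask) is:

```bash title="mps-wrapper.sh (simplified sketch)"
#!/usr/bin/env bash
# directories used by the MPS control daemon
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-log

# start MPS from exactly one rank per node, before restricting GPU visibility
if [[ "$SLURM_LOCALID" -eq 0 ]]; then
    nvidia-cuda-mps-control -d
fi
sleep 5    # crude wait for the MPS daemon to come up

# simplified GPU assignment: spread the local ranks over the four GPUs
export CUDA_VISIBLE_DEVICES=$(( SLURM_LOCALID % 4 ))

# run the actual application command passed to the wrapper
exec "$@"
```

Such a wrapper would be invoked as, for example, `srun --ntasks-per-node=8 ./mps-wrapper.sh ./my_app`, where both names are placeholders.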
!!! warning
    The `OMP_*` environment variables only affect thread affinity of applications that use OpenMP for thread-level parallelism.
    Other threading runtimes will be configured differently, and the `affinity.mpi` tool will only be able to show the set of cores assigned to the rank.
[](){#ref-slurm-over-subscription}
## Node over-subscription
The nodes on Alps provide a lot of resources, particularly the GPU nodes that have 4 GPUs.
For workflows and use cases with tasks that require only a subset of these resources, for example a simulation that only needs one GPU, allocating a whole node to run one task is a waste of resources.

!!! example
    A workflow that runs a single [GROMACS][ref-uenv-gromacs] simulation that uses one GPU.

    * The optimal use of resources would allocate one quarter of a node, and allow other jobs to access the other three GPUs.

    A workflow that runs 100 independent [GROMACS][ref-uenv-gromacs] simulations, where each simulation requires two GPUs.

    * The optimal use of resources would allocate 50 nodes, with two simulations run on each node.
[](){#ref-slurm-sharing}
### Node sharing
!!! under-construction
    Node sharing, whereby jobs can request part of the resources on a node so that multiple jobs (possibly from different users) can run on the same node, is _not currently available on Alps clusters_.

    CSCS will support this feature on some Alps [clusters][ref-alps-clusters] in the near to medium term.
[](){#ref-slurm-exclusive}
### Running more than one job step per node
Running multiple job steps in parallel on the same allocated set of nodes can improve resource utilization by taking advantage of all the available CPUs, GPUs, or memory within a single job allocation.
The approach is to:
1. first allocate all the resources on each node to the job;
2. then subdivide those resources at each invocation of `srun`.
If Slurm believes that a request for resources (cores, GPUs, memory) overlaps with what another step has already allocated, it will defer the execution until the resources are relinquished.
This must be avoided.
First ensure that *all* resources are allocated to the whole job with the following preamble:
```bash title="Slurm preamble on a GH200 node"
#!/usr/bin/env bash
#SBATCH --exclusive --mem=450G
```

* `--exclusive` allocates all the CPUs and GPUs exclusively to this job;
* `--mem=450G` requests most of the allowable memory (there are four Grace CPUs on the node, each with ~120 GB of memory).

!!! note
    `--mem=0` can generally be used to allocate all memory on the node, but the Slurm configuration on clariden doesn't allow this.
Next, launch your applications using `srun`, carefully subdividing resources for each job step.
The `--exclusive` flag must be used again, but note that its meaning differs in the context of `srun`.
Here, `--exclusive` ensures that only the resources explicitly requested for a given job step are reserved and allocated to it.
Without this flag, Slurm reserves all resources for the job step, even if it only allocates a subset -- effectively blocking further parallel `srun` invocations from accessing unrequested but needed resources.
Be sure to background each `srun` command with `&`, so that subsequent job steps start immediately without waiting for previous ones to finish.
A final `wait` command ensures that your submission script does not exit until all job steps complete.
Slurm will automatically set `CUDA_VISIBLE_DEVICES` for each `srun` call, restricting GPU access to only the devices assigned to that job step.
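Putting these pieces together, a sketch of a script that runs two independent job steps on one GH200 node might look like this (the application names and the per-step resource splits are placeholders):

```bash title="Sketch: two concurrent job steps on one node"
#!/usr/bin/env bash
#SBATCH --nodes=1
#SBATCH --exclusive --mem=450G

# each step requests a disjoint subset of the node: one GPU, the cores of one
# Grace CPU, and part of the memory; --exclusive limits the step to exactly
# what it requests, so the two steps can run side by side
srun --exclusive --ntasks=1 --gpus-per-task=1 --cpus-per-task=72 --mem=100G ./simulation_a &
srun --exclusive --ntasks=1 --gpus-per-task=1 --cpus-per-task=72 --mem=100G ./simulation_b &

# wait for both backgrounded job steps to finish before the job exits
wait
```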