[](){#ref-slurm}
# Slurm
CSCS uses the [Slurm](https://slurm.schedmd.com/documentation.html) workload manager to efficiently schedule and manage jobs on Alps vClusters.
Slurm is an open-source, highly scalable job scheduler that allocates computing resources, queues user jobs, and optimizes workload distribution across the cluster.
It supports advanced scheduling policies, job dependencies, resource reservations, and accounting, making it well-suited for high-performance computing environments.
Refer to the [Quick Start User Guide](https://slurm.schedmd.com/quickstart.html) for commonly used terminology and commands.
## Accounts and resources
Slurm associates each job with a CSCS project in order to perform accounting.
The project to use for accounting is specified using the `--account/-A` flag.
If no account is specified, the primary project is used as the default.
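For example, a job can be charged to a specific project with the `--account` flag in the submission script (a minimal sketch; `g123` is a placeholder project name and `./my_app` a placeholder executable):

```bash title="Specifying the account in an sbatch script"
#!/usr/bin/env bash
#SBATCH --account=g123      # placeholder: replace with your CSCS project
#SBATCH --time=00:10:00

srun ./my_app               # placeholder executable
```

The same can be done on the command line, e.g. `srun -A g123 ...`.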
[](){#ref-slurm-partitions}
## Partitions
At CSCS, Slurm is configured to accommodate the diverse range of node types available in our HPC clusters.
These nodes vary in architecture, including CPU-only nodes and nodes equipped with different types of GPUs.
Because of this heterogeneity, Slurm must be tailored to ensure efficient resource allocation, job scheduling, and workload management specific to each node type.
Each type of node has different resource constraints and capabilities, which Slurm takes into account when scheduling jobs.
For example, CPU-only nodes may have configurations optimized for multi-threaded CPU workloads, while GPU nodes require additional parameters to allocate GPU resources efficiently.
Slurm ensures that user jobs request and receive the appropriate resources while preventing conflicts or inefficient utilization.
[](){#ref-slurm-partitions-nodecount}
!!! example "How to check the partitions and number of nodes therein?"
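    A minimal sketch of such a check (partition names and node counts vary per vCluster):

    ```console
    $ sinfo --summarize
    ```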
[](){#ref-slurm-partition-debug}
### Debug partition
The Slurm `debug` partition is useful for quick-turnaround workflows. The partition has a short maximum time (the time limit can be seen with `sinfo -p debug`) and a low maximum node count (the `MaxNodes` limit can be seen with `scontrol show partition=debug`).
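For example, a quick test can be submitted to the debug partition as sketched below (`./my_app` is a placeholder executable; the node count and time must stay within the partition limits):

```console title="Submitting a quick test to the debug partition"
$ srun --partition=debug --nodes=1 --time=00:10:00 ./my_app
```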
[](){#ref-slurm-partition-normal}
### Normal partition
This is the default partition, and will be used when you do not explicitly set a partition.
This is the correct choice for standard jobs. The maximum time is usually set to 24 hours (see `sinfo -p normal` for the time limit), and the maximum number of nodes is limited only by how many nodes are available.
The following sections will provide detailed guidance on how to use Slurm to request and manage CPU cores, memory, and GPUs in jobs.
These instructions will help users optimize their workload execution and ensure efficient use of CSCS computing resources.
## Affinity
The following sections will document how to use Slurm on different compute nodes available on Alps.
To demonstrate the effects of different Slurm parameters, we will use a small command-line tool, [affinity](https://github.com/bcumming/affinity), that prints the CPU cores and GPUs assigned to each MPI rank in a job, along with the node they run on.
We strongly recommend using a tool like affinity to understand and test the Slurm configuration for jobs, because the behavior of Slurm is highly dependent on the system configuration.
Parameters that worked on a different cluster -- or with a different Slurm version or configuration on the same cluster -- are not guaranteed to give the same results.
It is straightforward to build the affinity tool to experiment with Slurm configurations.
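A rough sketch of such a build is shown below (the exact steps and any GPU-specific options are assumptions; consult the affinity README):

```bash title="Building the affinity tool (sketch)"
git clone https://github.com/bcumming/affinity.git
cd affinity
# plain CPU build; GPU-enabled builds require extra options described in the README
cmake -B build -S .
cmake --build build
```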
!!! info "Quick affinity checks"
    The Slurm flag [`--cpu-bind=verbose`](https://slurm.schedmd.com/srun.html#OPT_cpu-bind) prints information about MPI ranks and their thread affinity.
    The mask it prints is not very readable, but it can be used with the `true` command to quickly test Slurm parameters without building the affinity tool.
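    For example, a hypothetical quick check (the flags are illustrative):

    ```console
    $ srun --nodes=1 --ntasks=4 --cpu-bind=verbose true
    ```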
[](){#ref-slurm-gh200}
## NVIDIA GH200 GPU Nodes
The [GH200 nodes on Alps][ref-alps-gh200-node] have four GPUs per node, and Slurm job submissions must be configured appropriately to best make use of the resources.
Applications that can saturate the GPUs with a single process per GPU should generally prefer this mode.
[Configuring Slurm jobs to use a single GPU per rank][ref-slurm-gh200-single-rank-per-gpu] is also the most straightforward setup.
Some applications perform badly with a single rank per GPU, and require use of [NVIDIA's Multi-Process Service (MPS)] to oversubscribe GPUs with multiple ranks per GPU.
The best Slurm configuration is application- and workload-specific, so it is worth testing which works best in your particular case.
See [Scientific Applications][ref-software-sciapps] for information about recommended application-specific Slurm configurations.
!!! warning
    The GH200 nodes have their GPUs configured in ["default" compute mode](https://docs.nvidia.com/deploy/mps/index.html#gpu-compute-modes).
[](){#ref-slurm-gh200-single-rank-per-gpu}
### One rank per GPU
Configuring Slurm to use one GH200 GPU per rank is most easily done using the `--ntasks-per-node=4` and `--gpus-per-task=1` Slurm flags.
For advanced users, using `--gpus-per-task` is equivalent to setting `CUDA_VISIBLE_DEVICES` to `SLURM_LOCALID`, assuming the job is using four ranks per node.
The examples below launch jobs on two nodes with four ranks per node using `sbatch` and `srun`:
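A minimal sketch of such a submission (`./my_app` is a placeholder executable):

```bash title="Sketch: one rank per GPU with sbatch"
#!/usr/bin/env bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1

srun ./my_app    # placeholder executable
```

The equivalent command-line invocation is roughly `srun --nodes=2 --ntasks-per-node=4 --gpus-per-task=1 ./my_app`.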
### Multiple ranks per GPU
Using multiple ranks per GPU can improve performance, for example for applications that don't generate enough work for a GPU with a single rank, or that scale badly to all 72 cores of the Grace CPU.
In these cases Slurm jobs must be configured to assign multiple ranks to a single GPU.
This is best done using [NVIDIA's Multi-Process Service (MPS)].
To use MPS, launch your application using the following wrapper script, which will start MPS on one rank per node and assign GPUs to ranks according to the CPU mask of a rank, ensuring the closest GPU is used:
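Purely as an illustration of the general pattern (the actual wrapper additionally selects the GPU closest to each rank's CPU mask), a stripped-down sketch might look like this:

```bash title="Illustrative sketch of an MPS wrapper"
#!/bin/bash
# Illustration only: start one MPS control daemon per node from local rank 0.
if [[ "${SLURM_LOCALID}" -eq 0 ]]; then
    nvidia-cuda-mps-control -d
fi

# Give the daemon a moment to come up before the application starts.
sleep 1

# Run the application command passed to the wrapper.
exec "$@"
```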

Each MPI rank is assigned a set of cores on a node, and Slurm provides flags that can be used directly as flags to `srun`, or as arguments in an `sbatch` script.
Here are some basic flags that we will use to distribute work.
| flag | meaning |
|------|---------|
|`-c`, `--cpus-per-task`| The number of cores to assign to each rank. |
|`--hint=nomultithread`| Use only one PU per core |
!!! info "Slurm is highly configurable"
    These are a subset of the most useful flags.
    Call `srun --help` or `sbatch --help` to get a complete list of all the flags available on your target cluster.
    Note that the exact set of flags available depends on the Slurm version, how Slurm was configured, and Slurm plugins.
The first example assigns 2 MPI ranks per node, with 64 cores per rank, using both PUs per core:
```console title="One MPI rank per socket"
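# hypothetical invocation; exact flags and output depend on the cluster's Slurm configuration
$ srun --nodes=1 --ntasks-per-node=2 --cpus-per-task=64 ./affinity.mpi
```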
The approach is to:
1. first allocate all the resources on each node to the job;
2. then subdivide those resources at each invocation of `srun`.
If Slurm believes that a request for resources (cores, GPUs, memory) overlaps with what another step has already allocated, it will defer the execution until the resources are relinquished.
This must be avoided.
First ensure that *all* resources are allocated to the whole job with the following preamble:
```bash title="Slurm preamble on a GH200 node"
#!/usr/bin/env bash
#SBATCH --exclusive --mem=450G
```
* `--mem=450G` requests most of the allowable memory (there are 4 Grace CPUs, each with ~120 GB of memory, on the node)
!!! note
    `--mem=0` can generally be used to allocate all memory on the node, but the Slurm configuration on clariden doesn't allow this.
Next, launch your applications using `srun`, carefully subdividing resources for each job step.
The `--exclusive` flag must be used again, but note that its meaning differs in the context of `srun`.
Here, `--exclusive` ensures that only the resources explicitly requested for a given job step are reserved and allocated to it.
Without this flag, Slurm reserves all resources for the job step, even if it only allocates a subset -- effectively blocking further parallel `srun` invocations from accessing unrequested but needed resources.
Be sure to background each `srun` command with `&`, so that subsequent job steps start immediately without waiting for previous ones to finish.
A final `wait` command ensures that your submission script does not exit until all job steps complete.
Slurm will automatically set `CUDA_VISIBLE_DEVICES` for each `srun` call, restricting GPU access to only the devices assigned to that job step.
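A minimal sketch of this pattern on a single GH200 node (application names and per-step resources are placeholders):

```bash title="Sketch: concurrent job steps on one GH200 node"
#!/usr/bin/env bash
#SBATCH --exclusive --mem=450G
#SBATCH --nodes=1

# Each step requests a disjoint subset of the node's cores, GPUs, and memory,
# and is backgrounded with & so the steps run concurrently.
srun --exclusive --ntasks=1 --gpus=1 --cpus-per-task=72 --mem=100G ./app_a &
srun --exclusive --ntasks=1 --gpus=1 --cpus-per-task=72 --mem=100G ./app_b &

# Wait for all backgrounded job steps before the submission script exits.
wait
```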
!!! todo "use [affinity](https://github.com/bcumming/affinity) for these examples"