articles/cyclecloud/slurm.md

> Starting with CycleCloud 8.4.0, we rewrote the Slurm integration to support new features and functionality. For more information, see the [Slurm 3.0](slurm-3.md) documentation.

::: moniker range="=cyclecloud-7"

To enable Slurm on a CycleCloud cluster, modify the `run_list` in the configuration section of your cluster definition. A Slurm cluster has two main parts: the master (or scheduler) node, which runs the Slurm software on a shared file system, and the execute nodes, which mount that file system and run the submitted jobs. For example, a simple cluster template snippet may look like:

```ini
[cluster custom-slurm]

[[node master]]
    ImageName = cycle.image.centos7
    MachineType = Standard_A4 # 8 cores

    [[[cluster-init cyclecloud/slurm:default]]]
    [[[cluster-init cyclecloud/slurm:master]]]
    [[[configuration]]]
    run_list = role[slurm_master_role]

[[nodearray execute]]
    ImageName = cycle.image.centos7
    MachineType = Standard_A1  # 1 core

    [[[cluster-init cyclecloud/slurm:default]]]
    [[[cluster-init cyclecloud/slurm:execute]]]
    [[[configuration]]]
    run_list = role[slurm_master_role]
    slurm.autoscale = true
    # Set to true if nodes are used for tightly-coupled multi-node jobs
    slurm.hpc = true
    slurm.default_partition = true
```
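
One way to apply a template like this is with the CycleCloud CLI's `import_cluster` command. The template file name `custom-slurm.txt` below is only a placeholder for wherever you save the snippet:

```bash
# Create a cluster named "custom-slurm" from the [cluster custom-slurm] section of the template file
cyclecloud import_cluster custom-slurm -c custom-slurm -f custom-slurm.txt
```
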
::: moniker-end
::: moniker range=">=cyclecloud-8"
To enable Slurm on a CycleCloud cluster, modify the `run_list` in the configuration section of your cluster definition. A Slurm cluster has two main parts: the scheduler node, which provides a shared file system and runs the Slurm software, and the execute nodes, which mount the shared file system and run the submitted jobs. For example, a simple cluster template snippet might look like:
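
The CycleCloud 8 template follows the same shape as the CycleCloud 7 example above. The following is a rough sketch only: the `scheduler` node name, cluster-init specs, and role name are assumptions, so treat the [Slurm 3.0](slurm-3.md) documentation as the authoritative reference.

```ini
[cluster custom-slurm]

[[node scheduler]]
    ImageName = cycle.image.centos7
    MachineType = Standard_A4 # 8 cores

    [[[cluster-init cyclecloud/slurm:default]]]
    [[[cluster-init cyclecloud/slurm:scheduler]]]
    [[[configuration]]]
    run_list = role[slurm_scheduler_role]

[[nodearray execute]]
    ImageName = cycle.image.centos7
    MachineType = Standard_A1  # 1 core

    [[[cluster-init cyclecloud/slurm:default]]]
    [[[cluster-init cyclecloud/slurm:execute]]]
    [[[configuration]]]
    run_list = role[slurm_scheduler_role]
    slurm.autoscale = true
    # Set to true if nodes are used for tightly-coupled multi-node jobs
    slurm.hpc = true
    slurm.default_partition = true
```
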
::: moniker-end
## Editing existing Slurm clusters

Slurm clusters running in CycleCloud versions 7.8 and later use an updated version of the autoscaling APIs that allows the clusters to use multiple node arrays and partitions. To make this functionality work in Slurm, CycleCloud prepopulates the execute nodes in the cluster. Because of this prepopulation, you need to run a command on the Slurm scheduler node after you make any changes to the cluster, such as changing the autoscale limits or VM types.
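
For example, after changing the autoscale limits or VM types, you would rerun the Slurm node-definition step on the scheduler node. The script path and subcommand below are assumptions and vary with the version of the Slurm project deployed, so check the version-specific documentation for the exact command:

```bash
# Regenerate the Slurm node and partition definitions after editing the cluster
# (path and subcommand are illustrative and differ between Slurm project versions)
sudo /opt/cycle/slurm/cyclecloud_slurm.sh apply_changes
```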