
Commit 3ff7676

committed
fix moniker issues
1 parent ac6cc0f commit 3ff7676

File tree

1 file changed: +30 −0 lines changed


articles/cyclecloud/slurm.md

Lines changed: 30 additions & 0 deletions
@@ -16,6 +16,35 @@ Slurm is a highly configurable open source workload manager. For more informatio
> Starting with CycleCloud 8.4.0, we rewrote the Slurm integration to support new features and functionality. For more information, see the [Slurm 3.0](slurm-3.md) documentation.

::: moniker range="=cyclecloud-7"

To enable Slurm on a CycleCloud cluster, modify the `run_list` in the configuration section of your cluster definition. A Slurm cluster has two main parts: the master (or scheduler) node, which runs the Slurm software on a shared file system, and the execute nodes, which mount that file system and run the submitted jobs. For example, a simple cluster template snippet may look like:

``` ini
[cluster custom-slurm]

[[node master]]
ImageName = cycle.image.centos7
MachineType = Standard_A4 # 8 cores

    [[[cluster-init cyclecloud/slurm:default]]]
    [[[cluster-init cyclecloud/slurm:master]]]
    [[[configuration]]]
    run_list = role[slurm_master_role]

[[nodearray execute]]
ImageName = cycle.image.centos7
MachineType = Standard_A1 # 1 core

    [[[cluster-init cyclecloud/slurm:default]]]
    [[[cluster-init cyclecloud/slurm:execute]]]
    [[[configuration]]]
    run_list = role[slurm_execute_role]
    slurm.autoscale = true
    # Set to true if nodes are used for tightly-coupled multi-node jobs
    slurm.hpc = true
    slurm.default_partition = true
```

::: moniker-end
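Once the template above is saved to a file, it can be imported as a cluster with the CycleCloud CLI. This is a sketch, not part of the original snippet: the filename `slurm-template.txt` is a placeholder, and the exact `import_cluster` flags may vary with your CLI version.

```shell
# Create a cluster named "custom-slurm" from the template above.
# "slurm-template.txt" is a hypothetical filename for the saved template;
# -c names the [cluster ...] section inside the template to use.
cyclecloud import_cluster custom-slurm -c custom-slurm -f slurm-template.txt
```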
::: moniker range=">=cyclecloud-8"

To enable Slurm on a CycleCloud cluster, modify the `run_list` in the configuration section of your cluster definition. A Slurm cluster has two main parts: the scheduler node, which provides a shared file system and runs the Slurm software, and the execute nodes, which mount the shared file system and run the submitted jobs. For example, a simple cluster template snippet might look like:

@@ -46,6 +75,7 @@ To enable Slurm on a CycleCloud cluster, modify the `run_list` in the configurat

```

::: moniker-end

## Editing existing Slurm clusters

Slurm clusters running in CycleCloud versions 7.8 and later use an updated version of the autoscaling APIs that allows the clusters to use multiple node arrays and partitions. To make this functionality work in Slurm, CycleCloud prepopulates the execute nodes in the cluster. Because of this prepopulation, you need to run a command on the Slurm scheduler node after you make any changes to the cluster, such as changing the autoscale limits or VM types.
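As an illustration of that step, the pre-3.0 Slurm integration ships a `cyclecloud_slurm.sh` helper whose `scale` subcommand regenerates the prepopulated node records. The script path below is an assumption based on the CycleCloud Slurm project layout and may differ on your installation:

```shell
# Run as root on the Slurm scheduler node after editing the cluster
# (for example, after changing autoscale limits or VM types).
# Path is an assumption; adjust to where the Slurm project is installed.
/opt/cycle/slurm/cyclecloud_slurm.sh scale
```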

0 commit comments
