---
title: Cloud Bursting Setup Instructions
description: Learn how to set up cloud bursting using Azure CycleCloud and Slurm.
author: vinil-v
ms.date: 04/17/2025
ms.author: padmalathas
---
# Setup Instructions
Once the prerequisites are in place, follow these steps to integrate the external Slurm scheduler node with the CycleCloud cluster:
## Importing a Cluster Using the Slurm Headless Template in CycleCloud
- This step must be executed on the **CycleCloud VM**.
- Make sure that the **CycleCloud 8.6.4 VM** is running and accessible via the `cyclecloud` CLI.
- Execute the `cyclecloud-project-build.sh` script and provide the desired cluster name (for example, `hpc1`). This sets up a custom project based on the `cyclecloud-slurm-3.0.9` version and imports the cluster using the Slurm headless template.
- In the examples that follow, `<clustername>` is a placeholder for your cluster name. Choose any name you like, but use the same name consistently throughout the entire setup.
```bash
cd cyclecloud-slurm/cloud_bursting/slurm-23.11.9-1/cyclecloud
sh cyclecloud-project-build.sh
```

Output:
```bash
[user1@cc86vm ~]$ cd cyclecloud-slurm/cloud_bursting/slurm-23.11.9-1/cyclecloud
[user1@cc86vm cyclecloud]$ sh cyclecloud-project-build.sh
Enter Cluster Name: <clustername>
Cluster Name: <clustername>
Use the same cluster name: <clustername> in building the scheduler
Importing Cluster
Importing cluster Slurm_HL and creating cluster <clustername>....
----------
<clustername>: off
----------
Resource group:
Cluster nodes:
Fetching CycleCloud project
Uploading CycleCloud project to the locker
```
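
As an optional sanity check (not part of the original steps), you can confirm from the CycleCloud VM that the cluster was created before moving on. `show_cluster` is a standard `cyclecloud` CLI command; substitute your actual cluster name for `<clustername>`:

```bash
# List the imported cluster and its nodes; at this point the cluster
# exists in CycleCloud but stays off until a node is requested.
cyclecloud show_cluster <clustername>
```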
## Slurm Scheduler Installation and Configuration
- Deploy a VM using the **AlmaLinux HPC 8.7** or **Ubuntu HPC 22.04** image; this VM serves as the external Slurm scheduler node.
- If you already have a Slurm Scheduler installed, you can skip this step. However, it's advisable to review the script to make sure it is compatible with your current setup.
- Run the Slurm scheduler installation script (`slurm-scheduler-builder.sh`) and provide the cluster name (`<clustername>`) when prompted (see the sketch after this list).
- This script sets up the NFS server and installs and configures the Slurm scheduler.
- If you're using an external NFS server, you can delete the NFS setup entries from the script.
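
The exact invocation isn't shown in this excerpt, so the following is a minimal sketch. It assumes `slurm-scheduler-builder.sh` lives in the same `scheduler` directory of the cloned repository that the `useradd_example.sh` step below uses; adjust the path if your checkout differs:

```bash
# Assumed location, inferred from the useradd_example.sh step below.
cd cyclecloud-slurm/cloud_bursting/slurm-23.11.9-1/scheduler

# The script prompts for the cluster name; enter the same <clustername>
# used when importing the cluster in CycleCloud.
sh slurm-scheduler-builder.sh
```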
- Ensure consistent user and group IDs across all nodes.
- It's advisable to use a centralized User Management system like LDAP to maintain consistent UID and GID across all nodes.
- In this example, we're using the `useradd_example.sh` script to create a test user `<username>` and a group for job submission. (User `<username>` already exists in CycleCloud.)
```bash
cd cyclecloud-slurm/cloud_bursting/slurm-23.11.9-1/scheduler
sh useradd_example.sh
```
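
To verify the result, a quick check (not part of the script) is to compare the numeric IDs on the scheduler node and on a burst node once it's up; the values must be identical everywhere:

```bash
# Run on the scheduler and on each compute node; uid and gid for
# <username> must match across all nodes that share the NFS mount.
id <username>
```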
## Testing the Setup
- Log in as a test user (for example, `<username>`) on the scheduler node.
- Submit a test job to verify that the setup is functioning correctly.
```bash
su - <username>
srun hostname &
```
Output:
```bash
[root@masternode2 scripts]# su - <username>
Last login: Tue May 14 04:54:51 UTC 2024 on pts/0
[<username>@masternode2 ~]$ srun hostname &
[1] 43448
[<username>@masternode2 ~]$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
    1       hpc hostname <username>  CF  0:04      1 <clustername>-hpc-1
[<username>@masternode2 ~]$ <clustername>-hpc-1
```
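
In the transcript, the job sits in state `CF` (configuring) while CycleCloud provisions the burst node. If you want to watch the node come online, the standard Slurm commands below work; `hpc` is the partition name shown in the `squeue` output above:

```bash
# Watch the partition while CycleCloud powers up the burst node;
# once the node is ready, the job moves from CF to R and srun
# prints the node's hostname.
sinfo -p hpc
squeue
```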