Commit e9a424d: Fixing the Acrolinx for Correctness score (parent 1cfc8ce)

1 file changed: 41 additions, 42 deletions

articles/cyclecloud/how-to/bursting/setup-instructions-for-cloud-bursting.md

---
title: Cloud Bursting Setup Instructions
description: Learn how to set up cloud bursting using Azure CycleCloud and Slurm.
author: vinil-v
ms.date: 04/17/2025
ms.author: padmalathas
---

# Setup Instructions

After the prerequisites are in place, follow these steps to integrate the external Slurm scheduler node with the CycleCloud cluster:

## Importing a Cluster Using the Slurm Headless Template in CycleCloud

- This step must be executed on the **CycleCloud VM**.
- Make sure that the **CycleCloud 8.6.4 VM** is running and accessible via the `cyclecloud` CLI.
- Execute the `cyclecloud-project-build.sh` script and provide the desired cluster name (for example, `hpc1`). This sets up a custom project based on the `cyclecloud-slurm-3.0.9` version and imports the cluster using the Slurm headless template.
- In the example provided, `<clustername>` is used as a placeholder for the cluster name. You can choose any cluster name, but use the same name consistently throughout the entire setup.

```bash
git clone https://github.com/Azure/cyclecloud-slurm.git
cd cyclecloud-slurm/cloud_bursting/slurm-23.11.9-1/cyclecloud
sh cyclecloud-project-build.sh
```

Output:

```bash
[user1@cc86vm ~]$ cd cyclecloud-slurm/cloud_bursting/slurm-23.11.9-1/cyclecloud
[user1@cc86vm cyclecloud]$ sh cyclecloud-project-build.sh
Enter Cluster Name: <clustername>
Cluster Name: <clustername>
Use the same cluster name: <clustername> in building the scheduler
Importing Cluster
Importing cluster Slurm_HL and creating cluster <clustername>....
----------
<clustername> : off
----------
Resource group:
Cluster nodes:
Fetching CycleCloud project
Uploading CycleCloud project to the locker
```
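
To confirm the import, you can query the cluster from the CycleCloud VM. This is a hedged check using the standard `cyclecloud` CLI; the cluster reports `off` until you start it from the UI:

```bash
# Show the imported cluster and its current state (expected: off)
cyclecloud show_cluster <clustername>
```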

## Slurm Scheduler Installation and Configuration

- Deploy a VM using the specified **AlmaLinux HPC 8.7** or **Ubuntu HPC 22.04** image.
- If you already have a Slurm scheduler installed, you can skip this step. However, it's advisable to review the script to make sure it's compatible with your current setup.
- Run the Slurm scheduler installation script (`slurm-scheduler-builder.sh`) and provide the cluster name (`<clustername>`) when prompted.
- This script sets up the NFS server and installs and configures the Slurm scheduler.
- If you're using an external NFS server, you can delete the NFS setup entries from the script.

```bash
git clone https://github.com/Azure/cyclecloud-slurm.git
cd cyclecloud-slurm/cloud_bursting/slurm-23.11.9-1/scheduler
sh slurm-scheduler-builder.sh
```

Output:

```bash
------------------------------------------------------------------------------------------------------------------------------
Building Slurm scheduler for cloud bursting with Azure CycleCloud
------------------------------------------------------------------------------------------------------------------------------

Enter Cluster Name: <clustername>
------------------------------------------------------------------------------------------------------------------------------

Summary of entered details:
Cluster Name: <clustername>
Scheduler Hostname: <scheduler hostname>
NFSServer IP Address: 10.222.xxx.xxx
```
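
Before moving on, it's worth confirming that the scheduler came up cleanly. A minimal sanity check, assuming the standard Slurm service names:

```bash
# Confirm the Slurm controller daemon is running
systemctl status slurmctld

# List partitions; cloud nodes typically show as idle~ (powered down) until work arrives
sinfo
```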

## CycleCloud UI Configuration

- Access the **CycleCloud UI** and navigate to the settings for the `<clustername>` cluster.
- Edit the cluster settings to configure the VM SKUs and networking options as needed.
- In the **Network Attached Storage** section, enter the NFS server IP address for the `/sched` and `/shared` mounts (a quick way to verify the exports follows the screenshot).
- On the **Advanced Settings** tab, choose the OS from the dropdown menu: either **Ubuntu 22.04** or **AlmaLinux 8**, based on the scheduler VM.
- Once all settings are configured, select **Save** and then **Start** the `<clustername>` cluster.

![NFS settings](../../images/slurm-cloud-burst/cyclecloud-ui-config.png)
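
Before starting the cluster, you can confirm that both shares are exported. A hedged check, assuming the NFS client utilities are installed and the setup script used the default `/sched` and `/shared` paths:

```bash
# List exports visible from the NFS server (the scheduler VM, unless you use an external server)
showmount -e <nfs server ip>
```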

## CycleCloud Autoscaler Integration on Slurm Scheduler

- Integrate Slurm with CycleCloud using the `cyclecloud-integrator.sh` script.
- Provide the CycleCloud details (username, password, and IP address) when prompted.

Output:

```bash
[root@masternode2 scripts]# sh cyclecloud-integrator.sh
Please enter the CycleCloud details to integrate with the Slurm scheduler

Enter Cluster Name: <clustername>
Enter CycleCloud Username: <username>
Enter CycleCloud Password: <password>
Enter CycleCloud IP (e.g., 10.220.x.xx): <ip address>
------------------------------------------------------------------------------------------------------------------------------

Summary of entered details:
Cluster Name: <clustername>
CycleCloud Username: <username>
CycleCloud URL: https://<ip address>

------------------------------------------------------------------------------------------------------------------------------
```
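
Once the integration completes, you can check that the scheduler can reach CycleCloud. This assumes the cyclecloud-slurm 3.x integration installed the `azslurm` CLI on the scheduler node:

```bash
# List the nodearrays and VM sizes the autoscaler can request from CycleCloud
azslurm buckets
```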

## User and Group Setup (Optional)

- Ensure consistent user and group IDs across all nodes.
- It's advisable to use a centralized user management system like LDAP to maintain consistent UIDs and GIDs across all nodes.
- In this example, we're using the `useradd_example.sh` script to create a test user `<username>` and a group for job submission. (The user `<username>` already exists in CycleCloud.)

```bash
cd cyclecloud-slurm/cloud_bursting/slurm-23.11.9-1/scheduler
sh useradd_example.sh
```
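
If you manage accounts by hand instead of running the script, the essential point is pinning the UID and GID so they resolve identically on every node. A minimal sketch (the ID value 11000 and group name `hpcusers` are illustrative, not prescribed by the repository):

```bash
# Create a group and user with fixed IDs on each node
groupadd -g 11000 hpcusers
useradd -m -u 11000 -g hpcusers <username>

# Verify the assignment
id <username>
```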

## Testing the Setup

- Log in as a test user (for example, `<username>`) on the scheduler node.
- Submit a test job to verify that the setup is functioning correctly.

```bash
su - <username>
srun hostname &
```

Output:

```bash
[root@masternode2 scripts]# su - <username>
Last login: Tue May 14 04:54:51 UTC 2024 on pts/0
[<username>@masternode2 ~]$ srun hostname &
[1] 43448
[<username>@masternode2 ~]$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
1 hpc hostname <username> CF 0:04 1 <clustername>-hpc-1
[<username>@masternode2 ~]$ <clustername>-hpc-1
```

![Node Creation](../../images/slurm-cloud-burst/cyclecloud-ui-new-node.png)

You should see the job run successfully, which indicates a successful integration with CycleCloud.
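
Beyond the quick `srun` test, a small batch job exercises the same burst path end to end. A hedged example; the partition name `hpc` is taken from the `squeue` output above, and the file name is illustrative:

```bash
# Write a minimal batch script as the test user
cat > hello.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --partition=hpc
#SBATCH --nodes=1
srun hostname
EOF

# Submit and watch CycleCloud power up a node to run it
sbatch hello.sbatch
squeue
```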

For further details and advanced configurations, see the scripts and documentation within this repository.
