Commit 6d649ac
committed: code example and output cleanup, acrolinx
1 parent: deeaf89

File tree

1 file changed: +15 -6 lines

articles/modeling-simulation-workbench/tutorial-install-slurm.md

Lines changed: 15 additions & 6 deletions
@@ -34,7 +34,7 @@ If you don’t have an Azure subscription, [create a free account](https://azure
## Sign in to the Azure portal and navigate to your workbench

-If you aren't already signed into the Azure portal, go to [https://portal.azure.com](https://portal.azure.com). Navigate to your workbench, then the chamber where you'll create your Slurm cluster.
+If you aren't already signed into the Azure portal, go to [portal.azure.com](https://portal.azure.com). Navigate to your workbench, then to the chamber where you want to create your Slurm cluster.

## Create a cluster for Slurm

@@ -45,7 +45,7 @@ Slurm requires one node to serve as the controller and a set of compute nodes wh
1. From the chamber overview page, select **Chamber VM** from the **Settings** menu, then either the **+ Create** button on the action menu along the top or the blue **Create chamber VM** button in the center of the page.

    :::image type="content" source="media/tutorial-slurm/create-chamber-vm.png" alt-text="Screenshot of chamber VM overview page with Chamber VM in Settings and the create options on the page highlighted by red outlines.":::

1. On the **Create chamber VM** page:
-    * Enter a **Name** for the VM. We recommend choosing a name that indicates it is the controller node.
+    * Enter a **Name** for the VM. We recommend choosing a name that indicates it's the controller node.
    * Select a VM size. For the controller, you can select the smallest VM available. The *D4s_v4* is currently the smallest.
    * Leave the **Chamber VM image type** and **Chamber VM count** as the default of *Semiconductor* and *1*.
    * Select **Review + create**.
@@ -100,7 +100,7 @@ Configuring Slurm requires an inventory of nodes. From the controller node:
10.163.4.9 wrkldvmslurm-nod034b970
```

-1. Create a file with just the worker nodes, one host per line and call it *slurm_worker.txt*. For the remaining steps of this tutorial, you'll use this list to configure the compute nodes from your controller. In some steps, the nodes need to be in a comma-delimited format. In those instances, we use a command-line shortcut to format the list without having to create a new file. To create *slurm_worker.txt*, remove the IP addresses in the first column, and the controller node which is listed first.
+1. Create a file with just the worker nodes, one host per line, and call it *slurm_worker.txt*. For the remaining steps of this tutorial, use this list to configure the compute nodes from your controller. In some steps, the nodes need to be in a comma-delimited format. In those instances, we use a command-line shortcut to format the list without having to create a new file. To create *slurm_worker.txt*, remove the IP addresses in the first column and the controller node, which is listed first.

### Gather technical specifications about the compute nodes
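As an aside, the filtering described in that step can be sketched with standard tools. This is only an illustration: the inventory file name *inventory.txt* and the sample hostnames are made up, and it assumes the `<IP> <hostname>` format shown above with the controller listed first.

```shell
# Hypothetical sketch: inventory.txt holds "<IP> <hostname>" pairs with
# the controller on the first line; keep only the worker hostnames.
printf '10.0.0.1 controller\n10.0.0.2 worker1\n10.0.0.3 worker2\n' > inventory.txt

# Drop the first line (the controller), then keep the second column
# (the hostname) of each remaining line.
tail -n +2 inventory.txt | awk '{print $2}' > slurm_worker.txt

cat slurm_worker.txt
```

You can also edit the file by hand; the tutorial does not require this exact command.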

@@ -172,7 +172,7 @@ mysql_secure_installation
The *mysql_secure_installation* script asks for more configuration.

-* The default database password isn't set. Hit **Enter** when asked for current password.
+* The default database password isn't set. Press <kbd>Enter</kbd> when asked for the current password.
* Enter *Y* when asked to set a root password. Create a new, secure root password for MariaDB, take note of it for later, then reenter it to confirm. You need this password when you configure the Slurm controller in the following step.
* Enter *Y* for the remaining questions for:
    * Reloading privileged tables
@@ -192,7 +192,12 @@ sudo /usr/sdw/slurm/sdwChamberSlurm.sh CONTROLLER <databaseSecret> <clusterNodes
For this example, we use the list of nodes we created in the previous steps and substitute the values collected during discovery. The `paste` command is used to reformat the list of worker nodes into the comma-delimited format without needing to create a new file.

```bash
-$ sudo /usr/sdw/slurm/sdwChamberSlurm.sh CONTROLLER <databasepassword> `paste -d, -s ./slurm_nodes.txt` 4 1 2 2 13593564
+sudo /usr/sdw/slurm/sdwChamberSlurm.sh CONTROLLER <databasepassword> `paste -d, -s ./slurm_nodes.txt` 4 1 2 2 13593564
+```
+
+The output should be similar to:
+
+```bash
Last metadata expiration check: 4:00:15 ago on Thu 03 Oct 2024 01:52:40 PM UTC.
Package bzip2-devel-1.0.6-26.el8.x86_64 is already installed.
Package gcc-8.5.0-18.2.el8_8.x86_64 is already installed.
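The `paste` shortcut embedded in that command can be tried in isolation. This sketch uses made-up hostnames in a hypothetical *nodes.txt*:

```shell
# Create a sample file with one hostname per line (names are made up).
printf 'node1\nnode2\nnode3\n' > nodes.txt

# -s joins all lines serially; -d, uses a comma as the delimiter.
paste -d, -s nodes.txt
# node1,node2,node3
```

Because the backtick substitution runs inline, the comma-delimited list is produced on the fly and no second file is needed.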
@@ -240,8 +245,12 @@ END
Using the same file of node hostnames that you used previously, execute the bash script you created on each node.

```bash
-$ for host in `cat ./slurm_nodes.txt`; do ssh $host sudo sh ~/node-munge.sh; done
+for host in `cat ./slurm_nodes.txt`; do ssh $host sudo sh ~/node-munge.sh; done
+```
+
+Your output should be similar to:
+
+```bash
Last metadata expiration check: 4:02:25 ago on Thu 03 Oct 2024 09:35:58 PM UTC.
Dependencies resolved.
================================================================================
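The loop above follows a general fan-out pattern: read one hostname per line and run a command against each. This sketch substitutes `echo` for the `ssh` call so it can be tried without a cluster; the hostnames are made up:

```shell
# Sample host list (names are made up for this illustration).
printf 'worker1\nworker2\n' > slurm_nodes.txt

# Same loop shape as above, with echo standing in for
# "ssh $host sudo sh ~/node-munge.sh".
for host in `cat ./slurm_nodes.txt`; do
    echo "run node-munge.sh on $host"
done
```

Note that the real loop runs the commands serially and relies on passwordless SSH between the controller and the compute nodes.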
