
Commit d2098dd

Merge pull request #302084 from tfitzmac/0701edit1
copy edit
2 parents: 1e29e4f + ce14773

10 files changed: +262 -267 lines

articles/cyclecloud/how-to/flex-scalesets.md

Lines changed: 12 additions & 11 deletions
@@ -2,39 +2,40 @@
title: Using Flex ScaleSets
description: Create VMs using VMSS Flex
author: dougclayton
-ms.date: 11/02/2022
+ms.date: 07/01/2025
ms.author: doclayto
monikerRange: '>= cyclecloud-8'
---

# Using Flex ScaleSets

-As of 8.3.0, CycleCloud can use [Flex orchestration](https://go.microsoft.com/fwlink/?LinkId=2156742) for scale sets.
-This works differently than the automatic usage of Uniform scale sets that is standard in CycleCloud.
+As of version 8.3.0, CycleCloud can use [Flex orchestration](https://go.microsoft.com/fwlink/?LinkId=2156742) for scale sets.
+This orchestration works differently than the automatic usage of Uniform scale sets that is standard in CycleCloud.
In this mode, you create a Flex scale set outside of CycleCloud, and you specify which nodes should use it.
-CycleCloud creates and deletes VMs in that scale set. This works for both head nodes and execute nodearrays.
+CycleCloud creates and deletes VMs in that scale set. This setup works for both head nodes and execute node arrays.

-To use Flex orchestration, you must use a CycleCloud credential that is locked to a given resource group (which must be created).
-This is because VMs in a Flex scale set must be in the same resource group as the scale set.
-You can use the az CLI to create the resource group, if you don't have one to use already:
+To use Flex orchestration, you must use a CycleCloud credential that is locked to a given resource group (which you must create).
+This requirement exists because VMs in a Flex scale set must be in the same resource group as the scale set.
+You can use the az CLI to create the resource group if you don't already have one:

```azurecli-interactive
az group create --location REGIONNAME --resource-group RESOURCEGROUP
```

-The scaleset must be created in Flex orchestration mode, and any VM settings on it (e.g., VM size or image) are ignored.
-Because of this, it is easiest to create it through the az CLI:
+You must create the scale set in Flex orchestration mode. The creation process ignores any VM settings on the scale set, such as the VM size or image.
+
+Because of this limitation, it's easiest to create the scale set through Azure CLI:

```azurecli-interactive
az vmss create --orchestration-mode Flexible --resource-group RESOURCEGROUP --name SCALESET --platform-fault-domain-count 1
```

-Finally, specify the fully qualified id for this scaleset on the node or nodearray that should use it on the cluster template:
+Finally, specify the fully qualified ID for this scale set on the node or node array that should use it in the cluster template:

```ini
[nodearray execute]
FlexScaleSetId = /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/RESOURCEGROUP/providers/Microsoft.Compute/virtualMachineScaleSets/SCALESET
```

> [!NOTE]
-> Scale sets have limitations on size (currently 1000 VMs). To scale larger than that, you must create multiple scale sets and assign them to different nodearrays.
+> Scale sets have limitations on size (currently 1,000 VMs). To scale larger than that size, you must create multiple scale sets and assign them to different node arrays.
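
The note above caps each scale set at roughly 1,000 VMs and points to multiple scale sets as the workaround. As a minimal sketch of what that could look like in a cluster template (the node array names and the SCALESET-A/SCALESET-B scale sets are hypothetical placeholders, not part of this commit), each node array simply points at its own pre-created Flex scale set:

```ini
[nodearray execute-a]
FlexScaleSetId = /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/RESOURCEGROUP/providers/Microsoft.Compute/virtualMachineScaleSets/SCALESET-A

[nodearray execute-b]
FlexScaleSetId = /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/RESOURCEGROUP/providers/Microsoft.Compute/virtualMachineScaleSets/SCALESET-B
```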

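Whether you use one Flex scale set or several, you don't have to assemble the fully qualified FlexScaleSetId value by hand. A small az CLI sketch, reusing the RESOURCEGROUP and SCALESET placeholders from the commands above:

```azurecli-interactive
az vmss show --resource-group RESOURCEGROUP --name SCALESET --query id --output tsv
```

The command prints the /subscriptions/.../virtualMachineScaleSets/SCALESET resource ID, which is the value the cluster template expects.
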
articles/cyclecloud/how-to/hb-hc-best-practices.md

Lines changed: 11 additions & 11 deletions
@@ -2,31 +2,31 @@
title: HB/HC Cluster Best Practices
description: Best practices for using Azure CycleCloud with HB and HC series Virtual Machines.
author: anhoward
-ms.date: 06/11/2019
+ms.date: 07/01/2025
ms.author: anhoward
---

-# Best Practices for using HB and HC VMs
+# Best practices for using HB and HC VMs

## Overview

-The [H-series virtual machines](/azure/virtual-machines/windows/sizes-hpc) (VMs) are the latest HPC offerings on Azure. HB-series VMs offer 60-core AMD EPYC processors, optimized for running applications with high memory-bandwidth requirements, such as explicit finite element analysis, fluid dynamics, and weather modeling. The HC-series VMs have 44-core Intel Xeon Skylake processors and are optimized for applications requiring intensive CPU calculations, like molecular dynamics and implicit finite element analysis. HB and HC VMs feature 100 Gb/s EDR InfiniBand and support the latest MPI types and versions. The [Scaling HPC Applications Guide](/azure/virtual-machines/workloads/hpc/compiling-scaling-applications) has more information on how to scale HPC applications on HB and HC VMs.
+The [H-series virtual machines](/azure/virtual-machines/windows/sizes-hpc) (VMs) are the latest HPC offerings on Azure. HB-series VMs offer 60-core AMD EPYC processors and are optimized for running applications with high memory-bandwidth requirements, such as explicit finite element analysis, fluid dynamics, and weather modeling. The HC-series VMs have 44-core Intel Xeon Skylake processors and are optimized for applications requiring intensive CPU calculations, like molecular dynamics and implicit finite element analysis. HB and HC VMs feature 100-Gb/s EDR InfiniBand and support the latest MPI types and versions. For more information on how to scale HPC applications on HB and HC VMs, see the [Scaling HPC Applications Guide](/azure/virtual-machines/workloads/hpc/compiling-scaling-applications).

-Azure CycleCloud supports the new H-series VMs out of the box, but for the best experience and performance, follow the guidelines and best practices on this page.
+Azure CycleCloud supports the new H-series VMs, but for the best experience and performance, follow the guidelines and best practices in this article.

-## CentOS 7.6 HPC Marketplace Image
+## CentOS 7.6 HPC Marketplace image

-The CentOS 7.6 HPC Marketplace image contains all of the drivers to enable the InfiniBand interface as well as pre-compiled versions of all of the common MPI variants installed in */opt*. For details on what exactly the image has to offer see [this blog post](https://techcommunity.microsoft.com/t5/Azure-Compute/CentOS-HPC-VM-Image-for-SR-IOV-enabled-Azure-HPC-VMs/ba-p/665557).
+The CentOS 7.6 HPC Marketplace image contains all of the drivers to enable the InfiniBand interface as well as precompiled versions of all of the common MPI variants installed in */opt*. For details on what the image offers, see [this blog post](https://techcommunity.microsoft.com/t5/Azure-Compute/CentOS-HPC-VM-Image-for-SR-IOV-enabled-Azure-HPC-VMs/ba-p/665557).

To use the CentOS 7.6 HPC image when creating your cluster, check the **Custom Image** box on the **Advanced Settings** parameter and enter the value `OpenLogic:CentOS-HPC:7.6:latest`.

![CentOS HPC Image](~/articles/cyclecloud/images/hc-marketplace-image.png)

-In order to support the older H16r VM series and keep cluster head nodes locked to the same version of CentOS, the default "Cycle CentOS 7" image in the Base OS dropdown deploys CentOS 7.4. While this is fine for most VM series, HB/HC VMs require CentOS 7.6 or newer and a different Mellanox driver.
+To support the older H16r VM series and keep cluster head nodes locked to the same version of CentOS, the default "Cycle CentOS 7" image in the Base OS dropdown deploys CentOS 7.4. While this version works for most VM series, HB and HC VMs require CentOS 7.6 or newer and a different Mellanox driver.

-## Disable SElinux in CycleCloud < 7.7.4
+## Disable SElinux in CycleCloud versions earlier than 7.7.4

-By default, SElinux only considers */root* and */home* to be valid paths for home directories. Any users with home directories outside of these paths cause SElinux to block SSH from using any SSH keypairs in the user's home directory. In CycleCloud clusters, user home directories are created in */shared/home*. While CycleCloud versions newer than 7.7.4 automatically set the */shared/home* path as a valid SElinux homedir context, older versions don't support this. In order to make sure SSH works properly for users on the cluster, you need to disable SElinux in the cluster template:
+By default, SElinux only considers `/root` and `/home` to be valid paths for home directories. If users have home directories outside of these paths, SElinux blocks SSH from using any SSH keypairs in the user's home directory. In CycleCloud clusters, you create user home directories in `/shared/home`. While CycleCloud versions newer than 7.7.4 automatically set the `/shared/home` path as a valid SElinux homedir context, older versions don't support this feature. To make sure SSH works properly for users on the cluster, disable SElinux in the cluster template:
```ini
[[node defaults]]
[[[configuration]]]
@@ -35,7 +35,7 @@ By default, SElinux only considers */root* and */home* to be valid paths for hom

## Running MPI jobs with Slurm

-MPI jobs running on HB/HC VMs need to run in the same VM Scaleset (VMSS). To ensure proper autoscale placement of VMs for MPI jobs running with Slurm, make sure to set the following attribute in your cluster template:
+MPI jobs running on HB or HC VMs need to run in the same virtual machine scale set. To ensure proper autoscale placement of VMs for MPI jobs running with Slurm, set the following attribute in your cluster template:

```ini
[[nodearray execute]]
@@ -46,7 +46,7 @@ Azure.Overprovision = true

## Getting pkeys for use with OpenMPI and MPICH

-Some MPI variants require you to specify the InfiniBand PKEY when running the job. The following Bash function can be used to determine the PKEY:
+Some MPI variants require you to specify the InfiniBand PKEY when running the job. Use the following Bash function to determine the PKEY:

```bash
get_ib_pkey()
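
The hunk above cuts off right after the `get_ib_pkey()` declaration, so the article's function body isn't visible in this diff. Purely as a hedged sketch of the idea (the mlx5_0 device name, port number, and key indexes are assumptions, not taken from the article), such a helper reads the partition keys the InfiniBand driver exposes through sysfs and exports the full-membership key:

```bash
get_ib_pkey()
{
    # Assumption: the InfiniBand device is exposed as mlx5_0, port 1.
    local key0 key1
    key0=$(cat /sys/class/infiniband/mlx5_0/ports/1/pkeys/0)
    key1=$(cat /sys/class/infiniband/mlx5_0/ports/1/pkeys/1)

    # The full-membership PKEY has its high bit set, so it is the larger value.
    if [ $(( key0 )) -gt $(( key1 )) ]; then
        export IB_PKEY=$key0
    else
        export IB_PKEY=$key1
    fi
}
```

The exported value can then be passed to whichever PKEY option your MPI variant expects; see the full article for the exact OpenMPI and MPICH invocations.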
