Commit 06e0f0e

Update howto-cluster-runtime-upgrade.md
1 parent 638549f commit 06e0f0e


articles/operator-nexus/howto-cluster-runtime-upgrade.md

Lines changed: 26 additions & 26 deletions
@@ -1,6 +1,6 @@
 ---
 title: "Azure Operator Nexus: Runtime upgrade"
-description: Learn to execute a cluster runtime upgrade for Operator Nexus
+description: Learn to execute a Cluster runtime upgrade for Operator Nexus
 author: bartpinto
 ms.author: bpinto
 ms.service: azure-operator-nexus
@@ -10,7 +10,7 @@ ms.date: 02/25/2025
 # ms.custom: template-include
 ---
 
-# Upgrade cluster runtime from Azure CLI
+# Upgrade Cluster runtime from Azure CLI
 
 This how-to guide explains the steps for installing the required Azure CLI and extensions required to interact with Operator Nexus.
 
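If the `az networkcloud` command group isn't available yet, a minimal sketch of installing it is shown below. The extension name `networkcloud` is an assumption based on the command group used throughout this article; confirm it against the Operator Nexus CLI setup guidance.

```azurecli
# Sketch: install (or upgrade) the Azure CLI extension that provides the `az networkcloud` commands.
# The extension name `networkcloud` is assumed from the command group used in this article.
az extension add --name networkcloud --upgrade
```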
@@ -23,23 +23,23 @@ This how-to guide explains the steps for installing the required Azure CLI and e
 - Subscription ID (`SUBSCRIPTION`)
 - Cluster name (`CLUSTER`)
 - Resource group (`CLUSTER_RG`)
-1. Target cluster must be healthy in a running state, with all control plane nodes healthy.
+1. Target Cluster must be healthy in a running state, with all control plane nodes healthy.
 
 ## Checking current runtime version
-Verify current cluster runtime version before upgrade:
-[How to check current cluster runtime version.](./howto-check-runtime-version.md#check-current-cluster-runtime-version)
+Verify current Cluster runtime version before upgrade:
+[How to check current Cluster runtime version.](./howto-check-runtime-version.md#check-current-cluster-runtime-version)
 
 ## Finding available runtime versions
 
 ### Via Azure portal
 
-To find available upgradeable runtime versions, navigate to the target cluster in the Azure portal. In the cluster's overview pane, navigate to the ***Available upgrade versions*** tab.
+To find available upgradeable runtime versions, navigate to the target Cluster in the Azure portal. In the Cluster's overview pane, navigate to the ***Available upgrade versions*** tab.
 
-:::image type="content" source="./media/runtime-upgrade-upgradeable-runtime-versions.png" alt-text="Screenshot of Azure portal showing correct tab to identify available cluster upgrades." lightbox="./media/runtime-upgrade-upgradeable-runtime-versions.png":::
+:::image type="content" source="./media/runtime-upgrade-upgradeable-runtime-versions.png" alt-text="Screenshot of Azure portal showing correct tab to identify available Cluster upgrades." lightbox="./media/runtime-upgrade-upgradeable-runtime-versions.png":::
 
-From the **available upgrade versions** tab, we're able to see the different cluster versions that are currently available to upgrade. The operator can select from the listed the target runtime versions. Once selected, proceed to upgrade the cluster.
+From the **available upgrade versions** tab, we're able to see the different Cluster versions that are currently available to upgrade. The operator can select from the listed the target runtime versions. Once selected, proceed to upgrade the Cluster.
 
-:::image type="content" source="./media/runtime-upgrade-runtime-version.png" lightbox="./media/runtime-upgrade-runtime-version.png" alt-text="Screenshot of Azure portal showing available cluster upgrades.":::
+:::image type="content" source="./media/runtime-upgrade-runtime-version.png" lightbox="./media/runtime-upgrade-runtime-version.png" alt-text="Screenshot of Azure portal showing available Cluster upgrades.":::
 
 ### Via Azure CLI
 
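A minimal sketch of pulling the upgradeable versions from the CLI, assuming the list is exposed through the `availableUpgradeVersions` property of `az networkcloud cluster show` (placeholders follow the prerequisite values):

```azurecli
# Sketch: list the runtime versions the Cluster can currently upgrade to.
az networkcloud cluster show --name "<CLUSTER>" \
  --resource-group "<CLUSTER_RG>" \
  --subscription "<SUBSCRIPTION>" \
  --query "availableUpgradeVersions" \
  --output json
```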
@@ -66,9 +66,9 @@ In the output, you can find the `availableUpgradeVersions` property and look at
 ],
 ```
 
-If there are no available cluster upgrades, the list is empty.
+If there are no available Cluster upgrades, the list is empty.
 
-## Configure compute threshold parameters for runtime upgrade using cluster updateStrategy
+## Configure compute threshold parameters for runtime upgrade using Cluster `updateStrategy`
 
 The following Azure CLI command is used to configure the compute threshold parameters for a runtime upgrade:
 
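As a sketch of the shape such a call takes, assuming `az networkcloud cluster update` accepts the strategy settings as space-separated key=value pairs under a single `--update-strategy` argument (all values are placeholders):

```azurecli
# Sketch: configure the Cluster updateStrategy thresholds ahead of a runtime upgrade.
# The key=value settings mirror the strategy-type, threshold-type, threshold-value,
# max-unavailable, and wait-time-minutes parameters described in this section.
az networkcloud cluster update --name "<CLUSTER>" \
  --resource-group "<CLUSTER_RG>" \
  --subscription "<SUBSCRIPTION>" \
  --update-strategy strategy-type="Rack" \
  threshold-type="PercentSuccess" \
  threshold-value="<thresholdValue>" \
  max-unavailable=<maxNodesOffline> \
  wait-time-minutes="<waitTimeBetweenRacks>"
```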
@@ -82,15 +82,15 @@ wait-time-minutes="<waitTimeBetweenRacks>" \
 ```
 
 Required parameters:
-- strategy-type: Defines the update strategy. Setting used are `Rack` (Rack by Rack) OR `PauseAfterRack` (Pause for user before each Rack starts). The default value is `Rack`. To perform a cluster runtime upgrade using the `PauseAfterRack` strategy, follow the steps outlined in [Upgrade Cluster Runtime with PauseAfterRack Strategy](howto-cluster-runtime-upgrade-with-pauseafterrack-strategy.md).
+- strategy-type: Defines the update strategy. Setting used are `Rack` (Rack-by-Rack) OR `PauseAfterRack` (Pause for user before each Rack starts). The default value is `Rack`. To perform a Cluster runtime upgrade using the `PauseAfterRack` strategy, follow the steps outlined in [Upgrade Cluster Runtime with PauseAfterRack Strategy](howto-cluster-runtime-upgrade-with-pauseafterrack-strategy.md).
 - threshold-type: Determines how the threshold should be evaluated, applied in the units defined by the strategy. Settings used are `PercentSuccess` OR `CountSuccess`. The default value is `PercentSuccess`.
 - threshold-value: The numeric threshold value used to evaluate an update. The default value is `80`.
 
 Optional parameters:
 - max-unavailable: The maximum number of worker nodes that can be offline, that is, upgraded rack at a time. The default value is `32767`.
 - wait-time-minutes: The delay or waiting period before updating a rack. The default value is `15`.
 
-The following example is for a customer using Rack by Rack strategy with a Percent Success of 60% and a 1-minute pause.
+The following example is for a customer using Rack-by-Rack strategy with a Percent Success of 60% and a 1-minute pause.
 
 ```azurecli
 az networkcloud cluster update --name "<CLUSTER>" \
@@ -115,9 +115,9 @@ az networkcloud cluster show --name "<CLUSTER>" \
 "waitTimeMinutes": 1
 ```
 
-In this example, if less than 60% of the compute nodes being provisioned in a rack fail to provision (on a Rack by Rack basis), the cluster upgrade waits indefinitely until the condition is met. If 60% or more of the compute nodes are successfully provisioned, cluster deployment moves on to the next rack of compute nodes. If there are too many failures in the rack, the hardware must be repaired before the upgrade can continue.
+In this example, if less than 60% of the compute nodes being provisioned in a rack fail to provision (on a Rack-by-Rack basis), the Cluster upgrade waits indefinitely until the condition is met. If 60% or more of the compute nodes are successfully provisioned, Cluster deployment moves on to the next rack of compute nodes. If there are too many failures in the rack, the hardware must be repaired before the upgrade can continue.
 
-The following example is for a customer using Rack by Rack strategy with a threshold type CountSuccess of 10 nodes per rack and a 1-minute pause.
+The following example is for a customer using Rack-by-Rack strategy with a threshold type `CountSuccess` of 10 nodes per rack and a 1-minute pause.
 
 ```azurecli
 az networkcloud cluster update --name "<CLUSTER>" \
@@ -142,13 +142,13 @@ az networkcloud cluster show --name "<CLUSTER>" \
 "waitTimeMinutes": 1
 ```
 
-In this example, if less than 10 compute nodes being provisioned in a rack fail to provision (on a Rack by Rack basis), the cluster upgrade will wait indefinitely until the condition is met. If 10 or more of the compute nodes are successfully provisioned, cluster deployment moves on to the next rack of compute nodes. If there are too many failures in the rack, the hardware must be repaired before the upgrade can continue.
+In this example, if less than 10 compute nodes being provisioned in a rack fail to provision (on a Rack-by-Rack basis), the Cluster upgrade waits indefinitely until the condition is met. If 10 or more of the compute nodes are successfully provisioned, Cluster deployment moves on to the next rack of compute nodes. If there are too many failures in the rack, the hardware must be repaired before the upgrade can continue.
 
 > [!NOTE]
-> ***`update-strategy` cannot be changed after the cluster runtime upgrade has started.***
+> ***`update-strategy` cannot be changed after the Cluster runtime upgrade has started.***
 > When a threshold value below 100% is set, it’s possible that any unhealthy nodes might not be upgraded, yet the "Cluster" status could still indicate that upgrade was successful. For troubleshooting issues with bare metal machines, refer to [Troubleshoot Azure Operator Nexus server problems](troubleshoot-reboot-reimage-replace.md)
 
-## Upgrade cluster runtime using CLI
+## Upgrade Cluster runtime using CLI
 
 To perform an upgrade of the runtime, use the following Azure CLI command:
 
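A sketch of the full invocation, assuming the desired version string (taken from `availableUpgradeVersions`) is passed through a `--target-cluster-version` argument; treat the argument name as an assumption and check `az networkcloud cluster update-version --help` for the exact spelling:

```azurecli
# Sketch: start the runtime upgrade toward a version listed under availableUpgradeVersions.
az networkcloud cluster update-version --cluster-name "<CLUSTER>" \
  --resource-group "<CLUSTER_RG>" \
  --subscription "<SUBSCRIPTION>" \
  --target-cluster-version "<TARGET_RUNTIME_VERSION>"
```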
@@ -159,21 +159,21 @@ az networkcloud cluster update-version --cluster-name "<CLUSTER>" \
 --subscription "<SUBSCRIPTION>"
 ```
 
-The runtime upgrade is a long process. The upgrade first upgrades the management nodes and then sequentially Rack by Rack for the worker nodes.
+The runtime upgrade is a long process. The upgrade first upgrades the management nodes and then sequentially Rack-by-Rack for the worker nodes.
 The upgrade is considered to be finished when 80% of worker nodes per rack and 100% of management nodes are successfully upgraded.
 Workloads might be impacted while the worker nodes in a rack are in the process of being upgraded, however workloads in all other racks aren't impacted. Consideration of workload placement in light of this implementation design is encouraged.
 
 Upgrading all the nodes takes multiple hours, depending upon how many racks exist for the Cluster.
 Due to the length of the upgrade process, the Cluster's detail status should be checked periodically for the current state of the upgrade.
 To check on the status of the upgrade observe the detailed status of the Cluster. This check can be done via the portal or az CLI.
 
-To view the upgrade status through the Azure portal, navigate to the targeted cluster resource. In the cluster's *Overview* screen, the detailed status is provided along with a detailed status message.
+To view the upgrade status through the Azure portal, navigate to the targeted Cluster resource. In the Cluster's *Overview* screen, the detailed status is provided along with a detailed status message.
 
 The Cluster upgrade is in-progress when detailedStatus is set to `Updating` and detailedStatusMessage shows the progress of upgrade. Some examples of upgrade progress shown in detailedStatusMessage are `Waiting for control plane upgrade to complete...`, `Waiting for nodepool "<rack-id>" to finish upgrading...`, etc.
 
 The Cluster upgrade is complete when detailedStatus is set to `Running` and detailedStatusMessage shows message `Cluster is up and running`
 
-:::image type="content" source="./media/runtime-upgrade-cluster-detail-status.png" lightbox="./media/runtime-upgrade-cluster-detail-status.png" alt-text="Screenshot of Azure portal showing in progress cluster upgrade.":::
+:::image type="content" source="./media/runtime-upgrade-cluster-detail-status.png" lightbox="./media/runtime-upgrade-cluster-detail-status.png" alt-text="Screenshot of Azure portal showing in progress Cluster upgrade.":::
 
 To view the upgrade status through the Azure CLI, use `az networkcloud cluster show`.
 
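For a status-only view, the same command can be narrowed with a JMESPath query; this is a sketch that assumes `detailedStatus` and `detailedStatusMessage` are surfaced as top-level fields in the CLI output:

```azurecli
# Sketch: show only the fields relevant to upgrade progress.
az networkcloud cluster show --cluster-name "<CLUSTER>" \
  --resource-group "<CLUSTER_RG>" \
  --subscription "<SUBSCRIPTION>" \
  --query "{detailedStatus: detailedStatus, detailedStatusMessage: detailedStatusMessage}" \
  --output table
```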
@@ -183,7 +183,7 @@ az networkcloud cluster show --cluster-name "<CLUSTER>" \
 --subscription "<SUBSCRIPTION>"
 ```
 
-The output should be the target cluster's information and the cluster's detailed status and detail status message should be present.
+The output should be the target Cluster's information and the Cluster's detailed status and detail status message should be present.
 For more detailed insights on the upgrade progress, the individual node in each Rack can be checked for status. An example of checking the status is provided in the reference section under [BareMetal Machine roles](./reference-near-edge-baremetal-machine-roles.md).
 
 
@@ -192,7 +192,7 @@ For more detailed insights on the upgrade progress, the individual node in each
 ### Identifying Cluster Upgrade Stalled/Stuck
 
 During a runtime upgrade, it's possible that the upgrade fails to move forward but the detail status reflects that the upgrade is still ongoing. **Because the runtime upgrade can take a very long time to successfully finish, there's no set timeout length currently specified**.
-Hence, it's advisable to also check periodically on your cluster's detail status and logs to determine if your upgrade is indefinitely attempting to upgrade.
+Hence, it's advisable to also check periodically on your Cluster's detail status and logs to determine if your upgrade is indefinitely attempting to upgrade.
 
 We can identify an `indefinitely attempting to upgrade` situation by looking at the Cluster's logs, detailed message, and detailed status message. If a timeout occurs, we would observe that the Cluster is continuously reconciling over the same indefinitely and not moving forward. From here, we recommend checking Cluster logs or configured LAW, to see if there's a failure, or a specific upgrade that is causing the lack of progress.
 
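One simple way to keep watch during a long upgrade is to poll the same status fields on an interval. The loop below is only a sketch: the 10-minute interval is an arbitrary choice, and the field names assume the same top-level `detailedStatus`/`detailedStatusMessage` output used above.

```azurecli
# Sketch: poll the Cluster status periodically to spot an upgrade that stops making progress.
while true; do
  date
  az networkcloud cluster show --cluster-name "<CLUSTER>" \
    --resource-group "<CLUSTER_RG>" \
    --subscription "<SUBSCRIPTION>" \
    --query "[detailedStatus, detailedStatusMessage]" \
    --output tsv
  sleep 600   # 10 minutes between checks
done
```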
@@ -202,7 +202,7 @@ A guide for identifying issues with provisioning worker nodes is provided at [Tr
 
 ### Hardware Failure doesn't require Upgrade re-execution
 
-If a hardware failure during an upgrade occurs, the runtime upgrade continues as long as the set thresholds are met for the compute and management/control nodes. Once the machine is fixed or replaced, it gets provisioned with the current platform runtime's OS, which contains the targeted version of the runtime. If a rack was updated before a failure, then the upgraded runtime version would be used when the nodes are reprovisioned. If the rack's spec wasn't updated to the upgraded runtime version before the hardware failure, the machine would be provisioned with the previous runtime version when it's repaired. It is upgraded along with the rack when the rack starts its upgrade.
-### After a runtime upgrade, the cluster shows "Failed" Provisioning State
+If a hardware failure during an upgrade occurs, the runtime upgrade continues as long as the set thresholds are met for the compute and management/control nodes. Once the machine is fixed or replaced, it gets provisioned with the current platform runtime's OS, which contains the targeted version of the runtime. If a rack was updated before a failure, then the upgraded runtime version would be used when the nodes are reprovisioned. If the rack's spec wasn't updated to the upgraded runtime version before the hardware failure, the machine will provision with the previous runtime version when the hardware is repaired. The machine is upgraded along with the rack when the rack starts its upgrade.
+### After a runtime upgrade, the Cluster shows "Failed" Provisioning State
 
-During a runtime upgrade, the cluster enters a state of `Upgrading`. If the runtime upgrade fails, the cluster goes into a `Failed` provisioning state. Infrastructure components (e.g the Storage Appliance) may cause failures during the upgrade. In some scenarios, it may be necessary to diagnose the failure with Microsoft support.
+During a runtime upgrade, the Cluster enters a state of `Upgrading`. If the runtime upgrade fails, the Cluster goes into a `Failed` provisioning state. Infrastructure components (e.g the Storage Appliance) may cause failures during the upgrade. In some scenarios, it may be necessary to diagnose the failure with Microsoft support.
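To confirm where the Cluster landed after a failed attempt, a provisioning-state check along these lines can help (a sketch; `provisioningState` is assumed to be a top-level field in the `az networkcloud cluster show` output):

```azurecli
# Sketch: read the Cluster provisioning state after an upgrade attempt.
az networkcloud cluster show --cluster-name "<CLUSTER>" \
  --resource-group "<CLUSTER_RG>" \
  --subscription "<SUBSCRIPTION>" \
  --query "provisioningState" \
  --output tsv
```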
