**articles/operator-nexus/howto-cluster-runtime-upgrade-with-pauserack-strategy.md** (14 additions & 15 deletions)
@@ -8,25 +8,21 @@ ms.topic: how-to
ms.date: 08/16/2024
# ms.custom: template-include
---

## Upgrading cluster runtime with a pause rack strategy

This how-to guide explains the steps to execute a cluster runtime upgrade with the pause rack strategy. Executing a cluster runtime upgrade with the "PauseRack" strategy updates a single rack in the cluster and then pauses to wait for confirmation before moving to the next rack. All existing thresholds are still honored with the pause rack strategy.
## Prerequisites

> [!NOTE]
> Upgrades with the PauseRack strategy are available starting with API version 2024-06-01-preview.

Follow the steps mentioned in the prerequisites section of [Upgrading cluster runtime from Azure CLI](./howto-cluster-runtime-upgrade.md).
## Procedure

1. Enable the Rack Pause upgrade strategy on a Nexus cluster.

Example:
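The example command itself is collapsed in the diff at this point. A minimal sketch of what it likely looks like, assuming the `az networkcloud cluster update` syntax with the `--update-strategy` keys named elsewhere in this change set; the `strategy-type` value `PauseAfterRack` and the variable names are assumptions, and threshold values should be chosen as desired:

```azurecli
# Sketch only: strategy-type value and variable names are assumptions
az networkcloud cluster update \
  --cluster-name "$CLUSTER_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --update-strategy strategy-type="PauseAfterRack" threshold-type="PercentSuccess" \
    threshold-value=100
```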
@@ -37,19 +33,22 @@ Follow the steps mentioned in the prerequisites section of [Upgrading cluster runtime from Azure CLI](./howto-cluster-runtime-upgrade.md)
2. Confirm that the cluster resource JSON in the JSON View reflects the rack pause upgrade strategy.

```azurecli
az networkcloud cluster show --cluster-name "clusterName" --resource-group "resourceGroupName"
```
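To check just the strategy without reading the full JSON, the standard Azure CLI JMESPath filter can be applied; a small convenience sketch, assuming the `updateStrategy` property name shown in the other article in this change set:

```azurecli
# Show only the update strategy portion of the cluster resource
az networkcloud cluster show \
  --cluster-name "clusterName" \
  --resource-group "resourceGroupName" \
  --query "updateStrategy"
```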
3. Trigger the runtime bundle upgrade as usual from the Azure portal or CLI. For reference, see [Upgrading cluster runtime from Azure CLI](./howto-cluster-runtime-upgrade.md).
4. Once Rack 1 has completed, the runtime upgrade pauses, awaiting user action to resume the runtime upgrade for Rack 2.

> [!NOTE]
> This message is available in logs for programmatic access. For more details, see [List of logs available for streaming in Azure Operator Nexus](list-logs-available.md).

5. To resume the runtime upgrade, execute the following `az networkcloud` CLI command to trigger the continue update version action.
```shell
az networkcloud cluster continue-update-version \
  --resource-group=$RESOURCE_GROUP \
  --cluster-name=$CLUSTER_NAME
```
6. Continue repeating step 5 for each rack until all racks have been upgraded to the latest runtime bundle.
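Between racks, the pause is surfaced on the cluster resource; a sketch for watching it from the CLI, assuming a `clusterDetailedStatusMessage` property on the cluster resource (the property name is an assumption based on the cluster's detailed status fields):

```azurecli
# Poll the cluster's detailed status to see when the upgrade pauses after a rack
az networkcloud cluster show \
  --cluster-name "$CLUSTER_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --query "clusterDetailedStatusMessage" --output tsv
```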
**articles/operator-nexus/howto-cluster-runtime-upgrade.md** (11 additions & 8 deletions)
@@ -94,6 +94,7 @@ The output should be the target cluster's information and the cluster's detailed
For more detailed insights on the upgrade progress, the individual BMMs in each rack can be checked for status. An example is provided in the reference section under [BareMetal Machine roles](./reference-near-edge-baremetal-machine-roles.md).

## Configure compute threshold parameters for runtime upgrade using cluster updateStrategy

The following Azure CLI command is used to configure the compute threshold parameters for a runtime upgrade:
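The command body is not expanded in the diff here. A sketch consistent with the optional arguments listed below and the resulting `updateStrategy` JSON; the argument names follow the `wait-time-minutes` spelling used in this article, but the exact set is an assumption:

```azurecli
# Sketch only: values mirror the updateStrategy JSON shown below
az networkcloud cluster update \
  --cluster-name "$CLUSTER_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --update-strategy strategy-type="Rack" threshold-type="PercentSuccess" \
    threshold-value=70 max-unavailable=16 wait-time-minutes=15
```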
@@ -110,25 +111,28 @@ Optional arguments:
- wait-time-minutes: The delay or waiting period before updating a rack. The default value is 15.

Upon successful execution of the command, the updateStrategy values specified will be applied to the cluster:

```
"updateStrategy": {
    "maxUnavailable": 16,
    "strategyType": "Rack",
    "thresholdType": "PercentSuccess",
    "thresholdValue": 70,
    "waitTimeMinutes": 15,
},
```
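As a worked example of how these values interact (the rounding behavior is an assumption): with `thresholdType` `PercentSuccess` and `thresholdValue` 70, a rack containing 16 compute nodes needs at least 12 of them to upgrade successfully (70% of 16 is 11.2, rounded up) before the upgrade proceeds; `maxUnavailable` 16 lets all 16 machines in the rack update in parallel, and `waitTimeMinutes` 15 adds a 15-minute delay before the next rack is updated.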
> [!NOTE]
> When a threshold value below 100% is set, it's possible that any unhealthy nodes might not be upgraded, yet the "Cluster" status could still indicate that the upgrade was successful. For troubleshooting issues with bare metal machines, refer to [Troubleshoot Azure Operator Nexus server problems](troubleshoot-reboot-reimage-replace.md).
## Upgrade with PauseRack Strategy

Starting with API version 2024-06-01-preview, runtime upgrades can be triggered using a "PauseRack" strategy. When you execute a cluster runtime upgrade with the "PauseRack" strategy, it updates one rack at a time in the cluster and then pauses, awaiting confirmation before proceeding to the next rack. All existing thresholds continue to be respected with the "PauseRack" strategy. To carry out a cluster runtime upgrade using the "PauseRack" strategy, follow the steps outlined in [Upgrading cluster runtime with a pause rack strategy](howto-cluster-runtime-upgrade-with-pauserack-strategy.md).
## Frequently Asked Questions
@@ -152,16 +156,15 @@ During a runtime upgrade, the cluster enters a state of `Upgrading`. In the even
### Impact on Nexus Kubernetes tenant workloads during cluster runtime upgrade

During a runtime upgrade, impacted Nexus Kubernetes cluster nodes are cordoned and drained before the Bare Metal Hosts (BMH) are upgraded. Cordoning the cluster node prevents new pods from being scheduled on it, and draining the cluster node gives pods that are running tenant workloads a chance to shift to another available cluster node, which helps to reduce the impact on services. The draining mechanism's effectiveness is contingent on the available capacity within the Nexus Kubernetes cluster. If the cluster is nearing full capacity and lacks space for the pods to relocate, they transition into a Pending state following the draining process.
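For context, the cordon and drain the platform performs corresponds to the standard kubectl operations; an illustrative sketch of the equivalent manual commands, where the node name is a placeholder and the exact flags the platform uses are an assumption:

```shell
# Stop new pods from being scheduled on the node
kubectl cordon <node-name>

# Evict running pods so they can reschedule elsewhere; mirrors the
# 10-minute drain budget described below
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data --timeout=10m
```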
Once the cordon and drain process of the tenant cluster node is completed, the upgrade of the BMH proceeds. Each tenant cluster node is allowed up to 10 minutes for the draining process to complete, after which the BMH upgrade will begin. This guarantees the BMH upgrade will make progress. BMHs are upgraded one rack at a time, and upgrades are performed in parallel within the same rack. The BMH upgrade does not wait for tenant resources to come online before continuing with the runtime upgrade of BMHs in the rack being upgraded. The benefit of this is that the maximum overall wait time for a rack upgrade is kept at 10 minutes regardless of how many nodes are available. This maximum wait time is specific to the cordon and drain procedure and is not applied to the overall upgrade procedure. Upon completion of each BMH upgrade, the Nexus Kubernetes cluster node starts, rejoins the cluster, and is uncordoned, allowing pods to be scheduled on the node once again.
It's important to note that the Nexus Kubernetes cluster node won't be shut down after the cordon and drain process. The BMH is rebooted with the new image as soon as all the Nexus Kubernetes cluster nodes are cordoned and drained, or after 10 minutes if the drain process isn't completed. Additionally, the cordon and drain isn't initiated for power-off or restart actions of the BMH; it's activated exclusively during a runtime upgrade.
Following the runtime upgrade, there could be instances where a Nexus Kubernetes cluster node remains cordoned. In such a scenario, you can manually uncordon the node by executing the following commands after [connecting to the cluster](./includes/kubernetes-cluster/cluster-connect.md):
```shell
# Check whether any nodes are still cordoned (SchedulingDisabled)
kubectl get nodes | grep SchedulingDisabled > /dev/null
if [ $? -eq 0 ]; then
  # Uncordon each node that remained cordoned after the upgrade
  for node in $(kubectl get nodes | grep SchedulingDisabled | awk '{print $1}'); do
    kubectl uncordon $node
  done
fi
```