articles/operator-nexus/howto-cluster-runtime-upgrade.md
5 additions & 5 deletions
@@ -17,7 +17,7 @@ This how-to guide explains the steps for installing the required Azure CLI and e
## Prerequisites

1. The [Install Azure CLI][installation-instruction] must be installed.
-2. The `networkcloud` CLI extension is required. If the `networkcloud` extension isn't installed, it can be installed following the steps listed [here](https://github.com/MicrosoftDocs/azure-docs-pr/blob/main/articles/operator-nexus/howto-install-cli-extensions.md).
+2. The `networkcloud` CLI extension is required. If the `networkcloud` extension isn't installed, it can be installed following the steps listed [here](https://github.com/MicrosoftDocs/azure-docs-pr/blob/main/articles/operator-nexus/howto-install-cli-extensions.md).
3. Access to the Azure portal for the target cluster to be upgraded.
4. You must be logged in to the same subscription as your target cluster via `az login`
5. Target cluster must be in a running state, with all control plane nodes healthy and 80+% of compute nodes in a running and healthy state.
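For the CLI-related prerequisites above, a minimal setup sketch is shown below. The subscription placeholder is yours to fill in, and the linked extension guide remains the authoritative install procedure; this only illustrates the typical commands.

```bash
# Install or refresh the networkcloud CLI extension (see the linked guide for details).
az extension add --name networkcloud --upgrade

# Sign in and select the subscription that contains the target cluster.
az login
az account set --subscription "<subscription-id-or-name>"

# Optional sanity check: confirm the extension is present.
az extension show --name networkcloud --query name --output tsv
```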
@@ -142,15 +142,15 @@ If the rack's spec wasn't updated to the upgraded runtime version before the har
### After a runtime upgrade, the cluster shows "Failed" Provisioning State

-During a runtime upgrade the cluster will enter a state of `Upgrading`In the event of a failure of the runtime upgrade, for reasons related to the resources, the cluster will go into a `Failed` provisioning state. This state could be linked to the lifecycle of the components related to the cluster (e.g StorageAppliance) and might be necessary to diagnose the failure with Microsoft support.
+During a runtime upgrade, the cluster enters an `Upgrading` state. If the runtime upgrade fails for reasons related to the cluster's resources, the cluster goes into a `Failed` provisioning state. This state could be linked to the lifecycle of components related to the cluster (for example, StorageAppliance), and it might be necessary to diagnose the failure with Microsoft support.
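As a quick first check after a failed upgrade, the cluster resource's reported state can be inspected before engaging support. This is a sketch: the name and resource-group values are placeholders, and the queried property names (`provisioningState`, `detailedStatus`, `detailedStatusMessage`) are assumptions that should be verified against the full `az networkcloud cluster show` output for your extension version.

```bash
# Inspect the cluster resource; property names in --query are assumptions --
# check the full JSON output of your networkcloud extension version.
az networkcloud cluster show \
  --name "<cluster-name>" \
  --resource-group "<resource-group>" \
  --query "{provisioning:provisioningState, detail:detailedStatus, message:detailedStatusMessage}" \
  --output table
```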
### Impact on Nexus Kubernetes tenant workloads during cluster runtime upgrade

-During a runtime upgrade, impacted Nexus Kubernetes cluster nodes are cordoned and drained before the Bare Metal Hosts (BMH) are upgraded. Cordoning of the cluster node prevents new pods from being scheduled onto it and draining of the cluster node allows pods that are running tenant workload opportunity to move to another available cluster node in order to help minimize service impact. Draining mechanismis affected by available capacity on the Nexus Kubernetes cluster, if the cluster is near capacity and there is no where for the pods to move, they will enter into Pending state after the drain process.
+During a runtime upgrade, impacted Nexus Kubernetes cluster nodes are cordoned and drained before the Bare Metal Hosts (BMH) are upgraded. Cordoning the cluster node prevents new pods from being scheduled on it and draining the cluster node allows pods that are running tenant workloads a chance to shift to another available cluster node, which helps to reduce the impact on services. The draining mechanism's effectiveness is contingent on the available capacity within the Nexus Kubernetes cluster. If the cluster is nearing full capacity and lacks space for the pods to relocate, they will transition into a Pending state following the draining process.
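Cordon and drain are standard Kubernetes operations, so their effect can be observed from inside the Nexus Kubernetes cluster with ordinary kubectl commands (a sketch, assuming you have kubeconfig access to the tenant cluster):

```bash
# Nodes being upgraded show SchedulingDisabled in STATUS while cordoned.
kubectl get nodes

# Pods that could not be rescheduled due to lack of capacity remain Pending.
kubectl get pods --all-namespaces --field-selector=status.phase=Pending
```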
-Once the cordon and drain process of the tenant cluster node is completed, the BMH upgrade proceeds. Each tenant cluster node is allowed up to 10 minutes for the draining process to complete, after which the BMH upgrade will proceed regardless, this ensure BMH upgrade will continue to proceed without long delay. The BMHs are upgraded in parallel within the same rack, the benefit of this is that the maximum overall wait time for a rack upgrade is also kept at 10 minutes regardless of how many nodes are available. After the completion of each BMH upgrade, the Nexus kubernetes cluster node will start and rejoin the cluster, it will then be uncordoned so pods can be scheduled on the node again.
+Once the cordon and drain process of the tenant cluster node is completed, the BMH upgrade proceeds. Each tenant cluster node is allowed up to 10 minutes for the draining process to complete, after which the BMH upgrade proceeds regardless; this ensures the BMH upgrade continues without long delays. In other words, the upgrade doesn't wait for tenant resources to come online before continuing with the runtime upgrade. The BMHs are upgraded in parallel within the same rack, so the maximum overall wait time for a rack upgrade is also kept at 10 minutes regardless of how many nodes are available. Additionally, this process is confined to the scope of the rack. This maximum wait time applies specifically to the cordon and drain procedure and isn't indicative of the total duration of the BMH upgrade. Upon the completion of each BMH upgrade, the Nexus Kubernetes cluster node starts, rejoins the cluster, and is then uncordoned, allowing pods to be scheduled on the node once again.
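To confirm that a node rejoined and is schedulable again after its BMH upgrade, a quick kubectl spot-check can help (a sketch; the node name is a placeholder):

```bash
# A healthy, uncordoned node reports Ready with no SchedulingDisabled condition.
kubectl get nodes -o wide

# Verify that workloads are being scheduled back onto the upgraded node.
kubectl get pods --all-namespaces --field-selector=spec.nodeName=<node-name>
```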
-It's important to note that the Nexus Kubernetes cluster node won't be shut down after the cordon and drain process. The BMH is rebooted with the new image as soon as all the Nexus Kubernetes cluster nodes are cordoned and drained, after 10 minutes if the drain process isn't completed. Additionally, the cordon and drain is not initiated for power-off or restart actions of the BMH; it's exclusively activated only during a runtime upgrade.
+It's important to note that the Nexus Kubernetes cluster node won't be shut down after the cordon and drain process. The BMH is rebooted with the new image as soon as all the Nexus Kubernetes cluster nodes are cordoned and drained, or after 10 minutes if the drain process isn't completed. Additionally, the cordon and drain isn't initiated for power-off or restart actions of the BMH; it's exclusively activated during a runtime upgrade.