
Commit 1f38296 ("apply suggested edit"), 1 parent 69274f3

File tree

1 file changed: +4 −4 lines


articles/operator-nexus/howto-cluster-runtime-upgrade.md

Lines changed: 4 additions & 4 deletions
@@ -144,13 +144,13 @@ If the rack's spec wasn't updated to the upgraded runtime version before the har
 
 During a runtime upgrade the cluster will enter a state of `Upgrading` In the event of a failure of the runtime upgrade, for reasons related to the resources, the cluster will go into a `Failed` provisioning state. This state could be linked to the lifecycle of the components related to the cluster (e.g StorageAppliance) and might be necessary to diagnose the failure with Microsoft support.
 
-### Purpose of cordoning and draining in a kubernetes cluster upgrade
+### Impact on Nexus Kubernetes tenant workloads during cluster runtime upgrade
 
-The cordon and drain process for Nexus Kubernetes cluster nodes on Bare Metal Hosts (BMH) during upgrades is aimed at reducing interruptions and ensuring a smooth transition. This feature manages the orderly evacuation of Cloud-Native Network Function (CNF) Pods while the BMH hosting the cluster Virtual Machines (VMs) is upgraded. By isolating the tenant cluster node and evacuating the Pods beforehand, it allows the Pods to move to other nodes within the tenant cluster, given there's enough space. If not, the Pods will be put on hold in a Pending state until the drain is complete.
+During a Nexus cluster runtime upgrade, the Nexus Kubernetes workload cluster nodes will be cordoned and drained to ensure the proper draining of CNF Pods while upgrading the Bare Metal Hosts (BMH) where the cluster VMs are hosted. Cordoning the tenant cluster node and draining the Pods from it prior to the BMH upgrade allows the Pods to migrate to other nodes within the tenant cluster if sufficient capacity exists. If there is no capacity, the Pods will enter a Pending state after the drain process.
 
-Once the cordon and drain process of the tenant cluster VMs in the BMH is complete, the BMH upgrade proceeds. The tenant cluster node drain timeout is set to 10 minutes; if draining takes longer, the BMH upgrade will still proceed after this timeout. Since this process happens in parallel, the overall maximum wait time for the entire rack is 10 minutes. After the BMH upgrade is complete and the BMH rejoins the bare metal cluster, the tenant cluster VM will be uncordoned.
+Once the cordon and drain process of the tenant cluster VMs on the BMH is complete, the BMH upgrade will proceed. The drain timeout for the tenant cluster node is set to 10 minutes; if the draining process exceeds this duration, the BMH upgrade will still continue after the timeout, and the Bare Metal Host will be rebooted. This process occurs in parallel, so the maximum overall wait time for the entire rack is limited to 10 minutes. After completing the BMH upgrade and rejoining the bare metal cluster, and once the VM is up and joins the cluster, the VM will be uncordoned.
 
-It's also important to remember that there won't be a shutdown of tenant cluster VMs after the cordon and drain process, and the BMH will be temporarily offline for the upgrade. Additionally, the cordon and drain feature isn't triggered by BMH power-off and restart actions on the Nexus Kubernetes node; it's only activated for Nexus runtime upgrades.
+It is important to note that the tenant cluster VMs will not be shut down after the cordon and drain process. The BMH will be rebooted as soon as the drain is done, or if the drain is not successful within 10 minutes. Additionally, the cordon and drain feature is not triggered by power-off or restart actions on the Bare Metal Host; it is exclusively activated for Nexus runtime upgrades.
 
 <!-- LINKS - External -->
 [installation-instruction]: https://aka.ms/azcli
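
For context, the automated cordon-and-drain step the changed paragraphs describe corresponds to a standard kubectl sequence. This is an illustrative sketch only: the node name is a placeholder, the flag choices are assumptions, and during a Nexus runtime upgrade the platform performs this for you (with a 10-minute drain timeout) rather than requiring manual commands:

```shell
# Hypothetical manual equivalent of the automated cordon-and-drain step.
# "example-nexus-node" is a placeholder node name.

# Mark the node unschedulable so no new Pods are placed on it.
kubectl cordon example-nexus-node

# Evict Pods, waiting at most 10 minutes (mirroring the upgrade's drain
# timeout). The '|| true' reflects that the BMH upgrade proceeds even if
# the drain does not finish within the timeout.
kubectl drain example-nexus-node \
  --ignore-daemonsets --delete-emptydir-data --timeout=10m || true

# After the BMH upgrade completes and the VM rejoins the cluster,
# the node is made schedulable again.
kubectl uncordon example-nexus-node
```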
