Commit 05210dc

resolve new comments
1 parent 13a17c6 commit 05210dc

File tree

1 file changed: +2 -2 lines changed

articles/operator-nexus/howto-cluster-runtime-upgrade.md

Lines changed: 2 additions & 2 deletions
@@ -142,13 +142,13 @@ If the rack's spec wasn't updated to the upgraded runtime version before the har

### After a runtime upgrade, the cluster shows "Failed" Provisioning State

145 -  During a runtime upgrade, the cluster enters a state of `Upgrading` In the event of a failure of the runtime upgrade, for reasons related to the resources, the cluster will go into a `Failed` provisioning state. This state could be linked to the lifecycle of the components related to the cluster (e.g StorageAppliance) and might be necessary to diagnose the failure with Microsoft support.
145 +  During a runtime upgrade, the cluster enters a state of `Upgrading`. In the event of a failure of the runtime upgrade, the cluster will go into a `Failed` provisioning state. Failures during upgrade may be caused by infrastructure components (e.g., the Storage Appliance). In some scenarios, it may be necessary to diagnose the failure with Microsoft support.
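
To confirm which state the cluster is in, you can query the cluster resource directly. A minimal sketch, assuming the Azure CLI `networkcloud` extension is installed and using placeholder resource group and cluster names:

```bash
# Query the cluster's provisioning state (placeholder names; the property may
# appear under `properties` depending on how your CLI version shapes the output)
az networkcloud cluster show \
  --resource-group "myResourceGroup" \
  --name "myNexusCluster" \
  --query "provisioningState" \
  --output tsv
# Per the text above, a failed runtime upgrade is reflected as `Failed` here.
```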

### Impact on Nexus Kubernetes tenant workloads during cluster runtime upgrade

During a runtime upgrade, impacted Nexus Kubernetes cluster nodes are cordoned and drained before the Bare Metal Hosts (BMH) are upgraded. Cordoning the cluster node prevents new pods from being scheduled on it, and draining the cluster node gives pods that are running tenant workloads a chance to shift to another available cluster node, which helps to reduce the impact on services. The draining mechanism's effectiveness is contingent on the available capacity within the Nexus Kubernetes cluster. If the cluster is nearing full capacity and lacks space for the pods to relocate, they transition into a Pending state following the draining process.
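
This behavior can be observed from inside the Nexus Kubernetes cluster with standard kubectl commands; a minimal sketch, assuming nothing beyond kubeconfig access to the tenant cluster:

```bash
# Cordoned nodes report SchedulingDisabled in the STATUS column
kubectl get nodes

# Pods that could not find capacity after a drain remain in the Pending phase
kubectl get pods --all-namespaces --field-selector=status.phase=Pending
```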
151 -  Once the cordon and drain process of the tenant cluster node is completed, the BMH upgrade proceeds. Each tenant cluster node is allowed up to 10 minutes for the draining process to complete, after which the BMH upgrade proceeds regardless, this ensure BMH upgrade will continue to proceed without long delay. This indicates that the upgrade does not wait for tenant resources to come online before continuing with the runtime upgrade. The BMHs are upgraded in parallel within the same rack. The benefit of this is that the maximum overall wait time for a rack upgrade is also kept at 10 minutes regardless of how many nodes are available. Additionally, it's essential to recognize that this process is limited to the scope of the rack.This maximum wait time specifically pertains to the cordon and drain procedure and is not indicative of the total duration of the BMH upgrade. Upon the completion of each BMH upgrade, the Nexus Kubernetes cluster node starts, rejoin the cluster, and then it is uncordoned, allowing pods to be scheduled on the node once again.
151 +  Once the cordon and drain process of the tenant cluster node is completed, the upgrade of the BMH proceeds. Each tenant cluster node is allowed up to 10 minutes for the draining process to complete, after which the BMH upgrade will begin. This guarantees the BMH upgrade will make progress. BMHs are upgraded one rack at a time, and upgrades are performed in parallel within the same rack. The BMH upgrade does not wait for tenant resources to come online before continuing with the runtime upgrade of BMHs in the rack being upgraded. The benefit of this is that the maximum overall wait time for a rack upgrade is kept at 10 minutes regardless of how many nodes are available. This maximum wait time is specific to the cordon and drain procedure and is not applied to the overall upgrade procedure. Upon completion of each BMH upgrade, the Nexus Kubernetes cluster node starts, rejoins the cluster, and is uncordoned, allowing pods to be scheduled on the node once again.
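
To follow the rack-by-rack progression described above, one hedged approach is to watch node readiness from the tenant cluster while listing the bare metal machines from Azure. The resource group name below is a placeholder, and the table output is a starting point rather than a documented set of status fields:

```bash
# Watch tenant cluster nodes drop out and rejoin as their BMHs are upgraded
kubectl get nodes --watch

# List the bare metal machines in the cluster's managed resource group
# (placeholder resource group; inspect the output for the relevant status columns)
az networkcloud baremetalmachine list \
  --resource-group "myClusterManagedResourceGroup" \
  --output table
```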

It's important to note that the Nexus Kubernetes cluster node won't be shut down after the cordon and drain process. The BMH is rebooted with the new image as soon as all the Nexus Kubernetes cluster nodes are cordoned and drained, or after 10 minutes if the drain process isn't completed. Additionally, the cordon and drain process is not initiated for power-off or restart actions of the BMH; it's activated only during a runtime upgrade.
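
Because the platform initiates cordon and drain only during a runtime upgrade, an operator planning a manual power-off or restart of a BMH may choose to drain the corresponding node first. A minimal sketch with standard kubectl commands (the node name is a placeholder):

```bash
# Manually cordon and drain a node ahead of a planned BMH power-off or restart
kubectl cordon my-nexus-node-01
kubectl drain my-nexus-node-01 --ignore-daemonsets --delete-emptydir-data

# Once the BMH is back online and the node has rejoined, allow scheduling again
kubectl uncordon my-nexus-node-01
```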