Commit e891974

Update concepts-cluster-upgrade-overview.md
1 parent 21346aa · commit e891974


articles/operator-nexus/concepts-cluster-upgrade-overview.md

Lines changed: 8 additions & 4 deletions
@@ -59,13 +59,17 @@ This strategy will pause the upgrade after the rack completes the upgrade. The n
 
 Details on how to run an upgrade with rack pause are located [here](./howto-cluster-runtime-upgrade-with-pauserack-strategy.md).
 
-## Nexus Kubernetes tenant workloads during cluster runtime upgrade
+## Nexus tenant workloads during cluster runtime upgrade
 
-During a runtime upgrade, impacted Nexus Kubernetes Cluster nodes are cordoned and drained before the servers are upgraded. Cordoning the Kubernetes Cluster node prevents new pods from being scheduled on it. Draining the Kubernetes Cluster node allows pods that are running tenant workloads a chance to shift to another available Kubernetes Cluster node, which helps to reduce the disruption on services. The draining mechanism's effectiveness is contingent on the available capacity within the Nexus Kubernetes Cluster. If the Kubernetes Cluster is nearing full capacity and lacks space for the pods to relocate, they transition into a Pending state following the draining process.
+During a runtime upgrade, Nexus Kubernetes Cluster nodes that run on servers scheduled for upgrade are cordoned, drained, and then gracefully shut down before the upgrade begins. Cordoning a node prevents new pods from being scheduled on it, while draining allows pods running tenant workloads to shift to other available nodes, minimizing service disruption. The effectiveness of draining depends on the available capacity within the cluster. If the cluster is near full capacity and lacks space for pod relocation, those pods enter a Pending state after draining.
 
-Once the cordon and drain process of the tenant cluster node is completed, the upgrade of the server proceeds. Each tenant cluster node is allowed up to 20 minutes for the draining process to complete, after which the server upgrade begins. This process guarantees the server upgrade makes progress. Servers are upgraded one rack at a time, and upgrades are performed in parallel within the same rack. The server upgrade doesn't wait for tenant resources to come online before continuing with the runtime upgrade of servers in the rack being upgraded. The benefit of this is that the maximum overall wait time for a rack upgrade is kept at 20 minutes regardless of how many nodes are available. This maximum wait time is specific to the cordon and drain procedure and isn't applied to the overall upgrade procedure. Upon completion of each server upgrade, the Nexus Kubernetes cluster node starts, rejoins the cluster, and is uncordoned, allowing pods to be scheduled on the node once again.
+Once the cordon and drain steps are complete, the node is shut down as part of the upgrade process. After the baremetal server upgrade, the node is restarted, rejoins the cluster, and is uncordoned, allowing pods to be scheduled on it again.
 
-It's important to note that the Nexus Kubernetes cluster node won't be shut down after the cordon and drain process. The server is rebooted with the new image as soon as all the Nexus Kubernetes cluster nodes are cordoned and drained, after 20 minutes if the drain process isn't completed. Additionally, the cordon and drain isn't initiated for power-off or restart actions of the server; it exclusively activates only during a runtime upgrade.
+For Nexus VMs, the process is similar. The VMs are shut down before the baremetal server upgrade and automatically restarted once the server is back online.
+
+Each tenant cluster node is allowed up to 20 minutes for the draining process to complete. After this window, the server upgrade proceeds regardless of drain completion to ensure progress. Servers are upgraded one rack at a time, with upgrades performed in parallel within the same rack. The server upgrade does not wait for tenant resources to come online before continuing with the runtime upgrade of other servers in the rack. This approach ensures that the maximum wait time per rack remains 20 minutes, specific to the cordon, drain, and shutdown procedure, and not the overall upgrade.
+
+It's important to note that the Nexus Kubernetes cluster node won't be shut down after the cordon and drain process. The server is rebooted with the new image as soon as all the Nexus Kubernetes cluster nodes are cordoned and drained, or after 20 minutes if the drain process isn't completed.
 
 It's important to note that following the runtime upgrade, there could be instances where a Nexus Kubernetes Cluster node remains cordoned. In such a scenario, you can locate and uncordon the node by executing the following command.
 
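To make the cordon and drain behavior described in the diff concrete, here is a minimal sketch of the equivalent kubectl steps, assuming standard kubectl access to the Nexus Kubernetes cluster. The node name and drain flags are illustrative assumptions, not taken from the article; the Operator Nexus platform performs these steps automatically during a runtime upgrade.

```bash
# Illustrative sketch only: Operator Nexus runs the equivalent of these steps
# automatically during a runtime upgrade. Node name and flags are assumptions.
NODE="rack1-compute01-agentpool1-md-abcde"   # hypothetical node name

# Cordon: prevent new pods from being scheduled on the node.
kubectl cordon "${NODE}"

# Drain: evict tenant workload pods so they can reschedule on other nodes.
# The upgrade allows roughly 20 minutes for this step before proceeding anyway.
kubectl drain "${NODE}" --ignore-daemonsets --delete-emptydir-data --timeout=20m
```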
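The command referenced at the end of the hunk falls outside the diff context shown above. As a hedged example of what locating and uncordoning a node that remains cordoned typically looks like, assuming standard kubectl tooling (the node name is hypothetical):

```bash
# Find nodes that are still cordoned (marked unschedulable) after the upgrade.
kubectl get nodes --field-selector spec.unschedulable=true

# Uncordon a node so pods can be scheduled on it again.
kubectl uncordon rack1-compute01-agentpool1-md-abcde
```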