Commit fd1b9cd

Merge pull request #41606 from xenolinux/shutdown
[BZ2035505]: Increase the shutting down time for large-scale clusters
2 parents: 7bebe1f + ae92382

File tree

1 file changed: +11 −1 lines changed


modules/graceful-shutdown.adoc

Lines changed: 11 additions & 1 deletion
@@ -42,8 +42,9 @@ $ oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-s
 +
 [source,terminal]
 ----
-$ for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/${node} -- chroot /host shutdown -h 1; done
+$ for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/${node} -- chroot /host shutdown -h 1 <1>; done
 ----
+<1> Indicates how long, in minutes, this process lasts before the control-plane nodes are shut down. For large-scale clusters with 10 nodes or more, set to 10 minutes or longer to make sure all the compute nodes have time to shut down first.
 +
 .Example output
 ----
@@ -61,6 +62,15 @@ Shutting down the nodes using one of these methods allows pods to terminate grac
 +
 [NOTE]
 ====
+Adjust the shut down time to be longer for large-scale clusters:
+[source,terminal]
+----
+$ for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/${node} -- chroot /host shutdown -h 10; done
+----
+====
++
+[NOTE]
+====
 It is not necessary to drain control plane nodes of the standard pods that ship with {product-title} prior to shutdown.

 Cluster administrators are responsible for ensuring a clean restart of their own workloads after the cluster is restarted. If you drained control plane nodes prior to shutdown because of custom workloads, you must mark the control plane nodes as schedulable before the cluster will be functional again after restart.
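
The note added by this commit gives a rule of thumb: clusters with 10 or more nodes should use a shutdown delay of 10 minutes or longer, while the default example uses 1 minute. A minimal sketch of how that rule could be scripted is shown below; the `shutdown_delay` helper is hypothetical (it is not part of this commit or of any OpenShift tooling), and the `oc` loop from the documentation is kept commented out so the sketch runs without cluster access.

```shell
#!/bin/sh
# Hypothetical helper, not part of the commit: choose a shutdown delay
# (in minutes) from the node count, following the rule of thumb in the
# note above: 10 or more nodes -> 10 minutes, otherwise 1 minute.
shutdown_delay() {
  node_count="$1"
  if [ "$node_count" -ge 10 ]; then
    echo 10
  else
    echo 1
  fi
}

# Example wiring with the documented loop (commented out so this sketch
# runs anywhere; uncomment on a cluster where oc is logged in):
# node_count=$(oc get nodes -o jsonpath='{.items[*].metadata.name}' | wc -w)
# delay=$(shutdown_delay "$node_count")
# for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do
#   oc debug node/${node} -- chroot /host shutdown -h "$delay"
# done

shutdown_delay 12   # prints 10
shutdown_delay 3    # prints 1
```

The delay matters because `shutdown -h <minutes>` schedules the halt rather than executing it immediately, giving compute nodes time to finish terminating pods before the control-plane nodes go down.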

0 commit comments
