-You can use a saved etcd backup to restore back to a previous cluster state. You use the etcd backup to restore a single control plane host. Then the etcd cluster Operator handles scaling to the remaining control plane hosts.
+You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.

[IMPORTANT]
====
@@ -38,7 +38,7 @@ If you do not complete this step, you will not be able to access the control pla
+
This procedure assumes that you copied the `backup` directory containing the etcd snapshot and the resources for the static pods to the `/home/core/` directory of your recovery control plane host.

-. Stop the static pods on all other control plane nodes.
+. Stop the static pods on any other control plane nodes.
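The commands behind the copy and stop steps sit outside the visible hunks. As a rough sketch only, assuming SSH access as the `core` user and that the static pods are defined by manifests under `/etc/kubernetes/manifests` (file names can differ between versions), these steps typically look like the following; the host name is a placeholder:

[source,terminal]
----
# Hedged sketch, not taken from this diff: copy the saved backup to the recovery host.
$ scp -r ./backup core@<recovery_control_plane_host>:/home/core/

# On each of the other control plane nodes, a static pod is usually stopped by
# moving its manifest out of the kubelet manifest directory, for example:
$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
----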
If the status is `Pending`, or the output lists more than one running etcd pod, wait a few minutes and check again.

+.. Repeat this step for each lost control plane host that is not the recovery host.
+
. In a separate terminal window, log in to the cluster as a user with the `cluster-admin` role by using the following command:
+
[source,terminal]
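----
# Hedged sketch, not shown in this hunk: the cluster-admin login that this step
# refers to is typically an `oc login` with a placeholder user name, for example:
$ oc login -u <cluster_admin>
----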
@@ -274,7 +276,7 @@ If the output includes multiple revision numbers, such as `2 nodes are at revisi
+
In a terminal that has access to the cluster as a `cluster-admin` user, run the following commands.

-.. Update the `kubeapiserver`:
+.. Force a new rollout for the Kubernetes API server:
+
[source,terminal]
----
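# Hedged sketch, not shown in this hunk: forcing a rollout of a control plane
# operand is typically done by patching the operator resource with a new
# forceRedeploymentReason value, for example:
$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
----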
@@ -299,7 +301,7 @@ AllNodesAtLatestRevision
+
If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again.

-.. Update the `kubecontrollermanager`:
+.. Force a new rollout for the Kubernetes controller manager:
+
[source,terminal]
----
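# Hedged sketch, not shown in this hunk: the analogous force-rollout patch for the
# Kubernetes controller manager operator resource, assuming the same pattern as above.
$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
----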
@@ -324,7 +326,7 @@ AllNodesAtLatestRevision
+
If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again.

-.. Update the `kubescheduler`:
+.. Force a new rollout for the Kubernetes scheduler:
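The command block for this step falls outside the visible hunk. Assuming it follows the same pattern as the API server and controller manager rollouts above, a hedged sketch looks like this:

[source,terminal]
----
$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
----

As with the other operands, the verification text in the surrounding hunks indicates waiting until all nodes report the latest revision before continuing.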