-You can use a saved etcd backup to restore back to a previous cluster state. You use the etcd backup to restore a single control plane host. Then the etcd cluster Operator handles scaling to the remaining control plane hosts.
+You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.
[IMPORTANT]
====
@@ -38,7 +38,7 @@ If you do not complete this step, you will not be able to access the control pla
This procedure assumes that you copied the `backup` directory containing the etcd snapshot and the resources for the static pods to the `/home/core/` directory of your recovery control plane host.
-. Stop the static pods on all other control plane nodes.
+. Stop the static pods on any other control plane nodes.
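For reference, a minimal sketch of how this step is typically carried out on a non-recovery control plane node, assuming the default static pod manifest directory `/etc/kubernetes/manifests`, the `crictl` CLI on the node, and the usual etcd and kube-apiserver manifest file names (verify the names on your nodes):

[source,terminal]
----
# Moving a manifest out of the kubelet's static pod directory stops that pod.
$ sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp
$ sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp

# Confirm that no etcd containers (other than the Operator) are still running.
$ sudo crictl ps | grep etcd | grep -v operator
----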
If the status is `Pending`, or the output lists more than one running etcd pod, wait a few minutes and check again.
+.. Repeat this step for each lost control plane host that is not the recovery host.
. In a separate terminal window, log in to the cluster as a user with the `cluster-admin` role by using the following command:
[source,terminal]
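----
# Sketch only; the user name is an illustrative placeholder for any account
# with the cluster-admin role.
$ oc login -u <cluster_admin_username>
----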
@@ -230,7 +232,7 @@ If the output includes multiple revision numbers, such as `2 nodes are at revisi
In a terminal that has access to the cluster as a `cluster-admin` user, run the following commands.
-.. Update the `kubeapiserver`:
+.. Force a new rollout for the Kubernetes API server:
[source,terminal]
----
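# Sketch only, assuming the standard forceRedeploymentReason patch on the
# cluster-scoped kubeapiserver resource; the date string makes each rollout unique.
$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
----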
@@ -255,7 +257,7 @@ AllNodesAtLatestRevision
If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again.
-.. Update the `kubecontrollermanager`:
+.. Force a new rollout for the Kubernetes controller manager:
[source,terminal]
----
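# Sketch only, assuming the same forceRedeploymentReason patch applied to the
# cluster-scoped kubecontrollermanager resource.
$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
----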
@@ -280,7 +282,7 @@ AllNodesAtLatestRevision
If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again.
-.. Update the `kubescheduler`:
+.. Force a new rollout for the Kubernetes scheduler:
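As a sketch, assuming the same `forceRedeploymentReason` pattern applies to the cluster-scoped `kubescheduler` resource:

[source,terminal]
----
$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
----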