Commit 1a2c0a9

Updates for restoring to a previous cluster
1 parent c84b312 commit 1a2c0a9


modules/dr-restoring-cluster-state.adoc

Lines changed: 7 additions & 5 deletions
@@ -7,7 +7,7 @@
 [id="dr-scenario-2-restoring-cluster-state_{context}"]
 = Restoring to a previous cluster state
 
-You can use a saved etcd backup to restore back to a previous cluster state. You use the etcd backup to restore a single control plane host. Then the etcd cluster Operator handles scaling to the remaining control plane hosts.
+You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.
 
 [IMPORTANT]
 ====
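
For context, the restore described by this module is driven by the cluster restore script on the recovery control plane host, further down in the file and outside this hunk. A minimal sketch of that invocation, assuming the backup was copied to `/home/core/backup` as the procedure instructs, is roughly:

[source,terminal]
----
$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup
----
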
@@ -38,7 +38,7 @@ If you do not complete this step, you will not be able to access the control pla
 +
 This procedure assumes that you copied the `backup` directory containing the etcd snapshot and the resources for the static pods to the `/home/core/` directory of your recovery control plane host.
 
-. Stop the static pods on all other control plane nodes.
+. Stop the static pods on any other control plane nodes.
 +
 [NOTE]
 ====
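
Stopping the static pods on the non-recovery control plane nodes amounts to moving their manifests out of the kubelet manifest directory and parking the etcd data directory; a rough sketch of that step, assuming the standard RHCOS paths, looks like:

[source,terminal]
----
# move the static pod manifests so the kubelet stops the pods
$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
$ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
# move the etcd data directory aside
$ sudo mv /var/lib/etcd/ /tmp
# confirm that the etcd containers are no longer running
$ sudo crictl ps | grep etcd | grep -v operator
----
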
@@ -186,6 +186,8 @@ etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1
186186
+
187187
If the status is `Pending`, or the output lists more than one running etcd pod, wait a few minutes and check again.
188188

189+
.. Repeat this step for each lost control plane host that is not the recovery host.
190+
189191
. In a separate terminal window, log in to the cluster as a user with the `cluster-admin` role by using the following command:
190192
+
191193
[source,terminal]
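
The pod check referenced above and the login step that follows are both plain `oc` calls; illustrative forms (the `openshift-etcd` namespace filter and the `<cluster_admin>` placeholder are assumptions, not lines from this diff) are:

[source,terminal]
----
$ oc get pods -n openshift-etcd | grep etcd
$ oc login -u <cluster_admin>
----
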
@@ -230,7 +232,7 @@ If the output includes multiple revision numbers, such as `2 nodes are at revisi
 +
 In a terminal that has access to the cluster as a `cluster-admin` user, run the following commands.
 
-.. Update the `kubeapiserver`:
+.. Force a new rollout for the Kubernetes API server:
 +
 [source,terminal]
 ----
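
The command inside the `[source,terminal]` block that this hunk truncates is the forced-redeployment patch; in the published module it takes roughly this form, where `forceRedeploymentReason` only needs a unique value:

[source,terminal]
----
$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
----
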
@@ -255,7 +257,7 @@ AllNodesAtLatestRevision
 +
 If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again.
 
-.. Update the `kubecontrollermanager`:
+.. Force a new rollout for the Kubernetes controller manager:
 +
 [source,terminal]
 ----
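
The `AllNodesAtLatestRevision` text in this hunk's header is the reason of the `NodeInstallerProgressing` condition; the verification and the controller manager rollout follow the same pattern as the API server step, roughly:

[source,terminal]
----
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
----
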
@@ -280,7 +282,7 @@ AllNodesAtLatestRevision
 +
 If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again.
 
-.. Update the `kubescheduler`:
+.. Force a new rollout for the Kubernetes scheduler:
 +
 [source,terminal]
 ----
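
The scheduler step mirrors the previous two; a sketch of the patch and its verification, with the resource name swapped in, is:

[source,terminal]
----
$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
$ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
----
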
