
Commit 9ef083f

Merge pull request #37499 from tmalove/BZ1845392-restore-prev-cluster-1
[BZ:1845393]: Updates for restoring to a previous cluster
2 parents e234e68 + 1a2c0a9 commit 9ef083f


1 file changed: +7, -5 lines changed


modules/dr-restoring-cluster-state.adoc

Lines changed: 7 additions & 5 deletions
@@ -7,7 +7,7 @@
 [id="dr-scenario-2-restoring-cluster-state_{context}"]
 = Restoring to a previous cluster state

-You can use a saved etcd backup to restore back to a previous cluster state. You use the etcd backup to restore a single control plane host. Then the etcd cluster Operator handles scaling to the remaining control plane hosts.
+You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.

 [IMPORTANT]
 ====
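For reference, the restore flow that this paragraph introduces is driven by a restore script on the recovery control plane host, outside the lines changed here. A minimal sketch of the invocation, assuming the script is `/usr/local/bin/cluster-restore.sh` and the backup was copied to `/home/core/backup`:

[source,terminal]
----
# Run on the recovery control plane host, pointing at the copied backup directory
$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup
----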
@@ -38,7 +38,7 @@ If you do not complete this step, you will not be able to access the control pla
 +
 This procedure assumes that you copied the `backup` directory containing the etcd snapshot and the resources for the static pods to the `/home/core/` directory of your recovery control plane host.

-. Stop the static pods on all other control plane nodes.
+. Stop the static pods on any other control plane nodes.
 +
 [NOTE]
 ====
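The commands for this step fall outside the hunk. Stopping a static pod on a control plane host is typically done by moving its manifest out of the kubelet manifest directory; a sketch, assuming the default `/etc/kubernetes/manifests` path and the etcd and API server pod manifests:

[source,terminal]
----
# Moving a manifest out of the watched directory stops that static pod
$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
$ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
----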
@@ -230,6 +230,8 @@ etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1
 +
 If the status is `Pending`, or the output lists more than one running etcd pod, wait a few minutes and check again.

+.. Repeat this step for each lost control plane host that is not the recovery host.
+
 . In a separate terminal window, log in to the cluster as a user with the `cluster-admin` role by using the following command:
 +
 [source,terminal]
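The hunk cuts off at the opening of the source block, so the login command itself is not shown. A hedged sketch of the check-and-login sequence; the pod filter and the `<cluster_admin>` placeholder are assumptions, not taken from this diff:

[source,terminal]
----
# Expect exactly one etcd pod in Running state on the recovery host
$ oc get pods -n openshift-etcd | grep etcd
$ oc login -u <cluster_admin>
----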
@@ -274,7 +276,7 @@ If the output includes multiple revision numbers, such as `2 nodes are at revisi
 +
 In a terminal that has access to the cluster as a `cluster-admin` user, run the following commands.

-.. Update the `kubeapiserver`:
+.. Force a new rollout for the Kubernetes API server:
 +
 [source,terminal]
 ----
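The patch command itself is truncated by the hunk. Forcing a new rollout of an operator-managed control plane component is normally done by setting a new `forceRedeploymentReason` on the operator resource; a sketch for the Kubernetes API server, where the exact reason string is only an example:

[source,terminal]
----
# Any new, unique forceRedeploymentReason value triggers a fresh rollout
$ oc patch kubeapiserver cluster \
    -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' \
    --type=merge
----

The `AllNodesAtLatestRevision` context shown in the following hunks is the expected end state once all control plane nodes have converged on the new revision.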
@@ -299,7 +301,7 @@ AllNodesAtLatestRevision
 +
 If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again.

-.. Update the `kubecontrollermanager`:
+.. Force a new rollout for the Kubernetes controller manager:
 +
 [source,terminal]
 ----
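The same pattern applies to the controller manager; a sketch, with the reason string again only illustrative:

[source,terminal]
----
$ oc patch kubecontrollermanager cluster \
    -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' \
    --type=merge
----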
@@ -324,7 +326,7 @@ AllNodesAtLatestRevision
 +
 If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again.

-.. Update the `kubescheduler`:
+.. Force a new rollout for the Kubernetes scheduler:
 +
 [source,terminal]
 ----
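And the equivalent sketch for the scheduler:

[source,terminal]
----
$ oc patch kubescheduler cluster \
    -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' \
    --type=merge
----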
