
Commit f132ead

Merge pull request #27377 from bergerhoffer/term-updates-backup
Terminology style updates for backup and restore book
2 parents af6c77c + a1bf5e0 commit f132ead

6 files changed: +23 −23 lines

backup_and_restore/replacing-unhealthy-etcd-member.adoc

Lines changed: 3 additions & 3 deletions
@@ -7,7 +7,7 @@ toc::[]
 
 This document describes the process to replace a single unhealthy etcd member.
 
-This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd Pod is crashlooping.
+This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd pod is crashlooping.
 
 [NOTE]
 ====
@@ -33,10 +33,10 @@ include::modules/restore-determine-state-etcd-member.adoc[leveloffset=+1]
 Depending on the state of your unhealthy etcd member, use one of the following procedures:
 
 * xref:../backup_and_restore/replacing-unhealthy-etcd-member.adoc#restore-replace-stopped-etcd-member_replacing-unhealthy-etcd-member[Replacing an unhealthy etcd member whose machine is not running or whose node is not ready]
-* xref:../backup_and_restore/replacing-unhealthy-etcd-member.adoc#restore-replace-crashlooping-etcd-member_replacing-unhealthy-etcd-member[Replacing an unhealthy etcd member whose etcd Pod is crashlooping]
+* xref:../backup_and_restore/replacing-unhealthy-etcd-member.adoc#restore-replace-crashlooping-etcd-member_replacing-unhealthy-etcd-member[Replacing an unhealthy etcd member whose etcd pod is crashlooping]
 
 // Replacing an unhealthy etcd member whose machine is not running or whose node is not ready
 include::modules/restore-replace-stopped-etcd-member.adoc[leveloffset=+2]
 
-// Replacing an unhealthy etcd member whose etcd Pod is crashlooping
+// Replacing an unhealthy etcd member whose etcd pod is crashlooping
 include::modules/restore-replace-crashlooping-etcd-member.adoc[leveloffset=+2]

modules/dr-restoring-cluster-state.adoc

Lines changed: 3 additions & 3 deletions
@@ -41,7 +41,7 @@ It is not required to manually stop the pods on the recovery host. The recovery
 
 .. Access a control plane host that is not the recovery host.
 
-.. Move the existing etcd Pod file out of the kubelet manifest directory:
+.. Move the existing etcd pod file out of the kubelet manifest directory:
 +
 [source,terminal]
 ----
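The command under this step is cut off by the hunk context above. As a minimal sketch of what the step typically runs on the control plane host; the manifest path is an assumption, not taken from this commit:

[source,terminal]
----
# Assumed path: relocating the static pod manifest makes the kubelet stop the etcd pod.
$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
----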
@@ -57,7 +57,7 @@ It is not required to manually stop the pods on the recovery host. The recovery
 +
 The output of this command should be empty. If it is not empty, wait a few minutes and check again.
 
-.. Move the existing Kubernetes API server Pod file out of the kubelet manifest directory:
+.. Move the existing Kubernetes API server pod file out of the kubelet manifest directory:
 +
 [source,terminal]
 ----
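A hedged sketch of the two steps this hunk touches: confirming no etcd containers remain, then relocating the Kubernetes API server manifest. Both paths and the grep filter are assumptions:

[source,terminal]
----
# Should print nothing once the etcd pod has stopped (filter is an assumption).
$ sudo crictl ps | grep etcd | grep -v operator
# Assumed manifest path for the Kubernetes API server static pod.
$ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
----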
@@ -154,7 +154,7 @@ static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml
 3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0
 ----
 
-.. From the recovery host, verify that the etcd Pod is running.
+.. From the recovery host, verify that the etcd pod is running.
 +
 [source,terminal]
 ----
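For orientation, a plausible form of the verification command that the truncated source block holds; the namespace and filter are assumptions:

[source,terminal]
----
# List etcd pods and check that the recovery host's etcd pod shows Running.
$ oc get pods -n openshift-etcd | grep etcd
----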

modules/graceful-restart.adoc

Lines changed: 3 additions & 3 deletions
@@ -39,7 +39,7 @@ ip-10-0-170-223.ec2.internal Ready master 75m v1.19.0
 ip-10-0-211-16.ec2.internal Ready master 75m v1.19.0
 ----
 
-. If the master nodes are _not_ ready, then check whether there are any pending certificate signing requests that must be approved.
+. If the master nodes are _not_ ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved.
 
 .. Get the list of current CSRs:
 +
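For context, listing and approving pending CSRs generally takes this shape; a sketch, with <csr_name> as a placeholder rather than a value from this commit:

[source,terminal]
----
# Inspect pending certificate signing requests.
$ oc get csr
# Approve each pending CSR by name.
$ oc adm certificate approve <csr_name>
----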
@@ -80,7 +80,7 @@ ip-10-0-182-134.ec2.internal Ready worker 64m v1.19.0
 ip-10-0-250-100.ec2.internal Ready worker 64m v1.19.0
 ----
 
-. If the worker nodes are _not_ ready, then check whether there are any pending certificate signing requests that must be approved.
+. If the worker nodes are _not_ ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved.
 
 .. Get the list of current CSRs:
 +
@@ -129,7 +129,7 @@ etcd 4.6.0 True False F
 ...
 ----
 
-.. Check that all nodes are in the ready state:
+.. Check that all nodes are in the `Ready` state:
 +
 [source,terminal]
 ----
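A sketch of the check this step describes; the expected output is a node list like the one shown earlier in this module:

[source,terminal]
----
# Every node should report Ready in the STATUS column.
$ oc get nodes
----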

modules/restore-determine-state-etcd-member.adoc

Lines changed: 6 additions & 6 deletions
@@ -8,7 +8,7 @@
 The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in:
 
 * The machine is not running or the node is not ready
-* The etcd Pod is crashlooping
+* The etcd pod is crashlooping
 
 This procedure determines which state your etcd member is in. This enables you to know which procedure to follow to replace the unhealthy etcd member.
 
@@ -79,9 +79,9 @@ ip-10-0-131-183.ec2.internal NotReady master 122m v1.19.0 <1>
 If the *node is not ready*, then follow the _Replacing an unhealthy etcd member whose machine is not running or whose node is not ready_ procedure.
 
 
-. Determine if the *etcd Pod is crashlooping*.
+. Determine if the *etcd pod is crashlooping*.
 +
-If the machine is running and the node is ready, then check whether the etcd Pod is crashlooping.
+If the machine is running and the node is ready, then check whether the etcd pod is crashlooping.
 
 .. Verify that all master nodes are listed as `Ready`:
 +
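A hedged sketch of this verification; the label selector is an assumption, not part of the diff:

[source,terminal]
----
# List only control plane (master) nodes and confirm each is Ready.
$ oc get nodes -l node-role.kubernetes.io/master
----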
@@ -99,7 +99,7 @@ ip-10-0-164-97.ec2.internal Ready master 6h13m v1.19.0
 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.19.0
 ----
 
-.. Check whether the status of an etcd Pod is either `Error` or `CrashloopBackoff`:
+.. Check whether the status of an etcd pod is either `Error` or `CrashloopBackoff`:
 +
 [source,terminal]
 ----
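For orientation, a plausible form of the pod status check; the namespace and label selector are assumptions:

[source,terminal]
----
# A pod in Error or CrashloopBackoff here indicates a crashlooping etcd member.
$ oc -n openshift-etcd get pods -l k8s-app=etcd
----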
@@ -113,8 +113,8 @@ etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7
 etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m
 etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m
 ----
-<1> Since this status of this Pod is `Error`, then the *etcd Pod is crashlooping*.
+<1> Since this status of this pod is `Error`, then the *etcd pod is crashlooping*.
 
 +
 // TODO: xref
-If the *etcd Pod is crashlooping*, then follow the _Replacing an unhealthy etcd member whose etcd Pod is crashlooping_ procedure.
+If the *etcd pod is crashlooping*, then follow the _Replacing an unhealthy etcd member whose etcd pod is crashlooping_ procedure.

modules/restore-replace-crashlooping-etcd-member.adoc

Lines changed: 6 additions & 6 deletions
@@ -3,14 +3,14 @@
 // * backup_and_restore/replacing-unhealthy-etcd-member.adoc
 
 [id="restore-replace-crashlooping-etcd-member_{context}"]
-= Replacing an unhealthy etcd member whose etcd Pod is crashlooping
+= Replacing an unhealthy etcd member whose etcd pod is crashlooping
 
-This procedure details the steps to replace an etcd member that is unhealthy because the etcd Pod is crashlooping.
+This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping.
 
 .Prerequisites
 
 * You have identified the unhealthy etcd member.
-* You have verified that the etcd Pod is crashlooping.
+* You have verified that the etcd pod is crashlooping.
 * You have access to the cluster as a user with the `cluster-admin` role.
 * You have taken an etcd backup.
 +
@@ -40,7 +40,7 @@ $ oc debug node/ip-10-0-131-183.ec2.internal <1>
 sh-4.2# chroot /host
 ----
 
-.. Move the existing etcd Pod file out of the kubelet manifest directory:
+.. Move the existing etcd pod file out of the kubelet manifest directory:
 +
 [source,terminal]
 ----
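A likely follow-up inside the same debug shell is relocating the etcd data directory as well; this step and its path are assumptions, not shown in this diff:

[source,terminal]
----
# Assumed: move the member's data directory aside so it rejoins with a clean state.
sh-4.2# mv /var/lib/etcd/ /tmp
----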
@@ -63,7 +63,7 @@ You can now exit the node shell.
 
 . Remove the unhealthy member.
 
-.. Choose a Pod that is _not_ on the affected node.
+.. Choose a pod that is _not_ on the affected node.
 +
 In a terminal that has access to the cluster as a `cluster-admin` user, run the following command:
 +
@@ -80,7 +80,7 @@ etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0
 etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m
 ----
 
-.. Connect to the running etcd container, passing in the name of a Pod that is not on the affected node.
+.. Connect to the running etcd container, passing in the name of a pod that is not on the affected node.
 +
 In a terminal that has access to the cluster as a `cluster-admin` user, run the following command:
 +
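A hedged sketch of connecting and removing the member, using a pod name from the listing above; the member ID is a hypothetical example, not a value from this commit:

[source,terminal]
----
# Open a shell in an etcd pod on a healthy node.
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
# Identify the unhealthy member's ID, then remove it.
sh-4.2# etcdctl member list -w table
sh-4.2# etcdctl member remove 62bcf33650a7170a
----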

modules/restore-replace-stopped-etcd-member.adoc

Lines changed: 2 additions & 2 deletions
@@ -23,7 +23,7 @@ It is important to take an etcd backup before performing this procedure so that
 
 . Remove the unhealthy member.
 
-.. Choose a Pod that is _not_ on the affected node:
+.. Choose a pod that is _not_ on the affected node:
 +
 In a terminal that has access to the cluster as a `cluster-admin` user, run the following command:
 +
@@ -40,7 +40,7 @@ etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0
 etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m
 ----
 
-.. Connect to the running etcd container, passing in the name of a Pod that is not on the affected node:
+.. Connect to the running etcd container, passing in the name of a pod that is not on the affected node:
 +
 In a terminal that has access to the cluster as a `cluster-admin` user, run the following command:
 +
