
Commit b6f0d8f

Review zookeeper tutorial and fix command error (#31914)
* Misplaced command result: in the ZooKeeper tutorial (https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/#surviving-maintenance) the example result is concatenated onto the command itself:
  kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data "kubernetes-node-ixsl" cordoned
* Review zookeeper tutorial (#31873): review done!
1 parent cd26a2b commit b6f0d8f


content/en/docs/tutorials/stateful-application/zookeeper.md

Lines changed: 26 additions & 4 deletions
@@ -442,7 +442,7 @@ datadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 20Gi R
 
 The `volumeMounts` section of the `StatefulSet`'s container `template` mounts the PersistentVolumes in the ZooKeeper servers' data directories.
 
-```shell
+```yaml
 volumeMounts:
 - name: datadir
   mountPath: /var/lib/zookeeper
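
The fence-language fix above is cosmetic, but the mount itself is easy to verify. A quick sanity check, assuming the `zk-0` Pod from the tutorial is running:

```shell
# Confirm the datadir PersistentVolume is mounted where the manifest says
# (zk-0 is the first Pod created by the tutorial's StatefulSet).
kubectl exec zk-0 -- ls -ld /var/lib/zookeeper
```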
@@ -661,6 +661,8 @@ Use the `kubectl rollout history` command to view a history or previous configur
 kubectl rollout history sts/zk
 ```
 
+The output is similar to this:
+
 ```
 statefulsets "zk"
 REVISION
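
For readers who want to inspect a single entry in that history, `kubectl rollout history` also accepts a `--revision` flag; the revision number below is only illustrative:

```shell
# Show the Pod template recorded for one revision of the StatefulSet
# (use a revision number taken from the history output above).
kubectl rollout history sts/zk --revision=2
```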
@@ -674,6 +676,8 @@ Use the `kubectl rollout undo` command to roll back the modification.
 kubectl rollout undo sts/zk
 ```
 
+The output is similar to this:
+
 ```
 statefulset.apps/zk rolled back
 ```
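
By default `kubectl rollout undo` returns to the immediately previous revision; a specific revision can also be targeted. A sketch with an illustrative revision number:

```shell
# Roll back to an explicit revision instead of the previous one.
kubectl rollout undo sts/zk --to-revision=1
```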
@@ -773,7 +777,7 @@ kubectl get pod -w -l app=zk
 In another window, using the following command to delete the `zookeeper-ready` script from the file system of Pod `zk-0`.
 
 ```shell
-kubectl exec zk-0 -- rm /usr/bin/zookeeper-ready
+kubectl exec zk-0 -- rm /opt/zookeeper/bin/zookeeper-ready
 ```
 
 When the liveness probe for the ZooKeeper process fails, Kubernetes will
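
The corrected path matters because the liveness probe runs `zookeeper-ready` from the container's script directory, so removing the wrong file would leave the probe passing. To watch the probe failure itself, the Pod's events are the place to look; the exact messages vary by Kubernetes version:

```shell
# The Events section should show the liveness probe failing once the
# zookeeper-ready script has been removed.
kubectl describe pod zk-0
```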
@@ -926,6 +930,8 @@ In another terminal, use this command to get the nodes that the Pods are current
 for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
 ```
 
+The output is similar to this:
+
 ```
 kubernetes-node-pb41
 kubernetes-node-ixsl
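
The `--template {{.spec.nodeName}}` loop prints one node name per Pod. An equivalent one-shot view, using only standard kubectl flags:

```shell
# -o wide adds a NODE column showing where each ZooKeeper Pod is scheduled.
kubectl get pods -l app=zk -o wide
```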
@@ -939,6 +945,8 @@ drain the node on which the `zk-0` Pod is scheduled.
 kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
 
+The output is similar to this:
+
 ```
 node "kubernetes-node-pb41" cordoned
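
After the drain, the node stays cordoned until it is explicitly uncordoned. A quick confirmation; the node name here comes from the tutorial's example output, so substitute your own:

```shell
# A cordoned node reports SchedulingDisabled in its STATUS column.
kubectl get node kubernetes-node-pb41
```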
@@ -971,22 +979,28 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node
 `zk-1` is scheduled.
 
 ```shell
-kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data "kubernetes-node-ixsl" cordoned
+kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
 
+The output is similar to this:
+
 ```
+"kubernetes-node-ixsl" cordoned
 WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
 pod "zk-1" deleted
 node "kubernetes-node-ixsl" drained
 ```
 
+
 The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `PodAntiAffinity` rule preventing
 co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state.
 
 ```shell
 kubectl get pods -w -l app=zk
 ```
 
+The output is similar to this:
+
 ```
 NAME READY STATUS RESTARTS AGE
 zk-0 1/1 Running 2 1h
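
To see the `PodAntiAffinity` rule at work rather than just the Pending status, the scheduler's reasoning shows up in the Pod's events; the exact wording varies by Kubernetes version:

```shell
# Events should include a FailedScheduling message noting that the
# remaining nodes did not satisfy the Pod's anti-affinity rules.
kubectl describe pod zk-1
```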
@@ -1017,6 +1031,8 @@ Continue to watch the Pods of the StatefulSet, and drain the node on which
 kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
 
+The output is similar to this:
+
 ```
 node "kubernetes-node-i4c4" cordoned
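
With two drains completed, more than one ZooKeeper Pod may now be waiting for a node. A filtered view of just the stuck Pods, using a standard field selector:

```shell
# List only the ZooKeeper Pods still waiting to be scheduled.
kubectl get pods -l app=zk --field-selector=status.phase=Pending
```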
@@ -1060,6 +1076,8 @@ Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#unc
 kubectl uncordon kubernetes-node-pb41
 ```
 
+The output is similar to this:
+
 ```
 node "kubernetes-node-pb41" uncordoned
 ```
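
Uncordoning makes the node schedulable again, which is what lets the Pending `zk-1` Pod finally land. To confirm the status change across the cluster:

```shell
# The SchedulingDisabled marker should be gone from the uncordoned node.
kubectl get nodes
```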
@@ -1070,6 +1088,8 @@ node "kubernetes-node-pb41" uncordoned
 kubectl get pods -w -l app=zk
 ```
 
+The output is similar to this:
+
 ```
 NAME READY STATUS RESTARTS AGE
 zk-0 1/1 Running 2 1h
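
Rather than watching the list scroll by, a blocking wait works too. This is not part of the tutorial's text, but `kubectl wait` is standard kubectl:

```shell
# Block until zk-1 is rescheduled onto the uncordoned node and reports Ready.
kubectl wait --for=condition=Ready pod/zk-1 --timeout=300s
```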
@@ -1103,7 +1123,7 @@ Attempt to drain the node on which `zk-2` is scheduled.
 kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
 
-The output:
+The output is similar to this:
 
 ```
 node "kubernetes-node-i4c4" already cordoned
@@ -1121,6 +1141,8 @@ Uncordon the second node to allow `zk-2` to be rescheduled.
 kubectl uncordon kubernetes-node-ixsl
 ```
 
+The output is similar to this:
+
 ```
 node "kubernetes-node-ixsl" uncordoned
 ```
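
After this final uncordon, the ensemble should settle back to three Running Pods. A closing sanity check:

```shell
# All three ZooKeeper servers should be Running and Ready once zk-2
# is rescheduled onto the uncordoned node.
kubectl get pods -l app=zk
```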
