@@ -442,7 +442,7 @@ datadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 20Gi R
The `volumeMounts` section of the `StatefulSet`'s container `template` mounts the PersistentVolumes in the ZooKeeper servers' data directories.

- ```shell
+ ```yaml
volumeMounts:
- name: datadir
  mountPath: /var/lib/zookeeper
@@ -661,6 +661,8 @@ Use the `kubectl rollout history` command to view a history or previous configur
kubectl rollout history sts/zk
```

+ The output is similar to this:
+
```
statefulsets "zk"
REVISION
@@ -674,6 +676,8 @@ Use the `kubectl rollout undo` command to roll back the modification.
kubectl rollout undo sts/zk
```

+ The output is similar to this:
+
```
statefulset.apps/zk rolled back
```
@@ -773,7 +777,7 @@ kubectl get pod -w -l app=zk
In another window, use the following command to delete the `zookeeper-ready` script from the file system of Pod `zk-0`.

```shell
- kubectl exec zk-0 -- rm /usr/bin/zookeeper-ready
+ kubectl exec zk-0 -- rm /opt/zookeeper/bin/zookeeper-ready
```
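The `zookeeper-ready` script is what the Pod's liveness probe calls. As a rough sketch only (the exact probe fields are assumptions, not copied from the tutorial's manifest), the probe looks something like this:

```yaml
# Sketch of the probe configuration assumed by this step; values are illustrative.
livenessProbe:
  exec:
    command:
      - sh
      - -c
      - "zookeeper-ready 2181"   # fails once the script has been removed
  initialDelaySeconds: 15
  timeoutSeconds: 5
```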
When the liveness probe for the ZooKeeper process fails, Kubernetes will
@@ -926,6 +930,8 @@ In another terminal, use this command to get the nodes that the Pods are current
for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
```

+ The output is similar to this:
+
```
kubernetes-node-pb41
kubernetes-node-ixsl
@@ -939,6 +945,8 @@ drain the node on which the `zk-0` Pod is scheduled.
kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```

+ The output is similar to this:
+
```
node "kubernetes-node-pb41" cordoned
@@ -971,22 +979,28 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node
`zk-1` is scheduled.

```shell
- kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data "kubernetes-node-ixsl" cordoned
+ kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```

+ The output is similar to this:
+
```
+ "kubernetes-node-ixsl" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
pod "zk-1" deleted
node "kubernetes-node-ixsl" drained
```

+
The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `PodAntiAffinity` rule preventing
co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state.
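For reference, the rule meant here lives in the StatefulSet's Pod template; a minimal sketch, assuming the Pods carry the `app: zk` label used throughout this tutorial:

```yaml
# Sketch of the anti-affinity rule; keys and values assume the tutorial's app=zk labels.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: "app"
              operator: In
              values:
                - zk
        topologyKey: "kubernetes.io/hostname"
```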
```shell
kubectl get pods -w -l app=zk
```

+ The output is similar to this:
+
```
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 2 1h
@@ -1017,6 +1031,8 @@ Continue to watch the Pods of the StatefulSet, and drain the node on which
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```

+ The output is similar to this:
+
```
node "kubernetes-node-i4c4" cordoned
@@ -1060,6 +1076,8 @@ Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#unc
kubectl uncordon kubernetes-node-pb41
```

+ The output is similar to this:
+
```
node "kubernetes-node-pb41" uncordoned
```
@@ -1070,6 +1088,8 @@ node "kubernetes-node-pb41" uncordoned
kubectl get pods -w -l app=zk
```

+ The output is similar to this:
+
```
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 2 1h
@@ -1103,7 +1123,7 @@ Attempt to drain the node on which `zk-2` is scheduled.
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```

- The output:
+ The output is similar to this:

```
node "kubernetes-node-i4c4" already cordoned
@@ -1121,6 +1141,8 @@ Uncordon the second node to allow `zk-2` to be rescheduled.
kubectl uncordon kubernetes-node-ixsl
```

+ The output is similar to this:
+
```
node "kubernetes-node-ixsl" uncordoned
```