
Commit e53215a

Merge pull request #33148 from my-git9/stateful-application-zookeeper
[zh] Update zookeeper.md
2 parents b07b108 + 8777257 commit e53215a


content/zh/docs/tutorials/stateful-application/zookeeper.md

Lines changed: 42 additions & 4 deletions
@@ -981,6 +981,8 @@ statefulset rolling update complete 3 pods at revision zk-5db4499664...
 This terminates the Pods, one at a time, in reverse ordinal order, and recreates them with the new configuration. This ensures that quorum is maintained during a rolling update.
 
 Use the `kubectl rollout history` command to view a history or previous configurations.
+
+The output is similar to this:
 -->
 这项操作会逆序地依次终止每一个 Pod,并用新的配置重新创建。
 这样做确保了在滚动更新的过程中 quorum 依旧保持工作。
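Editor's note on the hunk above: `kubectl rollout history` also accepts a `--revision` flag to print the Pod template recorded for a single revision, which is what a rolling update re-applies. A minimal sketch that only composes the command string (it does not contact a cluster; the StatefulSet name `zk` is from the tutorial, the revision number is hypothetical):

```shell
# Sketch only: compose a history query for one recorded revision.
sts="sts/zk"    # StatefulSet from the tutorial
revision=2      # hypothetical revision number

# --revision=N shows the Pod template stored for that revision.
history_cmd="kubectl rollout history ${sts} --revision=${revision}"
echo "${history_cmd}"
```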
@@ -991,6 +993,8 @@ Use the `kubectl rollout history` command to view a history or previous configur
 kubectl rollout history sts/zk
 ```
 
+输出类似于:
+
 ```
 statefulsets "zk"
 REVISION
@@ -1000,13 +1004,17 @@ REVISION
 
 <!--
 Use the `kubectl rollout undo` command to roll back the modification.
+
+The output is similar to this:
 -->
 使用 `kubectl rollout undo` 命令撤销这次的改动。
 
 ```shell
 kubectl rollout undo sts/zk
 ```
 
+输出类似于:
+
 ```
 statefulset.apps/zk rolled back
 ```
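Editor's note: the page's `kubectl rollout undo sts/zk` rolls back to the previous revision; the command also supports `--to-revision=N` to target a specific entry from `rollout history`. A sketch that only builds the command string (no cluster needed; the revision number is hypothetical):

```shell
# Sketch only: undo to an explicit revision rather than the previous one.
sts="sts/zk"
to_revision=1   # hypothetical target revision

undo_cmd="kubectl rollout undo ${sts} --to-revision=${to_revision}"
echo "${undo_cmd}"
```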
@@ -1154,7 +1162,7 @@ In another window, using the following command to delete the `zookeeper-ready` s
 在另一个窗口中,从 Pod `zk-0` 的文件系统中删除 `zookeeper-ready` 脚本。
 
 ```shell
-kubectl exec zk-0 -- rm /usr/bin/zookeeper-ready
+kubectl exec zk-0 -- rm /opt/zookeeper/bin/zookeeper-ready
 ```
 
 <!--
@@ -1406,6 +1414,8 @@ kubernetes-node-i4c4
 <!--
 Use [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain) to cordon and
 drain the node on which the `zk-0` Pod is scheduled.
+
+The output is similar to this:
 -->
 
 使用 [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain)
@@ -1415,6 +1425,8 @@ drain the node on which the `zk-0` Pod is scheduled.
 kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
 
+输出类似于:
+
 ```
 node "kubernetes-node-pb41" cordoned
 
@@ -1449,14 +1461,19 @@ zk-0 1/1 Running 0 1m
 <!--
 Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node on which
 `zk-1` is scheduled.
+
+The output is similar to this:
 -->
 在第一个终端中持续观察 `StatefulSet` 的 Pods 并腾空 `zk-1` 调度所在的节点。
 
 ```shell
-kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force -delete-emptydir-data "kubernetes-node-ixsl" cordoned
+kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
 
+输出类似于:
+
 ```
+kubernetes-node-ixsl" cordoned
 WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
 pod "zk-1" deleted
 node "kubernetes-node-ixsl" drained
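Editor's note on the hunk above: the old line had a single-dash `-delete-emptydir-data` with the cordon output glued onto the command; the fix uses the real two-dash flag and moves the output into its own block. A sketch that composes the corrected drain command without contacting a cluster (the Pod name and `--template` trick are the ones used on this page):

```shell
# Sketch only: build the corrected drain command for a given zk Pod.
pod="zk-1"

# Same --template trick the tutorial uses to resolve the Pod's node:
node_query="kubectl get pod ${pod} --template {{.spec.nodeName}}"

# Note the two-dash --delete-emptydir-data flag this commit fixes.
drain_cmd="kubectl drain \$(${node_query}) --ignore-daemonsets --force --delete-emptydir-data"
echo "${drain_cmd}"
```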
@@ -1465,6 +1482,8 @@ node "kubernetes-node-ixsl" drained
 <!--
 The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `PodAntiAffinity` rule preventing
 co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state.
+
+The output is similar to this:
 -->
 `zk-1` Pod 不能被调度,这是因为 `zk` `StatefulSet` 包含了一个防止 Pods
 共存的 `PodAntiAffinity` 规则,而且只有两个节点可用于调度,
@@ -1474,6 +1493,8 @@ co-location of the Pods, and as only two nodes are schedulable, the Pod will rem
 kubectl get pods -w -l app=zk
 ```
 
+输出类似于:
+
 ```
 NAME      READY   STATUS    RESTARTS   AGE
 zk-0      1/1     Running   2          1h
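Editor's note: the Pending state above is caused by the `PodAntiAffinity` rule in the `zk` StatefulSet. One way to inspect that rule is a JSONPath query against the StatefulSet's Pod template; this sketch only composes the command string (the field path follows the Pod spec API, the `zk` name is from the tutorial):

```shell
# Sketch only: compose a query for the anti-affinity rule on the zk StatefulSet.
jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'

inspect_cmd="kubectl get sts zk -o jsonpath='${jsonpath}'"
echo "${inspect_cmd}"
```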
@@ -1500,12 +1521,17 @@ zk-1 0/1 Pending 0 0s
 <!--
 Continue to watch the Pods of the StatefulSet, and drain the node on which
 `zk-2` is scheduled.
+
+The output is similar to this:
 -->
 继续观察 StatefulSet 中的 Pods 并腾空 `zk-2` 调度所在的节点。
 
 ```shell
 kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
+
+输出类似于:
+
 ```
 node "kubernetes-node-i4c4" cordoned
 
@@ -1556,6 +1582,8 @@ numChildren = 0
 
 <!--
 Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#uncordon) to uncordon the first node.
+
+The output is similar to this:
 -->
 使用 [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#uncordon)
 来取消对第一个节点的隔离。
@@ -1564,19 +1592,25 @@ Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#unc
 kubectl uncordon kubernetes-node-pb41
 ```
 
+输出类似于:
+
 ```
 node "kubernetes-node-pb41" uncordoned
 ```
 
 <!--
 `zk-1` is rescheduled on this node. Wait until `zk-1` is Running and Ready.
+
+The output is similar to this:
 -->
 `zk-1` 被重新调度到了这个节点。等待 `zk-1` 变为 Running 和 Ready 状态。
 
 ```shell
 kubectl get pods -w -l app=zk
 ```
 
+输出类似于:
+
 ```
 NAME      READY   STATUS    RESTARTS   AGE
 zk-0      1/1     Running   2          1h
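Editor's note: the page uncordons the drained nodes one by one so the Pending zk Pods (held back by `PodAntiAffinity`) can be rescheduled. A sketch that loops over the node names used on this page and prints the corresponding commands; on another cluster the names would differ:

```shell
# Sketch only: print an uncordon command for each node drained on this page.
nodes="kubernetes-node-pb41 kubernetes-node-ixsl"

for n in $nodes; do
  # kubectl uncordon marks the node schedulable again.
  echo "kubectl uncordon ${n}"
done
```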
@@ -1614,9 +1648,9 @@ kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-dae
 ```
 
 <!--
-The output:
+The output is similar to this:
 -->
-输出
+输出类似于
 
 ```
 node "kubernetes-node-i4c4" already cordoned
@@ -1630,6 +1664,8 @@ node "kubernetes-node-i4c4" drained
 This time `kubectl drain` succeeds.
 
 Uncordon the second node to allow `zk-2` to be rescheduled.
+
+The output is similar to this:
 -->
 这次 `kubectl drain` 执行成功。
 
@@ -1639,6 +1675,8 @@ Uncordon the second node to allow `zk-2` to be rescheduled.
 kubectl uncordon kubernetes-node-ixsl
 ```
 
+输出类似于:
+
 ```
 node "kubernetes-node-ixsl" uncordoned
 ```
