Commit dd089f4

[zh] sync files in /tasks/run-application/
1 parent 1a1f558 commit dd089f4

3 files changed: +75 -42 lines changed

content/zh-cn/docs/tasks/run-application/configure-pdb.md

Lines changed: 39 additions & 28 deletions
@@ -84,7 +84,9 @@ selector goes into the PDBs `.spec.selector`.
 `.spec.selector` 字段中加入同样的选择算符。

 <!--
-From version 1.15 PDBs support custom controllers where the [scale subresource](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource) is enabled.
+From version 1.15 PDBs support custom controllers where the
+[scale subresource](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource)
+is enabled.
 -->
 从 1.15 版本开始,PDB 支持启用
 [Scale 子资源](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource)
@@ -122,7 +124,8 @@ due to a voluntary disruption.
 - Multiple-instance Stateful application such as Consul, ZooKeeper, or etcd:
   - Concern: Do not reduce number of instances below quorum, otherwise writes fail.
   - Possible Solution 1: set maxUnavailable to 1 (works with varying scale of application).
-  - Possible Solution 2: set minAvailable to quorum-size (e.g. 3 when scale is 5). (Allows more disruptions at once).
+  - Possible Solution 2: set minAvailable to quorum-size (e.g. 3 when scale is 5).
+    (Allows more disruptions at once).
 - Restartable Batch Job:
   - Concern: Job needs to complete in case of voluntary disruption.
   - Possible solution: Do not create a PDB. The Job controller will create a replacement pod.
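As a rough sketch of the quorum case in the hunk above, a PDB using `maxUnavailable: 1` might look like this; the name and the `app: zookeeper` label are illustrative and not part of the commit:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb        # hypothetical name
spec:
  maxUnavailable: 1   # never let voluntary disruptions remove more than one member at a time
  selector:
    matchLabels:
      app: zookeeper  # assumed label on the quorum members
```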
@@ -155,23 +158,26 @@ Values for `minAvailable` or `maxUnavailable` can be expressed as integers or as
 `minAvailable``maxUnavailable` 的值可以表示为整数或百分比。

 <!--
-- When you specify an integer, it represents a number of Pods. For instance, if you set `minAvailable` to 10, then 10
-Pods must always be available, even during a disruption.
-- When you specify a percentage by setting the value to a string representation of a percentage (eg. `"50%"`), it represents a percentage of
-total Pods. For instance, if you set `minAvailable` to `"50%"`, then at least 50% of the Pods remain available during a
-disruption.
+- When you specify an integer, it represents a number of Pods. For instance, if you set
+  `minAvailable` to 10, then 10 Pods must always be available, even during a disruption.
+- When you specify a percentage by setting the value to a string representation of a
+  percentage (eg. `"50%"`), it represents a percentage of total Pods. For instance, if
+  you set `minAvailable` to `"50%"`, then at least 50% of the Pods remain available
+  during a disruption.
 -->
 - 指定整数值时,它表示 Pod 个数。例如,如果将 `minAvailable` 设置为 10,
   那么即使在干扰期间,也必须始终有 10 个 Pod 可用。
 - 通过将值设置为百分比的字符串表示形式(例如 `"50%"`)来指定百分比时,它表示占总 Pod 数的百分比。
   例如,如果将 `minAvailable` 设置为 `"50%"`,则干扰期间至少 50% 的 Pod 保持可用。

 <!--
-When you specify the value as a percentage, it may not map to an exact number of Pods. For example, if you have 7 Pods and
-you set `minAvailable` to `"50%"`, it's not immediately obvious whether that means 3 Pods or 4 Pods must be available.
-Kubernetes rounds up to the nearest integer, so in this case, 4 Pods must be available. When you specify the value
-`maxUnavailable` as a percentage, Kubernetes rounds up the number of Pods that may be disrupted. Thereby a disruption
-can exceed your defined `maxUnavailable` percentage. You can examine the
+When you specify the value as a percentage, it may not map to an exact number of Pods.
+For example, if you have 7 Pods and you set `minAvailable` to `"50%"`, it's not
+immediately obvious whether that means 3 Pods or 4 Pods must be available. Kubernetes
+rounds up to the nearest integer, so in this case, 4 Pods must be available. When you
+specify the value `maxUnavailable` as a percentage, Kubernetes rounds up the number of
+Pods that may be disrupted. Thereby a disruption can exceed your defined
+`maxUnavailable` percentage. You can examine the
 [code](https://github.com/kubernetes/kubernetes/blob/23be9587a0f8677eb8091464098881df939c44a9/pkg/controller/disruption/disruption.go#L539)
 that controls this behavior.
 -->
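A minimal manifest using the percentage form discussed above; the name and selector are assumptions for illustration:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb    # hypothetical name
spec:
  minAvailable: "50%"   # with 7 matching Pods, Kubernetes rounds up: 4 must stay available
  selector:
    matchLabels:
      app: frontend     # assumed workload label
```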
@@ -285,8 +291,8 @@ Pod 的数量低于预算指定值。预算只能够针对自发的驱逐提供
 If you set `maxUnavailable` to 0% or 0, or you set `minAvailable` to 100% or the number of replicas,
 you are requiring zero voluntary evictions. When you set zero voluntary evictions for a workload
 object such as ReplicaSet, then you cannot successfully drain a Node running one of those Pods.
-If you try to drain a Node where an unevictable Pod is running, the drain never completes. This is permitted as per the
-semantics of `PodDisruptionBudget`.
+If you try to drain a Node where an unevictable Pod is running, the drain never completes.
+This is permitted as per the semantics of `PodDisruptionBudget`.
 -->
 如果你将 `maxUnavailable` 的值设置为 0%(或 0)或设置 `minAvailable` 值为 100%(或等于副本数)
 则会阻止所有的自愿驱逐。
@@ -409,7 +415,8 @@ status:
 <!--
 ### Healthiness of a Pod

-The current implementation considers healthy pods, as pods that have `.status.conditions` item with `type="Ready"` and `status="True"`.
+The current implementation considers healthy pods, as pods that have `.status.conditions`
+item with `type="Ready"` and `status="True"`.
 These pods are tracked via `.status.currentHealthy` field in the PDB status.
 -->
 ### Pod 的健康 {#healthiness-of-a-pod}
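To see the `.status.currentHealthy` tracking the hunk refers to, you can read back the PDB status; `zk-pdb` is a placeholder name:

```shell
# Print the PDB, including status.currentHealthy and status.desiredHealthy
kubectl get poddisruptionbudgets zk-pdb -o yaml
```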
@@ -454,13 +461,16 @@ Policies:

 <!--
 `IfHealthyBudget`
-: Running pods (`.status.phase="Running"`), but not yet healthy can be evicted only if the guarded application is not
-disrupted (`.status.currentHealthy` is at least equal to `.status.desiredHealthy`).
+: Running pods (`.status.phase="Running"`), but not yet healthy can be evicted only
+  if the guarded application is not disrupted (`.status.currentHealthy` is at least
+  equal to `.status.desiredHealthy`).

-: This policy ensures that running pods of an already disrupted application have the best chance to become healthy.
-This has negative implications for draining nodes, which can be blocked by misbehaving applications that are guarded by a PDB.
-More specifically applications with pods in `CrashLoopBackOff` state (due to a bug or misconfiguration),
-or pods that are just failing to report the `Ready` condition.
+: This policy ensures that running pods of an already disrupted application have
+  the best chance to become healthy. This has negative implications for draining
+  nodes, which can be blocked by misbehaving applications that are guarded by a PDB.
+  More specifically applications with pods in `CrashLoopBackOff` state
+  (due to a bug or misconfiguration), or pods that are just failing to report the
+  `Ready` condition.
 -->
 `IfHealthyBudget`
 : 对于运行中但还不健康的 Pod(`.status.phase="Running"`),只有所守护的应用程序不受干扰
@@ -473,13 +483,14 @@ or pods that are just failing to report the `Ready` condition.

 <!--
 `AlwaysAllow`
-: Running pods (`.status.phase="Running"`), but not yet healthy are considered disrupted and can be evicted
-regardless of whether the criteria in a PDB is met.
-
-: This means prospective running pods of a disrupted application might not get a chance to become healthy.
-By using this policy, cluster managers can easily evict misbehaving applications that are guarded by a PDB.
-More specifically applications with pods in `CrashLoopBackOff` state (due to a bug or misconfiguration),
-or pods that are just failing to report the `Ready` condition.
+: Running pods (`.status.phase="Running"`), but not yet healthy are considered
+  disrupted and can be evicted regardless of whether the criteria in a PDB is met.
+
+: This means prospective running pods of a disrupted application might not get a
+  chance to become healthy. By using this policy, cluster managers can easily evict
+  misbehaving applications that are guarded by a PDB. More specifically applications
+  with pods in `CrashLoopBackOff` state (due to a bug or misconfiguration), or pods
+  that are just failing to report the `Ready` condition.
 -->
 `AlwaysAllow`
 : 运行中但还不健康的 Pod(`.status.phase="Running"`)将被视为已受干扰且可以被驱逐,
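A sketch of a PDB that opts into the `AlwaysAllow` behavior described in these hunks; the metadata and selector are hypothetical:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb                          # hypothetical name
spec:
  maxUnavailable: 1
  unhealthyPodEvictionPolicy: AlwaysAllow  # Running-but-not-Ready Pods may be evicted even if the budget is not met
  selector:
    matchLabels:
      app: nginx                           # assumed workload label
```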

content/zh-cn/docs/tasks/run-application/delete-stateful-set.md

Lines changed: 23 additions & 8 deletions
@@ -34,7 +34,8 @@ This task shows you how to delete a {{< glossary_tooltip term_id="StatefulSet" >
 <!--
 ## Deleting a StatefulSet

-You can delete a StatefulSet in the same way you delete other resources in Kubernetes: use the `kubectl delete` command, and specify the StatefulSet either by file or by name.
+You can delete a StatefulSet in the same way you delete other resources in Kubernetes:
+use the `kubectl delete` command, and specify the StatefulSet either by file or by name.
 -->
 ## 删除 StatefulSet {#deleting-a-statefulset}

@@ -68,8 +69,9 @@ kubectl delete service <服务名称>
 ```

 <!--
-When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. All Pods that are part of this workload are also deleted. If you want to delete only the StatefulSet and not the Pods, use `--cascade=orphan`.
-For example:
+When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0.
+All Pods that are part of this workload are also deleted. If you want to delete
+only the StatefulSet and not the Pods, use `--cascade=orphan`. For example:
 --->
 当通过 `kubectl` 删除 StatefulSet 时,StatefulSet 会被缩容为 0。
 属于该 StatefulSet 的所有 Pod 也被删除。
@@ -80,7 +82,9 @@ kubectl delete -f <file.yaml> --cascade=orphan
 ```

 <!--
-By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted. If the pods have a label `app.kubernetes.io/name=MyApp`, you can then delete them as follows:
+By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet
+are left behind even after the StatefulSet object itself is deleted. If the pods have
+a label `app.kubernetes.io/name=MyApp`, you can then delete them as follows:
 --->
 通过将 `--cascade=orphan` 传递给 `kubectl delete`,在删除 StatefulSet 对象之后,
 StatefulSet 管理的 Pod 会被保留下来。如果 Pod 具有标签 `app.kubernetes.io/name=MyApp`
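Put together, the flow described in these hunks is roughly the following; `web.yaml` is a placeholder manifest name, and the label comes from the example in the file:

```shell
# Delete only the StatefulSet object, leaving its Pods running
kubectl delete -f web.yaml --cascade=orphan

# Later, remove the orphaned Pods by label
kubectl delete pods -l app.kubernetes.io/name=MyApp
```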
@@ -93,7 +97,12 @@ kubectl delete pods -l app.kubernetes.io/name=MyApp
 <!--
 ### Persistent Volumes

-Deleting the Pods in a StatefulSet will not delete the associated volumes. This is to ensure that you have the chance to copy data off the volume before deleting it. Deleting the PVC after the pods have terminated might trigger deletion of the backing Persistent Volumes depending on the storage class and reclaim policy. You should never assume ability to access a volume after claim deletion.
+Deleting the Pods in a StatefulSet will not delete the associated volumes.
+This is to ensure that you have the chance to copy data off the volume before
+deleting it. Deleting the PVC after the pods have terminated might trigger
+deletion of the backing Persistent Volumes depending on the storage class
+and reclaim policy. You should never assume ability to access a volume
+after claim deletion.
 -->
 ### 持久卷 {#persistent-volumes}

@@ -111,7 +120,8 @@ Use caution when deleting a PVC, as it may lead to data loss.
 <!--
 ### Complete deletion of a StatefulSet

-To delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following:
+To delete everything in a StatefulSet, including the associated pods,
+you can run a series of commands similar to the following:
 -->
 ### 完全删除 StatefulSet {#complete-deletion-of-a-statefulset}

@@ -126,14 +136,19 @@ kubectl delete pvc -l app.kubernetes.io/name=MyApp
 ```

 <!--
-In the example above, the Pods have the label `app.kubernetes.io/name=MyApp`; substitute your own label as appropriate.
+In the example above, the Pods have the label `app.kubernetes.io/name=MyApp`;
+substitute your own label as appropriate.
 -->
 在上面的例子中,Pod 的标签为 `app.kubernetes.io/name=MyApp`;适当地替换你自己的标签。

 <!--
 ### Force deletion of StatefulSet pods

-If you find that some pods in your StatefulSet are stuck in the 'Terminating' or 'Unknown' states for an extended period of time, you may need to manually intervene to forcefully delete the pods from the apiserver. This is a potentially dangerous task. Refer to [Force Delete StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/) for details.
+If you find that some pods in your StatefulSet are stuck in the 'Terminating'
+or 'Unknown' states for an extended period of time, you may need to manually
+intervene to forcefully delete the pods from the apiserver.
+This is a potentially dangerous task. Refer to
+[Force Delete StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/)
 -->
 ### 强制删除 StatefulSet 的 Pod {#force-deletion-of-statefulset-pods}
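For reference, the forced deletion that the linked page covers boils down to a command like the one below; `web-0` is a hypothetical stuck Pod, and this should only be used when you are sure the Pod will not come back on its own:

```shell
# Remove the Pod object from the API server without waiting for kubelet confirmation
kubectl delete pods web-0 --grace-period=0 --force
```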

content/zh-cn/docs/tasks/run-application/scale-stateful-set.md

Lines changed: 13 additions & 6 deletions
@@ -19,7 +19,8 @@ weight: 50

 <!-- overview -->
 <!--
-This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to increasing or decreasing the number of replicas.
+This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to
+increasing or decreasing the number of replicas.
 -->
 本文介绍如何扩缩 StatefulSet。StatefulSet 的扩缩指的是增加或者减少副本个数。

@@ -29,7 +30,9 @@ This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to incr
 - StatefulSets are only available in Kubernetes version 1.5 or later.
   To check your version of Kubernetes, run `kubectl version`.

-- Not all stateful applications scale nicely. If you are unsure about whether to scale your StatefulSets, see [StatefulSet concepts](/docs/concepts/workloads/controllers/statefulset/) or [StatefulSet tutorial](/docs/tutorials/stateful-application/basic-stateful-set/) for further information.
+- Not all stateful applications scale nicely. If you are unsure about whether
+  to scale your StatefulSets, see [StatefulSet concepts](/docs/concepts/workloads/controllers/statefulset/)
+  or [StatefulSet tutorial](/docs/tutorials/stateful-application/basic-stateful-set/) for further information.

 - You should perform scaling only when you are confident that your stateful application
   cluster is completely healthy.
@@ -82,7 +85,9 @@ kubectl scale statefulsets <statefulset 名称> --replicas=<新的副本数>
 <!--
 ### Make in-place updates on your StatefulSets

-Alternatively, you can do [in-place updates](/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources) on your StatefulSets.
+Alternatively, you can do
+[in-place updates](/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources)
+on your StatefulSets.

 If your StatefulSet was initially created with `kubectl apply`,
 update `.spec.replicas` of the StatefulSet manifests, and then do a `kubectl apply`:
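A short sketch of the two scaling paths mentioned here, assuming a StatefulSet named `web` defined in `web.yaml` (both names are placeholders):

```shell
# Declarative: edit .spec.replicas in the manifest, then re-apply it
kubectl apply -f web.yaml

# Imperative: scale directly without touching the manifest
kubectl scale statefulsets web --replicas=5
```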
@@ -137,10 +142,12 @@ kubectl patch statefulsets <statefulset 名称> -p '{"spec":{"replicas":<new-rep
 ### 缩容操作无法正常工作 {#scaling-down-does-not-work}

 <!--
-You cannot scale down a StatefulSet when any of the stateful Pods it manages is unhealthy. Scaling down only takes place
-after those stateful Pods become running and ready.
+You cannot scale down a StatefulSet when any of the stateful Pods it manages is
+unhealthy. Scaling down only takes place after those stateful Pods become running and ready.

-If spec.replicas > 1, Kubernetes cannot determine the reason for an unhealthy Pod. It might be the result of a permanent fault or of a transient fault. A transient fault can be caused by a restart required by upgrading or maintenance.
+If spec.replicas > 1, Kubernetes cannot determine the reason for an unhealthy Pod.
+It might be the result of a permanent fault or of a transient fault. A transient
+fault can be caused by a restart required by upgrading or maintenance.
 -->
 当 Stateful 所管理的任何 Pod 不健康时,你不能对该 StatefulSet 执行缩容操作。
 仅当 StatefulSet 的所有 Pod 都处于运行状态和 Ready 状况后才可缩容。
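To find the Pod that is blocking a scale-down as described above, a check along these lines can help; the `app=web` label and the Pod name are placeholders:

```shell
# Any Pod that is not Running and Ready will block the scale-down
kubectl get pods -l app=web

# Inspect the unhealthy Pod to decide whether the fault is transient or permanent
kubectl describe pod web-2
```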
