Commit af5cd5c

Merge pull request #28179 from chenrui333/zh/sync-concepts-workloads-files

zh: resync concepts workloads files

2 parents 50395fc + 29e7f7c

3 files changed: +17 −14 lines changed
content/zh/docs/concepts/workloads/pods/disruptions.md

Lines changed: 10 additions & 6 deletions
```diff
@@ -155,18 +155,23 @@ and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/)
 
 <!--
 The frequency of voluntary disruptions varies. On a basic Kubernetes cluster, there are
-no voluntary disruptions at all. However, your cluster administrator or hosting provider
+no automated voluntary disruptions (only user-triggered ones). However, your cluster administrator or hosting provider
 may run some additional services which cause voluntary disruptions. For example,
 rolling out node software updates can cause voluntary disruptions. Also, some implementations
 of cluster (node) autoscaling may cause voluntary disruptions to defragment and compact nodes.
 Your cluster administrator or hosting provider should have documented what level of voluntary
-disruptions, if any, to expect.
+disruptions, if any, to expect. Certain configuration options, such as
+[using PriorityClasses](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/)
+in your pod spec can also cause voluntary (and involuntary) disruptions.
 -->
-自愿干扰的频率各不相同。在一个基本的 Kubernetes 集群中,根本没有自愿干扰。然而,集群管理
-或托管提供商可能运行一些可能导致自愿干扰的额外服务。例如,节点软
+自愿干扰的频率各不相同。在一个基本的 Kubernetes 集群中,没有自愿干扰(只有用户触发的干扰)。
+然而,集群管理员或托管提供商可能运行一些可能导致自愿干扰的额外服务。例如,节点软
 更新可能导致自愿干扰。另外,集群(节点)自动缩放的某些
 实现可能导致碎片整理和紧缩节点的自愿干扰。集群
 管理员或托管提供商应该已经记录了各级别的自愿干扰(如果有的话)。
+有些配置选项,例如在 pod spec 中
+[使用 PriorityClasses](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/)
+也会产生自愿(和非自愿)的干扰。
 
 <!--
 Kubernetes offers features to help run highly available applications at the same
```
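The sentence added in this hunk points at PriorityClasses as a source of voluntary (and involuntary) disruptions, since higher-priority Pods can preempt lower-priority ones. As a hedged sketch of what that looks like in practice (the names, value, and image below are hypothetical, not part of this commit):

```yaml
# Hypothetical PriorityClass: a higher `value` lets the scheduler
# preempt (disrupt) lower-priority Pods to make room.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority        # hypothetical name
value: 1000000
globalDefault: false
description: "For latency-critical workloads that may preempt others."
---
# A Pod opts in by referencing the class via priorityClassName.
apiVersion: v1
kind: Pod
metadata:
  name: critical-app         # hypothetical name
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx:1.21        # illustrative image
```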
```diff
@@ -267,7 +272,7 @@ during application updates is configured in spec for the specific workload resou
 <!--
 When a pod is evicted using the eviction API, it is gracefully
 [terminated](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination),
-hornoring the
+hornoring the
 `terminationGracePeriodSeconds` setting in its [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).
 -->
 当使用驱逐 API 驱逐 Pod 时,Pod 会被体面地
```
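This hunk concerns Pods evicted through the eviction API being terminated gracefully per `terminationGracePeriodSeconds`. As a sketch (the Pod name and namespace are hypothetical, and the `Eviction` kind lived in `policy/v1beta1` before graduating to `policy/v1` in Kubernetes v1.22), an eviction request body looks roughly like:

```yaml
# Hypothetical Eviction request, POSTed to the Pod's eviction
# subresource, e.g. /api/v1/namespaces/default/pods/myapp-pod/eviction.
# The kubelet then terminates the Pod gracefully, honoring its
# terminationGracePeriodSeconds.
apiVersion: policy/v1
kind: Eviction
metadata:
  name: myapp-pod            # hypothetical Pod name
  namespace: default
```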
```diff
@@ -504,4 +509,3 @@ the nodes in your cluster, such as a node or system software upgrade, here are s
 * 进一步了解[排空节点](/zh/docs/tasks/administer-cluster/safely-drain-node/)的信息。
 * 了解[更新 Deployment](/zh/docs/concepts/workloads/controllers/deployment/#updating-a-deployment)
   的过程,包括如何在其进程中维持应用的可用性
-
```
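The disruptions page patched above centers on budgeting voluntary disruptions with PodDisruptionBudgets, which the eviction API consults before draining a node. A minimal sketch (all names are hypothetical) of a budget that keeps at least two replicas up during drains:

```yaml
# Hypothetical PodDisruptionBudget: the eviction API refuses voluntary
# evictions that would leave fewer than 2 matching Pods running.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb          # hypothetical name
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: example-app       # must match the workload's Pod labels
```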

content/zh/docs/concepts/workloads/pods/init-containers.md

Lines changed: 3 additions & 4 deletions
```diff
@@ -54,7 +54,7 @@ Init 容器与普通的容器非常像,除了如下两点:
 * 每个都必须在下一个启动之前成功完成。
 
 <!--
-If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds.
+If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds.
 However, if the Pod has a `restartPolicy` of Never, and an init container fails during startup of that Pod, Kubernetes treats the overall Pod as failed.
 -->
 如果 Pod 的 Init 容器失败,kubelet 会不断地重启该 Init 容器直到该容器成功为止。
```
```diff
@@ -391,10 +391,10 @@ myapp-pod 1/1 Running 0 9m
 
 <!--
 This simple example should provide some inspiration for you to create your own
-init containers. [What's next](#whats-next) contains a link to a more detailed example.
+init containers. [What's next](#what-s-next) contains a link to a more detailed example.
 -->
 这个简单例子应该能为你创建自己的 Init 容器提供一些启发。
-[接下来](#whats-next)节提供了更详细例子的链接。
+[接下来](#what-s-next)节提供了更详细例子的链接。
 
 <!--
 ## Detailed behavior
```
```diff
@@ -546,4 +546,3 @@ Pod 不会被重启。这一行为适用于 Kubernetes v1.20 及更新版本。
 -->
 * 阅读[创建包含 Init 容器的 Pod](/zh/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
 * 学习如何[调试 Init 容器](/zh/docs/tasks/debug-application-cluster/debug-init-containers/)
-
```
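The init-containers page patched above describes init containers running to completion, one at a time, before the app containers start, with the kubelet restarting a failed init container unless `restartPolicy` is `Never`. A minimal sketch in the spirit of the page's `myapp-pod` example (the image tag, service name, and commands are illustrative):

```yaml
# Each init container must exit successfully before the next starts;
# on failure the kubelet restarts it, unless restartPolicy is Never,
# in which case the whole Pod is marked failed.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  restartPolicy: Always
  initContainers:
  - name: wait-for-dns           # hypothetical gate before the app runs
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do sleep 2; done']
  containers:
  - name: myapp
    image: busybox:1.28
    command: ['sh', '-c', 'echo app running && sleep 3600']
```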

content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -27,7 +27,7 @@ You can use _topology spread constraints_ to control how {{< glossary_tooltip te
 
 <!--
 {{< note >}}
-In versions of Kubernetes before v1.19, you must enable the `EvenPodsSpread`
+In versions of Kubernetes before v1.18, you must enable the `EvenPodsSpread`
 [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on
 the [API server](/docs/concepts/overview/components/#kube-apiserver) and the
 [scheduler](/docs/reference/generated/kube-scheduler/) in order to use Pod
```
```diff
@@ -36,7 +36,8 @@ topology spread constraints.
 -->
 
 {{< note >}}
-在 v1.19 之前的 Kubernetes 版本中,如果要使用 Pod 拓扑扩展约束,你必须在 [API 服务器](/zh/docs/concepts/overview/components/#kube-apiserver)
+在 v1.18 之前的 Kubernetes 版本中,如果要使用 Pod 拓扑扩展约束,你必须在
+[API 服务器](/zh/docs/concepts/overview/components/#kube-apiserver)
 [调度器](/zh/docs/reference/command-line-tools-reference/kube-scheduler/)
 中启用 `EvenPodsSpread` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
 {{< /note >}}
```
```diff
@@ -218,7 +219,7 @@ If we want an incoming Pod to be evenly spread with existing Pods across zones, 
 则让它保持悬决状态。
 
 <!--
-If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1],
+If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1],
 hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed onto "zoneB":
 -->
 如果调度器将新的 Pod 放入 "zoneA",Pods 分布将变为 [3, 1],因此实际的偏差
```
```diff
@@ -645,4 +646,3 @@ See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig
 -->
 - [博客: PodTopologySpread介绍](https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/)
   详细解释了 `maxSkew`,并给出了一些高级的使用示例。
-
```
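The topology-spread hunks above reason about a [3, 1] zone distribution violating `maxSkew: 1`, so the incoming Pod may only land in the emptier zone. A sketch of the constraint being discussed (the Pod name, label, and image are hypothetical):

```yaml
# With maxSkew: 1 across zones and whenUnsatisfiable: DoNotSchedule,
# any placement that would make the zone counts of matching Pods
# differ by more than 1 (e.g. [3, 1]) is rejected.
apiVersion: v1
kind: Pod
metadata:
  name: mypod                # hypothetical name
  labels:
    app: example-app
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: example-app     # counts Pods carrying this label
  containers:
  - name: app
    image: nginx:1.21        # illustrative image
```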
