Commit 29e152a (2 parents: 4c32ddf + ad50091)
Merge pull request #39341 from Zhuzhenghao/disruption
[zh] Cleanup page disruptions

1 file changed

content/zh-cn/docs/concepts/workloads/pods/disruptions.md: 16 additions, 15 deletions
@@ -212,7 +212,8 @@ For example, the `kubectl drain` subcommand lets you mark a node as going out of
 service. When you run `kubectl drain`, the tool tries to evict all of the Pods on
 the Node you're taking out of service. The eviction request that `kubectl` submits on
 your behalf may be temporarily rejected, so the tool periodically retries all failed
-requests until all Pods on the target node are terminated, or until a configurable timeout is reached.
+requests until all Pods on the target node are terminated, or until a configurable timeout
+is reached.
 -->
 例如,`kubectl drain` 命令可以用来标记某个节点即将停止服务。
 运行 `kubectl drain` 命令时,工具会尝试驱逐你所停服的节点上的所有 Pod。
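The drain-and-retry behavior described in this hunk can be exercised as follows; the node name and timeout value are illustrative, not taken from the change:

```shell
# Evict all Pods from a node, retrying failed eviction requests
# until they succeed or the (configurable) timeout is reached.
# "node-1" is a hypothetical node name.
kubectl drain node-1 --ignore-daemonsets --timeout=300s
```

`--timeout` bounds the retry loop mentioned in the reworded sentence; if it elapses before every Pod is evicted, `kubectl drain` exits with an error.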
@@ -426,7 +427,7 @@ can happen, according to:
 - 控制器的类型
 - 集群的资源能力
 
-<!--
+<!--
 ## Pod disruption conditions {#pod-disruption-conditions}
 -->
 ## Pod 干扰状况 {#pod-disruption-conditions}
@@ -451,7 +452,7 @@ enabled in your cluster.
 [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
 {{< /note >}}
 
-<!--
+<!--
 When enabled, a dedicated Pod `DisruptionTarget` [condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions) is added to indicate
 that the Pod is about to be deleted due to a {{<glossary_tooltip term_id="disruption" text="disruption">}}.
 The `reason` field of the condition additionally
@@ -462,18 +463,18 @@ indicates one of the following reasons for the Pod termination:
 用来表明该 Pod 因为发生{{<glossary_tooltip term_id="disruption" text="干扰">}}而被删除。
 状况中的 `reason` 字段进一步给出 Pod 终止的原因,如下:
 
-<!--
+<!--
 `PreemptionByKubeScheduler`
-: Pod is due to be {{<glossary_tooltip term_id="preemption" text="preempted">}} by a scheduler in order to accommodate a new Pod with a higher priority. For more information, see [Pod priority preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/).
+: Pod is due to be {{<glossary_tooltip term_id="preemption" text="preempted">}} by a scheduler in order to accommodate a new Pod with a higher priority. For more information, see [Pod priority preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/).
 -->
 `PreemptionByKubeScheduler`
 : Pod 将被调度器{{<glossary_tooltip term_id="preemption" text="抢占">}},
 目的是接受优先级更高的新 Pod。
 要了解更多的相关信息,请参阅 [Pod 优先级和抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)
 
-<!--
+<!--
 `DeletionByTaintManager`
-: Pod is due to be deleted by Taint Manager (which is part of the node lifecycle controller within `kube-controller-manager`) due to a `NoExecute` taint that the Pod does not tolerate; see {{<glossary_tooltip term_id="taint" text="taint">}}-based evictions.
+: Pod is due to be deleted by Taint Manager (which is part of the node lifecycle controller within `kube-controller-manager`) due to a `NoExecute` taint that the Pod does not tolerate; see {{<glossary_tooltip term_id="taint" text="taint">}}-based evictions.
 -->
 `DeletionByTaintManager`
 : 由于 Pod 不能容忍 `NoExecute` 污点,Pod 将被
@@ -482,14 +483,14 @@ Taint Manager(`kube-controller-manager` 中节点生命周期控制器的一
 
 <!--
 `EvictionByEvictionAPI`
-: Pod has been marked for {{<glossary_tooltip term_id="api-eviction" text="eviction using the Kubernetes API">}} .
+: Pod has been marked for {{<glossary_tooltip term_id="api-eviction" text="eviction using the Kubernetes API">}}.
 -->
 `EvictionByEvictionAPI`
 : Pod 已被标记为{{<glossary_tooltip term_id="api-eviction" text="通过 Kubernetes API 驱逐">}}。
 
-<!--
+<!--
 `DeletionByPodGC`
-: Pod, that is bound to a no longer existing Node, is due to be deleted by [Pod garbage collection](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection).
+: Pod, that is bound to a no longer existing Node, is due to be deleted by [Pod garbage collection](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection).
 -->
 `DeletionByPodGC`
 : 绑定到一个不再存在的 Node 上的 Pod 将被
@@ -501,16 +502,16 @@ Taint Manager(`kube-controller-manager` 中节点生命周期控制器的一
 -->
 `TerminationByKubelet`
 : Pod
-由于{{<glossary_tooltip term_id="node-pressure-eviction" text="节点压力驱逐">}}或[节点体面关闭](/zh-cn/docs/concepts/architecture/nodes/#graceful-node-shutdown)而被
-kubelet 终止。
+由于{{<glossary_tooltip term_id="node-pressure-eviction" text="节点压力驱逐">}}或[节点体面关闭](/zh-cn/docs/concepts/architecture/nodes/#graceful-node-shutdown)而被
+kubelet 终止。
 
 {{< note >}}
 <!--
 A Pod disruption might be interrupted. The control plane might re-attempt to
 continue the disruption of the same Pod, but it is not guaranteed. As a result,
 the `DisruptionTarget` condition might be added to a Pod, but that Pod might then not actually be
 deleted. In such a situation, after some time, the
-Pod disruption condition will be cleared.
+Pod disruption condition will be cleared.
 -->
 Pod 的干扰可能会被中断。控制平面可能会重新尝试继续干扰同一个 Pod,但这没办法保证。
 因此,`DisruptionTarget` 条件可能会被添加到 Pod 上,
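Whether the `DisruptionTarget` condition is currently present on a Pod (or has already been cleared again, as the note describes) can be inspected with a JSONPath query; the Pod name here is hypothetical:

```shell
# Print the DisruptionTarget condition of a Pod, if any is set.
kubectl get pod my-pod \
  -o jsonpath='{.status.conditions[?(@.type=="DisruptionTarget")]}'
```

An empty result means the condition is absent; otherwise the output includes the `reason` field discussed above (for example `EvictionByEvictionAPI`).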
@@ -527,9 +528,9 @@ phase (see also [Pod garbage collection](/docs/concepts/workloads/pods/pod-lifec
 则 Pod 垃圾回收器 (PodGC) 也会将这些 Pod 标记为失效
 (另见 [Pod 垃圾回收](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection))。
 
-<!--
+<!--
 When using a Job (or CronJob), you may want to use these Pod disruption conditions as part of your Job's
-[Pod failure policy](/docs/concepts/workloads/controllers/job#pod-failure-policy).
+[Pod failure policy](/docs/concepts/workloads/controllers/job#pod-failure-policy).
 -->
 使用 Job(或 CronJob)时,你可能希望将这些 Pod 干扰状况作为 Job
 [Pod 失效策略](/zh-cn/docs/concepts/workloads/controllers/job#pod-failure-policy)的一部分。
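A sketch of the Job usage mentioned in this last hunk: a `podFailurePolicy` that ignores failures caused by disruptions, so disrupted Pods do not count against the Job's `backoffLimit`. All names and the container command are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job   # hypothetical name
spec:
  backoffLimit: 3
  podFailurePolicy:
    rules:
    # Do not count Pods that failed because of a disruption
    # (DisruptionTarget condition) against backoffLimit.
    - action: Ignore
      onPodConditions:
      - type: DisruptionTarget
  template:
    spec:
      restartPolicy: Never   # required when podFailurePolicy is used
      containers:
      - name: main
        image: busybox:1.36
        command: ["sh", "-c", "echo done"]
```

With this rule, a Pod evicted during a node drain is simply recreated rather than treated as an application failure.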
