Commit 0addcba

Merge pull request #36261 from jzhupup/disruption

[zh-cn] sync1.25 disruptions.md

2 parents 79cb2f0 + 53ce420

File tree

1 file changed: +82 -0 lines

content/zh-cn/docs/concepts/workloads/pods/disruptions.md

Lines changed: 82 additions & 0 deletions
@@ -432,6 +432,88 @@ can happen, according to:
- the type of the controller
- the resource capacity of the cluster

<!--
## Pod disruption conditions {#pod-disruption-conditions}
-->
## Pod disruption conditions {#pod-disruption-conditions}

{{< feature-state for_k8s_version="v1.25" state="alpha" >}}

{{< note >}}
<!--
In order to use this behavior, you must enable the `PodDisruptionConditions`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
in your cluster.
-->
To use this behavior, you must enable the `PodDisruptionConditions`
[feature gate](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
in your cluster.
{{< /note >}}
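
A minimal sketch of turning the gate on, assuming a kubeadm-managed cluster (other setups would pass `--feature-gates=PodDisruptionConditions=true` to the control plane components directly; none of this appears in the original page):

```yaml
# Hypothetical kubeadm ClusterConfiguration fragment: enables the alpha
# PodDisruptionConditions gate on the control plane components that
# report disruption conditions. Adjust to how your cluster is deployed.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "PodDisruptionConditions=true"
controllerManager:
  extraArgs:
    feature-gates: "PodDisruptionConditions=true"
scheduler:
  extraArgs:
    feature-gates: "PodDisruptionConditions=true"
```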

<!--
When enabled, a dedicated Pod `DisruptionTarget` [condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions) is added to indicate
that the Pod is about to be deleted due to a {{<glossary_tooltip term_id="disruption" text="disruption">}}.
The `reason` field of the condition additionally
indicates one of the following reasons for the Pod termination:
-->
When enabled, a dedicated `DisruptionTarget`
[condition](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions)
is added to a Pod to indicate that it is about to be deleted due to a
{{<glossary_tooltip term_id="disruption" text="disruption">}}.
The `reason` field of the condition additionally indicates one of the
following reasons for the Pod's termination:

<!--
`PreemptionByKubeScheduler`
: Pod is due to be {{<glossary_tooltip term_id="preemption" text="preempted">}} by a scheduler in order to accommodate a new Pod with a higher priority. For more information, see [Pod priority preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/).
-->
`PreemptionByKubeScheduler`
: the Pod is due to be {{<glossary_tooltip term_id="preemption" text="preempted">}}
by the scheduler in order to accommodate a new Pod with a higher priority.
For more information, see
[Pod priority preemption](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/).

<!--
`DeletionByTaintManager`
: Pod is due to be deleted by Taint Manager (which is part of the node lifecycle controller within `kube-controller-manager`) due to a `NoExecute` taint that the Pod does not tolerate; see {{<glossary_tooltip term_id="taint" text="taint">}}-based evictions.
-->
`DeletionByTaintManager`
: the Pod is due to be deleted by Taint Manager (which is part of the node
lifecycle controller within `kube-controller-manager`) because it does not
tolerate a `NoExecute` taint; see
{{<glossary_tooltip term_id="taint" text="taint">}}-based evictions.

<!--
`EvictionByEvictionAPI`
: Pod has been marked for {{<glossary_tooltip term_id="api-eviction" text="eviction using the Kubernetes API">}}.
-->
`EvictionByEvictionAPI`
: the Pod has been marked for
{{<glossary_tooltip term_id="api-eviction" text="eviction using the Kubernetes API">}}.

<!--
`DeletionByPodGC`
: Pod, that is bound to a no longer existing Node, is due to be deleted by [Pod garbage collection](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection).
-->
`DeletionByPodGC`
: the Pod is bound to a Node that no longer exists, and is due to be deleted by
[Pod garbage collection](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection).
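
For illustration, a disrupted Pod's `status` might then carry an entry like the following sketch (the `reason` shown is one of the values above; the message and timestamp are hypothetical):

```yaml
# Illustrative Pod status fragment: a DisruptionTarget condition recorded
# by the scheduler before preempting this Pod. Values are examples only.
status:
  conditions:
  - type: DisruptionTarget
    status: "True"
    reason: PreemptionByKubeScheduler
    message: "preempted to accommodate a higher-priority pod"  # hypothetical
    lastTransitionTime: "2022-09-01T12:00:00Z"                 # hypothetical
```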

{{< note >}}
<!--
A Pod disruption might be interrupted. The control plane might re-attempt to
continue the disruption of the same Pod, but it is not guaranteed. As a result,
the `DisruptionTarget` condition might be added to a Pod, but that Pod might then not actually be
deleted. In such a situation, after some time, the
Pod disruption condition will be cleared.
-->
A Pod disruption might be interrupted. The control plane might re-attempt to
continue the disruption of the same Pod, but this is not guaranteed. As a
result, the `DisruptionTarget` condition might be added to a Pod that is then
not actually deleted. In such a situation, the Pod disruption condition will
be cleared after some time.
{{< /note >}}

<!--
When using a Job (or CronJob), you may want to use these Pod disruption conditions as part of your Job's
[Pod failure policy](/docs/concepts/workloads/controllers/job#pod-failure-policy).
-->
When using a Job (or CronJob), you may want to use these Pod disruption
conditions as part of your Job's
[Pod failure policy](/zh-cn/docs/concepts/workloads/controllers/job#pod-failure-policy).
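
As a sketch, such a policy could ignore disruption-caused failures so that they do not count against the Job's `backoffLimit` (this assumes the alpha `JobPodFailurePolicy` feature gate is also enabled; the Job name, image, and command are placeholders):

```yaml
# Hypothetical Job: failed Pods that carry the DisruptionTarget condition
# are ignored (replaced without incrementing the retry counter), while
# other failures still count against backoffLimit.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job                    # placeholder name
spec:
  backoffLimit: 3
  podFailurePolicy:
    rules:
    - action: Ignore
      onPodConditions:
      - type: DisruptionTarget
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: registry.k8s.io/busybox # placeholder image
        command: ["sh", "-c", "sleep 3600"]
```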

<!--
## Separating Cluster Owner and Application Owner Roles
