[Pods](/docs/concepts/workloads/pods/) can have _priority_. Priority indicates the
importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the
scheduler tries to preempt (evict) lower priority Pods to make scheduling of the
pending Pod possible.
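As an illustration, a PriorityClass object maps a name to an integer priority value; the name and value below are hypothetical examples, not classes that ship with Kubernetes:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority          # hypothetical class name
value: 1000000                 # larger value means higher priority
globalDefault: false           # do not apply to Pods without a priorityClassName
description: "Example class for important workloads."
```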
<!-- body -->

{{< warning >}}
In a cluster where not all users are trusted, a malicious user could create Pods
at the highest possible priorities, causing other Pods to be evicted or to never be
scheduled.
{{< /warning >}}
## How to use priority and preemption

To use priority and preemption:
Keep reading for more information about these steps.
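A sketch of the second of these steps, consistent with the surrounding text: a Pod references a PriorityClass by name via `priorityClassName` (the class name `high-priority` here is a hypothetical example and must match a PriorityClass that exists in your cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  priorityClassName: high-priority   # hypothetical PriorityClass name
```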
{{< note >}}
Kubernetes already ships with two PriorityClasses:
`system-cluster-critical` and `system-node-critical`.
These are common classes and are used to [ensure that critical components are always scheduled first](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/).
{{< /note >}}
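As a sketch, a cluster add-on Pod can reference one of these built-in classes directly (the Pod name and image here are hypothetical examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: important-addon            # hypothetical add-on Pod
  namespace: kube-system
spec:
  priorityClassName: system-cluster-critical
  containers:
  - name: addon
    image: registry.k8s.io/pause:3.9
```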
…or when the lowest-priority Pods are protected by a PodDisruptionBudget, will higher-priority Pods be considered.
The kubelet uses Priority to determine pod order for [node-pressure eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/).
You can use the QoS class to estimate the order in which pods are most likely
to get evicted. The kubelet ranks pods for eviction based on the following factors:

1. Whether the starved resource usage exceeds requests
1. Pod Priority
1. Amount of resource usage relative to requests

See [Pod selection for kubelet eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction)
for more details.
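The three ranking factors can be read as a composite sort key. The following is a minimal Python sketch of that ordering — not the kubelet's actual implementation — assuming simplified, single-resource usage numbers:

```python
from dataclasses import dataclass

@dataclass
class PodStats:
    name: str
    priority: int   # Pod priority (higher = more important)
    usage: int      # current usage of the starved resource
    request: int    # the Pod's request for that resource

def eviction_rank_key(p: PodStats):
    # Evicted first: pods whose usage of the starved resource exceeds their
    # request; among those, lower-priority pods first; then pods with the
    # largest usage relative to their request.
    exceeds_request = p.usage > p.request
    return (not exceeds_request, p.priority, -(p.usage - p.request))

pods = [
    PodStats("within-request", priority=1000, usage=50, request=100),
    PodStats("low-priority-over", priority=0, usage=150, request=100),
    PodStats("high-priority-over", priority=1000, usage=200, request=100),
]
eviction_order = [p.name for p in sorted(pods, key=eviction_rank_key)]
# eviction_order == ["low-priority-over", "high-priority-over", "within-request"]
```

Note how a high-priority pod that exceeds its request still ranks ahead of a pod that stays within its request: the first factor dominates the other two.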
## {{% heading "whatsnext" %}}
* Read about using ResourceQuotas in connection with PriorityClasses: [limit Priority Class consumption by default](/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default)
* Learn about [Pod Disruption](/docs/concepts/workloads/pods/disruptions/)
* Learn about [API-initiated Eviction](/docs/concepts/scheduling-eviction/api-eviction/)
File: content/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration.md
hard requirement). _Taints_ are the opposite -- they allow a node to repel a set
of pods.
_Tolerations_ are applied to pods. Tolerations allow the scheduler to schedule pods with matching
taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also
[evaluates other parameters](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
as part of its function.

Taints and tolerations work together to ensure that pods are not scheduled
onto inappropriate nodes. One or more taints are applied to a node; this
marks that the node should not accept any pods that do not tolerate the taints.
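To make the pairing concrete: a node can be tainted with `kubectl taint nodes node1 example-key=value1:NoSchedule`, and a Pod that should still be schedulable on that node carries a matching toleration. The key, value, and Pod name below are hypothetical examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-toleration          # hypothetical Pod
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: "example-key"           # must match the taint's key
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
```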
The scheduler checks for taints, not node conditions, when it makes scheduling
decisions. This ensures that node conditions don't directly affect scheduling.
For example, if the `DiskPressure` node condition is active, the control plane
adds the `node.kubernetes.io/disk-pressure` taint and does not schedule new pods
onto the affected node. If the `MemoryPressure` node condition is active, the
control plane adds the `node.kubernetes.io/memory-pressure` taint.
You can ignore node conditions for newly created pods by adding the corresponding
Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure`
toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}
other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed`
or `Burstable` QoS classes (even pods with no memory request set) as if they are
able to cope with memory pressure, while new `BestEffort` pods are not scheduled
onto the affected node.
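The automatically added toleration has roughly the following shape; the exact fields shown here are an assumption for illustration, not copied from the source:

```yaml
tolerations:
- key: "node.kubernetes.io/memory-pressure"
  operator: "Exists"        # assumed: tolerates the taint regardless of value
  effect: "NoSchedule"
```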