`content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md` (17 additions, 4 deletions)
```diff
@@ -266,9 +266,23 @@ This ensures that DaemonSet pods are never evicted due to these problems.
 
 ## Taint Nodes by Condition
 
-The node lifecycle controller automatically creates taints corresponding to
-Node conditions with `NoSchedule` effect.
-Similarly the scheduler does not check Node conditions; instead the scheduler checks taints. This assures that Node conditions don't affect what's scheduled onto the Node. The user can choose to ignore some of the Node's problems (represented as Node conditions) by adding appropriate Pod tolerations.
+The control plane, using the node {{<glossary_tooltip text="controller" term_id="controller">}},
+automatically creates taints with a `NoSchedule` effect for [node conditions](/docs/concepts/scheduling-eviction/pod-eviction#node-conditions).
+
+The scheduler checks taints, not node conditions, when it makes scheduling
+decisions. This ensures that node conditions don't directly affect scheduling.
+For example, if the `DiskPressure` node condition is active, the control plane
+adds the `node.kubernetes.io/disk-pressure` taint and does not schedule new pods
+onto the affected node. If the `MemoryPressure` node condition is active, the
+control plane adds the `node.kubernetes.io/memory-pressure` taint.
+
+You can ignore node conditions for newly created pods by adding the corresponding
+Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure`
+toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}
+other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed`
+or `Burstable` QoS classes (even pods with no memory request set) as if they are
+able to cope with memory pressure, while new `BestEffort` pods are not scheduled
+onto the affected node.
 
 The DaemonSet controller automatically adds the following `NoSchedule`
 tolerations to all daemons, to prevent DaemonSets from breaking.
```
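As context for the paragraphs added in this hunk: a pod opts out of a node-condition taint by declaring a matching toleration. The following is a minimal sketch and is not part of this change; the pod name and container image are placeholders.

```yaml
# Hypothetical Pod that tolerates the node.kubernetes.io/disk-pressure taint,
# which the control plane adds while the DiskPressure node condition is active.
# With this toleration, the scheduler may still place the pod on an affected node.
apiVersion: v1
kind: Pod
metadata:
  name: disk-pressure-tolerant   # placeholder name
spec:
  containers:
  - name: app
    image: nginx                 # placeholder image
  tolerations:
  - key: "node.kubernetes.io/disk-pressure"
    operator: "Exists"
    effect: "NoSchedule"
```

Per the added text, a pod in a QoS class other than `BestEffort` does not need to declare the `node.kubernetes.io/memory-pressure` toleration itself; the control plane adds that one automatically.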
```diff
@@ -282,7 +296,6 @@ tolerations to all daemons, to prevent DaemonSets from breaking.
 Adding these tolerations ensures backward compatibility. You can also add
 arbitrary tolerations to DaemonSets.
 
-
 ## {{% heading "whatsnext" %}}
 
 * Read about [out of resource handling](/docs/concepts/scheduling-eviction/out-of-resource/) and how you can configure it
```
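To illustrate the "arbitrary tolerations" note in the hunk above, here is a hedged sketch of a DaemonSet that declares its own toleration on top of the ones the DaemonSet controller adds automatically; the names, labels, image, and the taint key `example.com/special` are hypothetical.

```yaml
# Hypothetical DaemonSet with a user-supplied toleration. The DaemonSet
# controller still adds its automatic NoSchedule tolerations on top of this.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent            # placeholder name
spec:
  selector:
    matchLabels:
      app: example-agent
  template:
    metadata:
      labels:
        app: example-agent
    spec:
      containers:
      - name: agent
        image: nginx             # placeholder image
      tolerations:
      - key: "example.com/special"   # hypothetical custom taint key
        operator: "Exists"
        effect: "NoSchedule"
```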
0 commit comments