@@ -71,7 +71,7 @@ The default value for `operator` is `Equal`.
A toleration "matches" a taint if the keys are the same and the effects are the same, and:

* the `operator` is `Exists` (in which case no `value` should be specified), or
- * the `operator` is `Equal` and the `value`s are equal.
+ * the `operator` is `Equal` and the values are equal.

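The two matching rules above can be sketched as a toleration list; `example-key` and `example-value` are placeholder names, not taken from this page:

```yaml
tolerations:
# Equal: matches a taint with the same key, value, and effect.
- key: "example-key"
  operator: "Equal"
  value: "example-value"
  effect: "NoSchedule"
# Exists: no value is given; any taint with this key and effect matches.
- key: "example-key"
  operator: "Exists"
  effect: "NoSchedule"
```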
{{< note >}}
@@ -97,15 +97,15 @@ The allowed values for the `effect` field are:
* Pods that tolerate the taint with a specified `tolerationSeconds` remain
bound for the specified amount of time. After that time elapses, the node
lifecycle controller evicts the Pods from the node.
-
+
`NoSchedule`
: No new Pods will be scheduled on the tainted node unless they have a matching
toleration. Pods currently running on the node are **not** evicted.

`PreferNoSchedule`
: `PreferNoSchedule` is a "preference" or "soft" version of `NoSchedule`.
The control plane will *try* to avoid placing a Pod that does not tolerate
- the taint on the node, but it is not guaranteed.
+ the taint on the node, but it is not guaranteed.
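As a sketch, both effects described above could be set as taints in a Node manifest; the keys `dedicated` and `experimental` here are hypothetical, not from this page:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: example-node
spec:
  taints:
  # Hard constraint: new Pods need a matching toleration to be scheduled here.
  - key: "dedicated"
    value: "gpu"
    effect: "NoSchedule"
  # Soft constraint: the scheduler tries to avoid this node, without a guarantee.
  - key: "experimental"
    effect: "PreferNoSchedule"
```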
You can put multiple taints on the same node and multiple tolerations on the same pod.
The way Kubernetes processes multiple taints and tolerations is like a filter: start
@@ -293,15 +293,15 @@ decisions. This ensures that node conditions don't directly affect scheduling.
For example, if the `DiskPressure` node condition is active, the control plane
adds the `node.kubernetes.io/disk-pressure` taint and does not schedule new pods
onto the affected node. If the `MemoryPressure` node condition is active, the
- control plane adds the `node.kubernetes.io/memory-pressure` taint.
+ control plane adds the `node.kubernetes.io/memory-pressure` taint.
You can ignore node conditions for newly created pods by adding the corresponding
- Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure`
- toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}
- other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed`
+ Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure`
+ toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}
+ other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed`
or `Burstable` QoS classes (even pods with no memory request set) as if they are
able to cope with memory pressure, while new `BestEffort` pods are not scheduled
- onto the affected node.
+ onto the affected node.
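For instance, a Pod that should still be scheduled while the `node.kubernetes.io/disk-pressure` taint is present could carry the corresponding toleration explicitly (a sketch; tolerating disk pressure is rarely advisable):

```yaml
tolerations:
- key: "node.kubernetes.io/disk-pressure"
  operator: "Exists"
  effect: "NoSchedule"
```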
The DaemonSet controller automatically adds the following `NoSchedule`
tolerations to all daemons, to prevent DaemonSets from breaking.