Commit 362f84d

'value's to 'values'
1 parent d1c9d2b commit 362f84d

1 file changed
content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md

Lines changed: 8 additions & 8 deletions

@@ -71,7 +71,7 @@ The default value for `operator` is `Equal`.
 A toleration "matches" a taint if the keys are the same and the effects are the same, and:
 
 * the `operator` is `Exists` (in which case no `value` should be specified), or
-* the `operator` is `Equal` and the `value`s are equal.
+* the `operator` is `Equal` and the values should be equal.
 
 {{< note >}}
 
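
For reference, a minimal Pod sketch showing the two matching forms described in this hunk: the `Exists` form, which takes no `value`, and the `Equal` form, whose value must match the taint's value. The pod name, key, and value below are illustrative assumptions, not taken from this commit.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
  tolerations:
  # Matches any taint with this key and the NoSchedule effect; no value is given.
  - key: "example-key"
    operator: "Exists"
    effect: "NoSchedule"
  # Matches only a taint whose value is also "example-value".
  - key: "example-key"
    operator: "Equal"
    value: "example-value"
    effect: "NoSchedule"
```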

@@ -97,15 +97,15 @@ The allowed values for the `effect` field are:
 * Pods that tolerate the taint with a specified `tolerationSeconds` remain
 bound for the specified amount of time. After that time elapses, the node
 lifecycle controller evicts the Pods from the node.
- 
+
 `NoSchedule`
 : No new Pods will be scheduled on the tainted node unless they have a matching
 toleration. Pods currently running on the node are **not** evicted.
 
 `PreferNoSchedule`
 : `PreferNoSchedule` is a "preference" or "soft" version of `NoSchedule`.
 The control plane will *try* to avoid placing a Pod that does not tolerate
-the taint on the node, but it is not guaranteed. 
+the taint on the node, but it is not guaranteed.
 
 You can put multiple taints on the same node and multiple tolerations on the same pod.
 The way Kubernetes processes multiple taints and tolerations is like a filter: start
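
As a rough sketch of the `tolerationSeconds` behaviour covered at the top of this hunk, the Pod-spec fragment below tolerates an assumed `NoExecute` taint; once that taint is added to the node, the Pod stays bound for 3600 seconds and is then evicted by the node lifecycle controller. The key and value are hypothetical.

```yaml
tolerations:
- key: "example-key"           # hypothetical key
  operator: "Equal"
  value: "example-value"       # hypothetical value
  effect: "NoExecute"
  tolerationSeconds: 3600      # evicted 3600s after the taint appears
```
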
@@ -293,15 +293,15 @@ decisions. This ensures that node conditions don't directly affect scheduling.
 For example, if the `DiskPressure` node condition is active, the control plane
 adds the `node.kubernetes.io/disk-pressure` taint and does not schedule new pods
 onto the affected node. If the `MemoryPressure` node condition is active, the
-control plane adds the `node.kubernetes.io/memory-pressure` taint. 
+control plane adds the `node.kubernetes.io/memory-pressure` taint.
 
 You can ignore node conditions for newly created pods by adding the corresponding
-Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure` 
-toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}} 
-other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed` 
+Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure`
+toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}
+other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed`
 or `Burstable` QoS classes (even pods with no memory request set) as if they are
 able to cope with memory pressure, while new `BestEffort` pods are not scheduled
-onto the affected node. 
+onto the affected node.
 
 The DaemonSet controller automatically adds the following `NoSchedule`
 tolerations to all daemons, to prevent DaemonSets from breaking.
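
As a hedged illustration of the paragraph above on ignoring node conditions (not part of this commit), a toleration added by hand for the memory-pressure condition taint could look like the fragment below; leaving out `effect` lets the toleration match any effect used with that key.

```yaml
tolerations:
- key: "node.kubernetes.io/memory-pressure"
  operator: "Exists"
  # No effect is set, so this matches any effect carried by this taint key.
```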
