
Commit e143e9d

Merge pull request #45172 from alexis974/spelling-mistake-kube-scheduler
Fix spelling mistake in kube scheduler
2 parents d488e6d + 2f298d2 commit e143e9d

12 files changed: +73 -73 lines changed

content/en/docs/concepts/scheduling-eviction/api-eviction.md

Lines changed: 7 additions & 7 deletions
@@ -11,11 +11,11 @@ using a client of the {{<glossary_tooltip term_id="kube-apiserver" text="API ser
 creates an `Eviction` object, which causes the API server to terminate the Pod.

 API-initiated evictions respect your configured [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)
-and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).
+and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).

 Using the API to create an Eviction object for a Pod is like performing a
 policy-controlled [`DELETE` operation](/docs/reference/kubernetes-api/workload-resources/pod-v1/#delete-delete-a-pod)
-on the Pod.
+on the Pod.

 ## Calling the Eviction API

@@ -75,13 +75,13 @@ checks and responds in one of the following ways:
 * `429 Too Many Requests`: the eviction is not currently allowed because of the
 configured {{<glossary_tooltip term_id="pod-disruption-budget" text="PodDisruptionBudget">}}.
 You may be able to attempt the eviction again later. You might also see this
-response because of API rate limiting.
+response because of API rate limiting.
 * `500 Internal Server Error`: the eviction is not allowed because there is a
 misconfiguration, like if multiple PodDisruptionBudgets reference the same Pod.

 If the Pod you want to evict isn't part of a workload that has a
 PodDisruptionBudget, the API server always returns `200 OK` and allows the
-eviction.
+eviction.

 If the API server allows the eviction, the Pod is deleted as follows:

@@ -103,12 +103,12 @@ If the API server allows the eviction, the Pod is deleted as follows:
 ## Troubleshooting stuck evictions

 In some cases, your applications may enter a broken state, where the Eviction
-API will only return `429` or `500` responses until you intervene. This can
-happen if, for example, a ReplicaSet creates pods for your application but new
+API will only return `429` or `500` responses until you intervene. This can
+happen if, for example, a ReplicaSet creates pods for your application but new
 pods do not enter a `Ready` state. You may also notice this behavior in cases
 where the last evicted Pod had a long termination grace period.

-If you notice stuck evictions, try one of the following solutions:
+If you notice stuck evictions, try one of the following solutions:

 * Abort or pause the automated operation causing the issue. Investigate the stuck
 application before you restart the operation.
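
For quick reference, the `Eviction` object this page describes can be sketched as below; the Pod name, namespace, and grace period are purely illustrative assumptions, not taken from the page:

```yaml
# Submitting this to the Pod's eviction subresource (POST to
# /api/v1/namespaces/default/pods/my-pod/eviction) asks the API server to
# evict the Pod, subject to any PodDisruptionBudget that covers it.
apiVersion: policy/v1
kind: Eviction
metadata:
  name: my-pod        # illustrative Pod name
  namespace: default  # illustrative namespace
deleteOptions:
  gracePeriodSeconds: 30  # optional; otherwise the Pod's terminationGracePeriodSeconds applies
```

If the request is allowed, the API server deletes the Pod as described in the page; the `429` and `500` responses carry the meanings listed in the diff above.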

content/en/docs/concepts/scheduling-eviction/assign-pod-node.md

Lines changed: 31 additions & 31 deletions
@@ -96,7 +96,7 @@ define. Some of the benefits of affinity and anti-affinity include:
 The affinity feature consists of two types of affinity:

 - *Node affinity* functions like the `nodeSelector` field but is more expressive and
-allows you to specify soft rules.
+allows you to specify soft rules.
 - *Inter-pod affinity/anti-affinity* allows you to constrain Pods against labels
 on other Pods.

@@ -254,13 +254,13 @@ the node label that the system uses to denote the domain. For examples, see
 [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/).

 {{< note >}}
-Inter-pod affinity and anti-affinity require substantial amount of
+Inter-pod affinity and anti-affinity require substantial amounts of
 processing which can slow down scheduling in large clusters significantly. We do
 not recommend using them in clusters larger than several hundred nodes.
 {{< /note >}}

 {{< note >}}
-Pod anti-affinity requires nodes to be consistently labelled, in other words,
+Pod anti-affinity requires nodes to be consistently labeled, in other words,
 every node in the cluster must have an appropriate label matching `topologyKey`.
 If some or all nodes are missing the specified `topologyKey` label, it can lead
 to unintended behavior.
@@ -305,22 +305,22 @@ Pod affinity rule uses the "hard"
 `requiredDuringSchedulingIgnoredDuringExecution`, while the anti-affinity rule
 uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.

-The affinity rule specifies that the scheduler is allowed to place the example Pod
+The affinity rule specifies that the scheduler is allowed to place the example Pod
 on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
-where other Pods have been labeled with `security=S1`.
-For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
-consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
-assign the Pod to any node within Zone V, as long as there is at least one Pod within
-Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1`
+where other Pods have been labeled with `security=S1`.
+For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
+consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
+assign the Pod to any node within Zone V, as long as there is at least one Pod within
+Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1`
 labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.

-The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
+The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
 on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
-where other Pods have been labeled with `security=S2`.
-For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
-consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
-assigning the Pod to any node within Zone R, as long as there is at least one Pod within
-Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact
+where other Pods have been labeled with `security=S2`.
+For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
+consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
+assigning the Pod to any node within Zone R, as long as there is at least one Pod within
+Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact
 scheduling into Zone R if there are no Pods with `security=S2` labels.

 To get yourself more familiar with the examples of Pod affinity and anti-affinity,
@@ -364,19 +364,19 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi

 {{< note >}}
 <!-- UPDATE THIS WHEN PROMOTING TO BETA -->
-The `matchLabelKeys` field is a alpha-level field and is disabled by default in
+The `matchLabelKeys` field is an alpha-level field and is disabled by default in
 Kubernetes {{< skew currentVersion >}}.
 When you want to use it, you have to enable it via the
 `MatchLabelKeysInPodAffinity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
 {{< /note >}}

 Kubernetes includes an optional `matchLabelKeys` field for Pod affinity
-or anti-affinity. The field specifies keys for the labels that should match with the incoming Pod's labels,
+or anti-affinity. The field specifies keys for the labels that should match with the incoming Pod's labels,
 when satisfying the Pod (anti)affinity.

 The keys are used to look up values from the pod labels; those key-value labels are combined
 (using `AND`) with the match restrictions defined using the `labelSelector` field. The combined
-filtering selects the set of existing pods that will be taken into Pod (anti)affinity calculation.
+filtering selects the set of existing pods that will be taken into Pod (anti)affinity calculation.

 A common use case is to use `matchLabelKeys` with `pod-template-hash` (set on Pods
 managed as part of a Deployment, where the value is unique for each revision).
@@ -405,7 +405,7 @@ spec:
 # Only Pods from a given rollout are taken into consideration when calculating pod affinity.
 # If you update the Deployment, the replacement Pods follow their own affinity rules
 # (if there are any defined in the new Pod template)
-matchLabelKeys:
+matchLabelKeys:
 - pod-template-hash
 ```

@@ -415,14 +415,14 @@ spec:

 {{< note >}}
 <!-- UPDATE THIS WHEN PROMOTING TO BETA -->
-The `mismatchLabelKeys` field is a alpha-level field and is disabled by default in
+The `mismatchLabelKeys` field is an alpha-level field and is disabled by default in
 Kubernetes {{< skew currentVersion >}}.
 When you want to use it, you have to enable it via the
 `MatchLabelKeysInPodAffinity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
 {{< /note >}}

 Kubernetes includes an optional `mismatchLabelKeys` field for Pod affinity
-or anti-affinity. The field specifies keys for the labels that should **not** match with the incoming Pod's labels,
+or anti-affinity. The field specifies keys for the labels that should **not** match with the incoming Pod's labels,
 when satisfying the Pod (anti)affinity.

 One example use case is to ensure Pods go to the topology domain (node, zone, etc) where only Pods from the same tenant or team are scheduled in.
@@ -438,22 +438,22 @@ metadata:
 ...
 spec:
 affinity:
-podAffinity:
+podAffinity:
 requiredDuringSchedulingIgnoredDuringExecution:
 # ensure that pods associated with this tenant land on the correct node pool
 - matchLabelKeys:
 - tenant
 topologyKey: node-pool
-podAntiAffinity:
+podAntiAffinity:
 requiredDuringSchedulingIgnoredDuringExecution:
 # ensure that pods associated with this tenant can't schedule to nodes used for another tenant
 - mismatchLabelKeys:
-- tenant # whatever the value of the "tenant" label for this Pod, prevent
+- tenant # whatever the value of the "tenant" label for this Pod, prevent
 # scheduling to nodes in any pool where any Pod from a different
 # tenant is running.
 labelSelector:
 # We have to have the labelSelector which selects only Pods with the tenant label,
-# otherwise this Pod would hate Pods from daemonsets as well, for example,
+# otherwise this Pod would hate Pods from daemonsets as well, for example,
 # which aren't supposed to have the tenant label.
 matchExpressions:
 - key: tenant
@@ -561,7 +561,7 @@ where each web server is co-located with a cache, on three separate nodes.
 | *webserver-1* | *webserver-2* | *webserver-3* |
 | *cache-1* | *cache-2* | *cache-3* |

-The overall effect is that each cache instance is likely to be accessed by a single client, that
+The overall effect is that each cache instance is likely to be accessed by a single client that
 is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.

 You might have other reasons to use Pod anti-affinity.
@@ -589,7 +589,7 @@ Some of the limitations of using `nodeName` to select nodes are:
 {{< note >}}
 `nodeName` is intended for use by custom schedulers or advanced use cases where
 you need to bypass any configured schedulers. Bypassing the schedulers might lead to
-failed Pods if the assigned Nodes get oversubscribed. You can use [node affinity](#node-affinity) or a the [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
+failed Pods if the assigned Nodes get oversubscribed. You can use the [node affinity](#node-affinity) or the [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
 {{</ note >}}

 Here is an example of a Pod spec using the `nodeName` field:
@@ -633,13 +633,13 @@ The following operators can only be used with `nodeAffinity`.

 | Operator | Behaviour |
 | :------------: | :-------------: |
-| `Gt` | The supplied value will be parsed as an integer, and that integer is less than the integer that results from parsing the value of a label named by this selector |
-| `Lt` | The supplied value will be parsed as an integer, and that integer is greater than the integer that results from parsing the value of a label named by this selector |
+| `Gt` | The supplied value will be parsed as an integer, and that integer is less than the integer that results from parsing the value of a label named by this selector |
+| `Lt` | The supplied value will be parsed as an integer, and that integer is greater than the integer that results from parsing the value of a label named by this selector |


 {{<note>}}
-`Gt` and `Lt` operators will not work with non-integer values. If the given value
-doesn't parse as an integer, the pod will fail to get scheduled. Also, `Gt` and `Lt`
+`Gt` and `Lt` operators will not work with non-integer values. If the given value
+doesn't parse as an integer, the pod will fail to get scheduled. Also, `Gt` and `Lt`
 are not available for `podAffinity`.
 {{</note>}}
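
To complement the `Gt`/`Lt` table touched above, here is a minimal node-affinity sketch; the label key `example.com/cpu-cores`, the threshold, and the Pod name are assumptions made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gt-operator-demo                 # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: example.com/cpu-cores   # hypothetical node label carrying an integer value
            operator: Gt
            values:
            - "8"                        # only nodes whose label value parses to an integer greater than 8 qualify
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

As the note in the diff says, `Gt` and `Lt` only work with integer values and are not available for `podAffinity`.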

content/en/docs/concepts/scheduling-eviction/dynamic-resource-allocation.md

Lines changed: 2 additions & 2 deletions
@@ -41,14 +41,14 @@ ResourceClass
 driver.

 ResourceClaim
-: Defines a particular resource instances that is required by a
+: Defines a particular resource instance that is required by a
 workload. Created by a user (lifecycle managed manually, can be shared
 between different Pods) or for individual Pods by the control plane based on
 a ResourceClaimTemplate (automatic lifecycle, typically used by just one
 Pod).

 ResourceClaimTemplate
-: Defines the spec and some meta data for creating
+: Defines the spec and some metadata for creating
 ResourceClaims. Created by a user when deploying a workload.

 PodSchedulingContext

content/en/docs/concepts/scheduling-eviction/kube-scheduler.md

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ kube-scheduler selects a node for the pod in a 2-step operation:

 The _filtering_ step finds the set of Nodes where it's feasible to
 schedule the Pod. For example, the PodFitsResources filter checks whether a
-candidate Node has enough available resource to meet a Pod's specific
+candidate Node has enough available resources to meet a Pod's specific
 resource requests. After this step, the node list contains any suitable
 Nodes; often, there will be more than one. If the list is empty, that
 Pod isn't (yet) schedulable.
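
To make the filtering example concrete, the resource requests that a filter such as PodFitsResources checks are declared on the Pod itself; a minimal sketch with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: request-demo                  # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: "500m"                   # filtering keeps only Nodes with at least this much unreserved CPU
        memory: 256Mi                 # ... and at least this much unreserved memory
```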

content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md

Lines changed: 1 addition & 1 deletion
@@ -171,7 +171,7 @@ The kubelet has the following default hard eviction thresholds:
 - `nodefs.inodesFree<5%` (Linux nodes)

 These default values of hard eviction thresholds will only be set if none
-of the parameters is changed. If you changed the value of any parameter,
+of the parameters is changed. If you change the value of any parameter,
 then the values of other parameters will not be inherited as the default
 values and will be set to zero. In order to provide custom values, you
 should provide all the thresholds respectively.
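
Since changing any one hard eviction threshold stops the others from being inherited, a kubelet configuration that customizes thresholds should restate all of them. A sketch of such a configuration, with values that mirror the kubelet's documented defaults:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"    # restate every threshold you still want;
  nodefs.available: "10%"      # any threshold left out is not inherited and is set to zero
  imagefs.available: "15%"
  nodefs.inodesFree: "5%"
```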

content/en/docs/concepts/scheduling-eviction/pod-priority-preemption.md

Lines changed: 10 additions & 10 deletions
@@ -64,7 +64,7 @@ and it cannot be prefixed with `system-`.

 A PriorityClass object can have any 32-bit integer value smaller than or equal
 to 1 billion. This means that the range of values for a PriorityClass object is
-from -2147483648 to 1000000000 inclusive. Larger numbers are reserved for
+from -2147483648 to 1000000000 inclusive. Larger numbers are reserved for
 built-in PriorityClasses that represent critical system Pods. A cluster
 admin should create one PriorityClass object for each such mapping that they want.

@@ -182,8 +182,8 @@ When Pod priority is enabled, the scheduler orders pending Pods by
 their priority and a pending Pod is placed ahead of other pending Pods
 with lower priority in the scheduling queue. As a result, the higher
 priority Pod may be scheduled sooner than Pods with lower priority if
-its scheduling requirements are met. If such Pod cannot be scheduled,
-scheduler will continue and tries to schedule other lower priority Pods.
+its scheduling requirements are met. If such Pod cannot be scheduled, the
+scheduler will continue and try to schedule other lower priority Pods.

 ## Preemption

@@ -199,7 +199,7 @@ the Pods are gone, P can be scheduled on the Node.
 ### User exposed information

 When Pod P preempts one or more Pods on Node N, `nominatedNodeName` field of Pod
-P's status is set to the name of Node N. This field helps scheduler track
+P's status is set to the name of Node N. This field helps the scheduler track
 resources reserved for Pod P and also gives users information about preemptions
 in their clusters.

@@ -209,8 +209,8 @@ After victim Pods are preempted, they get their graceful termination period. If
 another node becomes available while scheduler is waiting for the victim Pods to
 terminate, scheduler may use the other node to schedule Pod P. As a result
 `nominatedNodeName` and `nodeName` of Pod spec are not always the same. Also, if
-scheduler preempts Pods on Node N, but then a higher priority Pod than Pod P
-arrives, scheduler may give Node N to the new higher priority Pod. In such a
+the scheduler preempts Pods on Node N, but then a higher priority Pod than Pod P
+arrives, the scheduler may give Node N to the new higher priority Pod. In such a
 case, scheduler clears `nominatedNodeName` of Pod P. By doing this, scheduler
 makes Pod P eligible to preempt Pods on another Node.

@@ -256,9 +256,9 @@ the Node is not considered for preemption.

 If a pending Pod has inter-pod {{< glossary_tooltip text="affinity" term_id="affinity" >}}
 to one or more of the lower-priority Pods on the Node, the inter-Pod affinity
-rule cannot be satisfied in the absence of those lower-priority Pods. In this case,
+rule cannot be satisfied in the absence of those lower-priority Pods. In this case,
 the scheduler does not preempt any Pods on the Node. Instead, it looks for another
-Node. The scheduler might find a suitable Node or it might not. There is no
+Node. The scheduler might find a suitable Node or it might not. There is no
 guarantee that the pending Pod can be scheduled.

 Our recommended solution for this problem is to create inter-Pod affinity only
@@ -288,7 +288,7 @@ enough demand and if we find an algorithm with reasonable performance.

 ## Troubleshooting

-Pod priority and pre-emption can have unwanted side effects. Here are some
+Pod priority and preemption can have unwanted side effects. Here are some
 examples of potential problems and ways to deal with them.

 ### Pods are preempted unnecessarily
@@ -361,7 +361,7 @@ to get evicted. The kubelet ranks pods for eviction based on the following facto

 1. Whether the starved resource usage exceeds requests
 1. Pod Priority
-1. Amount of resource usage relative to requests
+1. Amount of resource usage relative to requests

 See [Pod selection for kubelet eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction)
 for more details.
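
For context on the PriorityClass value range and preemption behaviour discussed above, a minimal sketch of a user-defined PriorityClass and a Pod that uses it; the names and the value are illustrative:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority        # illustrative; user-defined names must not be prefixed with system-
value: 1000000               # user-defined values must be <= 1000000000
globalDefault: false
description: "For workloads that may preempt lower-priority Pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app        # illustrative name
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

Pods using such a class are placed ahead of lower-priority pending Pods and may preempt them, as described in the pod-priority-preemption.md changes above.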
