
Commit 07526a1

Clean up page assign-pod-node
1 parent: bfd636a

1 file changed: 50 additions, 53 deletions

content/en/docs/concepts/scheduling-eviction/assign-pod-node.md

@@ -8,16 +8,15 @@ content_type: concept
weight: 20
---

<!-- overview -->

You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it is
_restricted_ to run on particular {{< glossary_tooltip text="node(s)" term_id="node" >}},
or to _prefer_ to run on particular nodes.
There are several ways to do this and the recommended approaches all use
[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
Often, you do not need to set any such constraints; the
{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} will automatically do a reasonable placement
(for example, spreading your Pods across nodes so as not to place Pods on a node with insufficient free resources).
However, there are some circumstances where you may want to control which node
the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it,
@@ -28,10 +27,10 @@ or to co-locate Pods from two different services that communicate a lot into the
You can use any of the following methods to choose where Kubernetes schedules
specific Pods:

- [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels)
- [Affinity and anti-affinity](#affinity-and-anti-affinity)
- [nodeName](#nodename) field
- [Pod topology spread constraints](#pod-topology-spread-constraints)

## Node labels {#built-in-node-labels}
@@ -51,15 +50,15 @@ and a different value in other environments.
Adding labels to nodes allows you to target Pods for scheduling on specific
nodes or groups of nodes. You can use this functionality to ensure that specific
Pods only run on nodes with certain isolation, security, or regulatory
properties.

If you use labels for node isolation, choose label keys that the {{<glossary_tooltip text="kubelet" term_id="kubelet">}}
cannot modify. This prevents a compromised node from setting those labels on
itself so that the scheduler schedules workloads onto the compromised node.

The [`NodeRestriction` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
prevents the kubelet from setting or modifying labels with a
`node-restriction.kubernetes.io/` prefix.

To make use of that label prefix for node isolation:
@@ -73,7 +72,7 @@ To make use of that label prefix for node isolation:
You can add the `nodeSelector` field to your Pod specification and specify the
[node labels](#built-in-node-labels) you want the target node to have.
Kubernetes only schedules the Pod onto nodes that have each of the labels you
specify.

See [Assign Pods to Nodes](/docs/tasks/configure-pod-container/assign-pods-nodes) for more
information.
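
A minimal sketch of a Pod that uses `nodeSelector` (the `disktype: ssd` label is an assumed
example label, not one this page defines):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd   # assumed example label; only nodes carrying this exact label are eligible
```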
@@ -84,20 +83,20 @@ information.
labels. Affinity and anti-affinity expand the types of constraints you can
define. Some of the benefits of affinity and anti-affinity include:

- The affinity/anti-affinity language is more expressive. `nodeSelector` only
  selects nodes with all the specified labels. Affinity/anti-affinity gives you
  more control over the selection logic.
- You can indicate that a rule is *soft* or *preferred*, so that the scheduler
  still schedules the Pod even if it can't find a matching node.
- You can constrain a Pod using labels on other Pods running on the node (or other topological domain),
  instead of just node labels, which allows you to define rules for which Pods
  can be co-located on a node.

The affinity feature consists of two types of affinity:

- *Node affinity* functions like the `nodeSelector` field but is more expressive and
  allows you to specify soft rules.
- *Inter-pod affinity/anti-affinity* allows you to constrain Pods against labels
  on other Pods.

### Node affinity
@@ -106,12 +105,12 @@ Node affinity is conceptually similar to `nodeSelector`, allowing you to constra
Pod can be scheduled on based on node labels. There are two types of node
affinity:

- `requiredDuringSchedulingIgnoredDuringExecution`: The scheduler can't
  schedule the Pod unless the rule is met. This functions like `nodeSelector`,
  but with a more expressive syntax.
- `preferredDuringSchedulingIgnoredDuringExecution`: The scheduler tries to
  find a node that meets the rule. If a matching node is not available, the
  scheduler still schedules the Pod.

{{<note>}}
In the preceding types, `IgnoredDuringExecution` means that if the node labels
@@ -127,17 +126,17 @@ For example, consider the following Pod spec:

In this example, the following rules apply:

- The node *must* have a label with the key `topology.kubernetes.io/zone` and
  the value of that label *must* be either `antarctica-east1` or `antarctica-west1`.
- The node *preferably* has a label with the key `another-node-label-key` and
  the value `another-node-label-value`.
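
The example manifest the page refers to is not shown in this diff; a minimal sketch that is
consistent with the two rules above might look like the following (the Pod name and container
image are assumed placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      # hard rule: only zones antarctica-east1 or antarctica-west1 are acceptable
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - antarctica-east1
            - antarctica-west1
      # soft rule: prefer nodes carrying another-node-label-key=another-node-label-value
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: nginx   # assumed placeholder image
```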

You can use the `operator` field to specify a logical operator for Kubernetes to use when
interpreting the rules. You can use `In`, `NotIn`, `Exists`, `DoesNotExist`,
`Gt` and `Lt`.
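
For instance, a `matchExpressions` entry using `Gt` might look like the snippet below; the
`example.com/cpu-cores` label key and the threshold are assumptions for illustration, and with
`Gt`/`Lt` the single value is interpreted as an integer:

```yaml
- key: example.com/cpu-cores   # assumed custom node label
  operator: Gt
  values:
  - "4"                        # matches nodes whose label value is an integer greater than 4
```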

`NotIn` and `DoesNotExist` allow you to define node anti-affinity behavior.
Alternatively, you can use [node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/)
to repel Pods from specific nodes.

{{<note>}}
@@ -168,7 +167,7 @@ The final sum is added to the score of other priority functions for the node.
Nodes with the highest total score are prioritized when the scheduler makes a
scheduling decision for the Pod.

For example, consider the following Pod spec:

{{< codenew file="pods/pod-with-affinity-anti-affinity.yaml" >}}

@@ -268,8 +267,8 @@ to unintended behavior.
Similar to [node affinity](#node-affinity), there are two types of Pod affinity and
anti-affinity:

- `requiredDuringSchedulingIgnoredDuringExecution`
- `preferredDuringSchedulingIgnoredDuringExecution`

For example, you could use
`requiredDuringSchedulingIgnoredDuringExecution` affinity to tell the scheduler to
@@ -297,7 +296,7 @@ The affinity rule says that the scheduler can only schedule a Pod onto a node if
the node is in the same zone as one or more existing Pods with the label
`security=S1`. More precisely, the scheduler must place the Pod on a node that has the
`topology.kubernetes.io/zone=V` label, as long as there is at least one node in
that zone that currently has one or more Pods with the Pod label `security=S1`.
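
The referenced example file is not shown in this diff; a hedged sketch of the affinity half of
such a rule (only the `security=S1` term described above) could look like:

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: security
          operator: In
          values:
          - S1
      # evaluated per zone: the candidate node's zone must already host
      # at least one Pod matching the selector above
      topologyKey: topology.kubernetes.io/zone
```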

The anti-affinity rule says that the scheduler should try to avoid scheduling
the Pod onto a node that is in the same zone as one or more Pods with the label
@@ -314,9 +313,9 @@ You can use the `In`, `NotIn`, `Exists` and `DoesNotExist` values in the
In principle, the `topologyKey` can be any allowed label key with the following
exceptions for performance and security reasons:

- For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in either `requiredDuringSchedulingIgnoredDuringExecution`
  or `preferredDuringSchedulingIgnoredDuringExecution`.
- For `requiredDuringSchedulingIgnoredDuringExecution` Pod anti-affinity rules,
  the admission controller `LimitPodHardAntiAffinityTopology` limits
  `topologyKey` to `kubernetes.io/hostname`. You can modify or disable the
  admission controller if you want to allow custom topologies.
@@ -328,17 +327,18 @@ If omitted or empty, `namespaces` defaults to the namespace of the Pod where the
affinity/anti-affinity definition appears.

#### Namespace selector

{{< feature-state for_k8s_version="v1.24" state="stable" >}}

You can also select matching namespaces using `namespaceSelector`, which is a label query over the set of namespaces.
The affinity term is applied to namespaces selected by both `namespaceSelector` and the `namespaces` field.
Note that an empty `namespaceSelector` ({}) matches all namespaces, while a null or empty `namespaces` list and
null `namespaceSelector` matches the namespace of the Pod where the rule is defined.
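
A minimal sketch of an affinity term that uses `namespaceSelector` (the `team: backend`
namespace label and the `app: cache` Pod label are assumed example values):

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: cache          # assumed Pod label to match
      namespaceSelector:
        matchLabels:
          team: backend       # only namespaces carrying this label are searched for matching Pods
      topologyKey: kubernetes.io/hostname
```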

#### More practical use-cases

Inter-pod affinity and anti-affinity can be even more useful when they are used with higher
level collections such as ReplicaSets, StatefulSets, Deployments, etc. These
rules allow you to configure that a set of workloads should
be co-located in the same defined topology; for example, preferring to place two related
Pods onto the same node.
@@ -430,10 +430,10 @@ spec:
Creating the two preceding Deployments results in the following cluster layout,
where each web server is co-located with a cache, on three separate nodes.

| node-1        | node-2        | node-3        |
| :-----------: | :-----------: | :-----------: |
| *webserver-1* | *webserver-2* | *webserver-3* |
| *cache-1*     | *cache-2*     | *cache-3*     |

The overall effect is that each cache instance is likely to be accessed by a single client that
is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
@@ -453,13 +453,12 @@ tries to place the Pod on that node. Using `nodeName` overrules using

Some of the limitations of using `nodeName` to select nodes are:

- If the named node does not exist, the Pod will not run, and in
  some cases may be automatically deleted.
- If the named node does not have the resources to accommodate the
  Pod, the Pod will fail and its reason will indicate why,
  for example `OutOfmemory` or `OutOfcpu`.
- Node names in cloud environments are not always predictable or stable.
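
A minimal sketch of a Pod that uses `nodeName` (the node name `kube-01` and the image are
assumed placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-01   # assumed node name; the Pod is bound to this node, bypassing the scheduler
```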

{{< note >}}
`nodeName` is intended for use by custom schedulers or advanced use cases where
@@ -495,12 +494,10 @@ to learn more about how these work.

## {{% heading "whatsnext" %}}

- Read more about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
- Read the design docs for [node affinity](https://git.k8s.io/design-proposals-archive/scheduling/nodeaffinity.md)
  and for [inter-pod affinity/anti-affinity](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md).
- Learn about how the [topology manager](/docs/tasks/administer-cluster/topology-manager/) takes part in node-level
  resource allocation decisions.
- Learn how to use [nodeSelector](/docs/tasks/configure-pod-container/assign-pods-nodes/).
- Learn how to use [affinity and anti-affinity](/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/).