
Commit 311cdc3

Tim Bannister and ahg-g committed
Reword topic Assigning Pods to Nodes
- Rewording
- Tidying

Co-authored-by: Abdullah Gharaibeh <[email protected]>
1 parent 3225a08 commit 311cdc3

File tree

1 file changed: +23 −15 lines changed


content/en/docs/concepts/scheduling-eviction/assign-pod-node.md

Lines changed: 23 additions & 15 deletions
@@ -11,20 +11,22 @@ weight: 20
 
 <!-- overview -->
 
-You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on particular set of
-{{< glossary_tooltip text="node(s)" term_id="node" >}}.
+You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it is
+_restricted_ to run on particular {{< glossary_tooltip text="node(s)" term_id="node" >}},
+or to _prefer_ to run on particular nodes.
 There are several ways to do this and the recommended approaches all use
 [label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
-Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
+Often, you do not need to set any such constraints; the
+{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} will automatically do a reasonable placement
 (for example, spreading your Pods across nodes so as not place Pods on a node with insufficient free resources).
 However, there are some circumstances where you may want to control which node
-the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different
-services that communicate a lot into the same availability zone.
+the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it,
+or to co-locate Pods from two different services that communicate a lot into the same availability zone.
 
 <!-- body -->
 
 You can use any of the following methods to choose where Kubernetes schedules
-specific Pods:
+specific Pods:
 
 * [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels)
 * [Affinity and anti-affinity](#affinity-and-anti-affinity)
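Editorial aside (not part of this commit): as a minimal sketch of the first method in that list, a Pod that sets `nodeSelector` can only be scheduled onto nodes that carry every listed label. The `disktype: ssd` label below is a hypothetical example; nodes would need to be labelled accordingly.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  # Only schedule this Pod onto nodes labelled disktype=ssd
  # (hypothetical label used for illustration).
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx
```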
@@ -338,13 +340,15 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi
 Inter-pod affinity and anti-affinity can be even more useful when they are used with higher
 level collections such as ReplicaSets, StatefulSets, Deployments, etc. These
 rules allow you to configure that a set of workloads should
-be co-located in the same defined topology, eg., the same node.
+be co-located in the same defined topology; for example, preferring to place two related
+Pods onto the same node.
 
-Take, for example, a three-node cluster running a web application with an
-in-memory cache like redis. You could use inter-pod affinity and anti-affinity
-to co-locate the web servers with the cache as much as possible.
+For example: imagine a three-node cluster. You use the cluster to run a web application
+and also an in-memory cache (such as Redis). For this example, also assume that latency between
+the web application and the memory cache should be as low as is practical. You could use inter-pod
+affinity and anti-affinity to co-locate the web servers with the cache as much as possible.
 
-In the following example Deployment for the redis cache, the replicas get the label `app=store`. The
+In the following example Deployment for the Redis cache, the replicas get the label `app=store`. The
 `podAntiAffinity` rule tells the scheduler to avoid placing multiple replicas
 with the `app=store` label on a single node. This creates each cache in a
 separate node.
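Editorial aside (not part of this commit): the `podAntiAffinity` rule described in the hunk above is typically expressed like the sketch below. Names such as `redis-cache` are assumptions for illustration; the `topologyKey` of `kubernetes.io/hostname` is what makes a single node the unit of separation.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache   # assumed name for illustration
spec:
  replicas: 3
  selector:
    matchLabels:
      app: store
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          # Do not place two Pods labelled app=store onto the same node.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine
```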
@@ -379,10 +383,10 @@ spec:
 image: redis:3.2-alpine
 ```
 
-The following Deployment for the web servers creates replicas with the label `app=web-store`. The
-Pod affinity rule tells the scheduler to place each replica on a node that has a
-Pod with the label `app=store`. The Pod anti-affinity rule tells the scheduler
-to avoid placing multiple `app=web-store` servers on a single node.
+The following example Deployment for the web servers creates replicas with the label `app=web-store`.
+The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod
+with the label `app=store`. The Pod anti-affinity rule tells the scheduler never to place
+multiple `app=web-store` servers on a single node.
 
 ```yaml
 apiVersion: apps/v1
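Editorial aside (not part of this commit): the web-server Deployment that the hunk above truncates combines `podAffinity` (towards `app=store`) with `podAntiAffinity` (against other `app=web-store` Pods). A sketch of those two rules, with assumed names and container image, could look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server   # assumed name for illustration
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-store
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAntiAffinity:
          # Never place two app=web-store Pods on the same node.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          # Require a node that already runs an app=store Pod.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: nginx:1.16-alpine   # assumed image for illustration
```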
@@ -431,6 +435,10 @@ where each web server is co-located with a cache, on three separate nodes.
 | *webserver-1* | *webserver-2* | *webserver-3* |
 | *cache-1* | *cache-2* | *cache-3* |
 
+The overall effect is that each cache instance is likely to be accessed by a single client, that
+is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
+
+You might have other reasons to use Pod anti-affinity.
 See the [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)
 for an example of a StatefulSet configured with anti-affinity for high
 availability, using the same technique as this example.
