@@ -11,20 +11,22 @@ weight: 20
<!-- overview -->
- You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on particular set of
- {{< glossary_tooltip text="node(s)" term_id="node" >}}.
+ You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it is
+ _restricted_ to run on particular {{< glossary_tooltip text="node(s)" term_id="node" >}},
+ or to _prefer_ to run on particular nodes.
There are several ways to do this and the recommended approaches all use
[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
- Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
+ Often, you do not need to set any such constraints; the
+ {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} will automatically do a reasonable placement
(for example, spreading your Pods across nodes so as not to place Pods on a node with insufficient free resources).
However, there are some circumstances where you may want to control which node
- the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different
- services that communicate a lot into the same availability zone.
+ the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it,
+ or to co-locate Pods from two different services that communicate a lot into the same availability zone.
<!-- body -->
You can use any of the following methods to choose where Kubernetes schedules
- specific Pods:
+ specific Pods:
* [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels) (a minimal sketch follows this list)
* [Affinity and anti-affinity](#affinity-and-anti-affinity)
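
As a quick illustration of the `nodeSelector` item above, a minimal Pod manifest using that field might look like the sketch below. The `disktype: ssd` label is an assumed example value; it only has an effect if you have applied that label to at least one node (for instance with `kubectl label nodes <node-name> disktype=ssd`).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  # Only schedule this Pod onto nodes that carry the example
  # label disktype=ssd.
  nodeSelector:
    disktype: ssd
```
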
@@ -338,13 +340,15 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi
Inter-pod affinity and anti-affinity can be even more useful when they are used with higher
level collections such as ReplicaSets, StatefulSets, Deployments, etc. These
rules allow you to configure that a set of workloads should
- be co-located in the same defined topology, eg., the same node.
+ be co-located in the same defined topology; for example, preferring to place two related
+ Pods onto the same node.
- Take, for example, a three-node cluster running a web application with an
- in-memory cache like redis. You could use inter-pod affinity and anti-affinity
- to co-locate the web servers with the cache as much as possible.
+ For example: imagine a three-node cluster. You use the cluster to run a web application
+ and also an in-memory cache (such as Redis). For this example, also assume that latency between
+ the web application and the memory cache should be as low as is practical. You could use inter-pod
+ affinity and anti-affinity to co-locate the web servers with the cache as much as possible.
- In the following example Deployment for the redis cache, the replicas get the label `app=store`. The
+ In the following example Deployment for the Redis cache, the replicas get the label `app=store`. The
`podAntiAffinity` rule tells the scheduler to avoid placing multiple replicas
with the `app=store` label on a single node. This creates each cache in a
separate node.
@@ -379,10 +383,10 @@ spec:
image: redis:3.2-alpine
```
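
The affinity stanza of that cache Deployment is not visible in this hunk. As a rough sketch of the `podAntiAffinity` rule the paragraph above describes, assuming `topologyKey: kubernetes.io/hostname` is used so that each node counts as its own topology domain, the relevant part of the Pod template would be shaped like this:

```yaml
# Sketch only: the full Deployment manifest falls outside this diff hunk.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - store
      # Each node is its own topology domain, so no two app=store
      # replicas are scheduled onto the same node.
      topologyKey: "kubernetes.io/hostname"
```

The web server Deployment described next combines an analogous `podAffinity` term (towards `app=store`) with a `podAntiAffinity` term against its own `app=web-store` label.
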
- The following Deployment for the web servers creates replicas with the label `app=web-store`. The
- Pod affinity rule tells the scheduler to place each replica on a node that has a
- Pod with the label `app=store`. The Pod anti-affinity rule tells the scheduler
- to avoid placing multiple `app=web-store` servers on a single node.
+ The following example Deployment for the web servers creates replicas with the label `app=web-store`.
+ The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod
+ with the label `app=store`. The Pod anti-affinity rule tells the scheduler never to place
+ multiple `app=web-store` servers on a single node.
```yaml
apiVersion: apps/v1
@@ -431,6 +435,10 @@ where each web server is co-located with a cache, on three separate nodes.
| *webserver-1* | *webserver-2* | *webserver-3* |
| *cache-1* | *cache-2* | *cache-3* |
+ The overall effect is that each cache instance is likely to be accessed by a single client that
+ is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
+
+ You might have other reasons to use Pod anti-affinity.
See the [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)
for an example of a StatefulSet configured with anti-affinity for high
availability, using the same technique as this example.
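
As a rough idea of how that technique transfers to a StatefulSet, the sketch below spreads three replicas across nodes with `podAntiAffinity`. It is not the ZooKeeper tutorial's actual manifest; the `app: quorum-app` label, the names, and the image are hypothetical placeholders.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: quorum-app                  # hypothetical name
spec:
  serviceName: quorum-app
  replicas: 3
  selector:
    matchLabels:
      app: quorum-app               # hypothetical label
  template:
    metadata:
      labels:
        app: quorum-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: quorum-app
            # One replica per node, so losing a single node cannot
            # take down a majority of the replicas.
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9   # placeholder image
```
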