Instead of manually applying labels, you can also reuse the [well-known labels](/docs/reference/kubernetes-api/labels-annotations-taints/) that are created and populated automatically on most clusters.
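For example, a spread constraint can point its `topologyKey` at one of those standard node labels instead of a manually applied one. The fragment below is an illustrative sketch, not taken from this page; the `foo: bar` selector simply mirrors the Pods used in the examples that follow:

```yaml
# Fragment of a Pod spec: spread matching Pods across zones using the
# well-known topology.kubernetes.io/zone node label as the topology key.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      foo: bar
```
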
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.

### Example: One TopologySpreadConstraint
Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod):

```
+---------------+---------------+
|     zoneA     |     zoneB     |
+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+
|   P   |   P   |   P   |       |
+-------+-------+-------+-------+
```

If we want an incoming Pod to be evenly spread with existing Pods across zones, the spec can be given as:
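The referenced manifest is not reproduced in this extract; a minimal sketch of such a spec, assuming the nodes carry a `zone` label with values zoneA/zoneB and using `mypod` and the pause image purely as placeholders, could look like this:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod          # placeholder name
  labels:
    foo: bar           # matches the labelSelector below and the existing Pods
spec:
  topologySpreadConstraints:
  - maxSkew: 1                        # zones may differ by at most one matching Pod
    topologyKey: zone                 # assumes nodes are labeled zone=zoneA / zone=zoneB
    whenUnsatisfiable: DoNotSchedule  # keep the Pod Pending rather than violate the skew
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9  # placeholder container
```
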
If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], so the actual skew would be 2 (3 - 1), which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed onto "zoneB".
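Sketched in the same style as the diagram above, one possible outcome (within zoneB the new Pod could land on either node3 or node4; node4 is shown here for illustration):

```
+---------------+---------------+
|     zoneA     |     zoneB     |
+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+
|   P   |   P   |   P   |   P   |
+-------+-------+-------+-------+
```

Either way, the zone distribution ends up as [2, 2], so the skew stays within `maxSkew: 1`.
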
You can tweak the Pod spec to meet various kinds of requirements.
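As one illustration (a sketch, not the page's original list of tweaks), relaxing `whenUnsatisfiable` from `DoNotSchedule` to `ScheduleAnyway` turns the constraint into a soft preference: the scheduler still favours placements that reduce the skew, but it will not leave the Pod `Pending` if the skew cannot be honoured:

```yaml
# Fragment of a Pod spec; field values follow the sketch above.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: zone                   # assumed node label, as in the sketch above
  whenUnsatisfiable: ScheduleAnyway   # soft preference instead of a hard requirement
  labelSelector:
    matchLabels:
      foo: bar
```
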
### Example: Multiple TopologySpreadConstraints
This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod):

```
+---------------+---------------+
|     zoneA     |     zoneB     |
+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+
|   P   |   P   |   P   |       |
+-------+-------+-------+-------+
```

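The "two-constraints.yaml" manifest referred to below is not reproduced in this extract. A plausible sketch, assuming it combines the zone constraint from the previous example with a second constraint that spreads the same Pods across individual nodes (again using assumed `zone`/`node` label keys and placeholder names), might look like this:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone                 # first constraint: spread across zones
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  - maxSkew: 1
    topologyKey: node                 # second constraint: spread across individual nodes
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9  # placeholder container
```
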
If you apply "two-constraints.yaml" to this cluster, you will notice that "mypod" stays in the `Pending` state. This is because, to satisfy the first constraint, "mypod" can only be placed onto "zoneB", while to satisfy the second constraint, "mypod" can only be placed onto "node2"; the joint result of "zoneB" and "node2" is empty, so there is nowhere to schedule the Pod.
Suppose you have a 5-node cluster ranging from zoneA to zoneC, and you know that "zoneC" must be excluded. In this case, you can compose the YAML as below so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly, `spec.nodeSelector` is also respected.
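The manifest itself is not included in this extract; a sketch of how the two mechanisms can be combined, assuming nodes are labeled with a `zone` key and using placeholder names, could be:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone                 # spread across zones
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: NotIn           # filter out zoneC before spreading is evaluated
            values:
            - zoneC
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9  # placeholder container
```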