
Commit 38b08c1

Merge pull request #24039 from shuuji3/en/concepts/workloads/pods/pod-topology-spread-constraints
Replace text diagrams with ones rendered by mermaid.js on concepts/workloads/pods/pod-topology-spread-constraints
2 parents e931e02 + 47efc81 commit 38b08c1

content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md

Lines changed: 145 additions & 56 deletions
@@ -30,13 +30,23 @@ node4 Ready <none> 2m43s v1.16.0 node=node4,zone=zoneB

Then the cluster is logically viewed as below:

-```
-+---------------+---------------+
-|     zoneA     |     zoneB     |
-+-------+-------+-------+-------+
-| node1 | node2 | node3 | node4 |
-+-------+-------+-------+-------+
-```
+{{<mermaid>}}
+graph TB
+    subgraph "zoneB"
+        n3(Node3)
+        n4(Node4)
+    end
+    subgraph "zoneA"
+        n1(Node1)
+        n2(Node2)
+    end
+
+classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
+classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
+classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
+class n1,n2,n3,n4 k8s;
+class zoneA,zoneB cluster;
+{{< /mermaid >}}

Instead of manually applying labels, you can also reuse the [well-known labels](/docs/reference/kubernetes-api/labels-annotations-taints/) that are created and populated automatically on most clusters.

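For readers following along (this note and the snippet below are not part of the commit): the context line above contrasts manually applied labels with well-known labels. A minimal sketch of node metadata, assuming the `node`/`zone` labels used on this page plus the standard `topology.kubernetes.io/zone` well-known label, might look like:

```yaml
# Hypothetical node metadata for illustration only; not part of this diff.
apiVersion: v1
kind: Node
metadata:
  name: node1
  labels:
    node: node1                        # manually applied label from this page's examples
    zone: zoneA                        # manually applied label from this page's examples
    topology.kubernetes.io/zone: zoneA # well-known label, populated automatically on most clusters
```
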
@@ -80,17 +90,25 @@ You can read more about this field by running `kubectl explain Pod.spec.topology

### Example: One TopologySpreadConstraint

-Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod):
-
-```
-+---------------+---------------+
-|     zoneA     |     zoneB     |
-+-------+-------+-------+-------+
-| node1 | node2 | node3 | node4 |
-+-------+-------+-------+-------+
-|   P   |   P   |   P   |       |
-+-------+-------+-------+-------+
-```
+Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively:
+
+{{<mermaid>}}
+graph BT
+    subgraph "zoneB"
+        p3(Pod) --> n3(Node3)
+        n4(Node4)
+    end
+    subgraph "zoneA"
+        p1(Pod) --> n1(Node1)
+        p2(Pod) --> n2(Node2)
+    end
+
+classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
+classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
+classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
+class n1,n2,n3,n4,p1,p2,p3 k8s;
+class zoneA,zoneB cluster;
+{{< /mermaid >}}

If we want an incoming Pod to be evenly spread with existing Pods across zones, the spec can be given as:

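The manifest referenced by "the spec can be given as" sits outside the lines touched by this commit. As a rough sketch of such a single-constraint spec (an assumption for context; the Pod name `mypod`, the `foo: bar` label, and the `zone` topology key follow the surrounding prose, and the container image is a placeholder):

```yaml
# Sketch of a single topology spread constraint; the actual manifest
# is not part of this diff.
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone              # spread across the zone label used above
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8   # placeholder image
```
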
@@ -100,15 +118,46 @@ If we want an incoming Pod to be evenly spread with existing Pods across zones,

If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed onto "zoneB":

-```
-+---------------+---------------+     +---------------+---------------+
-|     zoneA     |     zoneB     |     |     zoneA     |     zoneB     |
-+-------+-------+-------+-------+     +-------+-------+-------+-------+
-| node1 | node2 | node3 | node4 |  OR | node1 | node2 | node3 | node4 |
-+-------+-------+-------+-------+     +-------+-------+-------+-------+
-|   P   |   P   |   P   |   P   |     |   P   |   P   |  P P  |       |
-+-------+-------+-------+-------+     +-------+-------+-------+-------+
-```
+{{<mermaid>}}
+graph BT
+    subgraph "zoneB"
+        p3(Pod) --> n3(Node3)
+        p4(mypod) --> n4(Node4)
+    end
+    subgraph "zoneA"
+        p1(Pod) --> n1(Node1)
+        p2(Pod) --> n2(Node2)
+    end
+
+classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
+classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
+classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
+class n1,n2,n3,n4,p1,p2,p3 k8s;
+class p4 plain;
+class zoneA,zoneB cluster;
+{{< /mermaid >}}
+
+OR
+
+{{<mermaid>}}
+graph BT
+    subgraph "zoneB"
+        p3(Pod) --> n3(Node3)
+        p4(mypod) --> n3
+        n4(Node4)
+    end
+    subgraph "zoneA"
+        p1(Pod) --> n1(Node1)
+        p2(Pod) --> n2(Node2)
+    end
+
+classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
+classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
+classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
+class n1,n2,n3,n4,p1,p2,p3 k8s;
+class p4 plain;
+class zoneA,zoneB cluster;
+{{< /mermaid >}}

You can tweak the Pod spec to meet various kinds of requirements:

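The list of possible tweaks is part of the unchanged text below this hunk. As one hedged illustration (an assumption, not shown in this diff), relaxing the constraint so the Pod is still scheduled even when the skew cannot be honored might look like:

```yaml
# Illustrative variant only; not part of this commit.
# ScheduleAnyway makes the constraint a soft preference, so the scheduler
# still places the Pod even if maxSkew cannot be satisfied.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: zone
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      foo: bar
```
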
@@ -118,17 +167,26 @@ You can tweak the Pod spec to meet various kinds of requirements:

### Example: Multiple TopologySpreadConstraints

-This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod):
-
-```
-+---------------+---------------+
-|     zoneA     |     zoneB     |
-+-------+-------+-------+-------+
-| node1 | node2 | node3 | node4 |
-+-------+-------+-------+-------+
-|   P   |   P   |   P   |       |
-+-------+-------+-------+-------+
-```
+This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively:
+
+{{<mermaid>}}
+graph BT
+    subgraph "zoneB"
+        p3(Pod) --> n3(Node3)
+        n4(Node4)
+    end
+    subgraph "zoneA"
+        p1(Pod) --> n1(Node1)
+        p2(Pod) --> n2(Node2)
+    end
+
+classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
+classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
+classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
+class n1,n2,n3,n4,p1,p2,p3 k8s;
+class p4 plain;
+class zoneA,zoneB cluster;
+{{< /mermaid >}}

You can use 2 TopologySpreadConstraints to control the Pods spreading on both zone and node:

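The "two-constraints.yaml" manifest itself is outside the changed lines. A rough sketch of a zone-plus-node pair of constraints (an assumption for context; the names `mypod`, `foo: bar`, `zone`, and `node` follow the surrounding prose, and the image is a placeholder):

```yaml
# Sketch of two constraints combined; the real two-constraints.yaml
# is not part of this diff.
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone              # spread evenly across zones
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  - maxSkew: 1
    topologyKey: node              # and also evenly across individual nodes
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8   # placeholder image
```
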
@@ -138,15 +196,24 @@ In this case, to match the first constraint, the incoming Pod can only be placed

Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones:

-```
-+---------------+-------+
-|     zoneA     | zoneB |
-+-------+-------+-------+
-| node1 | node2 | node3 |
-+-------+-------+-------+
-|  P P  |   P   |  P P  |
-+-------+-------+-------+
-```
+{{<mermaid>}}
+graph BT
+    subgraph "zoneB"
+        p4(Pod) --> n3(Node3)
+        p5(Pod) --> n3
+    end
+    subgraph "zoneA"
+        p1(Pod) --> n1(Node1)
+        p2(Pod) --> n1
+        p3(Pod) --> n2(Node2)
+    end
+
+classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
+classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
+classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
+class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s;
+class zoneA,zoneB cluster;
+{{< /mermaid >}}

If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only be put to "zoneB"; while in terms of the second constraint, "mypod" can only put to "node2". Then a joint result of "zoneB" and "node2" returns nothing.

@@ -169,15 +236,37 @@ There are some implicit conventions worth noting here:

Suppose you have a 5-node cluster ranging from zoneA to zoneC:

-```
-+---------------+---------------+-------+
-|     zoneA     |     zoneB     | zoneC |
-+-------+-------+-------+-------+-------+
-| node1 | node2 | node3 | node4 | node5 |
-+-------+-------+-------+-------+-------+
-|   P   |   P   |   P   |       |       |
-+-------+-------+-------+-------+-------+
-```
+{{<mermaid>}}
+graph BT
+    subgraph "zoneB"
+        p3(Pod) --> n3(Node3)
+        n4(Node4)
+    end
+    subgraph "zoneA"
+        p1(Pod) --> n1(Node1)
+        p2(Pod) --> n2(Node2)
+    end
+
+classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
+classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
+classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
+class n1,n2,n3,n4,p1,p2,p3 k8s;
+class p4 plain;
+class zoneA,zoneB cluster;
+{{< /mermaid >}}
+
+{{<mermaid>}}
+graph BT
+    subgraph "zoneC"
+        n5(Node5)
+    end
+
+classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
+classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
+classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
+class n5 k8s;
+class zoneC cluster;
+{{< /mermaid >}}

and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.

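The manifest that follows "as below" is outside the changed lines. As a rough sketch (an assumption for context), the zoneC exclusion is typically expressed by pairing the spread constraint with a `nodeAffinity` term on the same Pod; the `zone` key and `foo: bar` selector follow the surrounding prose:

```yaml
# Illustrative sketch only; the real manifest is not part of this diff.
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: NotIn    # keep the Pod out of zoneC
            values:
            - zoneC
```
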