
Commit 69bb471

Fix indentations in Conventions paragraph
1 parent ee6851e commit 69bb471

1 file changed: +32 -33 lines changed

content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md

Lines changed: 32 additions & 33 deletions
@@ -18,7 +18,7 @@ You can use _topology spread constraints_ to control how {{< glossary_tooltip te
 
 ### Enable Feature Gate
 
-The `EvenPodsSpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+The `EvenPodsSpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
 must be enabled for the
 {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} **and**
 {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}}.
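The hunk above only touches the sentence describing the `EvenPodsSpread` feature gate; it does not show how to enable it. As a rough sketch (assuming a kubeadm-managed cluster, which this page does not itself require), the gate can be passed to both components through kubeadm's `ClusterConfiguration`:

```yaml
# Sketch only: assumes kubeadm. On other setups, pass
# --feature-gates=EvenPodsSpread=true directly on the kube-apiserver
# and kube-scheduler command lines instead.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "EvenPodsSpread=true"   # enable the gate on the API server
scheduler:
  extraArgs:
    feature-gates: "EvenPodsSpread=true"   # enable the gate on the scheduler
```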
@@ -62,19 +62,19 @@ metadata:
   name: mypod
 spec:
   topologySpreadConstraints:
-  - maxSkew: <integer>
-    topologyKey: <string>
-    whenUnsatisfiable: <string>
-    labelSelector: <object>
+  - maxSkew: <integer>
+    topologyKey: <string>
+    whenUnsatisfiable: <string>
+    labelSelector: <object>
 ```
 
 You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are:
 
 - **maxSkew** describes the degree to which Pods may be unevenly distributed. It's the maximum permitted difference between the number of matching Pods in any two topology domains of a given topology type. It must be greater than zero.
 - **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain.
 - **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
-  - `DoNotSchedule` (default) tells the scheduler not to schedule it.
-  - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
+  - `DoNotSchedule` (default) tells the scheduler not to schedule it.
+  - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
 - **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details.
 
 You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
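The spec in the hunk above uses placeholders. Filled in with the `zone` topology key and the `foo: bar` label that the rest of this page uses in its examples, a complete Pod looks roughly like the following illustrative sketch (not part of the diff itself):

```yaml
# Illustrative values substituted into the placeholders above; the zone key
# and the foo: bar selector mirror the examples used elsewhere on this page.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1                        # zones may differ by at most one matching Pod
    topologyKey: zone                 # node label whose values define the topology domains
    whenUnsatisfiable: DoNotSchedule  # leave the Pod Pending rather than violate the skew
    labelSelector:
      matchLabels:
        foo: bar                      # Pods counted per domain are those carrying this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```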
@@ -160,30 +160,29 @@ There are some implicit conventions worth noting here:
 - Only the Pods holding the same namespace as the incoming Pod can be matching candidates.
 
 - Nodes without `topologySpreadConstraints[*].topologyKey` present will be bypassed. It implies that:
-
-1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incomingPod will be scheduled into "zoneA".
-2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone".
+1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incomingPod will be scheduled into "zoneA".
+2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone".
 
 - Be aware of what will happen if the incomingPod’s `topologySpreadConstraints[*].labelSelector` doesn’t match its own labels. In the above example, if we remove the incoming Pod’s labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it’s still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload’s `topologySpreadConstraints[*].labelSelector` to match its own labels.
 
 - If the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined, nodes not matching them will be bypassed.
 
-Suppose you have a 5-node cluster ranging from zoneA to zoneC:
-
-```
-+---------------+---------------+-------+
-|     zoneA     |     zoneB     | zoneC |
-+-------+-------+-------+-------+-------+
-| node1 | node2 | node3 | node4 | node5 |
-+-------+-------+-------+-------+-------+
-|   P   |   P   |   P   |       |       |
-+-------+-------+-------+-------+-------+
-```
+Suppose you have a 5-node cluster ranging from zoneA to zoneC:
 
-and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
+```
++---------------+---------------+-------+
+|     zoneA     |     zoneB     | zoneC |
++-------+-------+-------+-------+-------+
+| node1 | node2 | node3 | node4 | node5 |
++-------+-------+-------+-------+-------+
+|   P   |   P   |   P   |       |       |
++-------+-------+-------+-------+-------+
+```
 
-{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
+and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
 
+{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
+
 ### Cluster-level default constraints
 
 {{< feature-state for_k8s_version="v1.18" state="alpha" >}}
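The `codenew` shortcode above pulls in `pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml`, which this diff does not reproduce. A manifest along the lines the surrounding text describes - spread across `zone` while excluding `zoneC` through `nodeAffinity` - would look roughly like the sketch below; the file in the repository is authoritative and may differ in detail.

```yaml
# Sketch of the kind of manifest the text describes; values are taken from the
# zoneA-zoneC example above, not from the referenced file itself.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: NotIn
            values:
            - zoneC                   # rule out zoneC before spreading over the rest
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```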
@@ -208,16 +207,16 @@ kind: KubeSchedulerConfiguration
 
 profiles:
   pluginConfig:
-    - name: PodTopologySpread
-      args:
-        defaultConstraints:
-          - maxSkew: 1
-            topologyKey: failure-domain.beta.kubernetes.io/zone
-            whenUnsatisfiable: ScheduleAnyway
+    - name: PodTopologySpread
+      args:
+        defaultConstraints:
+          - maxSkew: 1
+            topologyKey: failure-domain.beta.kubernetes.io/zone
+            whenUnsatisfiable: ScheduleAnyway
 ```
 
 {{< note >}}
-The score produced by default scheduling constraints might conflict with the
+The score produced by default scheduling constraints might conflict with the
 score produced by the
 [`DefaultPodTopologySpread` plugin](/docs/reference/scheduling/profiles/#scheduling-plugins).
 It is recommended that you disable this plugin in the scheduling profile when
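The note above recommends disabling the `DefaultPodTopologySpread` plugin when default constraints are in use. A sketch of how both pieces can sit in one `KubeSchedulerConfiguration` follows; the `apiVersion` and the list form of `profiles` are assumptions about the v1.18-era scheduler API and are not taken from this diff.

```yaml
# Sketch: cluster-level default constraints together with the recommended
# disabling of the DefaultPodTopologySpread score plugin in the same profile.
apiVersion: kubescheduler.config.k8s.io/v1alpha2   # assumed for the v1.18 timeframe
kind: KubeSchedulerConfiguration

profiles:
- plugins:
    score:
      disabled:
      - name: DefaultPodTopologySpread   # avoid two plugins scoring spread at once
  pluginConfig:
  - name: PodTopologySpread
    args:
      defaultConstraints:
      - maxSkew: 1
        topologyKey: failure-domain.beta.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
```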
@@ -230,14 +229,14 @@ In Kubernetes, directives related to "Affinity" control how Pods are
 scheduled - more packed or more scattered.
 
 - For `PodAffinity`, you can try to pack any number of Pods into qualifying
-  topology domain(s)
+  topology domain(s)
 - For `PodAntiAffinity`, only one Pod can be scheduled into a
-  single topology domain.
+  single topology domain.
 
 The "EvenPodsSpread" feature provides flexible options to distribute Pods evenly across different
 topology domains - to achieve high availability or cost-saving. This can also help on rolling update
 workloads and scaling out replicas smoothly.
-See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation) for more details.
+See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-even-pods-spreading.md#motivation) for more details.
 
 ## Known Limitations
 
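For contrast with the bullet list in the hunk above, here is a minimal sketch of the `PodAntiAffinity` form it mentions, using core Pod API fields; the `zone` key and `foo: bar` label are the same illustrative values used earlier on this page.

```yaml
# Required pod anti-affinity: at most one foo=bar Pod per zone. A
# topologySpreadConstraint with maxSkew instead lets the spread be tuned.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            foo: bar
        topologyKey: zone             # at most one matching Pod per topology domain
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```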