You can define one or more `topologySpreadConstraints` entries to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are:
- **maxSkew** describes the degree to which Pods may be unevenly distributed. It's the maximum permitted difference between the number of matching Pods in any two topology domains of a given topology type. It must be greater than zero.
- **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain.
- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
  - `DoNotSchedule` (default) tells the scheduler not to schedule it.
  - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
- **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details.
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
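As a quick illustration of how these fields fit together, here is a minimal sketch of a Pod manifest with a single constraint; the topology key `zone`, the label `foo: bar`, and the container image are placeholder values rather than anything the text above prescribes:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar                            # counted by the labelSelector below
spec:
  topologySpreadConstraints:
  - maxSkew: 1                          # at most 1 Pod of difference between any two zones
    topologyKey: zone                   # node label key that defines a topology domain
    whenUnsatisfiable: DoNotSchedule    # keep the Pod pending rather than violating the skew
    labelSelector:
      matchLabels:
        foo: bar                        # Pods with this label are counted per domain
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8    # any container image works for this example
```

With `whenUnsatisfiable: DoNotSchedule`, the Pod stays pending if every candidate node would push the zone-to-zone difference above `maxSkew`.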
There are some implicit conventions worth noting here:

- Only Pods in the same namespace as the incoming Pod can be matching candidates.
- Nodes without `topologySpreadConstraints[*].topologyKey` present will be bypassed. This implies that:

  1. the Pods located on those nodes do not impact the `maxSkew` calculation - in the above example, suppose "node1" does not have the label "zone", then the 2 Pods will be disregarded, hence the incoming Pod will be scheduled into "zoneA".
  2. the incoming Pod has no chance of being scheduled onto such nodes - in the above example, suppose a "node5" carrying the label `{zone-typo: zoneC}` joins the cluster; it will be bypassed due to the absence of the label key "zone".
- Be aware of what will happen if the incoming Pod's `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the above example, if we remove the incoming Pod's labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it's still zoneA having 2 Pods which hold the label `{foo: bar}`, and zoneB having 1 Pod which holds the label `{foo: bar}`. So if this is not what you expect, we recommend that the workload's `topologySpreadConstraints[*].labelSelector` match its own labels.
and you know that "zoneC" must be excluded. In this case, you can compose the YAML as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly, `spec.nodeSelector` is also respected.
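The YAML referred to above is not included in this excerpt. A minimal sketch of what it could look like, assuming the topology key `zone` and the label `foo: bar` from the earlier examples, pairs the spread constraint with a `NodeAffinity` term that rules out "zoneC":

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: NotIn              # rule out nodes labelled zone=zoneC
            values:
            - zoneC
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8     # example image only
```

Because the scheduler only considers nodes admitted by the affinity term, "zoneC" never enters the skew comparison.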
In Kubernetes, directives related to "Affinity" control how Pods are scheduled - more packed or more scattered.

- For `PodAffinity`, you can try to pack any number of Pods into qualifying topology domain(s).
- For `PodAntiAffinity`, only one Pod can be scheduled into a single topology domain.
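For comparison, here is a sketch of a `PodAntiAffinity` rule that yields the "one Pod per topology domain" behaviour described above; the label `foo: bar` and the topology key `zone` are again example values:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            foo: bar        # no other Pod with this label may share the domain
        topologyKey: zone   # hence at most one such Pod per zone
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8   # example image only
```

A `topologySpreadConstraints` entry with `maxSkew: 1` relaxes this to "at most one Pod of difference between any two domains", which is the flexibility the next paragraph refers to.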
The "EvenPodsSpread" feature provides flexible options to distribute Pods evenly across different
237
238
topology domains - to achieve high availability or cost-saving. This can also help on rolling update
238
239
workloads and scaling out replicas smoothly.
239
-
See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-even-pods-spreading.md#motivation) for more details.
240
+
See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation) for more details.