content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md (7 additions & 5 deletions)
You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are:
- **maxSkew** describes the degree to which Pods may be unevenly distributed.
  It must be greater than zero. Its semantics differ according to the value of `whenUnsatisfiable`:
  - when `whenUnsatisfiable` equals "DoNotSchedule", `maxSkew` is the maximum
    permitted difference between the number of matching pods in the target
    topology and the global minimum
    (the minimum number of pods that match the label selector in a topology domain; for example, if you have 3 zones with 0, 2 and 3 matching pods respectively, the global minimum is 0).
  - when `whenUnsatisfiable` equals "ScheduleAnyway", the scheduler gives higher
    precedence to topologies that would help reduce the skew.
- **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain.
- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
  - `DoNotSchedule` (default) tells the scheduler not to schedule it.
  - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
- **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details.
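The four fields above combine into a single constraint entry. A minimal sketch of a Pod spec using all of them — the Pod name, the `app: demo` label, and the container image are illustrative placeholders, not values from this document:

```yaml
# Sketch only: name, labels, and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: demo
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                    # at most 1 more Pod than the global minimum per zone
    topologyKey: topology.kubernetes.io/zone      # well-known node label; each distinct value is one domain
    whenUnsatisfiable: DoNotSchedule              # hard requirement: leave the Pod pending rather than skew
    labelSelector:
      matchLabels:
        app: demo                                 # count Pods with this label per topology domain
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```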
When a Pod defines more than one `topologySpreadConstraint`, those constraints are ANDed: The kube-scheduler looks for a node for the incoming Pod that satisfies all the constraints.
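As a sketch of ANDed constraints (names and labels are again placeholders): one hard constraint spreads Pods across zones, while a second soft constraint also prefers spreading across individual nodes. A node must satisfy the first and is ranked by the second:

```yaml
# Sketch only: name, labels, and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: two-constraints-demo
  labels:
    app: demo
spec:
  topologySpreadConstraints:
  # Hard: zone skew above 1 blocks scheduling.
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: demo
  # Soft: also prefer nodes that reduce per-node skew.
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: demo
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```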
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
### Example: One TopologySpreadConstraint
## Known Limitations
- There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in an imbalanced Pods distribution.
  You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pods distribution.
391
393
- Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921).