
Commit 8c9e504

Merge pull request #28495 from chenxuc/update-topo-constraints

clarify behavior of topo constraints

2 parents 1c673f0 + c6bf5d1


content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md

Lines changed: 7 additions & 5 deletions
@@ -82,12 +82,11 @@ spec:
 You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are:
 
 - **maxSkew** describes the degree to which Pods may be unevenly distributed.
-  It's the maximum permitted difference between the number of matching Pods in
-  any two topology domains of a given topology type. It must be greater than
-  zero. Its semantics differs according to the value of `whenUnsatisfiable`:
+  It must be greater than zero. Its semantics differs according to the value of `whenUnsatisfiable`:
   - when `whenUnsatisfiable` equals to "DoNotSchedule", `maxSkew` is the maximum
     permitted difference between the number of matching pods in the target
-    topology and the global minimum.
+    topology and the global minimum
+    (the minimum number of pods that match the label selector in a topology domain. For example, if you have 3 zones with 0, 2 and 3 matching pods respectively, the global minimum is 0).
   - when `whenUnsatisfiable` equals to "ScheduleAnyway", scheduler gives higher
     precedence to topologies that would help reduce the skew.
 - **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain.
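
To make the fields described in this hunk concrete, here is a minimal sketch of a Pod using a single constraint. The `topology.kubernetes.io/zone` key is a standard well-known node label; the Pod name, the `app: foo` labels, and the container are hypothetical placeholders, not taken from this commit.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
  labels:
    app: foo               # counted by the labelSelector below
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                # per-zone counts of matching Pods may exceed the global minimum by at most 1
    topologyKey: topology.kubernetes.io/zone  # each distinct zone label value is one topology domain
    whenUnsatisfiable: DoNotSchedule          # hard constraint: leave the Pod Pending rather than violate it
    labelSelector:
      matchLabels:
        app: foo
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9          # placeholder container image
```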
@@ -96,6 +95,8 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s
   - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
 - **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details.
 
+When a Pod defines more than one `topologySpreadConstraint`, those constraints are ANDed: the kube-scheduler looks for a node for the incoming Pod that satisfies all the constraints.
+
 You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
 
 ### Example: One TopologySpreadConstraint
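
The ANDed behavior added in this hunk can be sketched with two constraints on one Pod spec. The `topology.kubernetes.io/zone` and `kubernetes.io/hostname` keys are well-known node labels; the `app: foo` selector is a hypothetical example.

```yaml
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone  # spread matching Pods evenly across zones...
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: foo
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname       # ...AND evenly across individual nodes
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: foo
```

With both constraints set to `DoNotSchedule`, a candidate node must satisfy the two spread requirements at once; failing either one filters the node out.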
@@ -387,7 +388,8 @@ for more details.
 
 ## Known Limitations
 
-- Scaling down a Deployment may result in imbalanced Pods distribution.
+- There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in imbalanced Pods distribution.
+  You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pods distribution.
- Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)
 
 ## {{% heading "whatsnext" %}}
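
For the Descheduler link added in the last hunk, a minimal policy sketch; it assumes the `RemovePodsViolatingTopologySpreadConstraint` strategy and the `descheduler/v1alpha1` policy format from recent descheduler releases, neither of which is part of this commit.

```yaml
apiVersion: descheduler/v1alpha1
kind: DeschedulerPolicy
strategies:
  RemovePodsViolatingTopologySpreadConstraint:
    enabled: true   # evict Pods whose topology spread constraints are violated so they can be rescheduled
```

As of recent releases, this strategy considers only hard (`DoNotSchedule`) constraints unless soft constraints are explicitly included.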
