
Commit bfff661

Author: Tim Bannister
Clarify known limitation of Pod topology spread constraints
The limitation is more around cluster autoscaling; nonetheless it seems to belong under Known limitations.
1 parent: 72a070e

File tree

1 file changed: +10 −6 lines changed


content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md

Lines changed: 10 additions & 6 deletions
@@ -406,12 +406,6 @@ Similarly, Kubernetes also respects `spec.nodeSelector`.

 {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}

-The scheduler doesn't have prior knowledge of all the zones or other topology domains
-that a cluster has. They are determined from the existing nodes in the cluster. This
-could lead to a problem in autoscaled clusters, when a node pool (or node group) is
-scaled to zero nodes and the user is expecting them to scale up, because, in this case,
-those topology domains won't be considered until there is at least one node in them.
-
 ## Implicit conventions

 There are some implicit conventions worth noting here:

@@ -557,6 +551,16 @@ section of the enhancement proposal about Pod topology spread constraints.
   to rebalance the Pods distribution.
 - Pods matched on tainted nodes are respected.
   See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921).
+- The scheduler doesn't have prior knowledge of all the zones or other topology
+  domains that a cluster has. They are determined from the existing nodes in the
+  cluster. This could lead to a problem in autoscaled clusters, when a node pool (or
+  node group) is scaled to zero nodes, and you're expecting the cluster to scale up,
+  because, in this case, those topology domains won't be considered until there is
+  at least one node in them.
+  You can work around this by using a cluster autoscaling tool that is aware of
+  Pod topology spread constraints and is also aware of the overall set of topology
+  domains.

 ## {{% heading "whatsnext" %}}
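The manifest referenced by the `codenew` shortcode is not shown in the diff. As an illustrative sketch only (the field names come from the standard Kubernetes Pod API, but the Pod name, the `foo: bar` label, and the `zoneC` zone name are hypothetical), a Pod that combines a zone spread constraint with node affinity might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod          # hypothetical name
  labels:
    foo: bar           # hypothetical label matched by the constraint below
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # Exclude one zone from consideration; the spread constraint then
          # only balances Pods across the remaining zones.
          - key: topology.kubernetes.io/zone
            operator: NotIn
            values:
            - zoneC    # hypothetical zone name
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
```

This also illustrates the limitation the commit documents: the scheduler only learns about zones from nodes that currently exist, so a zone whose node group has scaled to zero is invisible to the spread calculation until a node comes back.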
