@@ -406,12 +406,6 @@ Similarly, Kubernetes also respects `spec.nodeSelector`.
 
 {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
 
-The scheduler doesn't have prior knowledge of all the zones or other topology domains
-that a cluster has. They are determined from the existing nodes in the cluster. This
-could lead to a problem in autoscaled clusters, when a node pool (or node group) is
-scaled to zero nodes and the user is expecting them to scale up, because, in this case,
-those topology domains won't be considered until there is at least one node in them.
-
 ## Implicit conventions
 
 There are some implicit conventions worth noting here:
@@ -557,6 +551,16 @@ section of the enhancement proposal about Pod topology spread constraints.
   to rebalance the Pods distribution.
 - Pods matched on tainted nodes are respected.
   See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921).
+- The scheduler doesn't have prior knowledge of all the zones or other topology
+  domains that a cluster has. They are determined from the existing nodes in the
+  cluster. This could lead to a problem in autoscaled clusters, when a node pool (or
+  node group) is scaled to zero nodes, and you're expecting the cluster to scale up,
+  because, in this case, those topology domains won't be considered until there is
+  at least one node in them.
+  You can work around this by using a cluster autoscaling tool that is aware of
+  Pod topology spread constraints and is also aware of the overall set of topology
+  domains.
+
 
 ## {{% heading "whatsnext" %}}
 
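For context, the `one-constraint-with-nodeaffinity.yaml` manifest referenced in the first hunk pairs a topology spread constraint with node affinity, so Pods are spread across zones while one zone is excluded from consideration. A minimal sketch of such a manifest (the label keys, zone names, and image are illustrative assumptions, not necessarily the exact contents of the upstream file) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar            # matched by the labelSelector below (assumed label)
spec:
  topologySpreadConstraints:
  - maxSkew: 1                      # allow at most 1 Pod of imbalance between zones
    topologyKey: zone               # spread over the node label "zone" (assumed key)
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: NotIn         # exclude this zone from spreading calculations
            values:
            - zoneC                 # assumed zone name
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.1
```

With this combination, the scheduler only counts skew among the zones the Pod is allowed to land in, which is why the limitation added in the second hunk matters: a zone whose node group has scaled to zero contributes no nodes and is therefore invisible to the spreading calculation.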