Commit c5428eb

feat: GA feature gate DefaultPodTopologySpread
Signed-off-by: kerthcet <[email protected]>
1 parent 0aa00b6 commit c5428eb

File tree: 1 file changed (+10, −24 lines)

content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md

Lines changed: 10 additions & 24 deletions
@@ -4,21 +4,11 @@ content_type: concept
 weight: 40
 ---
 
-{{< feature-state for_k8s_version="v1.19" state="stable" >}}
-<!-- leave this shortcode in place until the note about EvenPodsSpread is
-obsolete -->
 
 <!-- overview -->
 
 You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.
 
-{{< note >}}
-In versions of Kubernetes before v1.18, you must enable the `EvenPodsSpread`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on
-the [API server](/docs/concepts/overview/components/#kube-apiserver) and the
-[scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) in order to use Pod
-topology spread constraints.
-{{< /note >}}
 
 <!-- body -->
 
@@ -85,7 +75,7 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s
 It must be greater than zero. Its semantics differs according to the value of `whenUnsatisfiable`:
 - when `whenUnsatisfiable` equals to "DoNotSchedule", `maxSkew` is the maximum
 permitted difference between the number of matching pods in the target
-topology and the global minimum
+topology and the global minimum
 (the minimum number of pods that match the label selector in a topology domain. For example, if you have 3 zones with 0, 2 and 3 matching pods respectively, The global minimum is 0).
 - when `whenUnsatisfiable` equals to "ScheduleAnyway", scheduler gives higher
 precedence to topologies that would help reduce the skew.
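As a hedged illustration of these semantics (the pod name, the `app: my-app` label, and the container image are hypothetical, not part of this diff), a constraint with `maxSkew: 1` and `whenUnsatisfiable: DoNotSchedule` might look like:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: example-pod            # hypothetical name, for illustration only
  labels:
    app: my-app                # hypothetical label matched by the selector below
spec:
  topologySpreadConstraints:
    - maxSkew: 1                                # count of matching pods in any zone may
                                                # exceed the global minimum by at most 1
      topologyKey: topology.kubernetes.io/zone  # spread across zones
      whenUnsatisfiable: DoNotSchedule          # hard constraint: leave the pod Pending
                                                # rather than violate the skew bound
      labelSelector:
        matchLabels:
          app: my-app
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.8          # placeholder image
```

With the zone counts from the example above (0, 2 and 3 matching pods, global minimum 0), this constraint would admit the new pod only into the empty zone: placing it there yields a skew of 1 − 0 = 1, while either other zone would exceed `maxSkew`.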
@@ -319,21 +309,17 @@ profiles:
 ```
 
 {{< note >}}
-The score produced by default scheduling constraints might conflict with the
-score produced by the
-[`SelectorSpread` plugin](/docs/reference/scheduling/config/#scheduling-plugins).
-It is recommended that you disable this plugin in the scheduling profile when
-using default constraints for `PodTopologySpread`.
+The [`SelectorSpread` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
+is disabled by default. It's recommended to use `PodTopologySpread` to achieve similar
+behavior.
 {{< /note >}}
 
-#### Internal default constraints
+#### Built-in default constraints {#internal-default-constraints}
 
-{{< feature-state for_k8s_version="v1.20" state="beta" >}}
+{{< feature-state for_k8s_version="v1.24" state="stable" >}}
 
-With the `DefaultPodTopologySpread` feature gate, enabled by default, the
-legacy `SelectorSpread` plugin is disabled.
-kube-scheduler uses the following default topology constraints for the
-`PodTopologySpread` plugin configuration:
+If you don't configure any cluster-level default constraints for pod topology spreading,
+then kube-scheduler acts as if you specified the following default topology constraints:
 
 ```yaml
 defaultConstraints:
@@ -346,7 +332,7 @@ defaultConstraints:
 ```
 
 Also, the legacy `SelectorSpread` plugin, which provides an equivalent behavior,
-is disabled.
+is disabled by default.
 
 {{< note >}}
 If your nodes are not expected to have **both** `kubernetes.io/hostname` and
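The `defaultConstraints:` block itself falls between the two hunks above, so its body is not visible in this diff. As a sketch, the built-in defaults documented upstream around this release look like the following (values taken from the Kubernetes documentation, not from this diff, so treat them as an assumption):

```yaml
defaultConstraints:
  - maxSkew: 3
    topologyKey: "kubernetes.io/hostname"
    whenUnsatisfiable: ScheduleAnyway   # soft: influences scoring, never blocks scheduling
  - maxSkew: 5
    topologyKey: "topology.kubernetes.io/zone"
    whenUnsatisfiable: ScheduleAnyway
```

Because both entries use `ScheduleAnyway`, these defaults only bias the scheduler's scoring toward even spreading; they never leave a pod Pending.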
@@ -392,7 +378,7 @@ for more details.
 
 ## Known Limitations
 
-- There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in imbalanced Pods distribution.
+- There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in imbalanced Pods distribution.
 You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pods distribution.
 - Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)
 
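For operators who want cluster-level defaults other than the built-ins described in this change, a `KubeSchedulerConfiguration` along these lines could be used. This is a sketch assuming the `v1beta3` config API available around v1.24; the `maxSkew` and `topologyKey` values are illustrative:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          defaultingType: List              # use the list below instead of the built-in defaults
          defaultConstraints:
            - maxSkew: 1                    # illustrative value
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
```

Leaving `defaultingType` at its default of `System` keeps the scheduler's built-in constraints instead of the list supplied here.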