Instead of manually applying labels, you can also reuse the [well-known labels](/docs/reference/labels-annotations-taints/) that are created and populated automatically on most clusters.
When a Pod defines more than one `topologySpreadConstraint`, those constraints are combined with a logical AND: the kube-scheduler looks for a node for the incoming Pod that satisfies all the constraints.
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
(Mermaid diagram omitted: the example cluster's nodes grouped into zoneA and zoneB.)
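The manifest "two-constraints.yaml" referenced below is not reproduced in this excerpt. A minimal sketch of such a Pod, assuming the Pods of interest carry the label `foo: bar` and the nodes carry the well-known zone and hostname labels, might look like this (names and image are illustrative):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  # First constraint: keep the zone-level skew of Pods labeled foo=bar at 1 or below.
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  # Second constraint: keep the per-node skew of the same Pods at 1 or below.
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```

Because both constraints use `whenUnsatisfiable: DoNotSchedule`, a candidate node has to satisfy both of them at once.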
If you apply "two-constraints.yaml" to this cluster, you will notice that "mypod" stays in the `Pending` state. This is because, to satisfy the first constraint, "mypod" can only be placed in "zoneB", while to satisfy the second constraint, "mypod" can only be placed on "node2". The intersection of the two ("zoneB" and "node2") yields nothing, so the scheduler cannot place the Pod.
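One way to resolve such a deadlock, sketched here under the same assumptions as the Pod above, is to relax one of the constraints by switching its `whenUnsatisfiable` from `DoNotSchedule` to `ScheduleAnyway`, which turns that constraint from a filter into a scoring preference:

```yaml
# Replaces the topologySpreadConstraints stanza of the Pod sketched above
# (indent it under .spec when merging).
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule    # still a hard requirement
  labelSelector:
    matchLabels:
      foo: bar
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway   # now only biases scoring toward lower skew
  labelSelector:
    matchLabels:
      foo: bar
```

With this change the zone constraint still filters nodes, while the per-node constraint only nudges the scheduler toward nodes that reduce the skew.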
If the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined, the scheduler will skip the non-matching nodes from the skew calculations.
### Example: TopologySpreadConstraints with NodeAffinity

Suppose you have a 5-node cluster ranging from zoneA to zoneC, and you know that "zoneC" must be excluded. In this case, you can compose the YAML as below, so that "mypod" will be placed in "zoneB" instead of "zoneC". Similarly, `spec.nodeSelector` is also respected.
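The YAML referred to above is not included in this excerpt; the following is a sketch of how it could combine a spread constraint with node affinity, assuming the nodes carry a `topology.kubernetes.io/zone` label whose values are `zoneA`, `zoneB`, and `zoneC`:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # Exclude zoneC from candidate nodes; it is then also skipped
          # in the skew calculations.
          - key: topology.kubernetes.io/zone
            operator: NotIn
            values:
            - zoneC
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```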
The scheduler doesn't have prior knowledge of all the zones or other topology domains that a cluster has. They are determined from the existing nodes in the cluster. This could lead to a problem in autoscaled clusters, when a node pool (or node group) is scaled to zero nodes and the user is expecting them to scale up, because, in this case, those topology domains won't be considered until there is at least one node in them.
An example configuration might look like the following:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration

profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          # Illustrative cluster-level default constraint; adjust to your topology.
          defaultConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
          defaultingType: List
```
In addition, the legacy `SelectorSpread` plugin, which used to provide equivalent behavior, is disabled.
{{< note >}}
The `PodTopologySpread` plugin does not score the nodes that don't have the topology keys specified in the spreading constraints. This might result in a different default behavior compared to the legacy `SelectorSpread` plugin when using the default topology constraints.
{{< /note >}}
The "EvenPodsSpread" feature provides flexible options to distribute Pods evenly across different
633
-
topology domains - to achieve high availability or cost-saving. This can also help on rolling update
634
-
workloads and scaling out replicas smoothly.
635
-
See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation) for more details.
636
642
-->
637
643
要实现更细粒度的控制,你可以设置拓扑分布约束来将 Pod 分布到不同的拓扑域下,
638
644
从而实现高可用性或节省成本。这也有助于工作负载的滚动更新和平稳地扩展副本规模。
@@ -642,13 +648,19 @@ See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig
642
648
## Known Limitations

- There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in an imbalanced Pods distribution.
  You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pods distribution; a policy sketch follows this list.

- Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921).
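As a rough sketch only (consult the Descheduler documentation for the exact API version and strategy fields), a Descheduler policy that evicts Pods violating their spread constraints so they can be rescheduled more evenly might look like this:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  # Evict Pods that currently violate their topology spread constraints,
  # letting the scheduler place them again on better-balanced nodes.
  "RemovePodsViolatingTopologySpreadConstraint":
    enabled: true
```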