
Commit f01be79

[zh] Resync pod topology spread constraints page
1 parent ee6bd56 commit f01be79

1 file changed (+76 −64)

content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md

Lines changed: 76 additions & 64 deletions
@@ -75,7 +75,7 @@ node4 Ready <none> 2m43s v1.16.0 node=node4,zone=zoneB
 <!--
 Then the cluster is logically viewed as below:
 -->
-然后从逻辑上看集群如下
+那么,从逻辑上看集群如下

 {{<mermaid>}}
 graph TB
@@ -96,11 +96,9 @@ graph TB
 {{< /mermaid >}}

 <!--
-
 Instead of manually applying labels, you can also reuse the [well-known labels](/docs/reference/labels-annotations-taints/) that are created and populated automatically on most clusters.
 -->
-你可以复用在大多数集群上自动创建和填充的
-[常用标签](/zh/docs/reference/labels-annotations-taints/)
+你可以复用在大多数集群上自动创建和填充的[常用标签](/zh/docs/reference/labels-annotations-taints/)
 而不是手动添加标签。

 <!--
@@ -169,6 +167,12 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s
 拓扑域中 Pod 的数量。
 有关详细信息,请参考[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors)。

+<!--
+When a Pod defines more than one `topologySpreadConstraint`, those constraints are ANDed: The kube-scheduler looks for a node for the incoming Pod that satisfies all the constraints.
+-->
+当 Pod 定义了不止一个 `topologySpreadConstraint`,这些约束之间是逻辑与的关系。
+kube-scheduler 会为新的 Pod 寻找一个能够满足所有约束的节点。
+
 <!--
 You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
 -->
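The paragraph added in the hunk above describes ANDed constraints. As a rough illustration of that behavior (a sketch, not taken from the page itself; the `app: demo` label, the image, and the topology keys are placeholders), a Pod that must satisfy both a zone-level and a node-level spread constraint could be declared like this:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    app: demo                      # placeholder label matched by both selectors below
spec:
  topologySpreadConstraints:
    # Constraint 1: keep the zone-level skew of matching Pods within 1.
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: demo
    # Constraint 2: also keep the per-node skew within 1.
    # The scheduler only places the Pod on nodes that satisfy BOTH constraints.
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: demo
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9   # placeholder container
```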
@@ -353,7 +357,6 @@ graph BT
 class zoneA,zoneB cluster;
 {{< /mermaid >}}

-
 <!--
 If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only be put to "zoneB"; while in terms of the second constraint, "mypod" can only put to "node2". Then a joint result of "zoneB" and "node2" returns nothing.
 -->
@@ -374,54 +377,59 @@ The scheduler will skip the non-matching nodes from the skew calculations if the
 -->
 ### 节点亲和性与节点选择器的相互作用 {#interaction-with-node-affinity-and-node-selectors}

-如果 Pod 定义了 `spec.nodeSelector` 或 `spec.affinity.nodeAffinity`,调度器将从倾斜计算中跳过不匹配的节点。
+如果 Pod 定义了 `spec.nodeSelector` 或 `spec.affinity.nodeAffinity`,
+调度器将在偏差计算中跳过不匹配的节点。

 <!--
-Suppose you have a 5-node cluster ranging from zoneA to zoneC:
-
-and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
+### Example: TopologySpreadConstraints with NodeAffinity

-{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
+Suppose you have a 5-node cluster ranging from zoneA to zoneC:
 -->
-假设你有一个跨越 zoneA 到 zoneC 的 5 节点集群:
+### 示例:TopologySpreadConstraints 与 NodeAffinity

-{{<mermaid>}}
-graph BT
-subgraph "zoneB"
-p3(Pod) --> n3(Node3)
-n4(Node4)
-end
-subgraph "zoneA"
-p1(Pod) --> n1(Node1)
-p2(Pod) --> n2(Node2)
-end
+假设你有一个跨越 zoneA 到 zoneC 的 5 节点集群:

-classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
-classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
-classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
-class n1,n2,n3,n4,p1,p2,p3 k8s;
-class p4 plain;
-class zoneA,zoneB cluster;
-{{< /mermaid >}}
+{{<mermaid>}}
+graph BT
+subgraph "zoneB"
+p3(Pod) --> n3(Node3)
+n4(Node4)
+end
+subgraph "zoneA"
+p1(Pod) --> n1(Node1)
+p2(Pod) --> n2(Node2)
+end

-{{<mermaid>}}
-graph BT
-subgraph "zoneC"
-n5(Node5)
-end
+classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
+classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
+classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
+class n1,n2,n3,n4,p1,p2,p3 k8s;
+class p4 plain;
+class zoneA,zoneB cluster;
+{{< /mermaid >}}
+
+{{<mermaid>}}
+graph BT
+subgraph "zoneC"
+n5(Node5)
+end
+
+classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
+classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
+classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
+class n5 k8s;
+class zoneC cluster;
+{{< /mermaid >}}

-classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
-classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
-classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
-class n5 k8s;
-class zoneC cluster;
-{{< /mermaid >}}

-而且你知道 "zoneC" 必须被排除在外。在这种情况下,可以按如下方式编写 yaml,
-以便将 "mypod" 放置在 "zoneB" 上,而不是 "zoneC" 上。同样,`spec.nodeSelector`
-也要一样处理。
+<!--
+and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
+-->
+而且你知道 "zoneC" 必须被排除在外。在这种情况下,可以按如下方式编写 YAML,
+以便将 "mypod" 放置在 "zoneB" 上,而不是 "zoneC" 上。同样,`spec.nodeSelector`
+也要一样处理。

-{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
+{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}

 <!--
 The scheduler doesn't have prior knowledge of all the zones or other topology domains that a cluster has. They are determined from the existing nodes in the cluster. This could lead to a problem in autoscaled clusters, when a node pool (or node group) is scaled to zero nodes and the user is expecting them to scale up, because, in this case, those topology domains won't be considered until there is at least one node in them.
503511
配置的示例可能看起来像下面这个样子:
504512

505513
```yaml
506-
apiVersion: kubescheduler.config.k8s.io/v1beta1
514+
apiVersion: kubescheduler.config.k8s.io/v1beta3
507515
kind: KubeSchedulerConfiguration
508516
509517
profiles:
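The hunk above only shows the first lines of the example configuration named in its header. The overall shape of a `KubeSchedulerConfiguration` that sets cluster-level default constraints through the `PodTopologySpread` plugin is roughly the following sketch (the concrete constraint values are illustrative, not copied from the page):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration

profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          # Illustrative cluster-level defaults, applied to Pods that do not
          # define their own topologySpreadConstraints.
          defaultConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
          defaultingType: List
```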
@@ -565,32 +573,36 @@ is disabled.
 -->
 此外,原来用于提供等同行为的 `SelectorSpread` 插件也会被禁用。

+{{< note >}}
+<!--
+The `PodTopologySpread` plugin does not score the nodes that don't have
+the topology keys specified in the spreading constraints. This might result
+in a different default behavior compared to the legacy `SelectorSpread` plugin when
+using the default topology constraints.
+-->
+对于分布约束中所指定的拓扑键而言,`PodTopologySpread` 插件不会为不包含这些主键的节点评分。
+这可能导致在使用默认拓扑约束时,其行为与原来的 `SelectorSpread` 插件的默认行为不同,
+
 <!--
 If your nodes are not expected to have **both** `kubernetes.io/hostname` and
 `topology.kubernetes.io/zone` labels set, define your own constraints
 instead of using the Kubernetes defaults.
-
-The `PodTopologySpread` plugin does not score the nodes that don't have
-the topology keys specified in the spreading constraints.
 -->
-{{< note >}}
 如果你的节点不会 **同时** 设置 `kubernetes.io/hostname` 和
 `topology.kubernetes.io/zone` 标签,你应该定义自己的约束而不是使用
 Kubernetes 的默认约束。
-
-插件 `PodTopologySpread` 不会为未设置分布约束中所给拓扑键的节点评分。
 {{< /note >}}

 <!--
 If you don't want to use the default Pod spreading constraints for your cluster,
 you can disable those defaults by setting `defaultingType` to `List` and leaving
 empty `defaultConstraints` in the `PodTopologySpread` plugin configuration:
 -->
-如果你不想为集群使用默认的 Pod 分布约束,你可以通过设置 `defaultingType` 参数为 `List`
-`PodTopologySpread` 插件配置中的 `defaultConstraints` 参数置空来禁用默认 Pod 分布约束。
+如果你不想为集群使用默认的 Pod 分布约束,你可以通过设置 `defaultingType` 参数为 `List`
+并将 `PodTopologySpread` 插件配置中的 `defaultConstraints` 参数置空来禁用默认 Pod 分布约束。

 ```yaml
-apiVersion: kubescheduler.config.k8s.io/v1beta1
+apiVersion: kubescheduler.config.k8s.io/v1beta3
 kind: KubeSchedulerConfiguration

 profiles:
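For comparison with the default-constraints sketch earlier, disabling the defaults as described in the text added above (set `defaultingType` to `List` and leave `defaultConstraints` empty) would take roughly this shape, assuming the same illustrative profile name:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration

profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          defaultConstraints: []   # no cluster-level spreading defaults
          defaultingType: List
```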
@@ -613,9 +625,9 @@ scheduled - more packed or more scattered.

 <!--
 - For `PodAffinity`, you can try to pack any number of Pods into qualifying
-topology domain(s)
+topology domain(s)
 - For `PodAntiAffinity`, only one Pod can be scheduled into a
-single topology domain.
+single topology domain.
 -->
 - 对于 `PodAffinity`,你可以尝试将任意数量的 Pod 集中到符合条件的拓扑域中。
 - 对于 `PodAntiAffinity`,只能将一个 Pod 调度到某个拓扑域中。
@@ -627,12 +639,6 @@ cost-saving. This can also help on rolling update workloads and scaling out
 replicas smoothly. See
 [Motivation](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/895-pod-topology-spread#motivation)
 for more details.
-
-
-The "EvenPodsSpread" feature provides flexible options to distribute Pods evenly across different
-topology domains - to achieve high availability or cost-saving. This can also help on rolling update
-workloads and scaling out replicas smoothly.
-See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation) for more details.
 -->
 要实现更细粒度的控制,你可以设置拓扑分布约束来将 Pod 分布到不同的拓扑域下,
 从而实现高可用性或节省成本。这也有助于工作负载的滚动更新和平稳地扩展副本规模。
@@ -642,13 +648,19 @@ See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig
 <!--
 ## Known Limitations

-- Scaling down a Deployment may result in imbalanced Pods distribution.
+- There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in imbalanced Pods distribution.
+You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pods distribution.
+
 - Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)
 -->
 ## 已知局限性

-- Deployment 缩容操作可能导致 Pod 分布不平衡。
-- 具有污点的节点上的 Pods 也会被统计。
+- 当 Pod 被移除时,无法保证约束仍被满足。例如,缩减某 Deployment 的规模时,
+Pod 的分布可能不再均衡。
+你可以使用 [Descheduler](https://github.com/kubernetes-sigs/descheduler)
+来重新实现 Pod 分布的均衡。
+
+- 具有污点的节点上匹配的 Pods 也会被统计。
 参考 [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)。

 ## {{% heading "whatsnext" %}}
