content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md (+17 -17 lines)
@@ -5,13 +5,14 @@ weight: 50
---

<!--
title: Pod Topology Spread Constraints
content_template: templates/concept
weight: 50
---
-->
{{% capture overview %}}
@@ -21,7 +22,7 @@ weight: 50
You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.
- **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details.
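Taken together with the other fields of `topologySpreadConstraints`, a constraint using such a selector might look like the following sketch (the `foo: bar` label, the pod name, and the pause image are illustrative, not from this diff):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar   # Pods matching this selector are counted per topology domain
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```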
@@ -215,7 +216,7 @@ You can tweak the Pod spec to meet various kinds of requirements:

<!--
You can tweak the Pod spec to meet various kinds of requirements:
-->
- Change `maxSkew` to a bigger value, such as "2", so that the incoming Pod can be placed onto "zoneA" as well.
- Change `topologyKey` to "node" so as to distribute the Pods evenly across nodes instead of zones. In the above example, if `maxSkew` remains "1", the incoming Pod can only be placed onto "node4".
- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` to ensure the incoming Pod can always be scheduled (assuming other scheduling APIs are satisfied). However, it is preferred to place it onto the topology domain with fewer matching Pods. (Be aware that this preference is normalized together with other internal scheduling priorities, such as resource usage ratio.)
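The last tweak amounts to changing a single field in the constraint; a sketch, with the surrounding fields assumed from the earlier example:

```yaml
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: zone
  # ScheduleAnyway turns the constraint into a soft preference instead of
  # a hard requirement, so the Pod can still be scheduled when skew exceeds maxSkew
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      foo: bar
```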
<!--
@@ -307,8 +308,8 @@ There are some implicit conventions worth noting here:
- Be aware of what will happen if the incoming Pod's `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the above example, if we remove the incoming Pod's labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged: zoneA still has 2 Pods which hold the label {foo: bar}, and zoneB has 1 Pod which holds the label {foo: bar}. So if this is not what you expect, we recommend that the workload's `topologySpreadConstraints[*].labelSelector` match its own labels.
@@ -326,8 +327,7 @@ There are some implicit conventions worth noting here:
Suppose you have a 5-node cluster ranging from zoneA to zoneC:
-->
Suppose you have a 5-node cluster ranging from zoneA to zoneC:
```
+---------------+---------------+-------+
|     zoneA     |     zoneB     | zoneC |
+-------+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 | node5 |
+-------+-------+-------+-------+-------+
```

@@ -343,7 +343,7 @@ There are some implicit conventions worth noting here:
and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
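A sketch of such a YAML, combining the spread constraint with a `nodeAffinity` term that excludes "zoneC" (the `zone` node-label key, the `foo: bar` label, and the pause image are assumptions carried over from the surrounding examples):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone          # node label; NotIn keeps mypod out of zoneC
            operator: NotIn
            values:
            - zoneC
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```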
The "EvenPodsSpread" feature provides flexible options to distribute Pods evenly across different
topology domains - to achieve high availability or cost-saving. This can also help with rolling updates of
workloads and with scaling out replicas smoothly.
See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation) for more details.
-->
The "EvenPodsSpread" feature provides flexible options to distribute Pods evenly across different topology domains, to achieve high availability or save costs. It also helps with rolling updates of workloads and with scaling out replicas smoothly. See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation) for more details.