@@ -4,23 +4,23 @@ content_type: concept
weight: 40
---

- <!--
+ <!--
title: Pod Topology Spread Constraints
content_type: concept
- weight: 40
+ weight: 40
-->

<!-- overview -->

- <!--
+ <!--
You can use _topology spread constraints_ to control how
{{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster
among failure-domains such as regions, zones, nodes, and other user-defined topology
domains. This can help to achieve high availability as well as efficient resource
utilization.

You can set [cluster-level constraints](#cluster-level-default-constraints) as a default,
- or configure topology spread constraints for individual workloads.
+ or configure topology spread constraints for individual workloads.
-->
You can use **topology spread constraints** to control how
{{< glossary_tooltip text="Pod" term_id="Pod" >}} are spread across failure domains in your cluster,
@@ -31,7 +31,7 @@ or configure topology spread constraints for individual workloads.
<!-- body -->

- <!--
+ <!--
## Motivation

Imagine that you have a cluster of up to twenty nodes, and you want to run a
@@ -43,7 +43,7 @@ same node: you would run the risk that a single node failure takes your workload
offline.

In addition to this basic usage, there are some advanced usage examples that
- enable your workloads to benefit on high availability and cluster utilization.
+ enable your workloads to benefit from high availability and cluster utilization.
-->
## Motivation {#motivation}
@@ -55,7 +55,7 @@ enable your workloads to benefit on high availability and cluster utilization.
In addition to this basic usage, there are some advanced use cases that let your workloads benefit from high availability and improved cluster utilization.

- <!--
+ <!--
As you scale up and run more Pods, a different concern becomes important. Imagine
that you have three nodes running five Pods each. The nodes have enough capacity
to run that many replicas; however, the clients that interact with this workload
@@ -81,7 +81,7 @@ Pod topology spread constraints offer you a declarative way to configure that.

Pod topology spread constraints give you a declarative way to configure this.

- <!--
+ <!--
## `topologySpreadConstraints` field

The Pod API includes a field, `spec.topologySpreadConstraints`. The usage of this field looks like
@@ -111,15 +111,15 @@ spec:
  # other Pod fields go here
```

- <!--
+ <!--
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints` or
refer to [scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of the API reference for Pod.
-->
You can run `kubectl explain Pod.spec.topologySpreadConstraints` or refer to the
[scheduling](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of the
Pod API reference to learn more about this field.

- <!--
+ <!--
### Spread constraint definition

You can define one or multiple `topologySpreadConstraints` entries to instruct the
@@ -147,7 +147,7 @@ your cluster. Those fields are:
-->
- **maxSkew** describes the degree to which Pods may be unevenly distributed. You must specify
  this field, and the value must be greater than zero. Its semantics change depending on the
  value of `whenUnsatisfiable`:
-
+
  - If you select `whenUnsatisfiable: DoNotSchedule`, then `maxSkew` defines the maximum permitted
    difference between the number of matching Pods in the target topology and the **global minimum**
    (the minimum number of matching Pods in an eligible domain, or zero if the number of eligible
    domains is less than `minDomains`). For example, if you have 3 zones with 2, 2 and 1 matching
    Pods respectively, then with `maxSkew` set to 1,
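The `maxSkew` semantics above can be sketched as a minimal Pod manifest (a sketch only: the Pod name and the `app: demo` label are hypothetical, not taken from this page):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: demo-pod            # hypothetical name, for illustration only
  labels:
    app: demo               # must match the labelSelector below
spec:
  topologySpreadConstraints:
  - maxSkew: 1              # each zone's matching-Pod count may exceed the global minimum by at most 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule   # hard requirement: keep the Pod Pending rather than violate the skew
    labelSelector:
      matchLabels:
        app: demo
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```

With zones holding 2, 2 and 1 matching Pods, `maxSkew: 1` only permits the next Pod in the zone holding 1.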
@@ -161,7 +161,7 @@ your cluster. Those fields are:
-->
- **minDomains** indicates a minimum number of eligible domains. This field is optional.
  A domain is a particular instance of a topology. An eligible domain is a domain whose
  nodes match the node selector.
-
+
  {{< note >}}
  <!--
  The `minDomains` field is a beta field and disabled by default in 1.25. You can enable it by enabling the
@@ -171,7 +171,7 @@ your cluster. Those fields are:
  You can enable this field by enabling the `MinDomainsInPodTopologySpread`
  [feature gate](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/).
  {{< /note >}}
-
+
  <!--
  - The value of `minDomains` must be greater than 0, when specified.
    You can only specify `minDomains` in conjunction with `whenUnsatisfiable: DoNotSchedule`.
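Assuming the `MinDomainsInPodTopologySpread` feature gate is enabled, the rules above (a value greater than 0, combined only with `DoNotSchedule`) look like this in a constraint (a sketch; the `app: demo` label is hypothetical):

```yaml
topologySpreadConstraints:
- maxSkew: 1
  minDomains: 3             # must be > 0; with fewer than 3 eligible zones, the global minimum is treated as 0
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule   # minDomains cannot be combined with ScheduleAnyway
  labelSelector:
    matchLabels:
      app: demo
```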
@@ -276,10 +276,10 @@ your cluster. Those fields are:

  {{< note >}}
  <!--
- The `nodeAffinityPolicy` is an alpha-level field added in 1.25. You can disable it by disabling the
+ The `nodeAffinityPolicy` is a beta-level field and enabled by default in 1.26. You can disable it by disabling the
  `NodeInclusionPolicyInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
  -->
- `nodeAffinityPolicy` is an alpha-level field added in 1.25.
+ `nodeAffinityPolicy` is a beta-level field enabled by default in 1.26.
  You can disable this field by disabling the `NodeInclusionPolicyInPodTopologySpread`
  [feature gate](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/).
  {{< /note >}}
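With the `NodeInclusionPolicyInPodTopologySpread` feature gate enabled, the policy can be stated explicitly in a constraint (a sketch; the `app: demo` label is hypothetical):

```yaml
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  nodeAffinityPolicy: Honor   # Honor (the default): only nodes matching the Pod's nodeAffinity/nodeSelector
                              # count toward skew; Ignore counts all nodes
  labelSelector:
    matchLabels:
      app: demo
```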
@@ -727,7 +727,7 @@ There are some implicit conventions worth noting here:
- Only Pods in the same namespace as the incoming Pod can be matching candidates.

- The scheduler ignores nodes without any `topologySpreadConstraints[*].topologyKey` label. This implies that:
-
+
  1. Pods located on those nodes do not affect the `maxSkew` calculation. In the example above,
     suppose node `node1` does not have the "zone" label; the 2 Pods on it are disregarded, and
     so the incoming Pod is scheduled into zone `A`.
  2. The incoming Pod has no chance of being scheduled onto this kind of node. In the example above,
@@ -904,8 +904,8 @@ how Pods are scheduled relative to each other (more packed or more spread out).

`podAntiAffinity`
: repels Pods. If this is set to the `requiredDuringSchedulingIgnoredDuringExecution` mode,
- then only a single Pod can be scheduled into a single topology domain; if you choose `preferredDuringSchedulingIgnoredDuringExecution`,
- then you lose the ability to enforce this constraint.
+ then only a single Pod can be scheduled into a single topology domain; if you choose `preferredDuringSchedulingIgnoredDuringExecution`,
+ then you lose the ability to enforce this constraint.

<!--
For finer control, you can specify topology spread constraints to distribute
@@ -937,7 +937,7 @@ section of the enhancement proposal about Pod topology spread constraints.
## Known limitations {#known-limitations}

- There is no guarantee that the constraints remain satisfied when Pods are removed. For example,
  scaling down a Deployment may result in an imbalanced Pod distribution.
-
+
  You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pod distribution.

- Matching Pods on tainted nodes are also counted.