Commit 7f3efab
[zh] sync blog: 2023-04-17-topology-spread-features.md
1 parent 9273545

---
layout: blog
title: "Kubernetes 1.27: More fine-grained pod topology spread policies reached beta"
date: 2023-04-17
slug: fine-grained-pod-topology-spread-features-beta
---

**Authors:** [Alex Wang](https://github.com/denkensk) (Shopee),
[Kante Yin](https://github.com/kerthcet) (DaoCloud),
[Kensei Nakada](https://github.com/sanposhiho) (Mercari)

**Translator:** [Michael Yao](https://github.com/windsonsea) (DaoCloud)

In Kubernetes v1.19, [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
went to general availability (GA).

As time passed, we - SIG Scheduling - received feedback from users,
and, as a result, we're actively working on improving the Topology Spread feature via three KEPs.
All of these features have reached beta in Kubernetes v1.27 and are enabled by default.

This blog post introduces each feature and the use case behind each of them.

## KEP-3022: min domains in Pod Topology Spread

Pod Topology Spread has the `maxSkew` parameter to define the degree to which Pods may be unevenly distributed.

But, there wasn't a way to control the number of domains over which we should spread.
Some users want to force spreading Pods over a minimum number of domains, and if there aren't enough already present, make the cluster-autoscaler provision them.

Kubernetes v1.24 introduced the `minDomains` parameter for pod topology spread constraints,
as an alpha feature.
Via the `minDomains` parameter, you can define the minimum number of domains.

For example, assume there are 3 Nodes with enough capacity,
and a newly created ReplicaSet has the following `topologySpreadConstraints` in its Pod template.

```yaml
...
topologySpreadConstraints:
- maxSkew: 1
  minDomains: 5 # requires 5 Nodes at least (because each Node has a unique hostname).
  whenUnsatisfiable: DoNotSchedule # minDomains is valid only when DoNotSchedule is used.
  topologyKey: kubernetes.io/hostname
  labelSelector:
    matchLabels:
      foo: bar
```

In this case, 3 Pods will be scheduled to those 3 Nodes,
but the other 2 Pods from this ReplicaSet will be unschedulable until more Nodes join the cluster.

You can imagine that the cluster autoscaler provisions new Nodes based on these unschedulable Pods,
and as a result, the replicas are finally spread over 5 Nodes.

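If it helps to see the whole object, here is a minimal sketch of such a ReplicaSet (the name and container image below are placeholders, not taken from the original example):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-replicaset      # hypothetical name, for illustration only
spec:
  replicas: 5
  selector:
    matchLabels:
      foo: bar
  template:
    metadata:
      labels:
        foo: bar
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9   # placeholder image
      topologySpreadConstraints:
      - maxSkew: 1
        minDomains: 5                      # ask for at least 5 distinct hostnames
        whenUnsatisfiable: DoNotSchedule
        topologyKey: kubernetes.io/hostname
        labelSelector:
          matchLabels:
            foo: bar
```
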
## KEP-3094: Take taints/tolerations into consideration when calculating podTopologySpread skew

Before this enhancement, when you deploy a pod with `podTopologySpread` configured, kube-scheduler would
take the Nodes that satisfy the Pod's nodeAffinity and nodeSelector into consideration
in filtering and scoring, but would not care about whether the node taints are tolerated by the incoming pod or not.
This may lead to a node with an untolerated taint becoming the only candidate for spreading, and as a result,
the pod will be stuck in Pending if it doesn't tolerate the taint.

To allow more fine-grained decisions about which Nodes to account for when calculating spreading skew,
Kubernetes 1.25 introduced two new fields within `topologySpreadConstraints` to define node inclusion policies:
`nodeAffinityPolicy` and `nodeTaintsPolicy`.

A manifest that applies these policies looks like the following:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  # Configure a topology spread constraint
  topologySpreadConstraints:
  - maxSkew: <integer>
    # ...
    nodeAffinityPolicy: [Honor|Ignore]
    nodeTaintsPolicy: [Honor|Ignore]
  # other Pod fields go here
```

The `nodeAffinityPolicy` field indicates how Kubernetes treats a Pod's `nodeAffinity` or `nodeSelector` for
pod topology spreading.
If `Honor`, kube-scheduler filters out nodes not matching `nodeAffinity`/`nodeSelector` in the calculation of
spreading skew.
If `Ignore`, all nodes will be included, regardless of whether they match the Pod's `nodeAffinity`/`nodeSelector`
or not.

For backwards compatibility, `nodeAffinityPolicy` defaults to `Honor`.

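As a hedged illustration of `nodeAffinityPolicy: Honor` (the Pod name, the `disktype: ssd` selector, and the image below are made-up placeholders), only nodes matching the Pod's own `nodeSelector` would count toward the skew:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-affinity-honor          # hypothetical name
  labels:
    app: demo
spec:
  nodeSelector:
    disktype: ssd                    # placeholder label; only matching nodes are counted
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9 # placeholder image
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    nodeAffinityPolicy: Honor        # the default; nodes without disktype=ssd are excluded from the skew calculation
    labelSelector:
      matchLabels:
        app: demo
```
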
The `nodeTaintsPolicy` field defines how Kubernetes considers node taints for pod topology spreading.
If `Honor`, only tainted nodes for which the incoming pod has a toleration will be included in the calculation of spreading skew.
If `Ignore`, kube-scheduler will not consider the node taints at all in the calculation of spreading skew, so a node with
a taint that the pod does not tolerate will also be included.

For backwards compatibility, `nodeTaintsPolicy` defaults to `Ignore`.

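A similar sketch for `nodeTaintsPolicy: Honor` (again with a placeholder name, image, and a made-up taint key): only untainted nodes, plus tainted nodes that this Pod tolerates, are counted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-taints-honor            # hypothetical name
  labels:
    app: demo
spec:
  tolerations:
  - key: example.com/dedicated       # placeholder taint key
    operator: Exists
    effect: NoSchedule
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9 # placeholder image
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    nodeTaintsPolicy: Honor          # count only nodes whose taints this Pod tolerates (or that have no taints)
    labelSelector:
      matchLabels:
        app: demo
```
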
The feature was introduced in v1.25 as alpha. By default, it was disabled, so if you wanted to use this feature in v1.25,
you had to explicitly enable the feature gate `NodeInclusionPolicyInPodTopologySpread`. In the following v1.26
release, that associated feature graduated to beta and is enabled by default.

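If you are still on v1.25 and want to opt in, enabling the gate could look roughly like this trimmed excerpt of a kube-scheduler static Pod manifest; this is only a sketch, and depending on how your control plane is deployed the flag may live elsewhere and other components may need the same gate:

```yaml
# Trimmed sketch of a kube-scheduler static Pod; real manifests carry many more
# flags, volumes, and probes. The feature-gate flag is the only point of interest here.
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - name: kube-scheduler
    image: registry.k8s.io/kube-scheduler:v1.25.0
    command:
    - kube-scheduler
    - --feature-gates=NodeInclusionPolicyInPodTopologySpread=true
```
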
## KEP-3243: Respect Pod topology spread after rolling upgrades

Pod Topology Spread uses the field `labelSelector` to identify the group of pods over which
spreading will be calculated. When using topology spreading with Deployments, it is common
practice to use the `labelSelector` of the Deployment as the `labelSelector` in the topology
spread constraints. However, this implies that all pods of a Deployment are part of the spreading
calculation, regardless of whether they belong to different revisions. As a result, when a new revision
is rolled out, spreading will apply across pods from both the old and new ReplicaSets, and so by the
time the new ReplicaSet is completely rolled out and the old one is scaled down, the actual spreading
we are left with may not match expectations, because the deleted pods from the older ReplicaSet will cause
a skewed distribution for the remaining pods. To avoid this problem, in the past users needed to add a
revision label to the Deployment and update it manually on each rolling upgrade (both the label on the
pod template and the `labelSelector` in the `topologySpreadConstraints`).

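To make that workaround concrete, here is a rough sketch (the `revision: v2` value is invented for illustration) of the two places inside a Deployment's Pod template that had to be edited by hand on every rollout:

```yaml
# Excerpt from a Deployment spec, before matchLabelKeys existed: the revision label
# had to be bumped manually in both places for every new rollout.
template:
  metadata:
    labels:
      app: foo
      revision: v2              # updated by hand for each new revision (hypothetical value)
  spec:
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: foo
          revision: v2          # must be kept in sync with the pod template label above
```
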
To solve this problem with a simpler API, Kubernetes v1.25 introduced a new field named
`matchLabelKeys` to `topologySpreadConstraints`. `matchLabelKeys` is a list of pod label keys to select
the pods over which spreading will be calculated. The keys are used to look up values from the labels of
the Pod being scheduled; those key-value labels are ANDed with `labelSelector` to select the group of
existing pods over which spreading will be calculated for the incoming pod.

With `matchLabelKeys`, you don't need to update the `pod.spec` between different revisions.
The controller or operator managing rollouts just needs to set different values for the same label key for different revisions.
The scheduler will assume the values automatically based on `matchLabelKeys`.
For example, if you are configuring a Deployment, you can use the label keyed with
[pod-template-hash](/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label),
which is added automatically by the Deployment controller, to distinguish between different
revisions in a single Deployment.

```yaml
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: foo
  matchLabelKeys:
  - pod-template-hash
```

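For context, here is a minimal sketch of how that constraint might sit inside a Deployment (the name, replica count, and image are placeholders): the Deployment controller stamps `pod-template-hash` onto the Pods of each ReplicaSet, and the scheduler ANDs that key-value pair with the `labelSelector`, so each revision is spread independently:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-deployment          # hypothetical name
spec:
  replicas: 6
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9   # placeholder image
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: foo
        matchLabelKeys:
        - pod-template-hash     # resolved from each incoming Pod's own label at scheduling time
```
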
## Getting involved

These features are managed by Kubernetes [SIG Scheduling](https://github.com/kubernetes/community/tree/master/sig-scheduling).

Please join us and share your feedback. We look forward to hearing from you!

## How can I learn more?

- [Pod Topology Spread Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/) in the Kubernetes documentation
- [KEP-3022: min domains in Pod Topology Spread](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/3022-min-domains-in-pod-topology-spread)
- [KEP-3094: Take taints/tolerations into consideration when calculating PodTopologySpread skew](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/3094-pod-topology-spread-considering-taints)
- [KEP-3243: Respect PodTopologySpread after rolling upgrades](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/3243-respect-pod-topology-spread-after-rolling-upgrades)
