
Commit e676d9a

Merge pull request #25016 from tengqm/zh-sync-endpointslice
[zh] Resync endpoint-slice page
2 parents 184777e + 062378d

1 file changed: 276 additions & 51 deletions
@@ -1,23 +1,13 @@
 ---
 title: 端点切片(Endpoint Slices)
-feature:
-  title: 端点切片
-  description: >
-    Kubernetes 集群中网络端点的可扩展跟踪。
-
 content_type: concept
-weight: 10
+weight: 35
 ---
 
 <!--
 title: Endpoint Slices
-feature:
-  title: Endpoint Slices
-  description: >
-    Scalable tracking of network endpoints in a Kubernetes cluster.
-
 content_type: concept
-weight: 10
+weight: 35
 -->
 
 <!-- overview -->
@@ -34,25 +24,65 @@ _端点切片(Endpoint Slices)_ 提供了一种简单的方法来跟踪 Kube
 
 <!-- body -->
 
+<!--
+## Motivation
+
+The Endpoints API has provided a simple and straightforward way of
+tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters
+and {{< glossary_tooltip text="Services" term_id="service" >}} have grown to handle and
+send more traffic to more backend Pods, limitations of that original API became
+more visible.
+Most notably, those included challenges with scaling to larger numbers of
+network endpoints.
+-->
+## 动机 {#motivation}
+
+Endpoints API 提供了在 Kubernetes 中跟踪网络端点的一种简单而直接的方法。
+不幸的是,随着 Kubernetes 集群和{{< glossary_tooltip text="服务" term_id="service" >}}
+逐渐开始为更多的后端 Pod 处理和发送请求,原有 API 的局限性也变得越来越明显。
+最显著的是,扩展到更大量的网络端点带来了诸多挑战。
+
+<!--
+Since all network endpoints for a Service were stored in a single Endpoints
+resource, those resources could get quite large. That affected the performance
+of Kubernetes components (notably the master control plane) and resulted in
+significant amounts of network traffic and processing when Endpoints changed.
+EndpointSlices help you mitigate those issues as well as provide an extensible
+platform for additional features such as topological routing.
+-->
+由于某服务的所有网络端点都保存在同一个 Endpoints 资源中,这类资源可能变得
+非常巨大,进而影响 Kubernetes 组件(比如主控制平面)的性能,并且在
+Endpoints 发生变化时产生大量的网络流量和额外的处理开销。
+EndpointSlice 能够帮助你缓解这些问题,还能为诸如拓扑路由这类的额外
+功能提供一个可扩展的平台。
+
 <!--
 ## Endpoint Slice resources {#endpointslice-resource}
 
 In Kubernetes, an EndpointSlice contains references to a set of network
-endpoints. The EndpointSlice controller automatically creates Endpoint Slices
-for a Kubernetes Service when a selector is specified. These Endpoint Slices
-will include references to any Pods that match the Service selector. Endpoint
-Slices group network endpoints together by unique Service and Port combinations.
+endpoints. The control plane automatically creates EndpointSlices
+for any Kubernetes Service that has a {{< glossary_tooltip text="selector"
+term_id="selector" >}} specified. These EndpointSlices include
+references to any Pods that match the Service selector. EndpointSlices group
+network endpoints together by unique combinations of protocol, port number, and
+Service name.
+The name of an EndpointSlice object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
 
 As an example, here's a sample EndpointSlice resource for the `example`
 Kubernetes Service.
 -->
 ## Endpoint Slice 资源 {#endpointslice-resource}
 
 在 Kubernetes 中,`EndpointSlice` 包含对一组网络端点的引用。
-指定选择器后,EndpointSlice 控制器会自动为 Kubernetes 服务创建 EndpointSlice。
-这些 EndpointSlice 将包含对与服务选择器匹配的所有 Pod 的引用。EndpointSlice 通过唯一的服务和端口组合将网络端点组织在一起。
+控制面会为设置了{{< glossary_tooltip text="选择算符" term_id="selector" >}}的
+Kubernetes 服务自动创建 EndpointSlice。
+这些 EndpointSlice 将包含对与服务选择算符匹配的所有 Pod 的引用。
+EndpointSlice 通过唯一的协议、端口号和服务名称将网络端点组织在一起。
+EndpointSlice 的名称必须是合法的
+[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
 
-例如,这里是 Kubernetes服务 `example` 的示例 EndpointSlice 资源
+例如,下面是 Kubernetes 服务 `example` 的 EndpointSlice 资源示例:
 
 ```yaml
 apiVersion: discovery.k8s.io/v1beta1
@@ -78,19 +108,25 @@ endpoints:
 ```
 
 <!--
-By default, Endpoint Slices managed by the EndpointSlice controller will have no
-more than 100 endpoints each. Below this scale, Endpoint Slices should map 1:1
-with Endpoints and Services and have similar performance.
+By default, the control plane creates and manages EndpointSlices to have no
+more than 100 endpoints each. You can configure this with the
+`--max-endpoints-per-slice`
+{{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}
+flag, up to a maximum of 1000.
 
-Endpoint Slices can act as the source of truth for kube-proxy when it comes to
+EndpointSlices can act as the source of truth for
+{{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}} when it comes to
 how to route internal traffic. When enabled, they should provide a performance
 improvement for services with large numbers of endpoints.
 -->
-默认情况下,由 EndpointSlice 控制器管理的 Endpoint Slice 将有不超过 100 个端点。
-低于此比例时,Endpoint Slices 应与 Endpoints 和服务进行 1:1 映射,并具有相似的性能。
+默认情况下,控制面创建和管理的 EndpointSlice 将包含不超过 100 个端点。
+你可以使用 {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}
+的 `--max-endpoints-per-slice` 标志设置此值,最大值为 1000。
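To make the flag concrete, here is a minimal sketch of setting it on a kubeadm-style control plane, where kube-controller-manager runs as a static Pod. The manifest path, image tag, and the value 500 are illustrative assumptions, not values taken from this page:

```yaml
# Sketch (assumptions noted above): raising the per-slice endpoint cap by
# editing the kube-controller-manager static Pod manifest, typically found
# at /etc/kubernetes/manifests/kube-controller-manager.yaml on kubeadm setups.
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager:v1.19.0
    command:
    - kube-controller-manager
    # Default is 100; the page above gives 1000 as the maximum.
    - --max-endpoints-per-slice=500
```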
 
-当涉及如何路由内部流量时,Endpoint Slices 可以充当 kube-proxy 的真实来源。
-启用该功能后,在服务的 endpoints 规模庞大时会有可观的性能提升。
+当涉及如何路由内部流量时,EndpointSlice 可以充当
+{{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}}
+的决策依据。
+启用该功能后,在服务的端点数量庞大时会有可观的性能提升。
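What "when enabled" looks like depends on the cluster version. As a hedged sketch for a v1.18/v1.19-era cluster, EndpointSlice-based proxying was governed by a kube-proxy feature gate named `EndpointSliceProxying`; verify the gate name against the feature-gates reference for your own version before relying on it:

```yaml
# Sketch: enabling EndpointSlice-based proxying via the kube-proxy
# configuration file. The feature gate name is an assumption based on
# v1.18/v1.19-era documentation; check your cluster version first.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  EndpointSliceProxying: true
```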
 
 <!--
 ## Address Types
@@ -110,40 +146,229 @@ EndpointSlice 支持三种地址类型:
 * FQDN (完全合格的域名)
 
 <!--
-## Motivation
+### Topology information {#topology}
 
-The Endpoints API has provided a simple and straightforward way of
-tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters
-and Services have gotten larger, limitations of that API became more visible.
-Most notably, those included challenges with scaling to larger numbers of
-network endpoints.
+Each endpoint within an EndpointSlice can contain relevant topology information.
+This is used to indicate where an endpoint is, containing information about the
+corresponding Node, zone, and region. When the values are available, the
+control plane sets the following Topology labels for EndpointSlices:
+-->
+### 拓扑信息 {#topology}
 
-Since all network endpoints for a Service were stored in a single Endpoints
-resource, those resources could get quite large. That affected the performance
-of Kubernetes components (notably the master control plane) and resulted in
-significant amounts of network traffic and processing when Endpoints changed.
-Endpoint Slices help you mitigate those issues as well as provide an extensible
-platform for additional features such as topological routing.
+EndpointSlice 中的每个端点都可以包含一定的拓扑信息。
+这一信息用来标明端点的位置,包含对应节点、可用区、区域的信息。
+当这些值可用时,控制面会为 EndpointSlice 设置如下拓扑标签:
+
+<!--
+* `kubernetes.io/hostname` - The name of the Node this endpoint is on.
+* `topology.kubernetes.io/zone` - The zone this endpoint is in.
+* `topology.kubernetes.io/region` - The region this endpoint is in.
+-->
+* `kubernetes.io/hostname` - 端点所在的节点名称
+* `topology.kubernetes.io/zone` - 端点所处的可用区
+* `topology.kubernetes.io/region` - 端点所处的区域
+
+<!--
+The values of these labels are derived from resources associated with each
+endpoint in a slice. The hostname label represents the value of the NodeName
+field on the corresponding Pod. The zone and region labels represent the value
+of the labels with the same names on the corresponding Node.
+-->
+这些标签的值是根据与切片中各个端点相关联的资源来生成的。
+`hostname` 标签代表的是对应 Pod 的 NodeName 字段的取值。
+`zone` 和 `region` 标签代表的是对应节点上同名标签的取值。
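As an illustration of the above, here is a hedged sketch of a single endpoint in a `discovery.k8s.io/v1beta1` EndpointSlice (the API version used in the example earlier on this page), where these topology keys appear in the per-endpoint `topology` map; all names and addresses are invented:

```yaml
# Sketch: one endpoint carrying the three topology keys described above.
# Node, zone, region, and IP values are invented for illustration.
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    kubernetes.io/service-name: example
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.1.2.3"
  conditions:
    ready: true
  topology:
    kubernetes.io/hostname: node-1
    topology.kubernetes.io/zone: us-west2-a
    topology.kubernetes.io/region: us-west2
```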
+
+<!--
+### Management
+
+Most often, the control plane (specifically, the endpoint slice
+{{< glossary_tooltip text="controller" term_id="controller" >}}) creates and
+manages EndpointSlice objects. There are a variety of other use cases for
+EndpointSlices, such as service mesh implementations, that could result in other
+entities or controllers managing additional sets of EndpointSlices.
+-->
+### 管理 {#management}
+
+通常,控制面(更确切地说是端点切片{{< glossary_tooltip text="控制器" term_id="controller" >}})
+会创建和管理 EndpointSlice 对象。EndpointSlice 对象还有一些其他使用场景,
+例如服务网格(Service Mesh)的实现。这些场景都可能导致由其他实体
+或者控制器来管理额外的 EndpointSlice 集合。
+
+<!--
+To ensure that multiple entities can manage EndpointSlices without interfering
+with each other, Kubernetes defines the
+{{< glossary_tooltip term_id="label" text="label" >}}
+`endpointslice.kubernetes.io/managed-by`, which indicates the entity managing
+an EndpointSlice.
+The endpoint slice controller sets `endpointslice-controller.k8s.io` as the value
+for this label on all EndpointSlices it manages. Other entities managing
+EndpointSlices should also set a unique value for this label.
+-->
+为了确保多个实体可以管理 EndpointSlice 而且不会相互产生干扰,Kubernetes 定义了
+{{< glossary_tooltip term_id="label" text="标签" >}}
+`endpointslice.kubernetes.io/managed-by`,用来标明哪个实体在管理某个
+EndpointSlice。端点切片控制器会在自己所管理的所有 EndpointSlice 上将此标签值设置
+为 `endpointslice-controller.k8s.io`。
+管理 EndpointSlice 的其他实体也应该为此标签设置一个唯一值。
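For illustration, a hedged sketch of how a third-party controller might label the slices it manages; the controller name `my-mesh-controller.example.com` is hypothetical, while `endpointslice-controller.k8s.io` is the value the built-in controller uses per the text above:

```yaml
# Sketch: an EndpointSlice owned by a hypothetical service-mesh controller.
# The built-in endpoint slice controller would instead set the label value
# endpointslice-controller.k8s.io, as described above.
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-mesh-abc
  labels:
    kubernetes.io/service-name: example
    endpointslice.kubernetes.io/managed-by: my-mesh-controller.example.com
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.4.5.6"
```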
+
+<!--
+### Ownership
+
+In most use cases, EndpointSlices are owned by the Service that the endpoint
+slice object tracks endpoints for. This ownership is indicated by an owner
+reference on each EndpointSlice as well as a `kubernetes.io/service-name`
+label that enables simple lookups of all EndpointSlices belonging to a Service.
+-->
+### 属主关系 {#ownership}
+
+在大多数场合下,EndpointSlice 都由某个 Service 所有,该端点切片正是
+为该服务跟踪记录其端点的。这一属主关系是通过为每个 EndpointSlice 设置
+属主(owner)引用并同时设置 `kubernetes.io/service-name` 标签来标明的,
+目的是方便查找隶属于某服务的所有 EndpointSlice。
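A hedged sketch of what that ownership looks like on a controller-created slice; the `uid` below is a placeholder:

```yaml
# Sketch: an EndpointSlice owned by the Service "example". The uid is a
# placeholder, not a real object reference.
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    kubernetes.io/service-name: example
  ownerReferences:
  - apiVersion: v1
    kind: Service
    name: example
    uid: 00000000-0000-0000-0000-000000000000
    controller: true
    blockOwnerDeletion: true
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.1.2.3"
```

The label is what makes lookups cheap: `kubectl get endpointslices -l kubernetes.io/service-name=example` lists every slice belonging to that Service.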
+
+<!--
+### EndpointSlice mirroring
+
+In some cases, applications create custom Endpoints resources. To ensure that
+these applications do not need to concurrently write to both Endpoints and
+EndpointSlice resources, the cluster's control plane mirrors most Endpoints
+resources to corresponding EndpointSlices.
+-->
+### EndpointSlice 镜像 {#endpointslice-mirroring}
+
+在某些场合,应用会创建定制的 Endpoints 资源。为了保证这些应用不需要同时
+写入 Endpoints 和 EndpointSlice 资源,集群的控制面会将大多数 Endpoints
+资源镜像到对应的 EndpointSlice 之上。
+
+<!--
+The control plane mirrors Endpoints resources unless:
+
+* the Endpoints resource has a `endpointslice.kubernetes.io/skip-mirror` label
+  set to `true`.
+* the Endpoints resource has a `control-plane.alpha.kubernetes.io/leader`
+  annotation.
+* the corresponding Service resource does not exist.
+* the corresponding Service resource has a non-nil selector.
+-->
+控制面会镜像 Endpoints 资源,但以下情况除外:
+
+* Endpoints 资源上的 `endpointslice.kubernetes.io/skip-mirror` 标签被设置为 `true`。
+* Endpoints 资源包含注解 `control-plane.alpha.kubernetes.io/leader`。
+* 对应的 Service 资源不存在。
+* 对应的 Service 资源的选择算符不为空。
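As a sketch of the first exception in the list above, a hand-managed Endpoints resource that opts out of mirroring; the name and address are illustrative:

```yaml
# Sketch: a custom Endpoints resource that the control plane will not
# mirror, because of the skip-mirror label (first exception above).
apiVersion: v1
kind: Endpoints
metadata:
  name: example-external
  labels:
    endpointslice.kubernetes.io/skip-mirror: "true"
subsets:
- addresses:
  - ip: 192.0.2.42
  ports:
  - port: 80
    protocol: TCP
```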
+
+<!--
+Individual Endpoints resources may translate into multiple EndpointSlices. This
+will occur if an Endpoints resource has multiple subsets or includes endpoints
+with multiple IP families (IPv4 and IPv6). A maximum of 1000 addresses per
+subset will be mirrored to EndpointSlices.
+-->
+每个 Endpoints 资源可能会被转换为多个 EndpointSlice。
+当 Endpoints 资源中包含多个子集(subset)或者包含多个 IP 地址族
+(IPv4 和 IPv6)的端点时,就有可能发生这种状况。
+每个子集最多有 1000 个地址会被镜像到 EndpointSlice 中。
+
+<!--
+### Distribution of EndpointSlices
+
+Each EndpointSlice has a set of ports that applies to all endpoints within the
+resource. When named ports are used for a Service, Pods may end up with
+different target port numbers for the same named port, requiring different
+EndpointSlices. This is similar to the logic behind how subsets are grouped
+with Endpoints.
 -->
-## 动机
+### EndpointSlices 的分布问题 {#distribution-of-endpointslices}
+
+每个 EndpointSlice 都有一组端口值,适用于资源内的所有端点。
+当为服务使用命名端口时,Pod 可能会就同一命名端口获得不同的端口号,因而需要
+不同的 EndpointSlice。这有点像 Endpoints 按子集(subset)分组的逻辑。
+
+<!--
+The control plane tries to fill EndpointSlices as full as possible, but does not
+actively rebalance them. The logic is fairly straightforward:
 
-Endpoints API 提供了一种简单明了的方法在 Kubernetes 中跟踪网络端点。
-不幸的是,随着 Kubernetes 集群与服务的增长,该 API 的局限性变得更加明显。
-最值得注意的是,这包含了扩展到更多网络端点的挑战。
+1. Iterate through existing EndpointSlices, remove endpoints that are no longer
+   desired and update matching endpoints that have changed.
+2. Iterate through EndpointSlices that have been modified in the first step and
+   fill them up with any new endpoints needed.
+3. If there are still new endpoints left to add, try to fit them into a previously
+   unchanged slice and/or create new ones.
+-->
+控制面尝试尽量将 EndpointSlice 填满,不过不会主动地在若干 EndpointSlice 之间
+执行再平衡操作。这里的逻辑也相对直接:
+
+1. 遍历所有现有的 EndpointSlice,移除不再需要的端点,并更新已经发生变化的
+   匹配端点。
+2. 遍历所有在第一步中被更改过的 EndpointSlice,用所需的新端点将其填满。
+3. 如果还有新的端点待添加,尝试将这些端点添加到之前未更改的切片中,
+   或者创建新切片。
+
+<!--
+Importantly, the third step prioritizes limiting EndpointSlice updates over a
+perfectly full distribution of EndpointSlices. As an example, if there are 10
+new endpoints to add and 2 EndpointSlices with room for 5 more endpoints each,
+this approach will create a new EndpointSlice instead of filling up the 2
+existing EndpointSlices. In other words, a single EndpointSlice creation is
+preferable to multiple EndpointSlice updates.
+-->
+这里比较重要的是,与在 EndpointSlice 之间完成最佳的分布相比,第三步中更看重
+限制 EndpointSlice 更新的操作次数。例如,如果有 10 个端点待添加,而有两个
+EndpointSlice 各有 5 个空位,上述方法也会创建一个新的 EndpointSlice,而不是
+将现有的两个 EndpointSlice 都填满。换言之,与执行多次 EndpointSlice 更新操作
+相比,这一方法会优先考虑执行一次 EndpointSlice 创建操作。
+
+<!--
+With kube-proxy running on each Node and watching EndpointSlices, every change
+to an EndpointSlice becomes relatively expensive since it will be transmitted to
+every Node in the cluster. This approach is intended to limit the number of
+changes that need to be sent to every Node, even if it may result in multiple
+EndpointSlices that are not full.
+-->
+由于 kube-proxy 在每个节点上运行并监视 EndpointSlice 状态,EndpointSlice 的
+每次变更都变得相对代价较高,因为这些状态变化要传递到集群中每个节点上。
+这一方法尝试限制要发送到所有节点上的变更消息个数,即使这样做可能会导致有
+多个 EndpointSlice 没有被填满。
+
+<!--
+In practice, this less than ideal distribution should be rare. Most changes
+processed by the EndpointSlice controller will be small enough to fit in an
+existing EndpointSlice, and if not, a new EndpointSlice is likely going to be
+necessary soon anyway. Rolling updates of Deployments also provide a natural
+repacking of EndpointSlices with all Pods and their corresponding endpoints
+getting replaced.
+-->
+在实践中,上面这种并非最理想的分布是很少出现的。大多数被 EndpointSlice
+控制器处理的变更都足够小,可以直接添加到某个已有的 EndpointSlice 中;即使无法
+添加,不管怎样很快也会需要一个新的 EndpointSlice 对象。
+Deployment 的滚动更新也为 EndpointSlice 的重新打包提供了一个自然的机会:所有
+Pod 及其对应的端点在这一期间都会被替换掉。
+
+<!--
+### Duplicate endpoints
+
+Due to the nature of EndpointSlice changes, endpoints may be represented in more
+than one EndpointSlice at the same time. This naturally occurs as changes to
+different EndpointSlice objects can arrive at the Kubernetes client watch/cache
+at different times. Implementations using EndpointSlice must be able to have the
+endpoint appear in more than one slice. A reference implementation of how to
+perform endpoint deduplication can be found in the `EndpointSliceCache`
+implementation in `kube-proxy`.
+-->
+### 重复的端点 {#duplicate-endpoints}
 
-由于服务的所有网络端点都存储在单个 Endpoints 资源中,
-因此这些资源可能会变得很大。
-这影响了 Kubernetes 组件(尤其是主控制平面)的性能,并在 Endpoints
-发生更改时导致大量网络流量和处理
-Endpoint Slices 可帮助您缓解这些问题并提供可扩展的
-附加特性(例如拓扑路由)平台
+由于 EndpointSlice 变化的自身特点,端点可能会同时出现在不止一个 EndpointSlice
+中。鉴于不同的 EndpointSlice 对象在不同时刻到达 Kubernetes 客户端的监视/缓存中,
+这种情况的出现是很自然的。
+使用 EndpointSlice 的实现必须能够处理端点出现在多个切片中的状况。
+关于如何执行端点去重(deduplication)的参考实现,你可以在 `kube-proxy` 的
+`EndpointSliceCache` 实现中找到。
 
 ## {{% heading "whatsnext" %}}
 
 <!--
 * [Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpoint-slices)
 * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
 -->
-* [启用端点切片](/zh/docs/tasks/administer-cluster/enabling-endpointslices)
-* 阅读[使用服务链接应用](/zh/docs/concepts/services-networking/connect-applications-service/)
+* 了解[启用 EndpointSlice](/zh/docs/tasks/administer-cluster/enabling-endpointslices)
+* 阅读[使用服务连接应用](/zh/docs/concepts/services-networking/connect-applications-service/)
 