@@ -34,38 +34,7 @@ Endpoints.
 <!-- body -->
 
 <!--
-## Motivation
-
-The Endpoints API has provided a simple and straightforward way of
-tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters
-and {{< glossary_tooltip text="Services" term_id="service" >}} have grown to handle and
-send more traffic to more backend Pods, limitations of that original API became
-more visible.
-Most notably, those included challenges with scaling to larger numbers of
-network endpoints.
--->
-## 动机 {#motivation}
-
-Endpoints API 提供了在 Kubernetes 中跟踪网络端点的一种简单而直接的方法。遗憾的是,随着 Kubernetes
-集群和{{< glossary_tooltip text="服务" term_id="service" >}}逐渐开始为更多的后端 Pod 处理和发送请求,
-原来的 API 的局限性变得越来越明显。最重要的是那些因为要处理大量网络端点而带来的挑战。
-
-<!--
-Since all network endpoints for a Service were stored in a single Endpoints
-resource, those resources could get quite large. That affected the performance
-of Kubernetes components (notably the master control plane) and resulted in
-significant amounts of network traffic and processing when Endpoints changed.
-EndpointSlices help you mitigate those issues as well as provide an extensible
-platform for additional features such as topological routing.
--->
-由于任一 Service 的所有网络端点都保存在同一个 Endpoints 资源中,
-这类资源可能变得非常巨大,而这一变化会影响到 Kubernetes
-组件(比如主控组件)的性能,并在 Endpoints 变化时产生大量的网络流量和额外的处理。
-EndpointSlice 能够帮助你缓解这一问题,
-还能为一些诸如拓扑路由这类的额外功能提供一个可扩展的平台。
-
-<!--
-## EndpointSlice resources {#endpointslice-resource}
+## EndpointSlice API {#endpointslice-resource}
 
 In Kubernetes, an EndpointSlice contains references to a set of network
 endpoints. The control plane automatically creates EndpointSlices
@@ -77,10 +46,10 @@ Service name.
 The name of an EndpointSlice object must be a valid
 [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
 
-As an example, here's a sample EndpointSlice resource for the `example`
+As an example, here's a sample EndpointSlice object that's owned by the `example`
 Kubernetes Service.
 -->
-## EndpointSlice 资源 {#endpointslice-resource}
+## EndpointSlice API {#endpointslice-resource}
 
 在 Kubernetes 中,`EndpointSlice` 包含对一组网络端点的引用。
 控制面会自动为设置了{{< glossary_tooltip text="选择算符" term_id="selector" >}}的
@@ -90,7 +59,7 @@ EndpointSlice 通过唯一的协议、端口号和 Service 名称将网络端点
 EndpointSlice 的名称必须是合法的
 [DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
 
-例如,下面是 Kubernetes Service `example` 的 EndpointSlice 资源示例。
+例如,下面是 Kubernetes Service `example` 所拥有的 EndpointSlice 对象示例。
 
 ```yaml
 apiVersion: discovery.k8s.io/v1
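The `yaml` block opened here is cut off by the hunk boundary; the rest of the manifest is unchanged context in the underlying page. For reference, a minimal EndpointSlice of the kind this section describes looks roughly like the following sketch (the name, port, and address values are illustrative, not part of this diff):

```yaml
# Illustrative EndpointSlice owned by the `example` Service.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    # Links this slice to its owning Service.
    kubernetes.io/service-name: example
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.1.2.3"
    conditions:
      ready: true
```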
@@ -123,8 +92,7 @@ flag, up to a maximum of 1000.
 
 EndpointSlices can act as the source of truth for
 {{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}} when it comes to
-how to route internal traffic. When enabled, they should provide a performance
-improvement for services with large numbers of endpoints.
+how to route internal traffic.
 -->
 默认情况下,控制面创建和管理的 EndpointSlice 将包含不超过 100 个端点。
 你可以使用 {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}
@@ -133,7 +101,6 @@ improvement for services with large numbers of endpoints.
 当涉及如何路由内部流量时,EndpointSlice 可以充当
 {{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}}
 的决策依据。
-启用该功能后,在服务的端点数量庞大时会有可观的性能提升。
 
 <!--
 ### Address types
@@ -143,6 +110,10 @@ EndpointSlices support three address types:
 * IPv4
 * IPv6
 * FQDN (Fully Qualified Domain Name)
+
+Each `EndpointSlice` object represents a specific IP address type. If you have
+a Service that is available via IPv4 and IPv6, there will be at least two
+`EndpointSlice` objects (one for IPv4, and one for IPv6).
 -->
 ### 地址类型
 
@@ -152,6 +123,9 @@ EndpointSlice 支持三种地址类型:
 * IPv6
 * FQDN (完全合格的域名)
 
+每个 `EndpointSlice` 对象代表一个特定的 IP 地址类型。如果你有一个支持 IPv4 和 IPv6 的 Service,
+那么将至少有两个 `EndpointSlice` 对象(一个用于 IPv4,一个用于 IPv6)。
+
 <!--
 ### Conditions
 
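The lines added above state that a dual-stack Service is backed by at least one EndpointSlice per address type. A hypothetical pair for a Service named `example` could look like this (all names and addresses invented for illustration; ports omitted for brevity):

```yaml
# One slice per addressType for the same dual-stack Service.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-ipv4
  labels:
    kubernetes.io/service-name: example
addressType: IPv4
endpoints:
  - addresses: ["10.1.2.3"]
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-ipv6
  labels:
    kubernetes.io/service-name: example
addressType: IPv6
endpoints:
  - addresses: ["2001:db8::1234"]
```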
@@ -434,7 +408,7 @@ getting replaced.
 -->
 在实践中,上面这种并非最理想的分布是很少出现的。大多数被 EndpointSlice
 控制器处理的变更都是足够小的,可以添加到某已有 EndpointSlice 中去的。
-并且,假使无法添加到已有的切片中,不管怎样都会快就会需要一个新的
+并且,假使无法添加到已有的切片中,不管怎样都很快就会创建一个新的
 EndpointSlice 对象。Deployment 的滚动更新为重新为 EndpointSlice
 打包提供了一个自然的机会,所有 Pod 及其对应的端点在这一期间都会被替换掉。
 
@@ -443,20 +417,82 @@ EndpointSlice 对象。Deployment 的滚动更新为重新为 EndpointSlice
 
 Due to the nature of EndpointSlice changes, endpoints may be represented in more
 than one EndpointSlice at the same time. This naturally occurs as changes to
-different EndpointSlice objects can arrive at the Kubernetes client watch/cache
-at different times. Implementations using EndpointSlice must be able to have the
-endpoint appear in more than one slice. A reference implementation of how to
-perform endpoint deduplication can be found in the `EndpointSliceCache`
-implementation in `kube-proxy`.
+different EndpointSlice objects can arrive at the Kubernetes client watch / cache
+at different times.
 -->
 ### 重复的端点 {#duplicate-endpoints}
 
 由于 EndpointSlice 变化的自身特点,端点可能会同时出现在不止一个 EndpointSlice
 中。鉴于不同的 EndpointSlice 对象在不同时刻到达 Kubernetes 的监视/缓存中,
 这种情况的出现是很自然的。
-使用 EndpointSlice 的实现必须能够处理端点出现在多个切片中的状况。
-关于如何执行端点去重(deduplication)的参考实现,你可以在 `kube-proxy` 的
-`EndpointSlice` 实现中找到。
+
+{{< note >}}
+
+<!--
+Clients of the EndpointSlice API must be able to handle the situation where
+a particular endpoint address appears in more than one slice.
+
+You can find a reference implementation for how to perform this endpoint deduplication
+as part of the `EndpointSliceCache` code within `kube-proxy`.
+-->
+EndpointSlice API 的客户端必须能够处理特定端点地址出现在多个 EndpointSlice 中的情况。
+
+你可以在 `kube-proxy` 中的 `EndpointSliceCache` 代码中找到有关如何执行这个端点去重的参考实现。
+
+{{< /note >}}
+
+<!--
+## Comparison with Endpoints {#motivation}
+
+The original Endpoints API provided a simple and straightforward way of
+tracking network endpoints in Kubernetes. As Kubernetes clusters
+and {{< glossary_tooltip text="Services" term_id="service" >}} grew to handle
+more traffic and to send more traffic to more backend Pods, the
+limitations of that original API became more visible.
+Most notably, those included challenges with scaling to larger numbers of
+network endpoints.
+-->
+## 与 Endpoints 的比较 {#motivation}
+原来的 Endpoints API 提供了在 Kubernetes 中跟踪网络端点的一种简单而直接的方法。随着 Kubernetes
+集群和{{< glossary_tooltip text="服务" term_id="service" >}}逐渐开始为更多的后端 Pod 处理和发送请求,
+原来的 API 的局限性变得越来越明显。最明显的是那些因为要处理大量网络端点而带来的挑战。
+
+<!--
+Since all network endpoints for a Service were stored in a single Endpoints
+object, those Endpoints objects could get quite large. For Services that stayed
+stable (the same set of endpoints over a long period of time) the impact was
+less noticeable; even then, some use cases of Kubernetes weren't well served.
+-->
+由于任一 Service 的所有网络端点都保存在同一个 Endpoints 对象中,这些 Endpoints
+对象可能变得非常巨大。对于保持稳定的服务(长时间使用同一组端点),影响不太明显;
+即便如此,Kubernetes 的一些使用场景也没有得到很好的服务。
+
+
+<!--
+When a Service had a lot of backend endpoints and the workload was either
+scaling frequently, or rolling out new changes frequently, each update to
+the single Endpoints object for that Service meant a lot of traffic between
+Kubernetes cluster components (within the control plane, and also between
+nodes and the API server). This extra traffic also had a cost in terms of
+CPU use.
+-->
+当某 Service 存在很多后端端点并且该工作负载频繁扩缩或上线新更改时,对该 Service 的单个 Endpoints
+对象的每次更新都意味着(在控制平面内以及在节点和 API 服务器之间)Kubernetes 集群组件之间会出现大量流量。
+这种额外的流量在 CPU 使用方面也有开销。
+
+<!--
+With EndpointSlices, adding or removing a single Pod triggers the same _number_
+of updates to clients that are watching for changes, but the size of those
+update messages is much smaller at large scale.
+-->
+使用 EndpointSlices 时,添加或移除单个 Pod 对于正监视变更的客户端会触发相同数量的更新,
+但这些更新消息的大小在大规模场景下要小得多。
+
+<!--
+EndpointSlices also enabled innovation around new features such as dual-stack
+networking and topology-aware routing.
+-->
+EndpointSlices 还支持围绕双栈网络和拓扑感知路由等新功能的创新。
 
 ## {{% heading "whatsnext" %}}
 
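The note introduced in this hunk requires EndpointSlice API clients to deduplicate endpoint addresses that appear in more than one slice. As a rough sketch of that idea only, not kube-proxy's actual `EndpointSliceCache` implementation, and with all type and function names invented for illustration:

```go
// Sketch: merging the endpoints of several EndpointSlices for one Service
// while dropping duplicate addresses. The endpointSlice type here is a
// minimal stand-in for discovery.k8s.io/v1 EndpointSlice, not the real API type.
package main

import (
	"fmt"
	"sort"
)

type endpointSlice struct {
	name      string
	addresses []string
}

// mergeEndpoints returns the union of addresses across all slices,
// keeping each address once even if several slices list it.
func mergeEndpoints(slices []endpointSlice) []string {
	seen := map[string]bool{}
	var merged []string
	for _, s := range slices {
		for _, addr := range s.addresses {
			if !seen[addr] {
				seen[addr] = true
				merged = append(merged, addr)
			}
		}
	}
	sort.Strings(merged)
	return merged
}

func main() {
	// During a rollout, 10.1.2.4 may briefly appear in two slices.
	slices := []endpointSlice{
		{name: "example-abc", addresses: []string{"10.1.2.3", "10.1.2.4"}},
		{name: "example-def", addresses: []string{"10.1.2.4", "10.1.2.5"}},
	}
	fmt.Println(mergeEndpoints(slices)) // [10.1.2.3 10.1.2.4 10.1.2.5]
}
```

A real client would additionally track which slice each address came from, so that a stale or replaced slice can be dropped before re-merging; the sketch shows only the union/dedup step.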