@@ -39,7 +39,7 @@ Kubernetes 为 Pods 提供自己的 IP 地址,并为一组 Pod 提供相同的
## Motivation

Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are created and destroyed
- to match the state of your cluster. Pods are nonpermanent resources.
+ to match the desired state of your cluster. Pods are nonpermanent resources.
If you use a {{< glossary_tooltip term_id="deployment" >}} to run your app,
it can create and destroy Pods dynamically.

@@ -57,7 +57,7 @@ Enter _Services_.

## 动机

- 创建和销毁 Kubernetes {{< glossary_tooltip term_id="pod" text="Pod" >}} 以匹配集群状态。
+ 创建和销毁 Kubernetes {{< glossary_tooltip term_id="pod" text="Pod" >}} 以匹配集群的期望状态。
Pod 是非永久性资源。
如果你使用 {{< glossary_tooltip term_id="deployment">}}
来运行你的应用程序,则它可以动态创建和销毁 Pod。
@@ -189,24 +189,63 @@ field.

<!--
Port definitions in Pods have names, and you can reference these names in the
- `targetPort` attribute of a Service. This works even if there is a mixture
- of Pods in the Service using a single configured name, with the same network
- protocol available via different port numbers.
- This offers a lot of flexibility for deploying and evolving your Services.
- For example, you can change the port numbers that Pods expose in the next
- version of your backend software, without breaking clients.
+ `targetPort` attribute of a Service. For example, we can bind the `targetPort`
+ of the Service to the Pod port in the following way:
+ -->
+ Pod 中的端口定义是有名字的,你可以在 Service 的 `targetPort` 属性中引用这些名称。
+ 例如,我们可以通过以下方式将 Service 的 `targetPort` 绑定到 Pod 端口:
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+   name: nginx
+   labels:
+     app.kubernetes.io/name: proxy
+ spec:
+   containers:
+   - name: nginx
+     image: nginx:1.14.2
+     ports:
+       - containerPort: 80
+         name: http-web-svc
+
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: nginx-service
+ spec:
+   selector:
+     app.kubernetes.io/name: proxy
+   ports:
+   - name: name-of-service-port
+     protocol: TCP
+     port: 80
+     targetPort: http-web-svc
+ ```
+
+ <!--
+ This works even if there is a mixture of Pods in the Service using a single
+ configured name, with the same network protocol available via different
+ port numbers. This offers a lot of flexibility for deploying and evolving
+ your Services. For example, you can change the port numbers that Pods expose
+ in the next version of your backend software, without breaking clients.
+ -->
+ 即使 Service 中混合了多个 Pod,只要这些 Pod 以同一个端口名称、通过不同的端口号提供相同的网络协议,此机制同样适用。
+ 这为 Service 的部署和演进提供了很大的灵活性。
+ 例如,你可以在后端软件的下一个版本中更改 Pod 公开的端口号,而不会破坏客户端。
+

+ <!--
The default protocol for Services is TCP; you can also use any other
[supported protocol](#protocol-support).

As many Services need to expose more than one port, Kubernetes supports multiple
port definitions on a Service object.
Each port definition can have the same `protocol`, or a different one.
-->
- Pod 中的端口定义是有名字的,你可以在服务的 `targetPort` 属性中引用这些名称。
- 即使服务中使用单个配置的名称混合使用 Pod,并且通过不同的端口号提供相同的网络协议,此功能也可以使用。
- 这为部署和发展服务提供了很大的灵活性。
- 例如,你可以更改 Pods 在新版本的后端软件中公开的端口号,而不会破坏客户端。
+

服务的默认协议是 TCP;你还可以使用任何其他[受支持的协议](#protocol-support)。

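上面英文注释中所描述的多端口支持,可以用下面的示意清单来说明(Service 名称、标签与端口号均为假设,并非来自本次变更):

```yaml
# 一个在同一 Service 对象上定义两个端口的示意(名称与端口号均为假设)
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    # 这里两个端口都使用 TCP;每个端口定义也可以使用不同的协议
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  - name: https
    protocol: TCP
    port: 443
    targetPort: 9377
```

使用多个端口时,必须为每个端口指定不冲突的名称。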
@@ -216,9 +255,9 @@ Pod 中的端口定义是有名字的,你可以在服务的 `targetPort` 属
<!--
### Services without selectors

- Services most commonly abstract access to Kubernetes Pods, but they can also
- abstract other kinds of backends.
- For example:
+ Services most commonly abstract access to Kubernetes Pods thanks to the selector,
+ but when used with a corresponding Endpoints object and without a selector, the Service can abstract other kinds of backends,
+ including ones that run outside the cluster. For example:

* You want to have an external database cluster in production, but in your
  test environment you use your own databases.
@@ -232,8 +271,10 @@ For example:
-->
### 没有选择算符的 Service {#services-without-selectors}

- 服务最常见的是抽象化对 Kubernetes Pod 的访问,但是它们也可以抽象化其他种类的后端。
- 实例:
+ 由于选择算符的存在,服务最常见的用法是为 Kubernetes Pod 的访问提供抽象,
+ 但是当与相应的 Endpoints 对象一起使用且没有选择算符时,
+ 服务也可以为其他类型的后端提供抽象,包括在集群外运行的后端。
+ 例如:

* 希望在生产环境中使用外部的数据库集群,但测试环境使用自己的数据库。
* 希望服务指向另一个 {{< glossary_tooltip term_id="namespace" >}} 中或其它集群中的服务。
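一个没有选择算符、配合手动创建的同名 Endpoints 对象使用的 Service 大致如下(名称、端口与 IP 均为示意):

```yaml
# 没有选择算符的 Service:不会自动创建 Endpoints
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
---
# 手动创建的同名 Endpoints 对象,将 Service 指向集群外的后端
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 192.0.2.42   # 示意 IP(取自 TEST-NET-1 文档地址段)
    ports:
      - port: 9376
```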
@@ -590,6 +631,14 @@ You can also set the maximum session sticky time by setting
来设置最大会话停留时间。
(默认值为 10800 秒,即 3 小时)。

+ <!--
+ On Windows, setting the maximum session sticky time for Services is not supported.
+ -->
+ {{< note >}}
+ 在 Windows 上,不支持为服务设置最大会话停留时间。
+ {{< /note >}}
+
+
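上文的会话亲和性及最大会话停留时间可以按如下方式配置(Service 名称与端口为假设):

```yaml
# 基于客户端 IP 的会话亲和性,超时时间设为默认的 10800 秒(3 小时)
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
```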
<!--
## Multi-Port Services

@@ -674,7 +723,7 @@ server will return a 422 HTTP status code to indicate that there's a problem.
<!--
You can set the `spec.externalTrafficPolicy` field to control how traffic from external sources is routed.
Valid values are `Cluster` and `Local`. Set the field to `Cluster` to route external traffic to all ready endpoints
- and `Local` to only route to ready node-local endpoints. If the traffic policy is `Local` and there are are no node-local
+ and `Local` to only route to ready node-local endpoints. If the traffic policy is `Local` and there are no node-local
endpoints, the kube-proxy does not forward any traffic for the relevant Service.
-->

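`spec.externalTrafficPolicy` 的用法可以用如下示意清单来说明(名称与端口为假设):

```yaml
# 外部流量只路由到就绪的本节点(node-local)端点
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: example
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```

设为 `Local` 时,若某节点上没有就绪端点,kube-proxy 不会为该 Service 转发任何流量。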
@@ -751,11 +800,7 @@ Kubernetes 支持两种基本的服务发现模式 —— 环境变量和 DNS。
### Environment variables

When a Pod is run on a Node, the kubelet adds a set of environment variables
- for each active Service. It supports both [Docker links
- compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
- [makeLinkVariables](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49))
- and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
- where the Service name is upper-cased and dashes are converted to underscores.
+ for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, where the Service name is upper-cased and dashes are converted to underscores. It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72)) that are compatible with Docker Engine's "_[legacy container links](https://docs.docker.com/network/links/)_" feature.

For example, the Service `redis-master` which exposes TCP port 6379 and has been
allocated cluster IP address 10.0.0.11, produces the following environment
@@ -764,10 +809,10 @@ variables:
### 环境变量 {#environment-variables}

当 Pod 运行在 `Node` 上,kubelet 会为每个活跃的 Service 添加一组环境变量。
- 它同时支持 [Docker links兼容](https://docs.docker.com/userguide/dockerlinks/) 变量
- (查看 [makeLinkVariables](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49))、
- 简单的 `{SVCNAME}_SERVICE_HOST` 和 `{SVCNAME}_SERVICE_PORT` 变量。
+ kubelet 为 Pod 添加环境变量 `{SVCNAME}_SERVICE_HOST` 和 `{SVCNAME}_SERVICE_PORT`。
这里 Service 的名称需大写,横线被转换成下划线。
+ 它还支持与 Docker Engine 的 "_[legacy container links](https://docs.docker.com/network/links/)_" 特性兼容的变量
+ (参阅 [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72))。

举个例子,一个名称为 `redis-master` 的 Service 暴露了 TCP 端口 6379,
同时给它分配了 Cluster IP 地址 10.0.0.11,这个 Service 生成了如下环境变量:
@@ -1145,13 +1190,15 @@ securityGroupName。
<!--
#### Load balancers with mixed protocol types

- {{< feature-state for_k8s_version="v1.20" state="alpha" >}}
+ {{< feature-state for_k8s_version="v1.24" state="beta" >}}

By default, for LoadBalancer type of Services, when there is more than one port defined, all
ports must have the same protocol, and the protocol must be one which is supported
by the cloud provider.

- If the feature gate `MixedProtocolLBService` is enabled for the kube-apiserver it is allowed to use different protocols when there is more than one port defined.
+ The feature gate `MixedProtocolLBService` (enabled by default for the kube-apiserver as of v1.24) allows the use of
+ different protocols for LoadBalancer type of Services, when there is more than one port defined.
+
-->
#### 混合协议类型的负载均衡器

@@ -1160,51 +1207,51 @@ If the feature gate `MixedProtocolLBService` is enabled for the kube-apiserver i
默认情况下,对于 LoadBalancer 类型的服务,当定义了多个端口时,所有
端口必须具有相同的协议,并且该协议必须是受云提供商支持的协议。

- 如果为 kube-apiserver 启用了 `MixedProtocolLBService` 特性门控,
- 则当定义了多个端口时,允许使用不同的协议。
+ 当服务中定义了多个端口时,特性门控 `MixedProtocolLBService`(在 kube-apiserver v1.24 中默认启用)允许
+ LoadBalancer 类型的服务使用不同的协议。

<!--
- The set of protocols that can be used for LoadBalancer type of Services is still defined by the cloud provider.
+ The set of protocols that can be used for LoadBalancer type of Services is still defined by the cloud provider. If a
+ cloud provider does not support mixed protocols they will provide only a single protocol.
-->
{{< note >}}
可用于 LoadBalancer 类型服务的协议集仍然由云提供商决定。
+ 如果云提供商不支持混合协议,他们将只提供单一协议。
{{< /note >}}
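混合协议的一个典型场景是同时通过 TCP 和 UDP 提供 DNS 服务,大致如下(Service 名称与标签为假设):

```yaml
# 同一个 LoadBalancer Service 上混用 UDP 与 TCP(以 DNS 的 53 端口为例)
apiVersion: v1
kind: Service
metadata:
  name: mixed-protocol-lb
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: dns
  ports:
  - name: dns-udp
    protocol: UDP
    port: 53
  - name: dns-tcp
    protocol: TCP
    port: 53
```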

<!--
#### Disabling load balancer NodePort allocation {#load-balancer-nodeport-allocation}
-->
#### 禁用负载均衡器节点端口分配 {#load-balancer-nodeport-allocation}

- {{< feature-state for_k8s_version="v1.20" state="alpha" >}}
+ {{< feature-state for_k8s_version="v1.24" state="stable" >}}

<!--
Starting in v1.20, you can optionally disable node port allocation for a Service Type=LoadBalancer by setting
the field `spec.allocateLoadBalancerNodePorts` to `false`. This should only be used for load balancer implementations
that route traffic directly to pods as opposed to using node ports. By default, `spec.allocateLoadBalancerNodePorts`
is `true` and type LoadBalancer Services will continue to allocate node ports. If `spec.allocateLoadBalancerNodePorts`
- is set to `false` on an existing Service with allocated node ports, those node ports will NOT be de-allocated automatically.
+ is set to `false` on an existing Service with allocated node ports, those node ports will **not** be de-allocated automatically.
You must explicitly remove the `nodePorts` entry in every Service port to de-allocate those node ports.
- You must enable the `ServiceLBNodePortControl` feature gate to use this field.
-->
- 从 v1.20 版本开始, 你可以通过设置 `spec.allocateLoadBalancerNodePorts` 为 `false`
+ 你可以通过设置 `spec.allocateLoadBalancerNodePorts` 为 `false`
对类型为 LoadBalancer 的服务禁用节点端口分配。
这仅适用于直接将流量路由到 Pod 而不是使用节点端口的负载均衡器实现。
默认情况下,`spec.allocateLoadBalancerNodePorts` 为 `true`,
LoadBalancer 类型的服务继续分配节点端口。
如果现有服务已被分配节点端口,将参数 `spec.allocateLoadBalancerNodePorts`
- 设置为 `false` 时,这些服务上已分配的节点端口不会被自动释放。
+ 设置为 `false` 时,这些服务上已分配的节点端口**不会**被自动释放。
你必须显式地在每个服务端口中删除 `nodePorts` 项以释放对应端口。

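上述字段的一个示意用法如下(Service 名称与端口为假设):

```yaml
# 禁用节点端口分配;仅适用于直接将流量路由到 Pod 的负载均衡器实现
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false
  selector:
    app.kubernetes.io/name: example
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```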
<!--
#### Specifying class of load balancer implementation {#load-balancer-class}
-->
#### 设置负载均衡器实现的类别 {#load-balancer-class}

- {{< feature-state for_k8s_version="v1.22" state="beta" >}}
+ {{< feature-state for_k8s_version="v1.24" state="stable" >}}

<!--
- `spec.loadBalancerClass` enables you to use a load balancer implementation other than the cloud provider default. This feature is available from v1.21, you must enable the `ServiceLoadBalancerClass` feature gate to use this field in v1.21, and the feature gate is enabled by default from v1.22 onwards.
+ `spec.loadBalancerClass` enables you to use a load balancer implementation other than the cloud provider default.
By default, `spec.loadBalancerClass` is `nil` and a `LoadBalancer` type of Service uses
the cloud provider's default load balancer implementation if the cluster is configured with
a cloud provider using the `--cloud-provider` component flag.
@@ -1216,8 +1263,6 @@ the cloud provider) will ignore Services that have this field set.
Once set, it cannot be changed.
-->
`spec.loadBalancerClass` 允许你不使用云提供商的默认负载均衡器实现,转而使用指定的负载均衡器实现。
- 这个特性从 v1.21 版本开始可以使用,你在 v1.21 版本中使用这个字段必须启用 `ServiceLoadBalancerClass`
- 特性门控,这个特性门控从 v1.22 版本及以后默认打开。
默认情况下,`.spec.loadBalancerClass` 的取值是 `nil`,如果集群使用 `--cloud-provider` 配置了云提供商,
`LoadBalancer` 类型服务会使用云提供商的默认负载均衡器实现。
如果设置了 `.spec.loadBalancerClass`,则假定存在某个与所指定的类相匹配的
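`spec.loadBalancerClass` 的一个示意用法如下(类名 `example.com/internal-vip` 为假设的示例值,需与实际部署的负载均衡器控制器所监视的类名一致):

```yaml
# 使用非云提供商默认的负载均衡器实现
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/internal-vip
  selector:
    app.kubernetes.io/name: example
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```

该字段一旦设置就不能更改。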
@@ -1972,7 +2017,8 @@ someone else's choice. That is an isolation failure.

In order to allow you to choose a port number for your Services, we must
ensure that no two Services can collide. Kubernetes does that by allocating each
- Service its own IP address.
+ Service its own IP address from within the `service-cluster-ip-range`
+ CIDR range that is configured for the API server.

To ensure each Service receives a unique IP, an internal allocator atomically
updates a global allocation map in {{< glossary_tooltip term_id="etcd" >}}
@@ -1992,8 +2038,9 @@ Kubernetes 最主要的哲学之一,是用户不应该暴露那些能够导致
对于 Service 资源的设计,这意味着如果用户的选择有可能与他人冲突,那就不要让用户自行选择端口号。
这是一个隔离性的失败。

- 为了使用户能够为他们的 Service 选择一个端口号,我们必须确保不能有2个 Service 发生冲突。
- Kubernetes 通过为每个 Service 分配它们自己的 IP 地址来实现。
+ 为了使用户能够为他们的 Service 选择一个端口号,我们必须确保不能有 2 个 Service 发生冲突。
+ Kubernetes 通过在为 API 服务器配置的 `service-cluster-ip-range` CIDR
+ 范围内为每个服务分配自己的 IP 地址来实现。

为了保证每个 Service 被分配到一个唯一的 IP,需要一个内部的分配器能够原子地更新
{{< glossary_tooltip term_id="etcd" >}} 中的一个全局分配映射表,
@@ -2006,6 +2053,42 @@ Kubernetes 通过为每个 Service 分配它们自己的 IP 地址来实现。
同时 Kubernetes 会通过控制器检查不合理的分配(如管理员干预导致的)
以及清理已被分配但不再被任何 Service 使用的 IP 地址。

+ <!--
+ #### IP address ranges for `type: ClusterIP` Services {#service-ip-static-sub-range}
+
+ {{< feature-state for_k8s_version="v1.24" state="alpha" >}}
+ However, there is a problem with this `ClusterIP` allocation strategy, because a user
+ can also [choose their own address for the service](#choosing-your-own-ip-address).
+ This could result in a conflict if the internal allocator selects the same IP address
+ for another Service.
+ -->
+ #### `type: ClusterIP` 服务的 IP 地址范围 {#service-ip-static-sub-range}
+
+ {{< feature-state for_k8s_version="v1.24" state="alpha" >}}
+ 但是,这种 `ClusterIP` 分配策略存在一个问题,因为用户还可以[为服务选择自己的地址](#choosing-your-own-ip-address)。
+ 如果内部分配器为另一个服务选择了相同的 IP 地址,就可能导致冲突。
+
+ <!--
+ If you enable the `ServiceIPStaticSubrange`
+ [feature gate](/docs/reference/command-line-tools-reference/feature-gates/),
+ the allocation strategy divides the `ClusterIP` range into two bands, based on
+ the size of the configured `service-cluster-ip-range` by using the following formula
+ `min(max(16, cidrSize / 16), 256)`, described as _never less than 16 or more than 256,
+ with a graduated step function between them_. Dynamic IP allocations will be preferentially
+ chosen from the upper band, reducing risks of conflicts with the IPs
+ assigned from the lower band.
+ This allows users to use the lower band of the `service-cluster-ip-range` for their
+ Services with static IPs assigned with a very low risk of running into conflicts.
+ -->
+ 如果启用 `ServiceIPStaticSubrange` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/),
+ 分配策略会根据所配置的 `service-cluster-ip-range` 的大小,使用公式
+ `min(max(16, cidrSize / 16), 256)` 将 `ClusterIP` 范围分成两段。
+ 该公式可以描述为“不小于 16 且不大于 256,并在二者之间存在渐进的步进(Graduated Step)”。
+ 动态 IP 分配将优先从上半段地址中选择,
+ 从而降低与下半段所分配 IP 地址冲突的风险。
+ 这样,用户就能以非常低的冲突风险,将 `service-cluster-ip-range`
+ 的下半段地址以静态方式指派给自己的 Service。
+
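上面的划分公式可以用两个具体的 CIDR 规模来验证(取值仅作示意):

```
band = min(max(16, cidrSize / 16), 256)

service-cluster-ip-range = 10.96.0.0/24  →  cidrSize = 256
band = min(max(16, 256 / 16), 256) = min(max(16, 16), 256) = 16

service-cluster-ip-range = 10.96.0.0/16  →  cidrSize = 65536
band = min(max(16, 65536 / 16), 256) = min(max(16, 4096), 256) = 256
```

也就是说,范围越大,保留给动态分配的上半段越大,但上限是 256 个地址。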
<!--
### Service IP addresses {#ips-and-vips}
