---
title: 服务(Service)
+ api_metadata:
+ - apiVersion: "v1"
+ kind: "Service"
feature:
title: 服务发现与负载均衡
description: >
@@ -14,6 +17,9 @@ weight: 10
reviewers:
- bprashanth
title: Service
+ api_metadata:
+ - apiVersion: "v1"
+ kind: "Service"
feature:
title: Service discovery and load balancing
description: >
@@ -199,7 +205,8 @@ spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- - protocol: TCP
+ - name: http
+ protocol: TCP
port: 80
targetPort: 9376
```
@@ -386,8 +393,7 @@ metadata:
kubernetes.io/service-name: my-service
addressType: IPv4
ports:
- - name: '' # empty because port 9376 is not assigned as a well-known
- # port (by IANA)
+ - name: http # should match with the name of the service port defined above
appProtocol: http
protocol: TCP
port: 9376
@@ -409,7 +415,7 @@ metadata:
kubernetes.io/service-name: my-service
addressType: IPv4
ports:
- - name: '' # 留空,因为 port 9376 未被 IANA 分配为已注册端口
+ - name: http # 应与上面定义的服务端口的名称匹配
appProtocol: http
protocol: TCP
port: 9376
@@ -1095,6 +1101,25 @@ can define your own (provider specific) annotations on the Service that specify
或者你可以在 Service 上定义自己的(特定于提供商的)注解,以指定等效的细节。
{{< /note >}}

+ <!--
+ #### Node liveness impact on load balancer traffic
+
+ Load balancer health checks are critical to modern applications. They are used to
+ determine which server (virtual machine, or IP address) the load balancer should
+ dispatch traffic to. The Kubernetes APIs do not define how health checks have to be
+ implemented for Kubernetes managed load balancers, instead it's the cloud providers
+ (and the people implementing integration code) who decide on the behavior. Load
+ balancer health checks are extensively used within the context of supporting the
+ `externalTrafficPolicy` field for Services.
+ -->
+ #### 节点存活态对负载均衡器流量的影响
+
+ 负载均衡器健康检查对于现代应用程序至关重要,
+ 它们用于确定负载均衡器应将流量分派到哪个服务器(虚拟机或 IP 地址)。
+ Kubernetes API 没有定义 Kubernetes 托管的负载均衡器要如何实现健康检查,
+ 而是由云提供商(以及集成代码的实现人员)决定其行为。
+ 负载均衡器健康检查被广泛用于支持 Service 的 `externalTrafficPolicy` 字段。
+
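As a sketch of how `externalTrafficPolicy` interacts with these health checks, a Service like the following (metadata and selector values are illustrative, reusing names from the example earlier in this page) asks the cloud load balancer to deliver traffic only to nodes that currently have ready, node-local endpoints:

```yaml
# Illustrative sketch: a LoadBalancer Service using externalTrafficPolicy: Local.
# With this setting, traffic is only delivered to endpoints on the receiving node,
# so the load balancer's health checks are what steer traffic away from nodes
# that have no ready endpoints for this Service.
apiVersion: v1
kind: Service
metadata:
  name: my-service            # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
```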

<!--
#### Load balancers with mixed protocol types
-->
@@ -1200,14 +1225,14 @@ Unprefixed names are reserved for end-users.
{{< feature-state feature_gate_name="LoadBalancerIPMode" >}}

<!--
- Starting as Alpha in Kubernetes 1.29,
+ As a Beta feature in Kubernetes 1.30,
a [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
named `LoadBalancerIPMode` allows you to set the `.status.loadBalancer.ingress.ipMode`
for a Service with `type` set to `LoadBalancer`.
The `.status.loadBalancer.ingress.ipMode` specifies how the load-balancer IP behaves.
It may be specified only when the `.status.loadBalancer.ingress.ip` field is also specified.
-->
- 这是从 Kubernetes 1.29 开始的一个 Alpha 级别特性,通过名为 `LoadBalancerIPMode`
+ 作为 Kubernetes 1.30 中的 Beta 级别特性,通过名为 `LoadBalancerIPMode`
的[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)允许你为
`type` 为 `LoadBalancer` 的服务设置 `.status.loadBalancer.ingress.ipMode`。
`.status.loadBalancer.ingress.ipMode` 指定负载均衡器 IP 的行为方式。
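As an illustration (users do not set this by hand: the status stanza is written by the cloud provider's integration), a populated status might look like the following, assuming the `VIP` and `Proxy` values described in the feature gate's documentation:

```yaml
# Illustrative status stanza on a Service of type LoadBalancer.
status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.127   # example address from the TEST-NET-1 documentation range
      ipMode: Proxy     # the load-balancer IP behaves as a proxy endpoint;
                        # the default behavior corresponds to VIP
```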
@@ -1452,9 +1477,7 @@ You can use a headless Service to interface with other service discovery mechani
without being tied to Kubernetes' implementation.

For headless Services, a cluster IP is not allocated, kube-proxy does not handle
- these Services, and there is no load balancing or proxying done by the platform
- for them. How DNS is automatically configured depends on whether the Service has
- selectors defined:
+ these Services, and there is no load balancing or proxying done by the platform for them.
-->
## 无头服务(Headless Services) {#headless-services}
@@ -1465,7 +1488,33 @@ selectors defined:
无头 Service 不会获得集群 IP,kube-proxy 不会处理这类 Service,
而且平台也不会为它们提供负载均衡或路由支持。
- 取决于 Service 是否定义了选择算符,DNS 会以不同的方式被自动配置。
+
+ <!--
+ A headless Service allows a client to connect to whichever Pod it prefers, directly. Services that are headless don't
+ configure routes and packet forwarding using
+ [virtual IP addresses and proxies](/docs/reference/networking/virtual-ips/); instead, headless Services report the
+ endpoint IP addresses of the individual pods via internal DNS records, served through the cluster's
+ [DNS service](/docs/concepts/services-networking/dns-pod-service/).
+ To define a headless Service, you make a Service with `.spec.type` set to ClusterIP (which is also the default for `type`),
+ and you additionally set `.spec.clusterIP` to None.
+ -->
+ 无头 Service 允许客户端直接连接到它所偏好的任一 Pod。
+ 无头 Service 不使用[虚拟 IP 地址和代理](/zh-cn/docs/reference/networking/virtual-ips/)
+ 配置路由和数据包转发;相反,无头 Service 通过内部 DNS 记录报告各个
+ Pod 的端点 IP 地址,这些 DNS 记录是由集群的
+ [DNS 服务](/zh-cn/docs/concepts/services-networking/dns-pod-service/)所提供的。
+ 要定义无头 Service,你需要将 `.spec.type` 设置为 ClusterIP(这也是 `type`
+ 的默认值),并进一步将 `.spec.clusterIP` 设置为 `None`。
+
+ <!--
+ The string value None is a special case and is not the same as leaving the `.spec.clusterIP` field unset.
+
+ How DNS is automatically configured depends on whether the Service has selectors defined:
+ -->
+ 字符串值 None 是一种特殊情况,与未设置 `.spec.clusterIP` 字段不同。
+
+ DNS 如何自动配置取决于 Service 是否定义了选择算符:
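The definition just described can be sketched as a manifest (metadata and selector values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # hypothetical name
spec:
  clusterIP: None             # the special value that makes the Service headless
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
```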

<!--
### With selectors
@@ -1655,6 +1704,56 @@ mechanism Kubernetes provides to expose a Service with a virtual IP address.
阅读[虚拟 IP 和 Service 代理](/zh-cn/docs/reference/networking/virtual-ips/)以了解
Kubernetes 提供的使用虚拟 IP 地址公开服务的机制。

+ <!--
+ ### Traffic distribution
+ -->
+ ### 流量分发
+
+ <!--
+ The `.spec.trafficDistribution` field provides another way to influence traffic
+ routing within a Kubernetes Service. While traffic policies focus on strict
+ semantic guarantees, traffic distribution allows you to express _preferences_
+ (such as routing to topologically closer endpoints). This can help optimize for
+ performance, cost, or reliability. This optional field can be used if you have
+ enabled the `ServiceTrafficDistribution` [feature
+ gate](/docs/reference/command-line-tools-reference/feature-gates/) for your
+ cluster and all of its nodes. In Kubernetes {{< skew currentVersion >}}, the
+ following field value is supported:
+ -->
+ `.spec.trafficDistribution` 字段提供了另一种影响 Kubernetes Service 内流量路由的方法。
+ 流量策略侧重于严格的语义保证,而流量分发允许你表达一定的**偏好**(例如路由到拓扑上更邻近的端点)。
+ 这一机制有助于优化性能、成本或可靠性。
+ 如果你为集群及其所有节点启用了 `ServiceTrafficDistribution`
+ [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/),
+ 则可以使用此可选字段。
+ Kubernetes {{< skew currentVersion >}} 支持以下字段值:
+
+ <!--
+ `PreferClose`
+ : Indicates a preference for routing traffic to endpoints that are topologically
+ proximate to the client. The interpretation of "topologically proximate" may
+ vary across implementations and could encompass endpoints within the same
+ node, rack, zone, or even region. Setting this value gives implementations
+ permission to make different tradeoffs, e.g. optimizing for proximity rather
+ than equal distribution of load. Users should not set this value if such
+ tradeoffs are not acceptable.
+ -->
+ `PreferClose`
+ : 表示优先将流量路由到在拓扑上邻近客户端的端点。
+ “拓扑上邻近”的解释可能因实现而异,可能涵盖同一节点、机架、区域(zone)甚至地理区域(region)内的端点。
+ 设置此值即允许实现作出不同的权衡,例如优先考虑邻近程度而不是负载的均匀分配。
+ 如果这种权衡不可接受,用户就不应设置此值。
+
+ <!--
+ If the field is not set, the implementation will apply its default routing strategy.
+
+ See [Traffic
+ Distribution](/docs/reference/networking/virtual-ips/#traffic-distribution) for
+ more details.
+ -->
+ 如果未设置该字段,实现将应用其默认路由策略。
+ 更多细节参阅[流量分发](/zh-cn/docs/reference/networking/virtual-ips/#traffic-distribution)。
+
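A minimal sketch of a Service using this field (metadata and selector values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service            # hypothetical name
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  trafficDistribution: PreferClose   # a preference, not a strict guarantee
```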

<!--
### Traffic policies