
Commit ece2498

[zh-cn] sync networking/virtual-ips.md
Signed-off-by: Guangwen Feng <[email protected]>
1 parent 0bdf35c commit ece2498

File tree: 1 file changed (+31, -20 lines)


content/zh-cn/docs/reference/networking/virtual-ips.md

Lines changed: 31 additions & 20 deletions
@@ -13,10 +13,12 @@ weight: 50
 <!-- overview -->
 <!--
 Every {{< glossary_tooltip term_id="node" text="node" >}} in a Kubernetes
-cluster runs a [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/)
+{{< glossary_tooltip term_id="cluster" text="cluster" >}} runs a
+[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/)
 (unless you have deployed your own alternative component in place of `kube-proxy`).
 -->
-Kubernetes 集群中的每个{{< glossary_tooltip term_id="node" text="节点" >}}会运行一个
+Kubernetes {{< glossary_tooltip text="集群" term_id="cluster" >}}中的每个
+{{< glossary_tooltip text="节点" term_id="node" >}}会运行一个
 [kube-proxy](/zh-cn/docs/reference/command-line-tools-reference/kube-proxy/)
 (除非你已经部署了自己的替换组件来替代 `kube-proxy`)。
 
@@ -77,15 +79,18 @@ to use as-is.
 
 <!--
 <a id="example"></a>
-Some of the details in this reference refer to an example: the backend Pods for a stateless
-image-processing workload, running with three replicas. Those replicas are
+Some of the details in this reference refer to an example: the backend
+{{< glossary_tooltip term_id="pod" text="Pods" >}} for a stateless
+image-processing workloads, running with
+three replicas. Those replicas are
 fungible&mdash;frontends do not care which backend they use. While the actual Pods that
 compose the backend set may change, the frontend clients should not need to be aware of that,
 nor should they need to keep track of the set of backends themselves.
 -->
 <a id="example"></a>
 本文中的一些细节会引用这样一个例子:
-运行了 3 个 Pod 副本的无状态图像处理后端工作负载。
+运行了 3 个 {{< glossary_tooltip text="Pod" term_id="pod" >}}
+副本的无状态图像处理后端工作负载。
 这些副本是可互换的;前端不需要关心它们调用了哪个后端副本。
 即使组成这一组后端程序的 Pod 实际上可能会发生变化,
 前端客户端不应该也没必要知道,而且也不需要跟踪这一组后端的状态。
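
For orientation only (not part of the file being changed): the example workload described in the hunk above could be written as a Deployment plus a Service along these lines. Every name and image below is made up for illustration.

```yaml
# Hypothetical manifest for the stateless image-processing backend with three
# fungible replicas that the reference uses as its running example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: image-backend          # illustrative name, not from the document
spec:
  replicas: 3
  selector:
    matchLabels:
      app: image-backend
  template:
    metadata:
      labels:
        app: image-backend
    spec:
      containers:
      - name: processor
        image: registry.example/image-processor:latest   # placeholder image
        ports:
        - containerPort: 8080
---
# The Service gives frontends a single stable virtual IP; they never need to
# know which of the three backend Pods handles a given request.
apiVersion: v1
kind: Service
metadata:
  name: image-backend
spec:
  selector:
    app: image-backend
  ports:
  - port: 80
    targetPort: 8080
```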
@@ -107,31 +112,32 @@ Note that the kube-proxy starts up in different modes, which are determined by i
 - The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup.
   For example, if your operating system doesn't allow you to run iptables commands,
   the standard kernel kube-proxy implementation will not work.
-  Likewise, if you have an operating system which doesn't support `netsh`,
-  it will not run in Windows userspace mode.
 -->
 注意,kube-proxy 会根据不同配置以不同的模式启动。
 
 - kube-proxy 的配置是通过 ConfigMap 完成的,kube-proxy 的 ConfigMap 实际上弃用了 kube-proxy 大部分标志的行为。
 - kube-proxy 的 ConfigMap 不支持配置的实时重新加载。
 - kube-proxy 不能在启动时验证和检查所有的 ConfigMap 参数。
   例如,如果你的操作系统不允许你运行 iptables 命令,标准的 kube-proxy 内核实现将无法工作。
-  同样,如果你的操作系统不支持 `netsh`,它也无法在 Windows 用户空间模式下运行。
 
 <!--
 ### `iptables` proxy mode {#proxy-mode-iptables}
 -->
 ### `iptables` 代理模式 {#proxy-mode-iptables}
 
 <!--
-In this mode, kube-proxy watches the Kubernetes control plane for the addition and
-removal of Service and EndpointSlice objects. For each Service, it installs
+In this mode, kube-proxy watches the Kubernetes
+{{< glossary_tooltip term_id="control-plane" text="control plane" >}} for the addition and
+removal of Service and EndpointSlice {{< glossary_tooltip term_id="object" text="objects." >}}
+For each Service, it installs
 iptables rules, which capture traffic to the Service's `clusterIP` and `port`,
 and redirect that traffic to one of the Service's
 backend sets. For each endpoint, it installs iptables rules which
 select a backend Pod.
 -->
-在这种模式下,kube-proxy 监视 Kubernetes 控制平面,获知对 Service 和 EndpointSlice 对象的添加和删除操作。
+在这种模式下,kube-proxy 监视 Kubernetes
+{{< glossary_tooltip text="控制平面" term_id="control-plane" >}},获知对 Service 和 EndpointSlice
+{{< glossary_tooltip text="对象" term_id="object" >}}的添加和删除操作。
 对于每个 Service,kube-proxy 会添加 iptables 规则,这些规则捕获流向 Service 的 `clusterIP` 和 `port` 的流量,
 并将这些流量重定向到 Service 后端集合中的其中之一。
 对于每个端点,它会添加指向一个特定后端 Pod 的 iptables 规则。
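
As a sketch of the ConfigMap-driven configuration and mode selection discussed in the hunk above: in kubeadm-provisioned clusters kube-proxy usually reads a KubeProxyConfiguration from a ConfigMap named `kube-proxy` in `kube-system`. The name, namespace, and key below follow kubeadm's convention and are assumptions, not taken from this diff.

```yaml
# Hypothetical example: a kube-proxy ConfigMap whose embedded
# KubeProxyConfiguration selects the iptables proxy mode.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy        # kubeadm's conventional name; may differ elsewhere
  namespace: kube-system
data:
  config.conf: |
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "iptables"      # chooses the proxy mode described above
```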
@@ -238,7 +244,7 @@ iptables 模式的 kube-proxy 在更新内核中的规则时可能要用较长
 [`iptables`](/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1/#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPTablesConfiguration)中的选项来调整
 kube-proxy 的同步行为:
 
-```none
+```yaml
 ...
 iptables:
   minSyncPeriod: 1s
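
The fragment shown in the hunk above is part of a larger kube-proxy configuration file; in full context it might look roughly like this (a sketch, with illustrative values):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "iptables"
iptables:
  minSyncPeriod: 1s   # minimum interval between iptables rule resyncs
  syncPeriod: 30s     # periodic full resync interval
```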
@@ -255,19 +261,22 @@ attempts to resynchronize iptables rules with the kernel. If it is
 every time any Service or Endpoint changes. This works fine in very
 small clusters, but it results in a lot of redundant work when lots of
 things change in a small time period. For example, if you have a
-Service backed by a Deployment with 100 pods, and you delete the
+Service backed by a {{< glossary_tooltip term_id="deployment" text="Deployment" >}}
+with 100 pods, and you delete the
 Deployment, then with `minSyncPeriod: 0s`, kube-proxy would end up
 removing the Service's Endpoints from the iptables rules one by one,
 for a total of 100 updates. With a larger `minSyncPeriod`, multiple
-Pod deletion events would get aggregated together, so kube-proxy might
+Pod deletion events would get aggregated
+together, so kube-proxy might
 instead end up making, say, 5 updates, each removing 20 endpoints,
 which will be much more efficient in terms of CPU, and result in the
 full set of changes being synchronized faster.
 -->
 `minSyncPeriod` 参数设置尝试同步 iptables 规则与内核之间的最短时长。
 如果是 `0s`,那么每次有任一 Service 或 Endpoint 发生变更时,kube-proxy 都会立即同步这些规则。
 这种方式在较小的集群中可以工作得很好,但如果在很短的时间内很多东西发生变更时,它会导致大量冗余工作。
-例如,如果你有一个由 Deployment 支持的 Service,共有 100 个 Pod,你删除了这个 Deployment,
+例如,如果你有一个由 {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
+支持的 Service,共有 100 个 Pod,你删除了这个 Deployment,
 且设置了 `minSyncPeriod: 0s`,kube-proxy 最终会从 iptables 规则中逐个删除 Service 的 Endpoint,
 总共更新 100 次。使用较大的 `minSyncPeriod` 值时,多个 Pod 删除事件将被聚合在一起,
 因此 kube-proxy 最终可能会进行例如 5 次更新,每次移除 20 个端点,
@@ -343,7 +352,8 @@ kube-proxy with `--feature-gates=MinimizeIPTablesRestore=true,…`.
 [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
 
 <!--
-If you enable that feature gate and you were previously overriding
+If you enable that feature gate and
+you were previously overriding
 `minSyncPeriod`, you should try removing that override and letting
 kube-proxy use the default value (`1s`) or at least a smaller value
 than you were using before.
@@ -523,11 +533,11 @@ Kubernetes 的主要哲学之一是,
 In order to allow you to choose a port number for your Services, we must
 ensure that no two Services can collide. Kubernetes does that by allocating each
 Service its own IP address from within the `service-cluster-ip-range`
-CIDR range that is configured for the API server.
+CIDR range that is configured for the {{< glossary_tooltip term_id="kube-apiserver" text="API Server" >}}.
 -->
 为了允许你为 Service 选择端口号,我们必须确保没有任何两个 Service 会发生冲突。
-Kubernetes 通过从为 API 服务器配置的 `service-cluster-ip-range`
-CIDR 范围内为每个 Service 分配自己的 IP 地址来实现这一点。
+Kubernetes 通过从为 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}}
+配置的 `service-cluster-ip-range` CIDR 范围内为每个 Service 分配自己的 IP 地址来实现这一点。
 
 <!--
 To ensure each Service receives a unique IP, an internal allocator atomically
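
The `service-cluster-ip-range` mentioned above is a kube-apiserver flag. As a hedged illustration, in a kubeadm cluster it typically appears in the API server's static Pod manifest along these lines; the CIDR and image tag are examples only.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.28.0   # example version tag
    command:
    - kube-apiserver
    - --service-cluster-ip-range=10.96.0.0/12       # example Service CIDR
    # ...other flags omitted...
```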
@@ -677,7 +687,8 @@ N to 0 replicas of that deployment. In some cases, external load balancers can s
 a node with 0 replicas in between health check probes. Routing traffic to terminating endpoints
 ensures that Node's that are scaling down Pods can gracefully receive and drain traffic to
 those terminating Pods. By the time the Pod completes termination, the external load balancer
-should have seen the node's health check failing and fully removed the node from the backend pool.
+should have seen the node's health check failing and fully removed the node from the backend
+pool.
 -->
 这种对处于终止过程中的端点的转发行为使得 `NodePort` 和 `LoadBalancer` Service
 能有条不紊地腾空设置了 `externalTrafficPolicy: Local` 时的连接。
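
For reference, the `externalTrafficPolicy: Local` setting that this paragraph refers to is set on the Service itself. A minimal sketch follows; the name and ports are made up.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: image-backend
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # keep external traffic on the node that received it
  selector:
    app: image-backend
  ports:
  - port: 80
    targetPort: 8080
```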
