
Commit 1ff44a9

Merge pull request #39199 from Zhuzhenghao/connect-applications-service.md
[zh] Resync connect-applications-service.md
2 parents 40efd29 + 389a1fb

1 file changed: +70 −25 lines changed

content/zh-cn/docs/tutorials/services/connect-applications-service.md

Lines changed: 70 additions & 25 deletions
@@ -20,7 +20,12 @@ weight: 20
 
 Now that you have a continuously running, replicated application you can expose it on a network.
 
-Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document elaborates on how you can run reliable services on such a networking model.
+Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on.
+Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly
+create links between pods or map container ports to host ports. This means that containers within
+a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other
+without NAT. The rest of this document elaborates on how you can run reliable services on such a
+networking model.
 
 This tutorial uses a simple nginx web server to demonstrate the concept.
 -->
@@ -72,17 +77,26 @@ Check your pods' IPs:
 -->
 检查 Pod 的 IP 地址:
 
-```
+```shell
 kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs
 POD_IP
 [map[ip:10.244.3.4]]
 [map[ip:10.244.2.5]]
 ```
 
 <!--
-You should be able to ssh into any node in your cluster and use a tool such as `curl` to make queries against both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same `containerPort`, and access them from any other pod or node in your cluster using the assigned IP address for the Service. If you want to arrange for a specific port on the host Node to be forwarded to backing Pods, you can - but the networking model should mean that you do not need to do so.
+You should be able to ssh into any node in your cluster and use a tool such as `curl`
+to make queries against both IPs. Note that the containers are *not* using port 80 on
+the node, nor are there any special NAT rules to route traffic to the pod. This means
+you can run multiple nginx pods on the same node all using the same `containerPort`,
+and access them from any other pod or node in your cluster using the assigned IP
+address for the Service. If you want to arrange for a specific port on the host
+Node to be forwarded to backing Pods, you can - but the networking model should
+mean that you do not need to do so.
 
-You can read more about the [Kubernetes Networking Model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) if you're curious.
+You can read more about the
+[Kubernetes Networking Model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model)
+if you're curious.
 -->
 你应该能够通过 ssh 登录到集群中的任何一个节点上,并使用诸如 `curl` 之类的工具向这两个 IP 地址发出查询请求。
 需要注意的是,容器 **不会** 使用该节点上的 80 端口,也不会使用任何特定的 NAT 规则去路由流量到 Pod 上。
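
The reflowed paragraph above describes querying the pod IPs from any node with a tool such as `curl`; a minimal sketch of that check, reusing one of the example IPs from the output shown in this hunk:

```shell
# run from any node (or any other pod) in the cluster; 10.244.3.4 is one of the pod IPs listed above
curl http://10.244.3.4:80
```
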
@@ -95,9 +109,17 @@ Pod 或节点上使用 IP 的方式访问到它们。
 <!--
 ## Creating a Service
 
-So we have pods running nginx in a flat, cluster wide, address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the Deployment will create new ones, with different IPs. This is the problem a Service solves.
+So we have pods running nginx in a flat, cluster wide, address space. In theory,
+you could talk to these pods directly, but what happens when a node dies? The pods
+die with it, and the Deployment will create new ones, with different IPs. This is
+the problem a Service solves.
 
-A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
+A Kubernetes Service is an abstraction which defines a logical set of Pods running
+somewhere in your cluster, that all provide the same functionality. When created,
+each Service is assigned a unique IP address (also called clusterIP). This address
+is tied to the lifespan of the Service, and will not change while the Service is alive.
+Pods can be configured to talk to the Service, and know that communication to the
+Service will be automatically load-balanced out to some pod that is a member of the Service.
 
 You can create a Service for your 2 nginx replicas with `kubectl expose`:
 -->
@@ -113,7 +135,7 @@ Kubernetes Service 是集群中提供相同功能的一组 Pod 的抽象表达
 可以配置 Pod 使它与 Service 进行通信,Pod 知道与 Service 通信将被自动地负载均衡到该
 Service 中的某些 Pod 上。
 
-可以使用 `kubectl expose` 命令为 2个 Nginx 副本创建一个 Service:
+可以使用 `kubectl expose` 命令为 2 个 Nginx 副本创建一个 Service:
 
 ```shell
 kubectl expose deployment/my-nginx
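
For reference, the `kubectl expose deployment/my-nginx` command shown in this hunk is roughly equivalent to applying a Service manifest by hand; a minimal sketch, assuming the `run=my-nginx` labels used throughout the tutorial:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  selector:
    run: my-nginx        # routes traffic to the Pods created by the my-nginx Deployment
  ports:
  - protocol: TCP
    port: 80             # port exposed on the Service's cluster IP
    targetPort: 80       # containerPort on the backing Pods
```
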
@@ -161,7 +183,7 @@ exposed through
 The Service's selector will be evaluated continuously and the results will be POSTed
 to an EndpointSlice that is connected to the Service using a
 {{< glossary_tooltip text="labels" term_id="label" >}}.
-When a Pod dies, it is automatically removed from the EndpointSlices that contain it
+When a Pod dies, it is automatically removed from the EndpointSlices that contain it
 as an endpoint. New Pods that match the Service's selector will automatically get added
 to an EndpointSlice for that Service.
 Check the endpoints, and note that the IPs are the same as the Pods created in
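
The EndpointSlices described above can be listed directly; a quick check might look like this (the label selector is the standard one set on EndpointSlices that belong to a Service):

```shell
# EndpointSlices belonging to a Service carry the kubernetes.io/service-name label
kubectl get endpointslices -l kubernetes.io/service-name=my-nginx
```
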
@@ -185,8 +207,12 @@ Labels: run=my-nginx
 Annotations:              <none>
 Selector:                 run=my-nginx
 Type:                     ClusterIP
+IP Family Policy:         SingleStack
+IP Families:              IPv4
 IP:                       10.0.162.149
+IPs:                      10.0.162.149
 Port:                     <unset>  80/TCP
+TargetPort:               80/TCP
 Endpoints:                10.244.2.5:80,10.244.3.4:80
 Session Affinity:         None
 Events:                   <none>
@@ -223,9 +249,10 @@ Kubernetes 支持两种查找服务的主要模式:环境变量和 DNS。前
 
 {{< note >}}
 <!--
-If the service environment variables are not desired (because possible clashing with expected program ones,
-too many variables to process, only using DNS, etc) you can disable this mode by setting the `enableServiceLinks`
-flag to `false` on the [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
+If the service environment variables are not desired (because possible clashing
+with expected program ones, too many variables to process, only using DNS, etc)
+you can disable this mode by setting the `enableServiceLinks` flag to `false` on
+the [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
 -->
 如果不需要服务环境变量(因为可能与预期的程序冲突,可能要处理的变量太多,或者仅使用DNS等),则可以通过在
 [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
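
If you do want to disable the service environment variables as this note describes, the flag sits directly on the Pod spec; a minimal sketch (Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-service-links     # hypothetical name, for illustration only
spec:
  enableServiceLinks: false  # do not inject per-Service environment variables into containers
  containers:
  - name: app
    image: nginx
```
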
@@ -300,7 +327,8 @@ KUBERNETES_SERVICE_PORT_HTTPS=443
 ### DNS
 
 <!--
-Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster:
+Kubernetes offers a DNS cluster addon Service that automatically assigns dns names
+to other Services. You can check if it's running on your cluster:
 -->
 Kubernetes 提供了一个自动为其它 Service 分配 DNS 名字的 DNS 插件 Service。
 你可以通过如下命令检查它是否在工作:
@@ -315,7 +343,13 @@ kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 8m
 
 <!--
 The rest of this section will assume you have a Service with a long lived IP
-(my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name `kube-dns`), so you can talk to the Service from any pod in your cluster using standard methods (e.g. `gethostbyname()`). If CoreDNS isn't running, you can enable it referring to the [CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes) or [Installing CoreDNS](/docs/tasks/administer-cluster/coredns/#installing-coredns). Let's run another curl application to test this:
+(my-nginx), and a DNS server that has assigned a name to that IP. Here we use
+the CoreDNS cluster addon (application name `kube-dns`), so you can talk to the
+Service from any pod in your cluster using standard methods (e.g. `gethostbyname()`).
+If CoreDNS isn't running, you can enable it referring to the
+[CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes)
+or [Installing CoreDNS](/docs/tasks/administer-cluster/coredns/#installing-coredns).
+Let's run another curl application to test this:
 -->
 本段剩余的内容假设你已经有一个拥有持久 IP 地址的 Service(my-nginx),以及一个为其
 IP 分配名称的 DNS 服务器。 这里我们使用 CoreDNS 集群插件(应用名为 `kube-dns`),
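
The "curl application" test that this paragraph leads into amounts to resolving the Service name from another pod; one way to sketch it (image, pod name, and flags are illustrative, not the tutorial's exact commands):

```shell
# start a throwaway pod and resolve the Service name through the cluster DNS
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup my-nginx
```
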
@@ -350,13 +384,18 @@ Address 1: 10.0.162.149
 <!--
 ## Securing the Service
 
-Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:
+Till now we have only accessed the nginx server from within the cluster. Before
+exposing the Service to the internet, you want to make sure the communication
+channel is secure. For this, you will need:
 
 * Self signed certificates for https (unless you already have an identity certificate)
 * An nginx server configured to use the certificates
 * A [secret](/docs/concepts/configuration/secret/) that makes the certificates accessible to pods
 
-You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short:
+You can acquire all these from the
+[nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/).
+This requires having go and make tools installed. If you don't want to install those,
+then follow the manual steps later. In short:
 -->
 ## 保护 Service {#securing-the-service}
 
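
The "manual steps" this paragraph points to start with generating a self-signed certificate; a typical invocation (paths are illustrative, and the CN is set to match the Service name so that later TLS checks against the Service's DNS name succeed):

```shell
# create a self-signed certificate/key pair whose CN matches the my-nginx Service
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/nginx.key -out /tmp/nginx.crt \
  -subj "/CN=my-nginx/O=my-nginx"
```
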
@@ -419,7 +458,8 @@ cat /d/tmp/nginx.key | base64
 ```
 
 <!--
-Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line.
+Use the output from the previous commands to create a yaml file as follows.
+The base64 encoded value should all be on a single line.
 -->
 使用前面命令的输出来创建 yaml 文件,如下所示。 base64 编码的值应全部放在一行上。
 
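
The yaml file being described is a `kubernetes.io/tls` Secret whose data fields hold the base64 output of the previous commands; a sketch of its shape (the encoded payloads are placeholders, not real values):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nginxsecret
  namespace: default
type: kubernetes.io/tls
data:
  # each value is the single-line base64 output produced above (placeholders shown)
  tls.crt: "<base64-encoded certificate>"
  tls.key: "<base64-encoded key>"
```
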
@@ -450,7 +490,8 @@ nginxsecret kubernetes.io/tls 2 1m
 ```
 
 <!--
-Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):
+Now modify your nginx replicas to start an https server using the certificate
+in the secret, and the Service, to expose both ports (80 and 443):
 -->
 现在修改 Nginx 副本以启动一个使用 Secret 中的证书的 HTTPS 服务器以及相应的用于暴露其端口(80 和 443)的 Service:
 
@@ -482,22 +523,25 @@ At this point you can reach the nginx server from any node.
 -->
 这时,你可以从任何节点访问到 Nginx 服务器。
 
-```
+```shell
 kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs
 POD_IP
 [map[ip:10.244.3.5]]
 ```
 
-```
+```shell
 node $ curl -k https://10.244.3.5
 ...
 <h1>Welcome to nginx!</h1>
 ```
 
 <!--
-Note how we supplied the `-k` parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time,
-so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup.
-Let's test this from a pod (the same secret is being reused for simplicity, the pod only needs nginx.crt to access the Service):
+Note how we supplied the `-k` parameter to curl in the last step, this is because
+we don't know anything about the pods running nginx at certificate generation time,
+so we have to tell curl to ignore the CName mismatch. By creating a Service we
+linked the CName used in the certificate with the actual DNS name used by pods
+during Service lookup. Let's test this from a pod (the same secret is being reused
+for simplicity, the pod only needs nginx.crt to access the Service):
 -->
 注意最后一步我们是如何提供 `-k` 参数执行 curl 命令的,这是因为在证书生成时,
 我们不知道任何关于运行 nginx 的 Pod 的信息,所以不得不在执行 curl 命令时忽略 CName 不匹配的情况。
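
The follow-up test described above runs curl from inside a pod that mounts the same secret, so the certificate can be verified against the Service's DNS name instead of being skipped; a sketch, assuming the secret is mounted at /etc/nginx/ssl inside the test pod:

```shell
# from inside the test pod: verify against the self-signed cert rather than passing -k
curl --cacert /etc/nginx/ssl/tls.crt https://my-nginx
```
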
@@ -580,7 +624,8 @@ $ curl https://<EXTERNAL-IP>:<NODE-PORT> -k
 ```
 
 <!--
-Let's now recreate the Service to use a cloud load balancer. Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`:
+Let's now recreate the Service to use a cloud load balancer.
+Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`:
 -->
 让我们重新创建一个 Service 以使用云负载均衡器。
 `my-nginx` Service 的 `Type` 从 `NodePort` 改成 `LoadBalancer`:
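
One non-interactive way to make the `NodePort` → `LoadBalancer` change described above is a merge patch (an alternative to editing the Service manifest by hand):

```shell
# switch the Service type in place
kubectl patch service my-nginx -p '{"spec": {"type": "LoadBalancer"}}'
```
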
`my-nginx` Service 的 `Type``NodePort` 改成 `LoadBalancer`
@@ -600,8 +645,8 @@ curl https://<EXTERNAL-IP> -k
 ```
 
 <!--
-The IP address in the `EXTERNAL-IP` column is the one that is available on the public internet. The `CLUSTER-IP` is only available inside your
-cluster/private cloud network.
+The IP address in the `EXTERNAL-IP` column is the one that is available on the public internet.
+The `CLUSTER-IP` is only available inside your cluster/private cloud network.
 
 Note that on AWS, type `LoadBalancer` creates an ELB, which uses a (long)
 hostname, not an IP. It's too long to fit in the standard `kubectl get svc`
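
To see which address lands in the `EXTERNAL-IP` column after the change, list the Service; a minimal check:

```shell
# EXTERNAL-IP is the publicly reachable address; CLUSTER-IP remains internal to the cluster
kubectl get svc my-nginx
```
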
