content/zh-cn/docs/tutorials/services/connect-applications-service.md
70 additions & 25 deletions
@@ -20,7 +20,12 @@ weight: 20
Now that you have a continuously running, replicated application you can expose it on a network.

Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on.
Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly
create links between pods or map container ports to host ports. This means that containers within
a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other
without NAT. The rest of this document elaborates on how you can run reliable services on such a
networking model.

This tutorial uses a simple nginx web server to demonstrate the concept.
-->
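
Because every pod gets its own cluster-private IP, pod-to-pod reachability can be checked directly. The following is only a sketch, not part of the original tutorial: the pod name, image, and IP below are placeholders, and you would substitute an IP reported by `kubectl get pods -o wide`.

```shell
# Launch a temporary pod and fetch a page from another pod's cluster-private IP.
# 10.244.2.5 is a placeholder; use a real pod IP from your cluster.
kubectl run net-test --rm -it --image=busybox:1.36 --restart=Never -- \
  wget -qO- http://10.244.2.5
```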
@@ -72,17 +77,26 @@ Check your pods' IPs:
-->
Check your pods' IPs:

```shell
kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs
POD_IP
[map[ip:10.244.3.4]]
[map[ip:10.244.2.5]]
```
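
If you prefer plain IP strings over the `podIPs` map shown above, a jsonpath query is an alternative. This is just a sketch using the same `run=my-nginx` label:

```shell
# Print one pod IP per line using kubectl's jsonpath output format.
kubectl get pods -l run=my-nginx \
  -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}'
```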
<!--
You should be able to ssh into any node in your cluster and use a tool such as `curl`
to make queries against both IPs. Note that the containers are *not* using port 80 on
the node, nor are there any special NAT rules to route traffic to the pod. This means
you can run multiple nginx pods on the same node all using the same `containerPort`,
and access them from any other pod or node in your cluster using the assigned IP
address for the Service. If you want to arrange for a specific port on the host
Node to be forwarded to backing Pods, you can - but the networking model should
mean that you do not need to do so.

You can read more about the [Kubernetes Networking Model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) if you're curious.

You should be able to ssh into any node in your cluster and use a tool such as `curl` to make queries against both IPs.
Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the Pod.
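
As a concrete illustration of the paragraph above, once you have SSHed into a node you could query the example pod IPs directly. The IPs come from the sample output earlier and will differ in your cluster.

```shell
# Run on any cluster node; no NodePort or NAT rule is involved, the pod IPs
# answer directly on their containerPort.
curl http://10.244.3.4:80
curl http://10.244.2.5:80
```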
@@ -95,9 +109,17 @@ access them from any other Pod or node in the cluster using their IP address.
<!--
## Creating a Service

So we have pods running nginx in a flat, cluster wide, address space. In theory,
you could talk to these pods directly, but what happens when a node dies? The pods
die with it, and the Deployment will create new ones, with different IPs. This is
the problem a Service solves.

A Kubernetes Service is an abstraction which defines a logical set of Pods running
somewhere in your cluster, that all provide the same functionality. When created,
each Service is assigned a unique IP address (also called clusterIP). This address
is tied to the lifespan of the Service, and will not change while the Service is alive.
Pods can be configured to talk to the Service, and know that communication to the
Service will be automatically load-balanced out to some pod that is a member of the Service.

You can create a Service for your 2 nginx replicas with `kubectl expose`:
-->
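
The expose command itself is outside this hunk; assuming the Deployment is named `my-nginx` (consistent with the `run=my-nginx` label used earlier), the step would look roughly like this:

```shell
# Create a Service that selects the nginx pods and forwards port 80 to them.
kubectl expose deployment/my-nginx --port=80 --target-port=80
# Verify the Service and its clusterIP.
kubectl get svc my-nginx
```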
@@ -113,7 +135,7 @@ A Kubernetes Service is an abstraction over a set of Pods in the cluster that provide the same functionality
Pods can be configured to talk to the Service, knowing that communication with the Service will be automatically load-balanced to the

The rest of this section will assume you have a Service with a long lived IP
(my-nginx), and a DNS server that has assigned a name to that IP. Here we use
the CoreDNS cluster addon (application name `kube-dns`), so you can talk to the
Service from any pod in your cluster using standard methods (e.g. `gethostbyname()`).
If CoreDNS isn't running, you can enable it referring to the
[CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes)
or [Installing CoreDNS](/docs/tasks/administer-cluster/coredns/#installing-coredns).
Let's run another curl application to test this:
-->
The rest of this section assumes that you already have a Service with a long lived IP address (my-nginx), and a
DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name `kube-dns`),
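
One way to check that the Service name resolves from inside the cluster is a throwaway pod; the pod name and image here are assumptions, not part of the original tutorial text.

```shell
# Resolve the Service name via the cluster DNS (CoreDNS / kube-dns).
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup my-nginx
```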
@@ -350,13 +384,18 @@ Address 1: 10.0.162.149
<!--
## Securing the Service

Till now we have only accessed the nginx server from within the cluster. Before
exposing the Service to the internet, you want to make sure the communication
channel is secure. For this, you will need:

* Self signed certificates for https (unless you already have an identity certificate)
* An nginx server configured to use the certificates
* A [secret](/docs/concepts/configuration/secret/) that makes the certificates accessible to pods

You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short:
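
For the manual route, a minimal sketch could look like the following; the file paths and the secret name are illustrative, and the certificate's CN is set to the Service name so that HTTPS requests can later verify it:

```shell
# Generate a self-signed certificate and key for CN=my-nginx.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/nginx.key -out /tmp/nginx.crt \
  -subj "/CN=my-nginx/O=my-nginx"
# Store both in a TLS Secret so that pods can mount them.
kubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt
```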
@@ -482,22 +523,25 @@ At this point you can reach the nginx server from any node.
-->
At this point you can reach the nginx server from any node.

```shell
kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs
POD_IP
[map[ip:10.244.3.5]]
```

```shell
node $ curl -k https://10.244.3.5
...
<h1>Welcome to nginx!</h1>
```
<!--
Note how we supplied the `-k` parameter to curl in the last step, this is because
we don't know anything about the pods running nginx at certificate generation time,
so we have to tell curl to ignore the CName mismatch. By creating a Service we
linked the CName used in the certificate with the actual DNS name used by pods
during Service lookup. Let's test this from a pod (the same secret is being reused
for simplicity, the pod only needs nginx.crt to access the Service):
-->
Note how we supplied the `-k` parameter to curl in the last step; this is because we don't know anything about
the pods running nginx at certificate generation time, so we have to tell curl to ignore the CName mismatch.
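
From a pod that mounts the same secret, the certificate's CName matches the Service DNS name `my-nginx`, so `-k` is no longer needed; the mount path below is an assumption for illustration.

```shell
# Inside a pod with the secret mounted (path assumed), verify the certificate
# instead of skipping verification.
curl --cacert /etc/nginx/ssl/nginx.crt https://my-nginx
```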