`content/en/docs/tutorials/services/connect-applications-service.md`

Now that you have a continuously running, replicated application you can expose it on a network.

Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document elaborates on how you can run reliable services on such a networking model.

This tutorial uses a simple nginx web server to demonstrate the concept.
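
The Deployment driving this tutorial can be sketched as a manifest like the following (a sketch based on the tutorial's `my-nginx` example; the image tag is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.14.2   # illustrative tag
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f` gives you two nginx pods, each with its own cluster-private IP address.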
You should be able to ssh into any node in your cluster and use a tool such as `curl` to make queries against both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same `containerPort`, and access them from any other pod or node in your cluster using the assigned IP address for the pod. If you want to arrange for a specific port on the host Node to be forwarded to backing Pods, you can - but the networking model should mean that you do not need to do so.

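
For instance, you could list the pod IPs and query one of them directly (a sketch; the IP shown is hypothetical, and the `run: my-nginx` label assumes the Deployment above - read the real values from your own cluster):

```shell
# Show the cluster-private IP assigned to each nginx pod
kubectl get pods -l run=my-nginx -o wide
# From any node or pod, query a pod IP directly (value is hypothetical)
curl http://10.244.3.4:80
```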
You can read more about the [Kubernetes Networking Model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) if you're curious.

So we have pods running nginx in a flat, cluster-wide address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the Deployment will create new ones, with different IPs. This is the problem a Service solves.

A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.

You can create a Service for your 2 nginx replicas with `kubectl expose`:
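
For example (this assumes the Deployment is named `my-nginx`, as above):

```shell
kubectl expose deployment/my-nginx
```

This is roughly equivalent to applying a Service manifest that selects the `run: my-nginx` pods and targets port 80.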
Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the CoreDNS cluster addon.

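
For the environment-variable mode, every container started after the Service exists is injected with variables derived from the Service name. You can inspect them like this (the pod name below is hypothetical; substitute one from `kubectl get pods`):

```shell
# Print the Service-related environment variables injected into a pod
kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE
```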
The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name `kube-dns`), so you can talk to the Service from any pod in your cluster using standard methods (e.g. `gethostbyname()`). If CoreDNS isn't running, you can enable it referring to the [CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes) or [Installing CoreDNS](/docs/tasks/administer-cluster/coredns/#installing-coredns). Let's run another curl application to test this:

```shell
kubectl run curl --image=radial/busyboxplus:curl -i --tty
```

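Once the curl pod is up, you should be able to resolve the Service by name from inside it (a sketch; the resolved address will be your Service's clusterIP and will differ per cluster):

```shell
nslookup my-nginx
```
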
## Securing the Service
Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:

* Self signed certificates for https (unless you already have an identity certificate)
* An nginx server configured to use the certificates
* A [secret](/docs/concepts/configuration/secret/) that makes the certificates accessible to pods

You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short:
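
If you'd rather not install go and make, a self-signed certificate can be created manually with `openssl` alone (a sketch; the `/tmp` paths and the `/CN=my-nginx` subject assume the Service name used in this tutorial):

```shell
# Create a self-signed certificate and private key, valid for one year.
# The CN must match the DNS name pods will use to reach the Service.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/nginx.key -out /tmp/nginx.crt \
  -subj "/CN=my-nginx/O=my-nginx"
```

You would then load the pair into a secret, for example with `kubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt`.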
Note how we supplied the `-k` parameter to curl in the last step; this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup. Let's test this from a pod (the same secret is being reused for simplicity, the pod only needs nginx.crt to access the Service):
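
From such a pod, with the certificate mounted from the secret, curl can verify the server instead of passing `-k` (a sketch; the mount path is hypothetical and depends on how the pod mounts the secret):

```shell
# Verify the server certificate against the cert mounted from the secret
curl --cacert /etc/nginx/ssl/nginx.crt https://my-nginx
```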