
Commit dd80522

Merge pull request #39100 from windsonsea/conser
Tweak long lines in connect-applications-service.md
2 parents 2f7ce07 + d57b56a commit dd80522

File tree

1 file changed: +63 -21 lines changed

content/en/docs/tutorials/services/connect-applications-service.md

Lines changed: 63 additions & 21 deletions
@@ -15,7 +15,12 @@ weight: 20
 
 Now that you have a continuously running, replicated application you can expose it on a network.
 
-Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document elaborates on how you can run reliable services on such a networking model.
+Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on.
+Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly
+create links between pods or map container ports to host ports. This means that containers within
+a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other
+without NAT. The rest of this document elaborates on how you can run reliable services on such a
+networking model.
 
 This tutorial uses a simple nginx web server to demonstrate the concept.
 
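As a quick check of the flat pod network described in the wrapped paragraph above, here is a minimal sketch; it assumes the tutorial's `my-nginx` Deployment is already running and that its pods carry the `run=my-nginx` label, and `<POD-IP>` is a placeholder for one of the printed addresses:

```shell
# List the cluster-private IPs that Kubernetes assigned to the nginx pods.
kubectl get pods -l run=my-nginx -o wide

# From any other pod or node in the cluster, a pod IP is reachable directly,
# with no NAT and no host-port mapping; substitute one of the IPs printed above.
curl http://<POD-IP>:80
```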

@@ -49,16 +54,32 @@ kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs
 [map[ip:10.244.2.5]]
 ```
 
-You should be able to ssh into any node in your cluster and use a tool such as `curl` to make queries against both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same `containerPort`, and access them from any other pod or node in your cluster using the assigned IP address for the Service. If you want to arrange for a specific port on the host Node to be forwarded to backing Pods, you can - but the networking model should mean that you do not need to do so.
+You should be able to ssh into any node in your cluster and use a tool such as `curl`
+to make queries against both IPs. Note that the containers are *not* using port 80 on
+the node, nor are there any special NAT rules to route traffic to the pod. This means
+you can run multiple nginx pods on the same node all using the same `containerPort`,
+and access them from any other pod or node in your cluster using the assigned IP
+address for the Service. If you want to arrange for a specific port on the host
+Node to be forwarded to backing Pods, you can - but the networking model should
+mean that you do not need to do so.
 
-
-You can read more about the [Kubernetes Networking Model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) if you're curious.
+You can read more about the
+[Kubernetes Networking Model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model)
+if you're curious.
 
 ## Creating a Service
 
-So we have pods running nginx in a flat, cluster wide, address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the Deployment will create new ones, with different IPs. This is the problem a Service solves.
+So we have pods running nginx in a flat, cluster wide, address space. In theory,
+you could talk to these pods directly, but what happens when a node dies? The pods
+die with it, and the Deployment will create new ones, with different IPs. This is
+the problem a Service solves.
 
-A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
+A Kubernetes Service is an abstraction which defines a logical set of Pods running
+somewhere in your cluster, that all provide the same functionality. When created,
+each Service is assigned a unique IP address (also called clusterIP). This address
+is tied to the lifespan of the Service, and will not change while the Service is alive.
+Pods can be configured to talk to the Service, and know that communication to the
+Service will be automatically load-balanced out to some pod that is a member of the Service.
 
 You can create a Service for your 2 nginx replicas with `kubectl expose`:
 
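The `kubectl expose` invocation referenced in the last context line above is not part of this diff, so the exact flags the file uses are not shown here; as an illustrative sketch, exposing the Deployment and inspecting the resulting clusterIP typically looks like this:

```shell
# Create a Service fronting the my-nginx pods on port 80 (illustrative flags).
kubectl expose deployment/my-nginx --port=80

# The Service gets a stable cluster-private address (CLUSTER-IP) that outlives the pods.
kubectl get svc my-nginx
```
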
@@ -136,10 +157,12 @@ about the [service proxy](/docs/concepts/services-networking/service/#virtual-ip
 Kubernetes supports 2 primary modes of finding a Service - environment variables
 and DNS. The former works out of the box while the latter requires the
 [CoreDNS cluster addon](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addons/dns/coredns).
+
 {{< note >}}
-If the service environment variables are not desired (because possible clashing with expected program ones,
-too many variables to process, only using DNS, etc) you can disable this mode by setting the `enableServiceLinks`
-flag to `false` on the [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
+If the service environment variables are not desired (because possible clashing
+with expected program ones, too many variables to process, only using DNS, etc)
+you can disable this mode by setting the `enableServiceLinks` flag to `false` on
+the [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
 {{< /note >}}
 
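To make the note above concrete, here is a minimal sketch of a pod with `enableServiceLinks` disabled; the pod name and image are hypothetical, chosen only for illustration:

```shell
# Apply a pod whose containers will NOT receive the Docker-link style
# Service environment variables injected by the kubelet.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: no-service-links        # hypothetical name, for illustration only
spec:
  enableServiceLinks: false     # disable Service env var injection for this pod
  containers:
  - name: app
    image: nginx
EOF
```
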
@@ -193,7 +216,8 @@ KUBERNETES_SERVICE_PORT_HTTPS=443
 
 ### DNS
 
-Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster:
+Kubernetes offers a DNS cluster addon Service that automatically assigns dns names
+to other Services. You can check if it's running on your cluster:
 
 ```shell
 kubectl get services kube-dns --namespace=kube-system
@@ -204,7 +228,13 @@ kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 8m
 ```
 
 The rest of this section will assume you have a Service with a long lived IP
-(my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name `kube-dns`), so you can talk to the Service from any pod in your cluster using standard methods (e.g. `gethostbyname()`). If CoreDNS isn't running, you can enable it referring to the [CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes) or [Installing CoreDNS](/docs/tasks/administer-cluster/coredns/#installing-coredns). Let's run another curl application to test this:
+(my-nginx), and a DNS server that has assigned a name to that IP. Here we use
+the CoreDNS cluster addon (application name `kube-dns`), so you can talk to the
+Service from any pod in your cluster using standard methods (e.g. `gethostbyname()`).
+If CoreDNS isn't running, you can enable it referring to the
+[CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes)
+or [Installing CoreDNS](/docs/tasks/administer-cluster/coredns/#installing-coredns).
+Let's run another curl application to test this:
 
 ```shell
 kubectl run curl --image=radial/busyboxplus:curl -i --tty
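
Once CoreDNS is serving cluster DNS as described in this hunk, the Service name resolves from any pod. A small sketch of the check; `<curl-pod-name>` is a placeholder for whatever pod name the `kubectl run curl` command above produced in your cluster:

```shell
# Resolve the Service name through the cluster DNS from inside a running pod.
kubectl exec -it <curl-pod-name> -- nslookup my-nginx

# Or reach the Service by name over HTTP from that same pod.
kubectl exec -it <curl-pod-name> -- curl http://my-nginx:80
```
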
@@ -227,13 +257,18 @@ Address 1: 10.0.162.149
 
 ## Securing the Service
 
-Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:
+Till now we have only accessed the nginx server from within the cluster. Before
+exposing the Service to the internet, you want to make sure the communication
+channel is secure. For this, you will need:
 
 * Self signed certificates for https (unless you already have an identity certificate)
 * An nginx server configured to use the certificates
 * A [secret](/docs/concepts/configuration/secret/) that makes the certificates accessible to pods
 
-You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short:
+You can acquire all these from the
+[nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/).
+This requires having go and make tools installed. If you don't want to install those,
+then follow the manual steps later. In short:
 
 ```shell
 make keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt
@@ -272,7 +307,9 @@ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -ou
 cat /d/tmp/nginx.crt | base64
 cat /d/tmp/nginx.key | base64
 ```
-Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line.
+
+Use the output from the previous commands to create a yaml file as follows.
+The base64 encoded value should all be on a single line.
 
 ```yaml
 apiVersion: "v1"
@@ -296,7 +333,8 @@ NAME TYPE DATA AGE
 nginxsecret kubernetes.io/tls 2 1m
 ```
 
-Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):
+Now modify your nginx replicas to start an https server using the certificate
+in the secret, and the Service, to expose both ports (80 and 443):
 
 {{< codenew file="service/networking/nginx-secure-app.yaml" >}}
 
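As an alternative to hand-writing the base64 yaml shown in the earlier hunks, the same `nginxsecret` can be created directly from the key and certificate files; a sketch, assuming the files produced by the openssl step above exist at the paths shown there:

```shell
# Create a kubernetes.io/tls secret straight from the generated files;
# kubectl performs the base64 encoding for you.
kubectl create secret tls nginxsecret --key /d/tmp/nginx.key --cert /d/tmp/nginx.crt

# Verify it matches the output shown in the diff (TYPE kubernetes.io/tls, DATA 2).
kubectl get secrets
```
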
@@ -327,9 +365,12 @@ node $ curl -k https://10.244.3.5
 <h1>Welcome to nginx!</h1>
 ```
 
-Note how we supplied the `-k` parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time,
-so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup.
-Let's test this from a pod (the same secret is being reused for simplicity, the pod only needs nginx.crt to access the Service):
+Note how we supplied the `-k` parameter to curl in the last step, this is because
+we don't know anything about the pods running nginx at certificate generation time,
+so we have to tell curl to ignore the CName mismatch. By creating a Service we
+linked the CName used in the certificate with the actual DNS name used by pods
+during Service lookup. Let's test this from a pod (the same secret is being reused
+for simplicity, the pod only needs nginx.crt to access the Service):
 
 {{< codenew file="service/networking/curlpod.yaml" >}}
 
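The point of the `-k` discussion above is that the certificate's CName matches the Service DNS name rather than any pod IP, so curl can verify the certificate instead of skipping checks. A hedged sketch from inside the curl pod; both `<curl-pod-name>` and `<path-to-mounted-nginx.crt>` are placeholders, check curlpod.yaml for the real pod name and mount path:

```shell
# The Service DNS name matches the certificate's CName, so curl can verify
# against the mounted nginx.crt instead of using -k.
kubectl exec -it <curl-pod-name> -- curl --cacert <path-to-mounted-nginx.crt> https://my-nginx
```
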
@@ -391,7 +432,8 @@ $ curl https://<EXTERNAL-IP>:<NODE-PORT> -k
 <h1>Welcome to nginx!</h1>
 ```
 
-Let's now recreate the Service to use a cloud load balancer. Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`:
+Let's now recreate the Service to use a cloud load balancer.
+Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`:
 
 ```shell
 kubectl edit svc my-nginx
@@ -407,8 +449,8 @@ curl https://<EXTERNAL-IP> -k
 <title>Welcome to nginx!</title>
 ```
 
-The IP address in the `EXTERNAL-IP` column is the one that is available on the public internet. The `CLUSTER-IP` is only available inside your
-cluster/private cloud network.
+The IP address in the `EXTERNAL-IP` column is the one that is available on the public internet.
+The `CLUSTER-IP` is only available inside your cluster/private cloud network.
 
 Note that on AWS, type `LoadBalancer` creates an ELB, which uses a (long)
 hostname, not an IP. It's too long to fit in the standard `kubectl get svc`
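
To complement the `EXTERNAL-IP` discussion in this last hunk, here is a sketch of pulling the load balancer address with a jsonpath query, which also covers the AWS case where an ELB hostname is reported instead of an IP; treat which field your provider populates as an assumption to verify:

```shell
# Most cloud providers report an IP address for the load balancer:
kubectl get svc my-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# On AWS, an ELB reports a (long) hostname instead:
kubectl get svc my-nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```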
