content_type: task
weight: 80
---
<!-- overview -->

This page shows how to create an external load balancer.

When creating a {{< glossary_tooltip text="Service" term_id="service" >}}, you have
the option of automatically creating a cloud load balancer. This provides an
externally-accessible IP address that sends traffic to the correct port on your cluster
nodes, _provided your cluster runs in a supported environment and is configured with
the correct cloud load balancer provider package_.

You can also use an {{< glossary_tooltip term_id="ingress" >}} in place of a Service.
For more information, check the [Ingress](/docs/concepts/services-networking/ingress/)
documentation.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

Your cluster must be running in a cloud or other environment that already has support
for configuring external load balancers.

<!-- steps -->

## Create a Service

### Create a Service from a manifest

To create an external load balancer, add the following line to your
Service manifest:

```yaml
type: LoadBalancer
```

Your manifest might then look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  type: LoadBalancer
```
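
Once the manifest is saved to a file, you can create the Service with `kubectl apply`. The filename here is only an illustration; use whatever path you saved the manifest to:

```shell
# Hypothetical filename; substitute the path where you saved the manifest.
kubectl apply -f example-service.yaml
```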

### Create a Service using kubectl

You can alternatively create the Service with the `kubectl expose` command and
its `--type=LoadBalancer` flag:

```bash
kubectl expose deployment example --port=8765 --target-port=9376 \
  --name=example-service --type=LoadBalancer
```

This command creates a new Service using the same selectors as the referenced
resource (in the case of the example above, a
{{< glossary_tooltip text="Deployment" term_id="deployment" >}} named `example`).

For more information, including optional flags, refer to the
[`kubectl expose` reference](/docs/reference/generated/kubectl/kubectl-commands/#expose).

## Finding your IP address

You can find the IP address created for your service by getting the service
information through `kubectl`:

```bash
kubectl describe services example-service
```

which should produce output similar to:

```
Name:                     example-service
Namespace:                default
Labels:                   app=example
Annotations:              <none>
Selector:                 app=example
Type:                     LoadBalancer
IP Families:              <none>
IP:                       10.3.22.96
IPs:                      10.3.22.96
LoadBalancer Ingress:     192.0.2.89
Port:                     <unset>  8765/TCP
TargetPort:               9376/TCP
NodePort:                 <unset>  30593/TCP
Endpoints:                172.17.0.3:9376
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
```

The load balancer's IP address is listed next to `LoadBalancer Ingress`.

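
If you want just the address, for example in a script, one approach is to read it from the Service status with a JSONPath expression. This is a sketch that assumes your cloud provider reports an IP address; some providers populate a hostname instead:

```shell
# Prints only the external IP allocated by the cloud provider.
# Providers that assign DNS names populate .hostname rather than .ip.
kubectl get service example-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```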
{{< note >}}
If you are running your service on Minikube, you can find the assigned IP address and port with:

```bash
minikube service example-service --url
```
{{< /note >}}

## Preserving the client source IP

By default, the source IP seen in the target container is *not the original
source IP* of the client. To enable preservation of the client IP, the following
fields can be configured in the `.spec` of the Service:

* `.spec.externalTrafficPolicy` - denotes if this Service desires to route
  external traffic to node-local or cluster-wide endpoints. There are two available
  options: `Cluster` (default) and `Local`. `Cluster` obscures the client source
  IP and may cause a second hop to another node, but should have good overall
  load-spreading. `Local` preserves the client source IP and avoids a second hop
  for LoadBalancer and NodePort type Services, but risks potentially imbalanced
  traffic spreading.
* `.spec.healthCheckNodePort` - specifies the health check node port
  (numeric port number) for the service. If you don't specify
  `healthCheckNodePort`, the service controller allocates a port from your
  cluster's NodePort range.
  You can configure that range by setting an API server command line option,
  `--service-node-port-range`. The Service will use the user-specified
  `healthCheckNodePort` value if you specify it, provided that the
  Service `type` is set to LoadBalancer and `externalTrafficPolicy` is set
  to `Local`.

Setting `externalTrafficPolicy` to Local in the Service manifest
activates this feature. For example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  externalTrafficPolicy: Local
  type: LoadBalancer
```
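
Instead of editing the manifest, you can switch an existing Service over with `kubectl patch`. This sketch assumes the `example-service` created earlier:

```shell
# Merge-patch the live Service so it preserves client source IPs.
kubectl patch service example-service \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```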

### Caveats and limitations when preserving source IPs

Load balancing services from some cloud providers do not let you configure different weights for each target.

With each target weighted equally in terms of sending traffic to Nodes, external
traffic is not equally load balanced across different Pods. The external load balancer
is unaware of the number of Pods on each node that are used as a target.

Where `NumServicePods << NumNodes` or `NumServicePods >> NumNodes`, a fairly close-to-equal
distribution will be seen, even without weights.

Internal pod-to-pod traffic should behave similarly to ClusterIP Services, with equal probability across all Pods.
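
The imbalance is easy to quantify. As a sketch, assume three nodes hosting 1, 2, and 5 pods of the Service (made-up numbers): a load balancer that splits traffic equally per node gives each node one third, which is then divided among that node's local pods:

```shell
# Per-pod traffic share when three nodes receive equal traffic from the
# load balancer but host different numbers of Service pods (1, 2, and 5
# here; the counts are hypothetical, purely for illustration).
for pods in 1 2 5; do
  awk -v p="$pods" 'BEGIN { printf "pods=%d per-pod-share=%.1f%%\n", p, 100/3/p }'
done
# → pods=1 per-pod-share=33.3%
# → pods=2 per-pod-share=16.7%
# → pods=5 per-pod-share=6.7%
```

With `externalTrafficPolicy: Local`, a pod on the lightly loaded node therefore receives five times the traffic of a pod on the heavily loaded one.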

## Garbage collecting load balancers

{{< feature-state for_k8s_version="v1.17" state="stable" >}}

The finalizer will only be removed after the load balancer resource is cleaned up.
This prevents dangling load balancer resources even in corner cases such as the
service controller crashing.

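
While cleanup is pending, you can observe the finalizer on the Service; the name shown in the comment is the finalizer the service controller uses for load balancer cleanup:

```shell
# Shows finalizers still attached to the Service; a LoadBalancer Service
# awaiting cloud cleanup lists service.kubernetes.io/load-balancer-cleanup.
kubectl get service example-service \
  -o jsonpath='{.metadata.finalizers}'
```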
## External load balancer providers

It is important to note that the datapath for this functionality is provided by a load balancer external to the Kubernetes cluster.

When the Service `type` is set to LoadBalancer, Kubernetes provides functionality equivalent to `type` equals ClusterIP to pods
within the cluster and extends it by programming the (external to Kubernetes) load balancer with entries for the nodes
hosting the relevant Kubernetes pods. The Kubernetes control plane automates the creation of the external load balancer,
health checks (if needed), and packet filtering rules (if needed). Once the cloud provider allocates an IP address for the load
balancer, the control plane looks up that external IP address and populates it into the Service object.

## {{% heading "whatsnext" %}}

* Read about [Service](/docs/concepts/services-networking/service/)
* Read about [Ingress](/docs/concepts/services-networking/ingress/)
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)