@@ -4,47 +4,44 @@ content_type: task
 weight: 80
 ---

-
 <!-- overview -->

-This page shows how to create an External Load Balancer.
-
-{{< note >}}
-This feature is only available for cloud providers or environments which support external load balancers.
-{{< /note >}}
+This page shows how to create an external load balancer.

-When creating a service, you have the option of automatically creating a
-cloud network load balancer. This provides an externally-accessible IP address
-that sends traffic to the correct port on your cluster nodes
+When creating a {{< glossary_tooltip text="Service" term_id="service" >}}, you have
+the option of automatically creating a cloud load balancer. This provides an
+externally-accessible IP address that sends traffic to the correct port on your cluster
+nodes,
 _provided your cluster runs in a supported environment and is configured with
 the correct cloud load balancer provider package_.

-For information on provisioning and using an Ingress resource that can give
-services externally-reachable URLs, load balance the traffic, terminate SSL etc.,
-please check the [Ingress](/docs/concepts/services-networking/ingress/)
+You can also use an {{< glossary_tooltip term_id="ingress" >}} in place of a Service.
+For more information, check the [Ingress](/docs/concepts/services-networking/ingress/)
 documentation.

-
-
 ## {{% heading "prerequisites" %}}


-* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+{{< include "task-tutorial-prereqs.md" >}}

+Your cluster must be running in a cloud or other environment that already has support
+for configuring external load balancers.


 <!-- steps -->

-## Configuration file
+## Create a Service
+
+### Create a Service from a manifest

 To create an external load balancer, add the following line to your
-[service configuration file](/docs/concepts/services-networking/service/#loadbalancer):
+Service manifest:

 ```yaml
     type: LoadBalancer
 ```

-Your configuration file might look like:
+Your manifest might then look like:

 ```yaml
 apiVersion: v1
@@ -60,19 +57,19 @@ spec:
   type: LoadBalancer
 ```
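
For reference, a complete manifest of the shape this hunk abbreviates might look like the following sketch; the Service name, selector, and port numbers are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  type: LoadBalancer
```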

-## Using kubectl
+### Create a Service using kubectl

 You can alternatively create the service with the `kubectl expose` command and
 its `--type=LoadBalancer` flag:

 ```bash
-kubectl expose rc example --port=8765 --target-port=9376 \
+kubectl expose deployment example --port=8765 --target-port=9376 \
         --name=example-service --type=LoadBalancer
 ```

-This command creates a new service using the same selectors as the referenced
-resource (in the case of the example above, a replication controller named
-`example`).
+This command creates a new Service using the same selectors as the referenced
+resource (in the case of the example above, a
+{{< glossary_tooltip text="Deployment" term_id="deployment" >}} named `example`).

 For more information, including optional flags, refer to the
 [`kubectl expose` reference](/docs/reference/generated/kubectl/kubectl-commands/#expose).
@@ -86,59 +83,63 @@ information through `kubectl`:
 kubectl describe services example-service
 ```

-which should produce output like this:
+which should produce output similar to:

-```bash
-Name:                   example-service
-Namespace:              default
-Labels:                 <none>
-Annotations:            <none>
-Selector:               app=example
-Type:                   LoadBalancer
-IP:                     10.67.252.103
-LoadBalancer Ingress:   192.0.2.89
-Port:                   <unnamed> 80/TCP
-NodePort:               <unnamed> 32445/TCP
-Endpoints:              10.64.0.4:80,10.64.1.5:80,10.64.2.4:80
-Session Affinity:       None
-Events:                 <none>
+```
+Name:                     example-service
+Namespace:                default
+Labels:                   app=example
+Annotations:              <none>
+Selector:                 app=example
+Type:                     LoadBalancer
+IP Families:              <none>
+IP:                       10.3.22.96
+IPs:                      10.3.22.96
+LoadBalancer Ingress:     192.0.2.89
+Port:                     <unset>  8765/TCP
+TargetPort:               9376/TCP
+NodePort:                 <unset>  30593/TCP
+Endpoints:                172.17.0.3:9376
+Session Affinity:         None
+External Traffic Policy:  Cluster
+Events:                   <none>
 ```

-The IP address is listed next to `LoadBalancer Ingress`.
+The load balancer's IP address is listed next to `LoadBalancer Ingress`.
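
Programmatically, the same address appears under `.status.loadBalancer.ingress` in the Service object. A minimal sketch of extracting it from a Service represented as a plain dict; the field names follow the Kubernetes API, but the helper itself is hypothetical:

```python
def external_ip(service):
    """Return the first external IP (or hostname) assigned to a
    Service of type LoadBalancer, or None if none is assigned yet."""
    ingress = service.get("status", {}).get("loadBalancer", {}).get("ingress", [])
    for entry in ingress:
        # Cloud providers report either an IP address or a DNS hostname.
        if "ip" in entry:
            return entry["ip"]
        if "hostname" in entry:
            return entry["hostname"]
    return None

# A Service whose load balancer has been provisioned:
svc = {"status": {"loadBalancer": {"ingress": [{"ip": "192.0.2.89"}]}}}
print(external_ip(svc))  # → 192.0.2.89
```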

 {{< note >}}
 If you are running your service on Minikube, you can find the assigned IP address and port with:
-{{< /note >}}

 ```bash
 minikube service example-service --url
 ```
+{{< /note >}}

 ## Preserving the client source IP

-Due to the implementation of this feature, the source IP seen in the target
-container is *not the original source IP* of the client. To enable
-preservation of the client IP, the following fields can be configured in the
-service spec (supported in GCE/Google Kubernetes Engine environments):
-
-* `service.spec.externalTrafficPolicy` - denotes if this Service desires to route
-external traffic to node-local or cluster-wide endpoints. There are two available
-options: Cluster (default) and Local. Cluster obscures the client source
-IP and may cause a second hop to another node, but should have good overall
-load-spreading. Local preserves the client source IP and avoids a second hop
-for LoadBalancer and NodePort type services, but risks potentially imbalanced
-traffic spreading.
-* `service.spec.healthCheckNodePort` - specifies the health check node port
-(numeric port number) for the service. If `healthCheckNodePort` isn't specified,
-the service controller allocates a port from your cluster's NodePort range. You
-can configure that range by setting an API server command line option,
-`--service-node-port-range`. It will use the
-user-specified `healthCheckNodePort` value if specified by the client. It only has an
-effect when `type` is set to LoadBalancer and `externalTrafficPolicy` is set
-to Local.
-
-Setting `externalTrafficPolicy` to Local in the Service configuration file
-activates this feature.
+By default, the source IP seen in the target container is *not the original
+source IP* of the client. To enable preservation of the client IP, the following
+fields can be configured in the `.spec` of the Service:
+
+* `.spec.externalTrafficPolicy` - denotes if this Service desires to route
+external traffic to node-local or cluster-wide endpoints. There are two available
+options: `Cluster` (default) and `Local`. `Cluster` obscures the client source
+IP and may cause a second hop to another node, but should have good overall
+load-spreading. `Local` preserves the client source IP and avoids a second hop
+for LoadBalancer and NodePort type Services, but risks potentially imbalanced
+traffic spreading.
+* `.spec.healthCheckNodePort` - specifies the health check node port
+(numeric port number) for the service. If you don't specify
+`healthCheckNodePort`, the service controller allocates a port from your
+cluster's NodePort range.
+You can configure that range by setting an API server command line option,
+`--service-node-port-range`. The Service will use the user-specified
+`healthCheckNodePort` value if you specify it, provided that the
+Service `type` is set to LoadBalancer and `externalTrafficPolicy` is set
+to `Local`.
+
+Setting `externalTrafficPolicy` to Local in the Service manifest
+activates this feature. For example:

 ```yaml
 apiVersion: v1
@@ -155,7 +156,20 @@ spec:
   type: LoadBalancer
 ```

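Pieced together, a manifest that preserves client source IPs might look like this sketch; the Service name, selector, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  externalTrafficPolicy: Local
  type: LoadBalancer
```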
-## Garbage Collecting Load Balancers
+### Caveats and limitations when preserving source IPs
+
+Load balancing services from some cloud providers do not let you configure different weights for each target.
+
+With each target weighted equally in terms of sending traffic to Nodes, external
+traffic is not equally load balanced across different Pods. The external load balancer
+is unaware of the number of Pods on each node that are used as a target.
+
+Where `NumServicePods << NumNodes` or `NumServicePods >> NumNodes`, a fairly close-to-equal
+distribution will be seen, even without weights.
+
+Internal pod to pod traffic should behave similarly to ClusterIP services, with equal probability across all pods.
+
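The node-level balancing described above can be sketched numerically. This is an illustrative calculation, not Kubernetes code: it assumes an external load balancer that splits traffic evenly across nodes, with each node then splitting its share evenly among its local pods.

```python
def per_pod_traffic_share(pods_per_node):
    """Traffic share each pod receives when an external load balancer
    splits traffic evenly across nodes rather than across pods."""
    num_nodes = len(pods_per_node)
    shares = []
    for pods in pods_per_node:
        # Each node gets 1/num_nodes of the traffic, divided among its local pods.
        shares.append([1 / (num_nodes * pods)] * pods)
    return shares

# Two nodes: one hosting 1 pod, the other hosting 3 pods.
# The lone pod receives half of all traffic; each of the other three gets ~1/6.
print(per_pod_traffic_share([1, 3]))
```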
+## Garbage collecting load balancers

 {{< feature-state for_k8s_version="v1.17" state="stable" >}}

@@ -172,32 +186,18 @@ The finalizer will only be removed after the load balancer resource is cleaned u
 This prevents dangling load balancer resources even in corner cases such as the
 service controller crashing.

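While the load balancer exists, the cleanup finalizer is visible in the Service's metadata. A sketch of the relevant fragment of `kubectl get service example-service -o yaml`; the Service name is illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
  finalizers:
    # Blocks full deletion of the Service until the cloud
    # load balancer resources have been cleaned up.
    - service.kubernetes.io/load-balancer-cleanup
spec:
  type: LoadBalancer
```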
-## External Load Balancer Providers
+## External load balancer providers

 It is important to note that the datapath for this functionality is provided by a load balancer external to the Kubernetes cluster.

 When the Service `type` is set to LoadBalancer, Kubernetes provides functionality equivalent to `type` equals ClusterIP to pods
-within the cluster and extends it by programming the (external to Kubernetes) load balancer with entries for the Kubernetes
-pods. The Kubernetes service controller automates the creation of the external load balancer, health checks (if needed),
-firewall rules (if needed) and retrieves the external IP allocated by the cloud provider and populates it in the service
-object.
-
-## Caveats and Limitations when preserving source IPs
-
-GCE/AWS load balancers do not provide weights for their target pools. This was not an issue with the old LB
-kube-proxy rules which would correctly balance across all endpoints.
-
-With the new functionality, the external traffic is not equally load balanced across pods, but rather
-equally balanced at the node level (because GCE/AWS and other external LB implementations do not have the ability
-for specifying the weight per node, they balance equally across all target nodes, disregarding the number of
-pods on each node).
-
-We can, however, state that for NumServicePods << NumNodes or NumServicePods >> NumNodes, a fairly close-to-equal
-distribution will be seen, even without weights.
-
-Once the external load balancers provide weights, this functionality can be added to the LB programming path.
-*Future Work: No support for weights is provided for the 1.4 release, but may be added at a future date*
-
-Internal pod to pod traffic should behave similar to ClusterIP services, with equal probability across all pods.
+within the cluster and extends it by programming the (external to Kubernetes) load balancer with entries for the nodes
+hosting the relevant Kubernetes pods. The Kubernetes control plane automates the creation of the external load balancer,
+health checks (if needed), and packet filtering rules (if needed). Once the cloud provider allocates an IP address for the load
+balancer, the control plane looks up that external IP address and populates it into the Service object.

+## {{% heading "whatsnext" %}}

+* Read about [Service](/docs/concepts/services-networking/service/)
+* Read about [Ingress](/docs/concepts/services-networking/ingress/)
+* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)