
Commit 57edd60

Merge pull request #65501 from dfitzmau/OCPBUGS-7621-overhaul
2 parents: 62c942b + 96ad5b2

1 file changed: +267 −48 lines changed

modules/nw-osp-configuring-external-load-balancer.adoc

on {rh-openstack-first}
endif::[]
to use an external load balancer in place of the default load balancer.

[IMPORTANT]
====
Configuring an external load balancer depends on your vendor's load balancer.

The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer.
====

Red Hat supports the following services for an external load balancer:

* OpenShift API
* Ingress Controller

You can choose to configure one or both of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option.

The following configuration options are supported for external load balancers:

* Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration.

* Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a `/27` or `/28`, you can simplify your load balancer targets.
+
[TIP]
====
You can list all IP addresses that exist in a network by checking the machine config pool's resources.
====

.Considerations

* For a front-end IP address, you can use the same IP address for the Ingress Controller's load balancer and the API load balancer. Check the vendor's documentation for this capability.

* For a back-end IP address, ensure that the IP address of an {product-title} control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions:
** Assign a static IP address to each control plane node.
** Configure each node to receive the same IP address from DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment.

* Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. Otherwise, if the Ingress Controller moves to an undefined node, a connection outage can occur.

.OpenShift API prerequisites

* You defined a front-end IP address.
* TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items:
** Port 6443 provides access to the OpenShift API service.
** Port 22623 can provide ignition startup configurations to nodes.
* The front-end IP address and port 6443 are reachable by all users of your system that are located external to your {product-title} cluster.
* The front-end IP address and port 22623 are reachable only by {product-title} nodes.
* The load balancer backend can communicate with {product-title} control plane nodes on ports 6443 and 22623.

.Ingress Controller prerequisites

* You defined a front-end IP address.
* TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer.
* The front-end IP address and ports 80 and 443 are reachable by all users of your system that are located external to your {product-title} cluster.
* The front-end IP address and ports 80 and 443 are reachable by all nodes that operate in your {product-title} cluster.
* The load balancer backend can communicate with {product-title} nodes that run the Ingress Controller on ports 80, 443, and 1936.

.Prerequisite for health check URL specifications

You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. {product-title} provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.

The following examples demonstrate health check specifications for the previously listed backend services:

.Example of a Kubernetes API health check specification
[source,terminal]
----
Path: HTTPS:6443/readyz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
----

.Example of a Machine Config API health check specification
[source,terminal]
----
Path: HTTPS:22623/healthz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
----

.Example of an Ingress Controller health check specification
[source,terminal]
----
Path: HTTP:1936/healthz/ready
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 5
Interval: 10
----

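
The `Healthy threshold` and `Unhealthy threshold` values above behave as consecutive-probe counters: a backend flips state only after the configured number of probes in a row agree. The following Bash sketch is an illustration of that logic only, not vendor code; the probe results are fed in as `0` (pass) or `1` (fail) instead of issuing real HTTP checks.

```shell
#!/usr/bin/env bash
# Illustration of consecutive-probe threshold logic, as used by the
# "Healthy threshold: 2" / "Unhealthy threshold: 2" settings above.
# The probe results are supplied as arguments: 0 = probe passed, 1 = failed.

HEALTHY_THRESHOLD=2
UNHEALTHY_THRESHOLD=2

state=unhealthy
successes=0
failures=0

track() {
  for result in "$@"; do
    if [ "$result" -eq 0 ]; then
      successes=$((successes + 1)); failures=0
      [ "$successes" -ge "$HEALTHY_THRESHOLD" ] && state=healthy
    else
      failures=$((failures + 1)); successes=0
      [ "$failures" -ge "$UNHEALTHY_THRESHOLD" ] && state=unhealthy
    fi
  done
  echo "$state"
}

# Two consecutive passing probes mark the backend healthy:
track 0 0   # → healthy
```

A single failed probe does not mark the backend unhealthy; it takes two in a row, which is why transient timeouts do not immediately remove a node from rotation.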
.Procedure

. Configure the HAProxy Ingress Controller so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80:
+
.Example HAProxy configuration
[source,terminal]
----
# ...
listen my-cluster-api-6443
  bind 192.168.1.100:6443
  mode tcp
  balance roundrobin
  option httpchk
  http-check connect
  http-check send meth GET uri /readyz
  http-check expect status 200
  server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2
  server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2
  server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2

listen my-cluster-machine-config-api-22623
  bind 192.168.1.100:22623
  mode tcp
  balance roundrobin
  option httpchk
  http-check connect
  http-check send meth GET uri /healthz
  http-check expect status 200
  server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2
  server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2
  server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2

listen my-cluster-apps-443
  bind 192.168.1.100:443
  mode tcp
  balance roundrobin
  option httpchk
  http-check connect
  http-check send meth GET uri /healthz/ready
  http-check expect status 200
  server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2
  server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2
  server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2

listen my-cluster-apps-80
  bind 192.168.1.100:80
  mode tcp
  balance roundrobin
  option httpchk
  http-check connect
  http-check send meth GET uri /healthz/ready
  http-check expect status 200
  server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2
  server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2
  server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2
# ...
----
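
Every backend in a `listen` block follows the same `server` line pattern, so you can generate those lines instead of typing them. The following Bash sketch is a hypothetical helper (`gen_servers` is not part of {product-title} or HAProxy); the node names and IP addresses are the example values from the configuration above.

```shell
#!/usr/bin/env bash
# Hypothetical helper: emit the repetitive "server" lines for an HAProxy
# listen block from a list of name=IP pairs. Example values only.

gen_servers() {
  local port=$1 check=$2; shift 2
  for entry in "$@"; do
    name=${entry%%=*}
    ip=${entry#*=}
    echo "    server ${name} ${ip}:${port} ${check} inter 10s rise 2 fall 2"
  done
}

# Ingress Controller backends, health-checked on port 1936:
gen_servers 443 "check port 1936" \
  my-cluster-worker-0=192.168.1.111 \
  my-cluster-worker-1=192.168.1.112 \
  my-cluster-worker-2=192.168.1.113
```

After editing the configuration, HAProxy can validate the file without starting the service by running `haproxy -c -f /etc/haproxy/haproxy.cfg`.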

. Use the `curl` CLI command to verify that the external load balancer and its resources are operational:
+
.. Verify that the Kubernetes API server resource is accessible, by running the following command and observing the response:
+
[source,terminal]
----
$ curl https://<loadbalancer_ip_address>:6443/version --insecure
----
+
If the configuration is correct, you receive a JSON object in response:
+
[source,json]
----
{
  "major": "1",
  "minor": "11+",
  "gitVersion": "v1.11.0+ad103ed",
  "gitCommit": "ad103ed",
  "gitTreeState": "clean",
  "buildDate": "2019-01-09T06:44:10Z",
  "goVersion": "go1.10.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}
----
+
.. Verify that the Machine config server resource is accessible, by running the following command and observing the output:
+
[source,terminal]
----
$ curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure
----
+
If the configuration is correct, the output from the command shows the following response:
+
[source,terminal]
----
HTTP/1.1 200 OK
Content-Length: 0
----
+
.. Verify that the Ingress Controller resource is accessible on port 80, by running the following command and observing the output:
+
[source,terminal]
----
$ curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>
----
+
If the configuration is correct, the output from the command shows the following response:
+
[source,terminal]
----
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.ocp4.private.opequon.net/
cache-control: no-cache
----
+
.. Verify that the Ingress Controller resource is accessible on port 443, by running the following command and observing the output:
+
[source,terminal]
----
$ curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<load_balancer_front_end_IP_address> https://console-openshift-console.apps.<cluster_name>.<base_domain>
----
+
If the configuration is correct, the output from the command shows the following response:
+
[source,terminal]
----
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
----

. Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update the records on your DNS server for the cluster API and applications over the load balancer.
+
.Examples of modified DNS records
[source,dns]
----
<load_balancer_ip_address> A api.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End
----
+
[source,dns]
----
<load_balancer_ip_address> A apps.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End
----
+
[IMPORTANT]
====
DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.
====
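
One way to confirm propagation before moving on to the validation steps is to poll DNS until the record returns the load balancer's front-end IP address. The following Bash sketch assumes `dig` is installed; `wait_for_dns`, `RESOLVER`, and `RETRY_DELAY` are hypothetical names, not part of {product-title}.

```shell
#!/usr/bin/env bash
# Hypothetical helper: poll a DNS record until it resolves to the expected
# load balancer front-end IP address. RESOLVER holds the lookup command so
# it can be swapped for dig, host, or a stub.

RESOLVER=(dig +short)   # assumption: dig (bind-utils/dnsutils) is installed

wait_for_dns() {
  local record=$1 expected_ip=$2 retries=${3:-5}
  local resolved i
  for ((i = 1; i <= retries; i++)); do
    resolved=$("${RESOLVER[@]}" "$record" | head -n 1)
    if [ "$resolved" = "$expected_ip" ]; then
      echo "$record resolves to $expected_ip"
      return 0
    fi
    sleep "${RETRY_DELAY:-10}"
  done
  echo "$record did not propagate to $expected_ip" >&2
  return 1
}

# Example usage with the placeholders from this procedure:
# wait_for_dns api.<cluster_name>.<base_domain> <load_balancer_ip_address>
```

Run it once for the `api` record and once for the `apps` wildcard record; a non-zero exit status means the record has not propagated yet.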

. Use the `curl` CLI command to verify that the external load balancer and DNS record configuration are operational:
+
.. Verify that you can access the cluster API, by running the following command and observing the output:
+
[source,terminal]
----
$ curl https://api.<cluster_name>.<base_domain>:6443/version --insecure
----
+
If the configuration is correct, you receive a JSON object in response:
+
[source,json]
----
{
  "major": "1",
  "minor": "11+",
  "gitVersion": "v1.11.0+ad103ed",
  "gitCommit": "ad103ed",
  "gitTreeState": "clean",
  "buildDate": "2019-01-09T06:44:10Z",
  "goVersion": "go1.10.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}
----
+
.. Verify that you can access the cluster machine configuration, by running the following command and observing the output:
+
[source,terminal]
----
$ curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure
----
+
If the configuration is correct, the output from the command shows the following response:
+
[source,terminal]
----
HTTP/1.1 200 OK
Content-Length: 0
----
+
.. Verify that you can access each cluster application on port 80, by running the following command and observing the output:
+
[source,terminal]
----
$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
----
+
If the configuration is correct, the output from the command shows the following response:
+
[source,terminal]
----
@@ -157,6 +352,30 @@ content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
----
+
.. Verify that you can access each cluster application on port 443, by running the following command and observing the output:
+
[source,terminal]
----
$ curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
----
+
If the configuration is correct, the output from the command shows the following response:
+
[source,terminal]
----
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
----
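
The verification steps above each expect a specific HTTP status on the first response line. A small pass/fail wrapper can make rechecking them repeatable; this is a sketch, and `check_endpoint` is a hypothetical name rather than a supplied tool. The commented usage lines reuse the placeholder host names from this procedure.

```shell
#!/usr/bin/env bash
# Hypothetical wrapper: run one validation command and compare the first
# HTTP status line against an expected value.

check_endpoint() {
  local expected=$1; shift
  local status
  status=$("$@" 2>/dev/null | awk 'toupper($1) ~ /^HTTP/ { print $2; exit }')
  if [ "$status" = "$expected" ]; then
    echo "PASS ($expected): $*"
  else
    echo "FAIL (wanted $expected, got ${status:-none}): $*" >&2
    return 1
  fi
}

# Example usage (placeholders for your cluster's names):
# check_endpoint 200 curl -s -I -L --insecure https://console-openshift-console.apps.<cluster_name>.<base_domain>
# check_endpoint 302 curl -s -I -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>
```

A non-zero exit status from any check points at the listen block or DNS record that still needs attention.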

ifeval::["{context}" == "installing-vsphere-installer-provisioned"]
:!vsphere:
@@ -169,4 +388,4 @@ ifeval::["{context}" == "installing-vsphere-installer-provisioned-network-custom
endif::[]
ifeval::["{context}" == "installing-restricted-networks-installer-provisioned-vsphere"]
:!vsphere:
endif::[]
