// Module: modules/installation-infrastructure-user-infra.adoc

endif::ibm-z-kvm[]

. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the _Networking requirements for user-provisioned infrastructure_ section for details about the requirements.
. Configure your firewall to enable the ports required for the {product-title} cluster components to communicate. See the _Networking requirements for user-provisioned infrastructure_ section for details about the ports that are required.
+
[IMPORTANT]
====
By default, port `1936` is accessible for an {product-title} cluster, because each control plane node needs access to this port.

Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers.
====
. Set up the required DNS infrastructure for your cluster.
.. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines.
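+
As an illustrative sketch only, the forward records in a BIND zone file might look like the following. The `ocp4.example.com` names match the load balancer example elsewhere in this documentation, and the IP addresses are hypothetical:
+
[source,text]
----
; Kubernetes API, resolved to the API load balancer (hypothetical address)
api.ocp4.example.com.        IN  A  192.168.1.5
; Application wildcard, resolved to the Ingress load balancer
*.apps.ocp4.example.com.     IN  A  192.168.1.5
; Bootstrap, control plane, and compute machines
bootstrap.ocp4.example.com.  IN  A  192.168.1.96
master0.ocp4.example.com.    IN  A  192.168.1.97
master1.ocp4.example.com.    IN  A  192.168.1.98
master2.ocp4.example.com.    IN  A  192.168.1.99
worker0.ocp4.example.com.    IN  A  192.168.1.11
worker1.ocp4.example.com.    IN  A  192.168.1.7
----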
// Module: modules/installation-load-balancing-user-infra.adoc

endif::[]

= Load balancing requirements for user-provisioned infrastructure
ifndef::user-managed-lb[]
Before you install {product-title}, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
endif::user-managed-lb[]
ifdef::user-managed-lb[]
[IMPORTANT]
====
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview Features Support Scope].
====
Before you install {product-title}, you can provision your own API and application Ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
endif::user-managed-lb[]
[NOTE]
====
If you want to deploy the API and application Ingress load balancers with a {op-system-base-full} instance, you must purchase the {op-system-base} subscription separately.
====
The load balancing infrastructure must meet the following requirements:
error or becomes healthy, the endpoint must have been removed or added. Probing intervals of 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values.
====
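+
The probe cadence described above maps onto HAProxy's `inter`, `rise`, and `fall` server options. The following fragment is an illustrative sketch only, reusing a hostname from the sample configuration later in this module:
+
[source,text]
----
# Probe every 10 seconds; two consecutive successes mark the endpoint
# healthy, and three consecutive failures mark it unhealthy.
server master0 master0.ocp4.example.com:6443 check inter 10s rise 2 fall 3
----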
. *Application Ingress load balancer*: Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an {product-title} cluster.
+
Configure the following conditions:
+
--
** Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes.
+
[TIP]
====
If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.
====
+
Configure the following ports on both the front and back of the load balancers:
+
.Application Ingress load balancer
[cols="2,5,^2,^2,2",options="header"]
|===
|Port |Back-end machines (pool members) |Internal |External |Description

|`443`
|The machines that run the Ingress Controller pods, compute machines, by default.
|X
|X
|HTTPS traffic

|`80`
|The machines that run the Ingress Controller pods, compute machines, by default.
|X
|X
|HTTP traffic
|===

[NOTE]
====
If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
====

ifndef::user-managed-lb[]
== Example load balancer configuration for user-provisioned clusters
This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an `/etc/haproxy/haproxy.cfg` configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
endif::user-managed-lb[]
ifdef::user-managed-lb[]
== Example load balancer configuration for clusters that are deployed with user-managed load balancers
This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an `/etc/haproxy/haproxy.cfg` configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
endif::user-managed-lb[]
In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
[NOTE]
====
If you are using HAProxy as a load balancer and SELinux is set to `enforcing`, you must ensure that the HAProxy service can bind to the configured TCP port by running `setsebool -P haproxy_connect_any=1`.
====
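The boolean change from the note above can be verified with `getsebool`. The following transcript is an illustrative sketch, assuming an SELinux-enabled node running HAProxy where the boolean starts in its typical `off` state:

[source,text]
----
$ getsebool haproxy_connect_any
haproxy_connect_any --> off

$ sudo setsebool -P haproxy_connect_any=1

$ getsebool haproxy_connect_any
haproxy_connect_any --> on
----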
.Sample API and application Ingress load balancer configuration
[%collapsible]
====
[source,text]
----
defaults
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen api-server-6443 <1>
    bind *:6443
    mode tcp
    server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup <2>
    server master0 master0.ocp4.example.com:6443 check inter 1s
    server master1 master1.ocp4.example.com:6443 check inter 1s
    server master2 master2.ocp4.example.com:6443 check inter 1s

listen machine-config-server-22623 <3>
    bind *:22623
    mode tcp
    server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup <2>
    server master0 master0.ocp4.example.com:22623 check inter 1s
    server master1 master1.ocp4.example.com:22623 check inter 1s
    server master2 master2.ocp4.example.com:22623 check inter 1s

listen ingress-router-443 <4>
    bind *:443
    mode tcp
    balance source
    server worker0 worker0.ocp4.example.com:443 check inter 1s
    server worker1 worker1.ocp4.example.com:443 check inter 1s

listen ingress-router-80 <5>
    bind *:80
    mode tcp
    balance source
    server worker0 worker0.ocp4.example.com:80 check inter 1s
    server worker1 worker1.ocp4.example.com:80 check inter 1s
----
<1> Port `6443` handles the Kubernetes API traffic and points to the control plane machines.
<2> The bootstrap entries must be in place before the {product-title} cluster installation and they must be removed after the bootstrap process is complete.
<3> Port `22623` handles the machine config server traffic and points to the control plane machines.
<4> Port `443` handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
<5> Port `80` handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
[NOTE]
=====
If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
=====
====

[NOTE]
====
If you are using HAProxy as a load balancer, you can check that the `haproxy` process is listening on ports `6443`, `22623`, `443`, and `80` by running `netstat -nltupe` on the HAProxy node.
====
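Beyond checking listening sockets on the load balancer node itself, you can probe the front ends from a client machine. The following commands are an illustrative sketch that reuses the `ocp4.example.com` names from the sample configuration; the wildcard hostname is hypothetical, and the application probe returns meaningful output only after a matching route exists:

[source,text]
----
$ curl -k https://api.ocp4.example.com:6443/version
$ curl -I http://hello.apps.ocp4.example.com
----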