Commit 1c7fb63

Merge pull request #61430 from dfitzmau/OCPBUGS-9215
OCPBUGS#9215: Updated port 1936 in install vSphere docs
2 parents: 5212d94 + c985b83

2 files changed, 36 insertions(+), 56 deletions(-)
modules/installation-infrastructure-user-infra.adoc

Lines changed: 7 additions & 0 deletions
@@ -74,6 +74,13 @@ endif::ibm-z-kvm[]
 . Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the _Networking requirements for user-provisioned infrastructure_ section for details about the requirements.
 
 . Configure your firewall to enable the ports required for the {product-title} cluster components to communicate. See _Networking requirements for user-provisioned infrastructure_ section for details about the ports that are required.
++
+[IMPORTANT]
+====
+By default, port `1936` is accessible for an {product-title} cluster, because each control plane node needs access to this port.
+
+Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers.
+====
 
 . Setup the required DNS infrastructure for your cluster.
 .. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines.
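
A minimal verification sketch for this guidance, assuming the example hostnames from the sample HAProxy configuration in the second file and a hypothetical load balancer address `lb.ocp4.example.com`:

# The router health endpoint on port 1936 should respond on a node that
# runs an Ingress Controller pod (worker0 in the sample configuration):
$ curl -s -o /dev/null -w '%{http_code}\n' http://worker0.ocp4.example.com:1936/healthz/ready

# The same port should not be reachable through the Ingress load balancer,
# because the sample configuration defines no frontend that binds *:1936
# (lb.ocp4.example.com is a hypothetical load balancer address):
$ curl --connect-timeout 5 http://lb.ocp4.example.com:1936/healthz/ready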

modules/installation-load-balancing-user-infra.adoc

Lines changed: 29 additions & 56 deletions
@@ -55,7 +55,7 @@ endif::[]
 = Load balancing requirements for user-provisioned infrastructure
 
 ifndef::user-managed-lb[]
-Before you install {product-title}, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
+Before you install {product-title}, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
 endif::user-managed-lb[]
 
 ifdef::user-managed-lb[]
@@ -72,12 +72,12 @@ the development process.
 For more information about the support scope of Red Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview Features Support Scope].
 ====
 
-Before you install {product-title}, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
+Before you install {product-title}, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
 endif::user-managed-lb[]
 
 [NOTE]
 ====
-If you want to deploy the API and application ingress load balancers with a {op-system-base-full} instance, you must purchase the {op-system-base} subscription separately.
+If you want to deploy the API and application Ingress load balancers with a {op-system-base-full} instance, you must purchase the {op-system-base} subscription separately.
 ====
 
 The load balancing infrastructure must meet the following requirements:
@@ -132,8 +132,10 @@ error or becomes healthy, the endpoint must have been removed or added. Probing
 every 5 or 10 seconds, with two successful requests to become healthy and three
 to become unhealthy, are well-tested values.
 ====
-
-. *Application ingress load balancer*: Provides an ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:
++
+. *Application Ingress load balancer*: Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an {product-title} cluster.
++
+Configure the following conditions:
 +
 --
 ** Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes.
@@ -142,12 +144,12 @@ to become unhealthy, are well-tested values.
 +
 [TIP]
 ====
-If the true IP address of the client can be seen by the application ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.
+If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.
 ====
 +
 Configure the following ports on both the front and back of the load balancers:
 +
-.Application ingress load balancer
+.Application Ingress load balancer
 [cols="2,5,^2,^2,2",options="header"]
 |===
 
@@ -169,45 +171,34 @@ Configure the following ports on both the front and back of the load balancers:
 |X
 |HTTP traffic
 
-|`1936`
-|The worker nodes that run the Ingress Controller pods, by default. You must configure the `/healthz/ready` endpoint for the ingress health check probe.
-|X
-|X
-|HTTP traffic
-
 |===
-
-[NOTE]
-====
-If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
-====
-
++
 [NOTE]
 ====
-A working configuration for the Ingress router is required for an
-{product-title} cluster. You must configure the Ingress router after the control
-plane initializes.
+If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
 ====
 
 [id="installation-load-balancing-user-infra-example_{context}"]
 ifndef::user-managed-lb[]
 == Example load balancer configuration for user-provisioned clusters
 
-This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an `/etc/haproxy/haproxy.cfg` configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
+This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an `/etc/haproxy/haproxy.cfg` configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
 endif::user-managed-lb[]
 
 ifdef::user-managed-lb[]
 == Example load balancer configuration for clusters that are deployed with user-managed load balancers
 
-This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an `/etc/haproxy/haproxy.cfg` configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
+This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an `/etc/haproxy/haproxy.cfg` configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
 endif::user-managed-lb[]
 
+In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
+
 [NOTE]
 ====
-In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
+If you are using HAProxy as a load balancer and SELinux is set to `enforcing`, you must ensure that the HAProxy service can bind to the configured TCP port by running `setsebool -P haproxy_connect_any=1`.
 ====
 
-.Sample API and application ingress load balancer configuration
+.Sample API and application Ingress load balancer configuration
 [%collapsible]
 ====
 [source,text]
@@ -232,56 +223,43 @@ defaults
   timeout http-keep-alive 10s
   timeout check 10s
   maxconn 3000
-frontend stats
-  bind *:1936
-  mode http
-  log global
-  maxconn 10
-  stats enable
-  stats hide-version
-  stats refresh 30s
-  stats show-node
-  stats show-desc Stats for ocp4 cluster <1>
-  stats auth admin:ocp4
-  stats uri /stats
-listen api-server-6443 <2>
+listen api-server-6443 <1>
   bind *:6443
   mode tcp
-  server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup <3>
+  server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup <2>
   server master0 master0.ocp4.example.com:6443 check inter 1s
   server master1 master1.ocp4.example.com:6443 check inter 1s
   server master2 master2.ocp4.example.com:6443 check inter 1s
-listen machine-config-server-22623 <4>
+listen machine-config-server-22623 <3>
   bind *:22623
   mode tcp
-  server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup <3>
+  server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup <2>
   server master0 master0.ocp4.example.com:22623 check inter 1s
   server master1 master1.ocp4.example.com:22623 check inter 1s
   server master2 master2.ocp4.example.com:22623 check inter 1s
-listen ingress-router-443 <5>
+listen ingress-router-443 <4>
   bind *:443
   mode tcp
   balance source
   server worker0 worker0.ocp4.example.com:443 check inter 1s
   server worker1 worker1.ocp4.example.com:443 check inter 1s
-listen ingress-router-80 <6>
+listen ingress-router-80 <5>
   bind *:80
   mode tcp
   balance source
   server worker0 worker0.ocp4.example.com:80 check inter 1s
   server worker1 worker1.ocp4.example.com:80 check inter 1s
 ----
 
-<1> In the example, the cluster name is `ocp4`.
-<2> Port `6443` handles the Kubernetes API traffic and points to the control plane machines.
-<3> The bootstrap entries must be in place before the {product-title} cluster installation and they must be removed after the bootstrap process is complete.
-<4> Port `22623` handles the machine config server traffic and points to the control plane machines.
-<5> Port `443` handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
-<6> Port `80` handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
+<1> Port `6443` handles the Kubernetes API traffic and points to the control plane machines.
+<2> The bootstrap entries must be in place before the {product-title} cluster installation and they must be removed after the bootstrap process is complete.
+<3> Port `22623` handles the machine config server traffic and points to the control plane machines.
+<4> Port `443` handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
+<5> Port `80` handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
 +
 [NOTE]
 =====
-If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
+If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
 =====
 ====
 
@@ -290,11 +268,6 @@ If you are deploying a three-node cluster with zero compute nodes, the Ingress C
 If you are using HAProxy as a load balancer, you can check that the `haproxy` process is listening on ports `6443`, `22623`, `443`, and `80` by running `netstat -nltupe` on the HAProxy node.
 ====
 
-[NOTE]
-====
-If you are using HAProxy as a load balancer and SELinux is set to `enforcing`, you must ensure that the HAProxy service can bind to the configured TCP port by running `setsebool -P haproxy_connect_any=1`.
-====
-
 ifeval::["{context}" == "installing-ibm-z"]
 :!ibm-z:
 endif::[]
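
A small companion sketch for the modified module, assuming HAProxy as the user-provisioned load balancer: the requirements section recommends probing every 5 or 10 seconds, with two successful checks to become healthy and three failed checks to become unhealthy, and the notes above call out the SELinux boolean and the listening-port check. Expressed with illustrative values:

# Health-check tuning on a server line (illustrative values: probe every 5s,
# two successes to mark healthy, three failures to mark unhealthy):
server master0 master0.ocp4.example.com:6443 check inter 5s rise 2 fall 3

# If SELinux is set to enforcing, allow HAProxy to bind to the configured
# ports, then verify the boolean:
$ setsebool -P haproxy_connect_any=1
$ getsebool haproxy_connect_any

# Confirm that the haproxy process is listening on ports 6443, 22623, 443, and 80:
$ netstat -nltupe | grep haproxy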
