
Commit 1b0517f

Merge pull request #62825 from skopacz1/OSDOCS-7012
OSDOCS#7012: Platform none support
2 parents 8a2f871 + bab5988

4 files changed: +378 -2 lines changed


installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc

Lines changed: 14 additions & 0 deletions
@@ -18,6 +18,11 @@ The Agent-based Installer can also optionally generate or accept Zero Touch Prov
 
 include::modules/understanding-agent-install.adoc[leveloffset=+1]
 
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#installation-requirements-platform-none_preparing-to-install-with-agent-based-installer[Requirements for a cluster using the platform "none" option]
+
 include::modules/agent-installer-fips-compliance.adoc[leveloffset=+1]
 
 include::modules/agent-installer-configuring-fips-compliance.adoc[leveloffset=+1]
@@ -34,6 +39,15 @@ include::modules/agent-installer-configuring-fips-compliance.adoc[leveloffset=+1]
 
 include::modules/agent-install-networking.adoc[leveloffset=+1]
 
+[id="installation-requirements-platform-none_{context}"]
+== Requirements for a cluster using the platform "none" option
+
+This section describes the requirements for an Agent-based {product-title} installation that is configured to use the platform `none` option.
+
+include::modules/agent-install-dns-none.adoc[leveloffset=+2]
+
+include::modules/agent-install-load-balancing-none.adoc[leveloffset=+2]
+
 include::modules/agent-install-sample-config-bonds-vlans.adoc[leveloffset=+1]
 
 include::modules/agent-install-sample-config-bond-sriov.adoc[leveloffset=+1]

modules/agent-install-dns-none.adoc

Lines changed: 172 additions & 0 deletions
@@ -0,0 +1,172 @@
:_content-type: CONCEPT
[id="agent-install-dns-none_{context}"]
= Platform "none" DNS requirements

In {product-title} deployments, DNS name resolution is required for the following components:

* The Kubernetes API
* The {product-title} application wildcard
* The control plane and compute machines

Reverse DNS resolution is also required for the Kubernetes API, the control plane machines, and the compute machines.

DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because {op-system-first} uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that {product-title} needs to operate.

[NOTE]
====
It is recommended to use a DHCP server to provide the hostnames to each cluster node.
====

The following DNS records are required for an {product-title} cluster using the platform `none` option and they must be in place before installation. In each record, `<cluster_name>` is the cluster name and `<base_domain>` is the base domain that you specify in the `install-config.yaml` file. A complete DNS record takes the form: `<component>.<cluster_name>.<base_domain>.`.

.Required DNS records
[cols="1a,3a,5a",options="header"]
|===

|Component
|Record
|Description

.2+a|Kubernetes API
|`api.<cluster_name>.<base_domain>.`
|A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

|`api-int.<cluster_name>.<base_domain>.`
|A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster.
[IMPORTANT]
====
The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods.
====

|Routes
|`*.apps.<cluster_name>.<base_domain>.`
|A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

For example, `console-openshift-console.apps.<cluster_name>.<base_domain>` is used as a wildcard route to the {product-title} console.

|Control plane machines
|`<master><n>.<cluster_name>.<base_domain>.`
|DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster.

|Compute machines
|`<worker><n>.<cluster_name>.<base_domain>.`
|DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster.

|===

[NOTE]
====
In {product-title} 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration.
====

[TIP]
====
You can use the `dig` command to verify name and reverse name resolution.
====
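
For example, the following `dig` queries, shown as a sketch with placeholder values rather than commands taken from this module, check one forward record and one reverse record against a specific DNS server:

[source,terminal]
----
# Forward lookup: expect the IP address configured for the record
$ dig +noall +answer api.<cluster_name>.<base_domain> @<dns_server_ip>

# Reverse lookup: expect the FQDN configured in the PTR record
$ dig +noall +answer -x <node_ip_address> @<dns_server_ip>
----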

[id="agent-install-dns-none-example_{context}"]
== Example DNS configuration for platform "none" clusters

This section provides A and PTR record configuration samples that meet the DNS requirements for deploying {product-title} using the platform `none` option. The samples are not meant to provide advice for choosing one DNS solution over another.

In the examples, the cluster name is `ocp4` and the base domain is `example.com`.

.Example DNS A record configuration for a platform "none" cluster

The following example is a BIND zone file that shows sample A records for name resolution in a cluster using the platform `none` option.

.Sample DNS zone database
[%collapsible]
====
[source,text]
----
$TTL 1W
@ IN SOA ns1.example.com. root (
   2019070700  ; serial
   3H          ; refresh (3 hours)
   30M         ; retry (30 minutes)
   2W          ; expiry (2 weeks)
   1W )        ; minimum (1 week)
  IN NS ns1.example.com.
  IN MX 10 smtp.example.com.
;
;
ns1.example.com.          IN A 192.168.1.5
smtp.example.com.         IN A 192.168.1.5
;
helper.example.com.       IN A 192.168.1.5
helper.ocp4.example.com.  IN A 192.168.1.5
;
api.ocp4.example.com.     IN A 192.168.1.5 <1>
api-int.ocp4.example.com. IN A 192.168.1.5 <2>
;
*.apps.ocp4.example.com.  IN A 192.168.1.5 <3>
;
master0.ocp4.example.com. IN A 192.168.1.97 <4>
master1.ocp4.example.com. IN A 192.168.1.98 <4>
master2.ocp4.example.com. IN A 192.168.1.99 <4>
;
worker0.ocp4.example.com. IN A 192.168.1.11 <5>
worker1.ocp4.example.com. IN A 192.168.1.7 <5>
;
;EOF
----

<1> Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer.
<2> Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications.
<3> Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
+
[NOTE]
=====
In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
=====
+
<4> Provides name resolution for the control plane machines.
<5> Provides name resolution for the compute machines.
====
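
As a verification sketch, assuming the zone above is loaded on the DNS server that your host is configured to use, forward lookups with `dig +short` return the addresses defined in the sample records:

[source,terminal]
----
$ dig +short api.ocp4.example.com
192.168.1.5

$ dig +short console-openshift-console.apps.ocp4.example.com
192.168.1.5

$ dig +short master0.ocp4.example.com
192.168.1.97
----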

.Example DNS PTR record configuration for a platform "none" cluster

The following example BIND zone file shows sample PTR records for reverse name resolution in a cluster using the platform `none` option.

.Sample DNS zone database for reverse records
[%collapsible]
====
[source,text]
----
$TTL 1W
@ IN SOA ns1.example.com. root (
   2019070700  ; serial
   3H          ; refresh (3 hours)
   30M         ; retry (30 minutes)
   2W          ; expiry (2 weeks)
   1W )        ; minimum (1 week)
  IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa.  IN PTR api.ocp4.example.com. <1>
5.1.168.192.in-addr.arpa.  IN PTR api-int.ocp4.example.com. <2>
;
97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. <3>
98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. <3>
99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. <3>
;
11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. <4>
7.1.168.192.in-addr.arpa.  IN PTR worker1.ocp4.example.com. <4>
;
;EOF
----

<1> Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer.
<2> Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications.
<3> Provides reverse DNS resolution for the control plane machines.
<4> Provides reverse DNS resolution for the compute machines.
====
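
Similarly, as a sketch that assumes the reverse zone above is served by your resolver, `dig +short -x` returns the host names defined in the sample PTR records:

[source,terminal]
----
$ dig +short -x 192.168.1.97
master0.ocp4.example.com.

$ dig +short -x 192.168.1.11
worker0.ocp4.example.com.
----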

[NOTE]
====
A PTR record is not required for the {product-title} application wildcard.
====
modules/agent-install-load-balancing-none.adoc

Lines changed: 190 additions & 0 deletions
@@ -0,0 +1,190 @@
:_content-type: CONCEPT
[id="agent-install-load-balancing-none_{context}"]
= Platform "none" load balancing requirements

Before you install {product-title}, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

[NOTE]
====
These requirements do not apply to single-node OpenShift clusters using the platform `none` option.
====

[NOTE]
====
If you want to deploy the API and application Ingress load balancers with a {op-system-base-full} instance, you must purchase the {op-system-base} subscription separately.
====

The load balancing infrastructure must meet the following requirements:

. *API load balancer*: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:
+
--
** Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes.
** A stateless load balancing algorithm. The options vary based on the load balancer implementation.
--
+
[IMPORTANT]
====
Do not configure session persistence for an API load balancer.
====
+
Configure the following ports on both the front and back of the load balancers:
+
.API load balancer
[cols="2,5,^2,^2,2",options="header"]
|===

|Port
|Back-end machines (pool members)
|Internal
|External
|Description

|`6443`
|Control plane. You must configure the `/readyz` endpoint for the API server health check probe.
|X
|X
|Kubernetes API server

|`22623`
|Control plane.
|X
|
|Machine config server

|===
+
[NOTE]
====
The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the `/readyz` endpoint to the removal of the API server instance from the pool. After `/readyz` starts returning an error or becomes healthy again, the endpoint must be removed from or added to the pool within that time frame. Probing intervals of 5 or 10 seconds, with two successful requests to mark an endpoint healthy and three failed requests to mark it unhealthy, are well-tested values.
====
+
. *Application Ingress load balancer*: Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an {product-title} cluster.
+
Configure the following conditions:
+
--
** Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes.
** Connection-based or session-based persistence is recommended, based on the options available and the types of applications that will be hosted on the platform.
--
+
[TIP]
====
If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.
====
+
Configure the following ports on both the front and back of the load balancers:
+
.Application Ingress load balancer
[cols="2,5,^2,^2,2",options="header"]
|===

|Port
|Back-end machines (pool members)
|Internal
|External
|Description

|`443`
|The machines that run the Ingress Controller pods. By default, these are the compute machines.
|X
|X
|HTTPS traffic

|`80`
|The machines that run the Ingress Controller pods. By default, these are the compute machines.
|X
|X
|HTTP traffic

|===
+
[NOTE]
====
If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
====

[id="agent-install-load-balancing-none-example_{context}"]
== Example load balancer configuration for platform "none" clusters

This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters using the platform `none` option. The sample is an `/etc/haproxy/haproxy.cfg` configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.

In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

[NOTE]
====
If you are using HAProxy as a load balancer and SELinux is set to `enforcing`, you must ensure that the HAProxy service can bind to the configured TCP port by running `setsebool -P haproxy_connect_any=1`.
====
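
For example, you can set and then verify the boolean as follows. This is a sketch that assumes a {op-system-base} host with the SELinux management utilities installed:

[source,terminal]
----
# Allow HAProxy to bind to arbitrary TCP ports; -P persists the change across reboots
$ sudo setsebool -P haproxy_connect_any=1

# Confirm that the boolean is now enabled
$ getsebool haproxy_connect_any
haproxy_connect_any --> on
----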

.Sample API and application Ingress load balancer configuration
[%collapsible]
====
[source,text]
----
global
  log         127.0.0.1 local2
  pidfile     /var/run/haproxy.pid
  maxconn     4000
  daemon
defaults
  mode                    http
  log                     global
  option                  dontlognull
  option http-server-close
  option                  redispatch
  retries                 3
  timeout http-request    10s
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 3000
listen api-server-6443 <1>
  bind *:6443
  mode tcp
  server master0 master0.ocp4.example.com:6443 check inter 1s
  server master1 master1.ocp4.example.com:6443 check inter 1s
  server master2 master2.ocp4.example.com:6443 check inter 1s
listen machine-config-server-22623 <2>
  bind *:22623
  mode tcp
  server master0 master0.ocp4.example.com:22623 check inter 1s
  server master1 master1.ocp4.example.com:22623 check inter 1s
  server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 <3>
  bind *:443
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:443 check inter 1s
  server worker1 worker1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 <4>
  bind *:80
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:80 check inter 1s
  server worker1 worker1.ocp4.example.com:80 check inter 1s
----

<1> Port `6443` handles the Kubernetes API traffic and points to the control plane machines.
<2> Port `22623` handles the machine config server traffic and points to the control plane machines.
<3> Port `443` handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
<4> Port `80` handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
+
[NOTE]
=====
If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
=====
====
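
The sample above uses basic TCP health checks (`check inter 1s`). The following sketch shows one way that the `/readyz` probing guidance from the requirements section might be expressed for the API listener. It is an illustrative variant rather than part of the sample, and it assumes an HAProxy version that supports HTTP health checks over TLS:

[source,text]
----
# Illustrative variant only: probe the kube-apiserver /readyz endpoint over TLS,
# checking every 10 seconds, with two successes to rise and three failures to fall
listen api-server-6443
  bind *:6443
  mode tcp
  option httpchk GET /readyz HTTP/1.0
  option log-health-checks
  balance roundrobin
  server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 3 rise 2
  server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 3 rise 2
  server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 3 rise 2
----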

[TIP]
====
If you are using HAProxy as a load balancer, you can check that the `haproxy` process is listening on ports `6443`, `22623`, `443`, and `80` by running `netstat -nltupe` on the HAProxy node.
====
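
For example, you can filter the output for the `haproxy` process. This is a sketch; `ss` is shown as an alternative for hosts that do not have the `net-tools` package installed:

[source,terminal]
----
# List listening TCP sockets and filter for the haproxy process
$ sudo netstat -nltupe | grep haproxy

# Alternative using ss
$ sudo ss -nltp | grep haproxy
----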
