Commit 4695864 (2 parents: 063aec6 + 6e0ccb0)

Merge pull request #39726 from johnwilkins/bz-1998580

BZ 1998580, BZ 2006334 and BZ 2007529 documentation fixes.

9 files changed: +85 additions, -71 deletions

installing/installing_bare_metal_ipi/ipi-install-overview.adoc

Lines changed: 4 additions & 4 deletions

@@ -7,13 +7,13 @@ Installer-provisioned installation provides support for installing {product-titl
 
 During installer-provisioned installation on bare metal, the installer on the bare metal node labeled as `provisioner` creates a bootstrap virtual machine (VM). The role of the bootstrap VM is to assist in the process of deploying an {product-title} cluster. The bootstrap VM connects to the `baremetal` network and to the `provisioning` network, if present, via the network bridges.
 
-image::71_OpenShift_4.6_Baremetal_IPI_Deployment_1020_1.svg[Deployment phase one]
+image::210_OpenShift_Baremetal_IPI_Deployment_updates_0122_1.png[Deployment phase one]
 
-When the installation of OpenShift control plane nodes is complete and fully operational, the installer destroys the bootstrap VM automatically and moves the virtual IP addresses (VIPs) to the control plane nodes.
+When the installation of {product-title} is complete and fully operational, the installer destroys the bootstrap VM automatically and moves the virtual IP addresses (VIPs) to the appropriate nodes. The API VIP moves to the control plane nodes and the Ingress VIP moves to the worker nodes.
 
-image::161_OpenShift_Baremetal_IPI_Deployment_updates_0521.svg[Deployment phase two]
+image::210_OpenShift_Baremetal_IPI_Deployment_updates_0122_2.png[Deployment phase two]
 
 [IMPORTANT]
 ====
-The `provisioning` network is optional, but it is required for PXE booting. If you deploy without a `provisioning` network, you must use a virtual media BMC addressing option such as `redfish-virtualmedia` or `idrac-virtualmedia`.
+The `provisioning` network is optional, but it is required for PXE booting. If you deploy without a `provisioning` network, you must use a virtual media BMC addressing option such as `redfish-virtualmedia` or `idrac-virtualmedia`.
 ====

modules/ipi-install-additional-install-config-parameters.adoc

Lines changed: 6 additions & 4 deletions

@@ -95,14 +95,16 @@ a| `provisioningNetworkInterface` | | The name of the network interface on node
 
 | `defaultMachinePlatform` | | The default configuration used for machine pools without a platform configuration.
 
-| `apiVIP` | `api.<clustername.clusterdomain>` | The VIP to use for internal API communication.
+| `apiVIP` | | (Optional) The virtual IP address for Kubernetes API communication.
 
-This setting must either be provided or pre-configured in the DNS so that the
-default name resolves correctly.
+This setting must either be provided in the `install-config.yaml` file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the `apiVIP` configuration setting in the `install-config.yaml` file. The IP address must be from the primary IPv4 network when using dual stack networking. If not set, the installer uses `api.<cluster_name>.<base_domain>` to derive the IP address from the DNS.
 
 | `disableCertificateVerification` | `False` | `redfish` and `redfish-virtualmedia` need this parameter to manage BMC addresses. The value should be `True` when using a self-signed certificate for BMC addresses.
 
-| `ingressVIP` | `test.apps.<clustername.clusterdomain>` | The VIP to use for ingress traffic.
+| `ingressVIP` | | (Optional) The virtual IP address for ingress traffic.
+
+This setting must either be provided in the `install-config.yaml` file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the `ingressVIP` configuration setting in the `install-config.yaml` file. The IP address must be from the primary IPv4 network when using dual stack networking. If not set, the installer uses `test.apps.<cluster_name>.<base_domain>` to derive the IP address from the DNS.
 
 |===

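The new `apiVIP` and `ingressVIP` descriptions can be illustrated with a minimal `install-config.yaml` fragment. This is a sketch, not part of the commit: the addresses and CIDR below are hypothetical placeholders, and the placement under `platform.baremetal` follows the installer's documented layout.

```yaml
networking:
  machineNetwork:
  - cidr: 192.168.1.0/24        # hypothetical baremetal subnet
platform:
  baremetal:
    # Reserved IP addresses from the machine network, not FQDNs.
    apiVIP: 192.168.1.5          # placeholder value
    ingressVIP: 192.168.1.6      # placeholder value
```

If both settings are omitted, the installer falls back to resolving `api.<cluster_name>.<base_domain>` and `test.apps.<cluster_name>.<base_domain>` in DNS, as the table describes.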
modules/ipi-install-configure-network-components-to-run-on-the-control-plane.adoc

Lines changed: 6 additions & 34 deletions

@@ -3,9 +3,11 @@
 // ipi-install-configuration-files.adoc
 [id='configure-network-components-to-run-on-the-control-plane_{context}']
 
-= Configure network components to run on the control plane
+= (Optional) Configure network components to run on the control plane
 
-Configure networking components to run exclusively on the control plane nodes. By default, {product-title} allows any node in the machine config pool to host the `apiVIP` and `ingressVIP` virtual IP addresses. However, many environments deploy worker nodes in separate subnets from the control plane nodes. Consequently, you must place the `apiVIP` and `ingressVIP` virtual IP addresses exclusively with the control plane nodes.
+You can configure networking components to run exclusively on the control plane nodes. By default, {product-title} allows any node in the machine config pool to host the `ingressVIP` virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes. When deploying remote workers in separate subnets, you must place the `ingressVIP` virtual IP address exclusively with the control plane nodes.
+
+image::161_OpenShift_Baremetal_IPI_Deployment_updates_0521.svg[Installer-provisioned networking]
 
 .Procedure
 
@@ -44,38 +46,15 @@ spec:
   config:
     ignition:
       version: 3.2.0
-    systemd:
-      units:
-      - name: nodeip-configuration.service
-        enabled: true
-        contents: |
-          [Unit]
-          Description=Writes IP address configuration so that kubelet and crio services select a valid node IP
-          Wants=network-online.target
-          After=network-online.target ignition-firstboot-complete.service
-          Before=kubelet.service crio.service
-          [Service]
-          Type=oneshot
-          ExecStart=/bin/bash -c "exit 0 "
-          [Install]
-          WantedBy=multi-user.target
     storage:
       files:
      - path: /etc/kubernetes/manifests/keepalived.yaml
        mode: 0644
        contents:
          source: data:,
-     - path: /etc/kubernetes/manifests/mdns-publisher.yaml
-       mode: 0644
-       contents:
-         source: data:,
-     - path: /etc/kubernetes/manifests/coredns.yaml
-       mode: 0644
-       contents:
-         source: data:,
 ----
 +
-This manifest places the `apiVIP` and `ingressVIP` virtual IP addresses on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only:
+This manifest places the `ingressVIP` virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only:
 +
 * `openshift-ingress-operator`
 +

@@ -109,12 +88,5 @@ $ sed -i "s;mastersSchedulable: false;mastersSchedulable: true;g" clusterconfigs
 +
 [NOTE]
 ====
-If control plane nodes are not schedulable, deploying the cluster will fail.
-====
-
-. Before deploying the cluster, ensure that the `api.<cluster-name>.<domain>` domain name is resolvable in the external DNS server. When you configure network components to run exclusively on the control plane, the internal DNS resolution no longer works for worker nodes, which is an expected outcome.
-+
-[IMPORTANT]
-====
-Failure to create a DNS record for the `api.<cluster-name>.<domain>` domain name in the external DNS server precludes worker nodes from joining the cluster.
+If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail.
 ====

modules/ipi-install-network-requirements.adoc

Lines changed: 56 additions & 18 deletions

@@ -7,6 +7,8 @@
 
 Installer-provisioned installation of {product-title} involves several network requirements. First, installer-provisioned installation involves an optional non-routable `provisioning` network for provisioning the operating system on each bare metal node. Second, installer-provisioned installation involves a routable `baremetal` network.
 
+image::210_OpenShift_Baremetal_IPI_Deployment_updates_0122_2.png[Installer-provisioned networking]
+
 [discrete]
 == Configuring NICs
 
@@ -26,25 +28,56 @@ When using a VLAN, each NIC must be on a separate VLAN corresponding to the appr
 ====
 
 [discrete]
-== Configuring the DNS server
+== DNS requirements
 
 Clients access the {product-title} cluster nodes over the `baremetal` network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.
 
+[source,text]
 ----
-<cluster_name>.<domain>
+<cluster_name>.<base_domain>
 ----
 
 For example:
 
+[source,text]
 ----
 test-cluster.example.com
 ----
 
 {product-title} includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.
 
-[IMPORTANT]
+In {product-title} deployments, DNS name resolution is required for the following components:
+
+* The Kubernetes API
+* The {product-title} application wildcard ingress API
+
+A/AAAA records are used for name resolution and PTR records are used for reverse name resolution. {op-system-first} uses the reverse records or DHCP to set the hostnames for all the nodes.
+
+Installer-provisioned installation includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. In each record, `<cluster_name>` is the cluster name and `<base_domain>` is the base domain that you specify in the `install-config.yaml` file. A complete DNS record takes the form: `<component>.<cluster_name>.<base_domain>.`.
+
+.Required DNS records
+[cols="1a,3a,5a",options="header"]
+|===
+
+|Component
+|Record
+|Description
+
+.2+a|Kubernetes API
+|`api.<cluster_name>.<base_domain>.`
+|An A/AAAA record, and a PTR record, identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
+
+|Routes
+|`*.apps.<cluster_name>.<base_domain>.`
+|The wildcard A/AAAA record refers to the application ingress load balancer. The application ingress load balancer targets the nodes that run the Ingress Controller pods. The Ingress Controller pods run on the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
+
+For example, `console-openshift-console.apps.<cluster_name>.<base_domain>` is used as a wildcard route to the {product-title} console.
+
+|===
+
+[TIP]
 ====
-You must create a DNS entry for the `api.<cluster_name>.<domain>` domain name on the external DNS because removing CoreDNS causes the local entry to disappear. Failure to create a DNS record for the `api.<cluster_name>.<domain>` domain name in the external DNS server precludes worker nodes from joining the cluster.
+You can use the `dig` command to verify DNS resolution.
 ====
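The `dig` verification mentioned in the new TIP might look like the following, using the `test-cluster.example.com` example domain from this section. These commands are illustrative only; substitute your own record names and node IP addresses, and run them against a DNS server that actually hosts the zone.

```shell
# Forward lookup of the Kubernetes API record (domain is an example value).
dig +short api.test-cluster.example.com A

# Wildcard ingress record, checked via a representative application host name.
dig +short console-openshift-console.apps.test-cluster.example.com A

# Reverse (PTR) lookup for a node IP, which the OS can use to set hostnames
# (192.168.1.20 is a placeholder address).
dig +short -x 192.168.1.20
```

Empty output from any of these queries indicates a missing record.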
 
 [discrete]
@@ -59,10 +92,10 @@ Network administrators must reserve IP addresses for each node in the {product-t
 
 For the `baremetal` network, a network administrator must reserve a number of IP addresses, including:
 
-. Two virtual IP addresses.
+. Two unique virtual IP addresses.
 +
-- One IP address for the API endpoint
-- One IP address for the wildcard ingress endpoint
+- One virtual IP address for the API endpoint.
+- One virtual IP address for the wildcard ingress endpoint.
 +
 . One IP address for the provisioner node.
 . One IP address for each control plane (master) node.
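The reservation rule above (VIPs must be addresses from the machine network) can be sanity-checked programmatically. A minimal sketch using Python's standard `ipaddress` module; all concrete values are hypothetical placeholders, not values from this commit:

```python
# Sketch: verify that virtual IP addresses (VIPs) fall inside the machine
# network CIDR, per the IP reservation requirements described above.
import ipaddress


def vip_in_machine_network(vip: str, machine_network_cidr: str) -> bool:
    """Return True if the VIP is an address inside the machine network."""
    network = ipaddress.ip_network(machine_network_cidr, strict=True)
    return ipaddress.ip_address(vip) in network


if __name__ == "__main__":
    machine_network = "192.168.1.0/24"  # hypothetical baremetal subnet
    for name, vip in (("apiVIP", "192.168.1.5"), ("ingressVIP", "192.168.1.6")):
        ok = vip_in_machine_network(vip, machine_network)
        print(f"{name} {vip} in {machine_network}: {ok}")
```

A check like this catches the common mistake of reserving VIPs from a routable subnet other than the one the cluster nodes use.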
@@ -85,17 +118,22 @@ The following table provides an exemplary embodiment of fully qualified domain n
 [width="100%", cols="3,5,2", options="header"]
 |=====
 | Usage | Host Name | IP
-| API | `api.<cluster_name>.<domain>` | `<ip>`
-| Ingress LB (apps) | `*.apps.<cluster_name>.<domain>` | `<ip>`
-| Provisioner node | `provisioner.<cluster_name>.<domain>` | `<ip>`
-| Master-0 | `openshift-master-0.<cluster_name>.<domain>` | `<ip>`
-| Master-1 | `openshift-master-1.<cluster_name>-.<domain>` | `<ip>`
-| Master-2 | `openshift-master-2.<cluster_name>.<domain>` | `<ip>`
-| Worker-0 | `openshift-worker-0.<cluster_name>.<domain>` | `<ip>`
-| Worker-1 | `openshift-worker-1.<cluster_name>.<domain>` | `<ip>`
-| Worker-n | `openshift-worker-n.<cluster_name>.<domain>` | `<ip>`
+| API | `api.<cluster_name>.<base_domain>` | `<ip>`
+| Ingress LB (apps) | `*.apps.<cluster_name>.<base_domain>` | `<ip>`
+| Provisioner node | `provisioner.<cluster_name>.<base_domain>` | `<ip>`
+| Master-0 | `openshift-master-0.<cluster_name>.<base_domain>` | `<ip>`
+| Master-1 | `openshift-master-1.<cluster_name>.<base_domain>` | `<ip>`
+| Master-2 | `openshift-master-2.<cluster_name>.<base_domain>` | `<ip>`
+| Worker-0 | `openshift-worker-0.<cluster_name>.<base_domain>` | `<ip>`
+| Worker-1 | `openshift-worker-1.<cluster_name>.<base_domain>` | `<ip>`
+| Worker-n | `openshift-worker-n.<cluster_name>.<base_domain>` | `<ip>`
 |=====
 
+[NOTE]
+====
+If you do not create DHCP reservations, the installer requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes.
+====
+
 [discrete]
 == Network Time Protocol (NTP)
 
@@ -106,7 +144,7 @@ Each {product-title} node in the cluster must have access to an NTP server. {pro
 Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail.
 ====
 
-You may reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.
+You can reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.

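The disconnected-cluster NTP arrangement described above could be sketched with `chrony` configuration fragments like the following. The subnet and host names are assumptions for illustration, and the exact delivery mechanism (for example, via a MachineConfig) is out of scope here:

```
# /etc/chrony.conf on a control plane node (sketch; subnet is hypothetical)
# Serve time to cluster nodes even when upstream NTP sources are unreachable.
allow 192.168.1.0/24
local stratum 10

# /etc/chrony.conf on a worker node (sketch; host names are hypothetical)
server openshift-master-0.test-cluster.example.com iburst
server openshift-master-1.test-cluster.example.com iburst
server openshift-master-2.test-cluster.example.com iburst
```

The `local stratum` directive lets the control plane nodes keep serving their own clock on a disconnected network; `iburst` speeds up initial synchronization on the workers.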
 [discrete]
 == State-driven network configuration requirements (Technology Preview)
@@ -123,4 +161,4 @@ State-driven network configuration requires installing `kubernetes-nmstate`, and
 [discrete]
 == Port access for the out-of-band management IP address
 
-The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the `baremetal` node during installation, the out-of-band management IP address address must be granted access to the TCP 6180 port.
+The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the `baremetal` node during installation, the out-of-band management IP address must be granted access to the TCP 6180 port.
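Granting the out-of-band management network access to TCP port 6180 could be done with a firewalld rule on the host that serves that port. A hedged sketch; the zone name is an assumption, and your environment may enforce this on a network firewall instead:

```shell
# Sketch: permit inbound TCP 6180 (run as root; "public" zone is assumed).
firewall-cmd --zone=public --add-port=6180/tcp --permanent
firewall-cmd --reload

# Confirm the port is listed for the zone.
firewall-cmd --zone=public --list-ports
```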

modules/ipi-install-out-of-band-management.adoc

Lines changed: 3 additions & 3 deletions

@@ -6,8 +6,8 @@
 [id="out-of-band-management_{context}"]
 = Out-of-band management
 
-Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the `provisioner` node.
+Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the provisioner node.
 
-Each node must be accessible via out-of-band management. When using an out-of-band management network, the `provisioner` node requires access to the out-of-band management network for a successful {product-title} 4 installation.
+Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful {product-title} 4 installation.
 
-The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the `provisioning` network or the `baremetal` network are valid options.
+The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the provisioning network or the baremetal network are valid options.

modules/ipi-install-required-data-for-installation.adoc

Lines changed: 3 additions & 3 deletions

@@ -15,9 +15,9 @@ Prior to the installation of the {product-title} cluster, gather the following i
 
 .When using the `provisioning` network
 
-* NIC1 (`provisioning`) MAC address
-* NIC2 (`baremetal`) MAC address
+* NIC (`provisioning`) MAC address
+* NIC (`baremetal`) MAC address
 
 .When omitting the `provisioning` network
 
-* NICx (`baremetal`) MAC address
+* NIC (`baremetal`) MAC address

modules/ipi-install-validation-checklist-for-nodes.adoc

Lines changed: 7 additions & 5 deletions

@@ -8,19 +8,21 @@
 
 .When using the `provisioning` network
 
-* [ ] NIC1 VLAN is configured for the `provisioning` network. (optional)
-* [ ] NIC1 is PXE-enabled on the provisioner, control plane (master), and worker nodes when using a `provisioning` network. (optional)
+* [ ] NIC1 VLAN is configured for the `provisioning` network.
+* [ ] NIC1 for the `provisioning` network is PXE-enabled on the provisioner, control plane (master), and worker nodes.
 * [ ] NIC2 VLAN is configured for the `baremetal` network.
 * [ ] PXE has been disabled on all other NICs.
+* [ ] DNS is configured with API and Ingress endpoints.
 * [ ] Control plane and worker nodes are configured.
 * [ ] All nodes accessible via out-of-band management.
-* [ ] A separate management network has been created. (optional)
+* [ ] (Optional) A separate management network has been created.
 * [ ] Required data for installation.
 
 .When omitting the `provisioning` network
 
-* [ ] NICx VLAN is configured for the `baremetal` network.
+* [ ] NIC1 VLAN is configured for the `baremetal` network.
+* [ ] DNS is configured with API and Ingress endpoints.
 * [ ] Control plane and worker nodes are configured.
 * [ ] All nodes accessible via out-of-band management.
-* [ ] A separate management network has been created. (optional)
+* [ ] (Optional) A separate management network has been created.
 * [ ] Required data for installation.
