installing/installing_bare_metal_ipi/ipi-install-overview.adoc (4 additions, 4 deletions)
@@ -7,13 +7,13 @@ Installer-provisioned installation provides support for installing {product-titl
During installer-provisioned installation on bare metal, the installer on the bare metal node labeled as `provisioner` creates a bootstrap virtual machine (VM). The role of the bootstrap VM is to assist in the process of deploying an {product-title} cluster. The bootstrap VM connects to the `baremetal` network and to the `provisioning` network, if present, via the network bridges.
-When the installation of OpenShift control plane nodes is complete and fully operational, the installer destroys the bootstrap VM automatically and moves the virtual IP addresses (VIPs) to the control plane nodes.
+When the installation of {product-title} is complete and fully operational, the installer destroys the bootstrap VM automatically and moves the virtual IP addresses (VIPs) to the appropriate nodes. The API VIP moves to the control plane nodes and the Ingress VIP moves to the worker nodes.
+The `provisioning` network is optional, but it is required for PXE booting. If you deploy without a `provisioning` network, you must use a virtual media BMC addressing option such as `redfish-virtualmedia` or `idrac-virtualmedia`.
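As a hedged sketch of the virtual media option described above, a host entry in `install-config.yaml` might look like the following; the host name, BMC address, and credentials are placeholders, not values from this document:

```yaml
# Hypothetical install-config.yaml fragment: one bare metal host using
# virtual media BMC addressing, with no provisioning network present.
platform:
  baremetal:
    provisioningNetwork: "Disabled"
    hosts:
      - name: openshift-master-0          # placeholder host name
        role: master
        bmc:
          address: redfish-virtualmedia://192.168.1.100/redfish/v1/Systems/1
          username: admin                  # placeholder credentials
          password: <password>
```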
modules/ipi-install-additional-install-config-parameters.adoc (6 additions, 4 deletions)
@@ -95,14 +95,16 @@ a| `provisioningNetworkInterface` | | The name of the network interface on node
| `defaultMachinePlatform` | | The default configuration used for machine pools without a platform configuration.
-| `apiVIP` | `api.<clustername.clusterdomain>` | The VIP to use for internal API communication.
+| `apiVIP` | | (Optional) The virtual IP address for Kubernetes API communication.
-This setting must either be provided or pre-configured in the DNS so that the
-default name resolves correctly.
+This setting must either be provided in the `install-config.yaml` file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the `apiVIP` configuration setting in the `install-config.yaml` file. The IP address must be from the primary IPv4 network when using dual-stack networking. If not set, the installer uses `api.<cluster_name>.<base_domain>` to derive the IP address from the DNS.
| `disableCertificateVerification` | `False` | `redfish` and `redfish-virtualmedia` need this parameter to manage BMC addresses. The value should be `True` when using a self-signed certificate for BMC addresses.
-| `ingressVIP` | `test.apps.<clustername.clusterdomain>` | The VIP to use for ingress traffic.
+| `ingressVIP` | | (Optional) The virtual IP address for ingress traffic.
+
+This setting must either be provided in the `install-config.yaml` file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the `ingressVIP` configuration setting in the `install-config.yaml` file. The IP address must be from the primary IPv4 network when using dual-stack networking. If not set, the installer uses `test.apps.<cluster_name>.<base_domain>` to derive the IP address from the DNS.
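A hedged sketch of the two settings described above; the CIDR and addresses are illustrative placeholders, not values from this document:

```yaml
# Hypothetical install-config.yaml fragment: both VIPs reserved from
# the machine network rather than resolved from DNS.
networking:
  machineNetwork:
    - cidr: 192.168.1.0/24
platform:
  baremetal:
    apiVIP: 192.168.1.5       # serves api.<cluster_name>.<base_domain>
    ingressVIP: 192.168.1.10  # serves *.apps.<cluster_name>.<base_domain>
```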
-= Configure network components to run on the control plane
+= (Optional) Configure network components to run on the control plane
-Configure networking components to run exclusively on the control plane nodes. By default, {product-title} allows any node in the machine config pool to host the `apiVIP` and `ingressVIP` virtual IP addresses. However, many environments deploy worker nodes in separate subnets from the control plane nodes. Consequently, you must place the `apiVIP` and `ingressVIP` virtual IP addresses exclusively with the control plane nodes.
+You can configure networking components to run exclusively on the control plane nodes. By default, {product-title} allows any node in the machine config pool to host the `ingressVIP` virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes. When deploying remote workers in separate subnets, you must place the `ingressVIP` virtual IP address exclusively with the control plane nodes.
-This manifest places the `apiVIP` and `ingressVIP` virtual IP addresses on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only:
+This manifest places the `ingressVIP` virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only:
-If control plane nodes are not schedulable, deploying the cluster will fail.
-====
-
-. Before deploying the cluster, ensure that the `api.<cluster-name>.<domain>` domain name is resolvable in the external DNS server. When you configure network components to run exclusively on the control plane, the internal DNS resolution no longer works for worker nodes, which is an expected outcome.
-
-[IMPORTANT]
-====
-Failure to create a DNS record for the `api.<cluster-name>.<domain>` domain name in the external DNS server precludes worker nodes from joining the cluster.
+If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail.
modules/ipi-install-network-requirements.adoc (56 additions, 18 deletions)
@@ -7,6 +7,8 @@
Installer-provisioned installation of {product-title} involves several network requirements. First, installer-provisioned installation involves an optional non-routable `provisioning` network for provisioning the operating system on each bare metal node. Second, installer-provisioned installation involves a routable `baremetal` network.
@@ -26,25 +28,56 @@ When using a VLAN, each NIC must be on a separate VLAN corresponding to the appr
====
[discrete]
-== Configuring the DNS server
+== DNS requirements
Clients access the {product-title} cluster nodes over the `baremetal` network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.
+[source,text]
----
-<cluster_name>.<domain>
+<cluster_name>.<base_domain>
----
For example:

+[source,text]
----
test-cluster.example.com
----
{product-title} includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.
-[IMPORTANT]
+In {product-title} deployments, DNS name resolution is required for the following components:
+
+* The Kubernetes API
+* The {product-title} application wildcard ingress API
+
+A/AAAA records are used for name resolution and PTR records are used for reverse name resolution. {op-system-first} uses the reverse records or DHCP to set the hostnames for all the nodes.
+
+Installer-provisioned installation includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. In each record, `<cluster_name>` is the cluster name and `<base_domain>` is the base domain that you specify in the `install-config.yaml` file. A complete DNS record takes the form: `<component>.<cluster_name>.<base_domain>.`.
+
+.Required DNS records
+[cols="1a,3a,5a",options="header"]
+|===
+
+|Component
+|Record
+|Description
+
+.2+a|Kubernetes API
+|`api.<cluster_name>.<base_domain>.`
+|An A/AAAA record, and a PTR record, identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
+
+|Routes
+|`*.apps.<cluster_name>.<base_domain>.`
+|The wildcard A/AAAA record refers to the application ingress load balancer. The application ingress load balancer targets the nodes that run the Ingress Controller pods. The Ingress Controller pods run on the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
+
+For example, `console-openshift-console.apps.<cluster_name>.<base_domain>` is used as a wildcard route to the {product-title} console.
+
+|===
+
+[TIP]
====
-You must create a DNS entry for the `api.<cluster_name>.<domain>` domain name on the external DNS because removing CoreDNS causes the local entry to disappear. Failure to create a DNS record for the `api.<cluster_name>.<domain>` domain name in the external DNS server precludes worker nodes from joining the cluster.
+You can use the `dig` command to verify DNS resolution.
====
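For illustration, using the `test-cluster.example.com` example from this module (the IP addresses are placeholders), the two required records could appear in a BIND-style forward zone as:

```text
; Hypothetical forward zone fragment for test-cluster.example.com
api.test-cluster.example.com.     IN A 192.168.1.5
*.apps.test-cluster.example.com.  IN A 192.168.1.10
```

Resolution of each record can then be checked with `dig +short api.test-cluster.example.com`.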
[discrete]
@@ -59,10 +92,10 @@ Network administrators must reserve IP addresses for each node in the {product-t
For the `baremetal` network, a network administrator must reserve a number of IP addresses, including:
-. Two virtual IP addresses.
+. Two unique virtual IP addresses.
+
-- One IP address for the API endpoint
-- One IP address for the wildcard ingress endpoint
+- One virtual IP address for the API endpoint.
+- One virtual IP address for the wildcard ingress endpoint.
. One IP address for the provisioner node.
. One IP address for each control plane (master) node.
@@ -85,17 +118,22 @@ The following table provides examples of fully qualified domain names
If you do not create DHCP reservations, the installer requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes.
+====
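Assuming a dnsmasq-based DHCP service (the MAC addresses, hostnames, and IPs below are placeholders, not values from this document), reservations of the kind described above might look like:

```text
# Hypothetical dnsmasq DHCP reservations: each node always receives
# the same IP address and hostname across reboots.
dhcp-host=52:54:00:aa:bb:c0,openshift-master-0,192.168.1.20
dhcp-host=52:54:00:aa:bb:c1,openshift-master-1,192.168.1.21
dhcp-host=52:54:00:aa:bb:c2,openshift-master-2,192.168.1.22
```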
[discrete]
== Network Time Protocol (NTP)
@@ -106,7 +144,7 @@ Each {product-title} node in the cluster must have access to an NTP server. {pro
Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail.
====
-You may reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.
+You can reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.
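A minimal sketch of that arrangement, assuming chrony and a placeholder subnet, might add the following to `/etc/chrony.conf` on the control plane nodes:

```text
# Hypothetical chrony.conf fragment for a control plane node acting as
# an NTP server on a disconnected cluster (subnet is a placeholder).
allow 192.168.1.0/24    # serve time to cluster nodes on this subnet
local stratum 3 orphan  # keep serving time even without upstream sources
```

Worker nodes would then list the control plane nodes as time sources with `server <control-plane-ip> iburst`.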
== Port access for the out-of-band management IP address
The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the `baremetal` node during installation, the out-of-band management IP address must be granted access to the TCP 6180 port.
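Assuming the traffic is filtered by firewalld (the zone name is an assumption, not from this document), granting that access might look like:

```text
# Hypothetical firewalld commands to allow the out-of-band management
# network to reach TCP port 6180, run on the host filtering the traffic.
firewall-cmd --zone=public --add-port=6180/tcp --permanent
firewall-cmd --reload
```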
modules/ipi-install-out-of-band-management.adoc (3 additions, 3 deletions)
@@ -6,8 +6,8 @@
[id="out-of-band-management_{context}"]
= Out-of-band management

-Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the `provisioner` node.
+Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the provisioner node.

-Each node must be accessible via out-of-band management. When using an out-of-band management network, the `provisioner` node requires access to the out-of-band management network for a successful {product-title} 4 installation.
+Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful {product-title} 4 installation.

-The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the `provisioning` network or the `baremetal` network are valid options.
+The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the provisioning network or the baremetal network are valid options.