
Commit b78372d

Merge pull request #34121 from johnwilkins/network-requirements-feedback
IPI network requirements feedback
2 parents a41e7ef + a63ef26 commit b78372d

6 files changed (+85, -69 lines)

installing/installing_bare_metal_ipi/ipi-install-prerequisites.adoc

Lines changed: 6 additions & 6 deletions
@@ -13,15 +13,15 @@ ifndef::openshift-origin[. One provisioner node with {op-system-base-full} 8.x i
 . Baseboard Management Controller (BMC) access to each node.
 ifeval::[{product-version} > 4.5]
 . At least one network:
-.. One *required* routable network
-.. One *optional* network for provisioning nodes; and,
-.. One *optional* management network.
+.. One required routable network
+.. One optional network for provisioning nodes; and,
+.. One optional management network.
 endif::[]
 ifeval::[{product-version} < 4.6]
 . At least two networks:
-.. One *required* routable network
-.. One *required* network for provisioning nodes; and,
-.. One *optional* management network.
+.. One required routable network
+.. One required network for provisioning nodes; and,
+.. One optional management network.
 endif::[]

 Before starting an installer-provisioned installation of {product-title}, ensure the hardware environment meets the following requirements.

modules/ipi-install-network-requirements.adoc

Lines changed: 22 additions & 28 deletions
@@ -5,33 +5,7 @@
 [id='network-requirements_{context}']
 = Network requirements

-Installer-provisioned installation of {product-title} involves several network requirements by default. First, installer-provisioned installation involves a non-routable `provisioning` network for provisioning the operating system on each bare metal node and a routable `baremetal` network. Since installer-provisioned installation deploys `ironic-dnsmasq`, the networks should have no other DHCP servers running on the same broadcast domain. Network administrators must reserve IP addresses for each node in the {product-title} cluster.
-
-ifeval::[{product-version} > 4.7]
-{product-title} 4.8 and later releases include functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. Once the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.
-endif::[]
-
-.Network Time Protocol (NTP)
-
-ifeval::[{product-version} <= 4.7]
-Each {product-title} node in the cluster must have access to an NTP server. {product-title} nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.
-
-[IMPORTANT]
-====
-Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail.
-====
-endif::[]
-
-ifeval::[{product-version} > 4.7]
-Each {product-title} node in the cluster must have access to an NTP server. {product-title} nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.
-
-[IMPORTANT]
-====
-Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail.
-====
-
-In {product-title} 4.8 and later releases, you may reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.
-endif::[]
+Installer-provisioned installation of {product-title} involves several network requirements. First, installer-provisioned installation involves an optional non-routable `provisioning` network for provisioning the operating system on each bare metal node. Second, installer-provisioned installation involves a routable `baremetal` network.

 .Configuring NICs

@@ -73,6 +47,10 @@ For example:
 test-cluster.example.com
 ----

+ifeval::[{product-version} > 4.7]
+{product-title} 4.8 and later releases include functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. Once the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.
+endif::[]
+
 ifdef::upstream[]
 For assistance in configuring the DNS server, check xref:ipi-install-upstream-appendix[Appendix] section for:

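A quick way to confirm that the DNS records described in this module resolve correctly is to query them from a host that uses the same DNS server. A minimal sketch, assuming the example domain `test-cluster.example.com` shown above and the usual `api` and wildcard `*.apps` names; `foo` is just a placeholder under the wildcard record:

[source,terminal]
----
$ dig +short api.test-cluster.example.com
$ dig +short foo.apps.test-cluster.example.com
----

Both queries should return the addresses reserved for the cluster endpoints.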
@@ -81,6 +59,11 @@ For assistance in configuring the DNS server, check xref:ipi-install-upstream-ap

 endif::[]

+.Dynamic Host Configuration Protocol (DHCP) requirements
+
+By default, installer-provisioned installation deploys `ironic-dnsmasq` with DHCP enabled for the `provisioning` network. No other DHCP servers should be running on the `provisioning` network when the `provisioningNetwork` configuration setting is set to `managed`, which is the default value. If you have a DHCP server running on the `provisioning` network, you must set the `provisioningNetwork` configuration setting to `unmanaged` in the `install-config.yaml` file.
+
+Network administrators must reserve IP addresses for each node in the {product-title} cluster for the `baremetal` network on an external DHCP server.

 .Reserving IP addresses for nodes with the DHCP server

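The new DHCP paragraph refers to the `provisioningNetwork` setting in `install-config.yaml`. A minimal sketch of where that setting sits, assuming an external DHCP server already serves the `provisioning` network; the value casing and surrounding fields should be checked against the installation configuration reference for your release:

[source,yaml]
----
platform:
  baremetal:
    # Assumption for illustration: an external DHCP server owns the provisioning
    # network, so ironic-dnsmasq must not manage DHCP on it.
    provisioningNetwork: unmanaged
----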
@@ -107,7 +90,7 @@ ifeval::[{product-version} > 4.6]
 [IMPORTANT]
 .Reserving IP addresses so they become static IP addresses
 ====
-Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To use static IP addresses in the {product-title} cluster, *reserve the IP addresses with an infinite lease*. During deployment, the installer will reconfigure the NICs from DHCP assigned addresses to static IP addresses. NICs with DHCP leases that are not infinite will remain configured to use DHCP.
+Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To use static IP addresses in the {product-title} cluster, reserve the IP addresses with an infinite lease. During deployment, the installer will reconfigure the NICs from DHCP assigned addresses to static IP addresses. NICs with DHCP leases that are not infinite will remain configured to use DHCP.
 ====
 endif::[]

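If the external DHCP server happens to be `dnsmasq` (as in the appendix options referenced below), an infinite-lease reservation is a one-line entry. A sketch with placeholder MAC address, hostname, and IP address:

[source,text]
----
# Illustrative reservation with an infinite lease, so the installer converts it to a static IP
dhcp-host=52:54:00:aa:bb:cc,openshift-master-0.example.com,192.168.1.20,infinite
----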
@@ -145,6 +128,17 @@ For assistance in configuring the DHCP server, check xref:ipi-install-upstream-a
 - xref:creating-dhcp-reservations-using-dnsmasq-option2_{context}[Creating DHCP reservations with dnsmasq (Option 2)]
 endif::[]

+.Network Time Protocol (NTP)
+
+Each {product-title} node in the cluster must have access to an NTP server. {product-title} nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.
+
+[IMPORTANT]
+====
+Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail.
+====
+
+You may reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.
+
 ifeval::[{product-version} == 4.6]
 .Additional requirements with no provisioning network

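The relocated NTP block mentions reconfiguring control plane nodes as NTP servers on disconnected clusters. As a hedged illustration only: with `chrony`, that arrangement amounts to the control plane allowing client queries and the workers pointing at the control plane. The subnet and host names below are placeholders, and in practice these settings are delivered through `MachineConfig` resources rather than by editing files by hand.

[source,text]
----
# Control plane chrony.conf sketch: serve time to the cluster subnet
allow 10.0.0.0/24
local stratum 3

# Worker chrony.conf sketch: take time from the control plane nodes
server master-0.example.com iburst
server master-1.example.com iburst
server master-2.example.com iburst
----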
modules/ipi-install-node-requirements.adoc

Lines changed: 16 additions & 17 deletions
@@ -1,52 +1,51 @@
 // Module included in the following assemblies:
 //
 // * installing/installing_bare_metal_ipi/ipi-install-prerequisites.adoc
-
-[id='node-requirements_{context}']
-
+:product-version: 4.8
+[id="node-requirements_{context}"]
 = Node requirements

 Installer-provisioned installation involves a number of hardware node requirements:

-- *CPU architecture:* All nodes must use `x86_64` CPU architecture.
+* *CPU architecture:* All nodes must use `x86_64` CPU architecture.

-- *Similar nodes:* Red Hat recommends nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory, and storage configuration.
+* *Similar nodes:* Red Hat recommends nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory, and storage configuration.

 ifeval::[{product-version} < 4.5]
-- *Intelligent Platform Management Interface (IPMI):* Installer-provisioned installation requires IPMI enabled on each node.
+* *Intelligent Platform Management Interface (IPMI):* Installer-provisioned installation requires IPMI enabled on each node.
 endif::[]

 ifeval::[{product-version} > 4.4]
-- *Baseboard Management Controller:* The `provisioner` node must be able to access the baseboard management controller (BMC) of each {product-title} cluster node. You may use IPMI, Redfish, or a proprietary protocol.
+* *Baseboard Management Controller:* The `provisioner` node must be able to access the baseboard management controller (BMC) of each {product-title} cluster node. You may use IPMI, Redfish, or a proprietary protocol.
 endif::[]

 ifndef::openshift-origin[]
-- *Latest generation:* Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, {op-system-base} 8 ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support {op-system-base} 8 for the `provisioner` node and {op-system} 8 for the control plane and worker nodes.
+* *Latest generation:* Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, {op-system-base} 8 ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support {op-system-base} 8 for the `provisioner` node and {op-system} 8 for the control plane and worker nodes.
 endif::[]
 ifdef::openshift-origin[]
-- *Latest generation:* Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, {op-system-first} ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support {op-system} for the `provisioner` node and {op-system} for the control plane and worker nodes.
+* *Latest generation:* Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, {op-system-first} ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support {op-system} for the `provisioner` node and {op-system} for the control plane and worker nodes.
 endif::[]

-- *Registry node:* (Optional) If setting up a disconnected mirrored registry, it is recommended the registry reside in its own node.
+* *Registry node:* (Optional) If setting up a disconnected mirrored registry, it is recommended the registry reside in its own node.

-- *Provisioner node:* Installer-provisioned installation requires one `provisioner` node.
+* *Provisioner node:* Installer-provisioned installation requires one `provisioner` node.

-- *Control plane:* Installer-provisioned installation requires three control plane nodes for high availability.
+* *Control plane:* Installer-provisioned installation requires three control plane nodes for high availability.

-- *Worker nodes:* While not required, a typical production cluster has one or more worker nodes. Smaller clusters are more resource efficient for administrators and developers during development, production, and testing.
+* *Worker nodes:* While not required, a typical production cluster has one or more worker nodes. Smaller clusters are more resource efficient for administrators and developers during development, production, and testing.

-- *Network interfaces:* Each node must have at least one 10GB network interface for the routable `baremetal` network. Each node must have one 10GB network interface for a `provisioning` network *when using the `provisioning` network* for deployment. Using the `provisioning` network is the default configuration. Network interface names must follow the same naming convention across all nodes. For example, the first NIC name on a node, such as `eth0` or `eno1`, must be the same name on all of the other nodes. The same principle applies to the remaining NICs on each node.
+* *Network interfaces:* Each node must have at least one 10GB network interface for the routable `baremetal` network. Each node must have one 10GB network interface for a `provisioning` network when using the `provisioning` network for deployment. Using the `provisioning` network is the default configuration. Network interface names must follow the same naming convention across all nodes. For example, the first NIC name on a node, such as `eth0` or `eno1`, must be the same name on all of the other nodes. The same principle applies to the remaining NICs on each node.

 ifeval::[{product-version} > 4.3]
-- *Unified Extensible Firmware Interface (UEFI):* Installer-provisioned installation requires UEFI boot on all {product-title} nodes when using IPv6 addressing on the `provisioning` network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the `provisioning` network NIC, but omitting the `provisioning` network removes this requirement.
+* *Unified Extensible Firmware Interface (UEFI):* Installer-provisioned installation requires UEFI boot on all {product-title} nodes when using IPv6 addressing on the `provisioning` network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the `provisioning` network NIC, but omitting the `provisioning` network removes this requirement.
 endif::[]

 ifeval::[{product-version} == 4.7]
-- *Secure Boot:* Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. To deploy an {product-title} cluster with Secure Boot, you must enable UEFI boot mode and Secure Boot on each control plane node and each worker node. Red Hat supports Secure Boot only when installer-provisioned installations use Red Fish Virtual Media. Red Hat does not support Secure Boot with self-generated keys.
+* *Secure Boot:* Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. To deploy an {product-title} cluster with Secure Boot, you must enable UEFI boot mode and Secure Boot on each control plane node and each worker node. Red Hat supports Secure Boot only when installer-provisioned installations use Redfish virtual media. Red Hat does not support Secure Boot with self-generated keys.
 endif::[]

 ifeval::[{product-version} > 4.7]
-- *Secure Boot:* Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. You may deploy with Secure Boot manually or managed.
+* *Secure Boot:* Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. You may deploy with Secure Boot manually or managed.
 +
 . *Manually:* To deploy an {product-title} cluster with Secure Boot manually, you must enable UEFI boot mode and Secure Boot on each control plane node and each worker node. Red Hat supports Secure Boot with manually enabled UEFI and Secure Boot only when installer-provisioned installations use Redfish virtual media. See "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section for additional details.
 +
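Because every node's BMC must be reachable from the `provisioner` node, it can be worth checking access before installation. A minimal sketch using `ipmitool` (installed later in this guide); the address and credentials are placeholders, and Redfish-managed BMCs would be queried over HTTPS instead:

[source,terminal]
----
$ ipmitool -I lanplus -H <bmc_ip> -U <bmc_username> -P <bmc_password> power status
----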

modules/ipi-install-preparing-the-provisioner-node-for-openshift-install.adoc

Lines changed: 38 additions & 16 deletions
@@ -11,7 +11,7 @@ Perform the following steps to prepare the environment.

 . Log in to the provisioner node via `ssh`.

-. Create a non-root user (`kni`) and provide that user with `sudo` privileges.
+. Create a non-root user (`kni`) and provide that user with `sudo` privileges:
 +
 [source,terminal]
 ----
@@ -21,22 +21,22 @@ Perform the following steps to prepare the environment.
 # chmod 0440 /etc/sudoers.d/kni
 ----

-. Create an `ssh` key for the new user.
+. Create an `ssh` key for the new user:
 +
 [source,terminal]
 ----
 # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''"
 ----

-. Log in as the new user on the provisioner node.
+. Log in as the new user on the provisioner node:
 +
 [source,terminal]
 ----
 # su - kni
 $
 ----

-. Use Red Hat Subscription Manager to register the provisioner node.
+. Use Red Hat Subscription Manager to register the provisioner node:
 +
 [source,terminal]
 ----
@@ -49,21 +49,21 @@ $ sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --en
 For more information about Red Hat Subscription Manager, see link:https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html-single/rhsm/index[Using and Configuring Red Hat Subscription Manager].
 ====

-. Install the following packages.
+. Install the following packages:
 +
 [source,terminal]
 ----
 $ sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool
 ----

-. Modify the user to add the `libvirt` group to the newly created user.
+. Modify the user to add the `libvirt` group to the newly created user:
 +
 [source,terminal]
 ----
 $ sudo usermod --append --groups libvirt <user>
 ----

-. Restart `firewalld` and enable the `http` service.
+. Restart `firewalld` and enable the `http` service:
 +
 [source,terminal]
 ----
@@ -72,14 +72,14 @@ $ sudo firewall-cmd --zone=public --add-service=http --permanent
 $ sudo firewall-cmd --reload
 ----

-. Start and enable the `libvirtd` service.
+. Start and enable the `libvirtd` service:
 +
 [source,terminal]
 ----
 $ sudo systemctl enable libvirtd --now
 ----

-. Create the `default` storage pool and start it.
+. Create the `default` storage pool and start it:
 +
 [source,terminal]
 ----
@@ -92,26 +92,48 @@ $ sudo virsh pool-autostart default
 +
 [NOTE]
 ====
-This step can also be run from the web console.
+You can also configure networking from the web console.
 ====
 +
+Export the `baremetal` network NIC name:
++
 [source,terminal]
 ----
 $ export PUB_CONN=<baremetal_nic_name>
-$ export PROV_CONN=<prov_nic_name>
+----
++
+Configure the `baremetal` network:
++
+[source,terminal]
+----
 $ sudo nohup bash -c "
-nmcli con down \"$PROV_CONN\"
 nmcli con down \"$PUB_CONN\"
-nmcli con delete \"$PROV_CONN\"
 nmcli con delete \"$PUB_CONN\"
 # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists
 nmcli con down \"System $PUB_CONN\"
 nmcli con delete \"System $PUB_CONN\"
-nmcli connection add ifname provisioning type bridge con-name provisioning
-nmcli con add type bridge-slave ifname \"$PROV_CONN\" master provisioning
 nmcli connection add ifname baremetal type bridge con-name baremetal
 nmcli con add type bridge-slave ifname \"$PUB_CONN\" master baremetal
 pkill dhclient;dhclient baremetal
+"
+----
++
+If you are deploying with a `provisioning` network, export the `provisioning` network NIC name:
++
+[source,terminal]
+----
+$ export PROV_CONN=<prov_nic_name>
+----
++
+If you are deploying with a `provisioning` network, configure the `provisioning` network:
++
+[source,terminal]
+----
+$ sudo nohup bash -c "
+nmcli con down \"$PROV_CONN\"
+nmcli con delete \"$PROV_CONN\"
+nmcli connection add ifname provisioning type bridge con-name provisioning
+nmcli con add type bridge-slave ifname \"$PROV_CONN\" master provisioning
 nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual
 nmcli con down provisioning
 nmcli con up provisioning
@@ -120,7 +142,7 @@ $ sudo nohup bash -c "
 +
 [NOTE]
 ====
-The `ssh` connection might disconnect after executing this step.
+The `ssh` connection might disconnect after executing these steps.

 The IPv6 address can be any address as long as it is not routable via the `baremetal` network.

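Once the bridge configuration above settles (and after reconnecting if the `ssh` session dropped), a quick check confirms that the `baremetal` bridge, and the `provisioning` bridge if you created one, exist and carry the expected addresses:

[source,terminal]
----
$ sudo nmcli con show
$ ip addr show baremetal
----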
modules/ipi-install-required-data-for-installation.adoc

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ Prior to the installation of the {product-title} cluster, gather the following i
 ** Examples
 *** Dell (iDRAC) IP
 *** HP (iLO) IP
+*** Fujitsu (iRMC) IP

 .When using the `provisioning` network

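The BMC details gathered here are what ultimately populate the `bmc` stanza of each host entry under `platform.baremetal.hosts` in `install-config.yaml`. A minimal sketch with placeholder values and an IPMI-style address; Redfish and vendor-specific URL formats (iDRAC, iLO, iRMC) differ, so check the BMC addressing documentation for your hardware:

[source,yaml]
----
hosts:
  - name: openshift-master-0
    role: master
    bmc:
      address: ipmi://<bmc_ip>   # placeholder address; Redfish URLs take a different form
      username: <bmc_username>
      password: <bmc_password>
----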