* xref:../../installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc#configuring-ntp-for-disconnected-clusters_ipi-install-installation-workflow[Optional: Configuring NTP for disconnected clusters]
* xref:../../installing/installing_bare_metal_ipi/ipi-install-prerequisites.adoc#network-requirements-ntp_ipi-install-prerequisites[Network Time Protocol (NTP)]

// * list of assemblies where this module is included
// ipi-install-installation-workflow.adoc

:_content-type: PROCEDURE
[id="checking-ntp-sync_{context}"]
= Checking NTP server synchronization

The {product-title} installation program installs the `chrony` Network Time Protocol (NTP) service on the cluster nodes. To complete installation, each node must have access to an NTP time server. You can verify NTP server synchronization by using the `chrony` service.

For disconnected clusters, you must configure the NTP servers on the control plane nodes. For more information, see the _Additional resources_ section.

.Prerequisites

* You installed the `chrony` package on the target node.
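+
For example, you can confirm that the package is present with `rpm`, which is available on RHCOS and RHEL nodes:
+
[source,terminal]
----
$ rpm -q chrony
----
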
.Procedure

. Log in to the node by using the `ssh` command.
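+
For example, assuming the default `core` user on the cluster nodes, where `<node_name>` is a placeholder for the host name or IP address of the node:
+
[source,terminal]
----
$ ssh core@<node_name>
----
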
. View the NTP servers available to the node by running the following command:
+
[source,terminal]
----
$ chronyc sources
----
+
.Example output
[source,terminal]
----
MS Name/IP address         Stratum Poll Reach LastRx Last sample
----
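+
As an additional check, the `chronyc tracking` command summarizes the synchronization state of the node, including the current reference source and the estimated clock offset:
+
[source,terminal]
----
$ chronyc tracking
----
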
// * list of assemblies where this module is included
// ipi-install-prerequisites.adoc

= Network requirements

Installer-provisioned installation of {product-title} involves several network requirements. First, it requires an optional non-routable `provisioning` network for provisioning the operating system on each bare metal node. Second, it requires a routable `baremetal` network.

Before deploying {product-title}, increase the network maximum transmission unit (MTU) to 1500 or more. If the MTU is lower than 1500, the Ironic image that is used to boot the node might fail to communicate with the Ironic inspector pod, and inspection will fail. If this occurs, installation stops because the nodes are not available for installation.
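
For example, one way to check the current MTU of a NIC on a node, where `<interface>` is a placeholder for the interface name:

[source,terminal]
----
$ ip link show <interface>
----
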
[id="network-requirements-config-nics_{context}"]
== Configuring NICs

{product-title} deploys with two networks:

* `provisioning`: An optional, non-routable network used for provisioning the operating system on each bare metal node.
* `baremetal`: A routable network.

The `provisioning` network is optional, but it is required for PXE booting.

When using a VLAN, each NIC must be on a separate VLAN corresponding to the appropriate network.

[id="network-requirements-dns_{context}"]
== DNS requirements
Clients access the {product-title} cluster nodes over the `baremetal` network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.

You can use the `dig` command to verify DNS resolution.
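
For example, to perform a quick forward lookup of the API record, where `<cluster_name>` and `<base_domain>` are placeholders for your cluster values:

[source,terminal]
----
$ dig +short api.<cluster_name>.<base_domain>
----
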
By default, installer-provisioned installation deploys `ironic-dnsmasq` with DHCP enabled for the `provisioning` network. No other DHCP servers should be running on the `provisioning` network when the `provisioningNetwork` configuration setting is set to `managed`, which is the default value. If you have a DHCP server running on the `provisioning` network, you must set the `provisioningNetwork` configuration setting to `unmanaged` in the `install-config.yaml` file.

Network administrators must reserve IP addresses for each node in the {product-title} cluster for the `baremetal` network on an external DHCP server.

== Reserving IP addresses for nodes with the DHCP server

For the `baremetal` network, a network administrator must reserve a number of IP addresses, including addresses for the Kubernetes API, the provisioner node, each control plane node, and each worker node.

If you do not create DHCP reservations, the installer requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes.
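
For example, you can spot-check reverse resolution for a node address, where `<node_ip>` is a placeholder:

[source,terminal]
----
$ dig +short -x <node_ip>
----
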
[id="network-requirements-provisioner_{context}"]
== Provisioner node requirements
You must specify the MAC address for the provisioner node in your installation configuration. The `bootMacAddress` specification is typically associated with PXE network booting. However, the Ironic provisioning service also requires the `bootMacAddress` specification to identify nodes during cluster inspection or during node redeployment.

The provisioner node requires layer 2 connectivity for network booting, DHCP and DNS resolution, and local network communication. The provisioner node requires layer 3 connectivity for virtual media booting.

[id="network-requirements-ntp_{context}"]
== Network Time Protocol (NTP)
Each {product-title} node in the cluster must have access to an NTP server. {product-title} nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.

You can reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.

[id="network-requirements-out-of-band_{context}"]
== Port access for the out-of-band management IP address
The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the provisioner node during installation, the out-of-band management IP address must be granted access to port `6180` on the provisioner node and on the {product-title} control plane nodes. TLS port `6183` is required for virtual media installation, for example, by using Redfish.
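
For example, one way to spot-check from a host on the out-of-band management network that port `6180` on the provisioner node is reachable, where `<provisioner_host>` is a placeholder, is to use the Bash `/dev/tcp` pseudo-device:

[source,terminal]
----
$ timeout 5 bash -c '</dev/tcp/<provisioner_host>/6180' && echo "port 6180 reachable"
----

Repeat the check with port `6183` if you plan to use virtual media installation.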