
Commit dd4350f

Merge pull request #45058 from johnwilkins/BZ2004210-3
BZ2004210: Minor edits as part of top-to-bottom review.
2 parents 4193374 + 194a157

13 files changed: +76 −24 lines

installing/installing_bare_metal_ipi/ipi-install-prerequisites.adoc

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ Installer-provisioned installation of {product-title} requires:
 ifdef::openshift-origin[. One provisioner node with {op-system-first} installed. The provisioning node can be removed after installation.]
 ifndef::openshift-origin[. One provisioner node with {op-system-base-full} 8.x installed. The provisioning node can be removed after installation.]
 . Three control plane nodes.
-. Baseboard Management Controller (BMC) access to each node.
+. Baseboard management controller (BMC) access to each node.
 . At least one network:
 .. One required routable network
 .. One optional network for provisioning nodes; and,

modules/ipi-install-configuring-nodes.adoc

Lines changed: 1 addition & 1 deletion

@@ -80,7 +80,7 @@ Red Hat only supports manually configured Secure Boot when deploying with Redfis
 To enable Secure Boot manually, refer to the hardware guide for the node and execute the following:

 . Boot the node and enter the BIOS menu.
-. Set the node's boot mode to UEFI Enabled.
+. Set the node's boot mode to `UEFI Enabled`.
 . Enable Secure Boot.

 [IMPORTANT]

modules/ipi-install-configuring-the-install-config-file.adoc

Lines changed: 10 additions & 4 deletions

@@ -96,24 +96,30 @@ ifndef::upstream[]
 endif::[]

-. Create a directory to store cluster configs.
+. Create a directory to store cluster configs:
 +
 [source,terminal]
 ----
 $ mkdir ~/clusterconfigs
+----
+
+. Copy the `install-config.yaml` file to the new directory:
++
+[source,terminal]
+----
 $ cp install-config.yaml ~/clusterconfigs
 ----

-. Ensure all bare metal nodes are powered off prior to installing the {product-title} cluster.
+. Ensure all bare metal nodes are powered off prior to installing the {product-title} cluster:
 +
 [source,terminal]
 ----
 $ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off
 ----
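The single `ipmitool` invocation above powers off one node; in practice you would repeat it for every node's BMC. A dry-run sketch (the BMC addresses and credentials below are illustrative placeholders, not values from this change):

```shell
# Dry-run sketch: print the IPMI power-off command for each node's BMC.
# Addresses and credentials are placeholders; drop the leading echo to
# actually send the requests.
BMC_USER=admin
BMC_PASS=changeme
for bmc in 192.0.2.11 192.0.2.12 192.0.2.13; do
  echo ipmitool -I lanplus -U "$BMC_USER" -P "$BMC_PASS" -H "$bmc" power off
done
```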

-. Remove old bootstrap resources if any are left over from a previous deployment attempt.
+. Remove old bootstrap resources if any are left over from a previous deployment attempt:
 +
-[source,terminal]
+[source,bash]
 ----
 for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print $2'});
 do
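The loop body is truncated in this hunk, but the pipeline in its header extracts bootstrap VM names from `virsh list` output. A self-contained sketch against canned output (the VM names are illustrative, and the `awk` quoting is written idiomatically here):

```shell
# Simulate `sudo virsh list` output to show what the pipeline extracts.
# The VM names are illustrative, not from the actual change.
virsh_output=' Id   Name                        State
----------------------------------------------
 1    example-cluster-bootstrap   running
 2    other-vm                    running'

# tail -n +3 skips the two header lines; grep keeps bootstrap VMs;
# awk prints the Name column.
echo "$virsh_output" | tail -n +3 | grep bootstrap | awk '{print $2}'
```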

modules/ipi-install-creating-an-rhcos-images-cache.adoc

Lines changed: 2 additions & 2 deletions

@@ -6,7 +6,7 @@
 [id="ipi-install-creating-an-rhcos-images-cache_{context}"]
 = (Optional) Creating an {op-system} images cache

-To employ image caching, you must download the {op-system-first} image used by the bootstrap VM to provision the different nodes. Image caching is optional, but especially useful when running the installation program on a network with limited bandwidth.
+To employ image caching, you must download the {op-system-first} image used by the bootstrap VM to provision the cluster nodes. Image caching is optional, but it is especially useful when running the installation program on a network with limited bandwidth.

 [NOTE]
 ====
@@ -103,7 +103,7 @@ quay.io/centos7/httpd-24-centos7:latest
 ----
 ifndef::upstream[]
 +
-<1> Creates a caching webserver with the name `rhcos_image_cache`. This pod serves the `bootstrapOSImage` image in the `install-config.yaml` file for deployment.
+<1> Creates a caching webserver with the name `rhcos_image_cache`. This pod serves the `bootstrapOSImage` image in the `install-config.yaml` file for deployment.
 endif::[]

 . Generate the `bootstrapOSImage` configuration:
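The command for that step is truncated in this diff. A `bootstrapOSImage` entry typically pairs the cached image URL with the image's sha256 digest; a hedged sketch of computing that digest (a dummy file stands in for the real {op-system} qcow2 image, and the cache host and filename are placeholders):

```shell
# Sketch: compute the sha256 digest that a bootstrapOSImage entry appends
# to the cached image URL. A dummy file stands in for the real image.
printf 'dummy image bytes' > /tmp/rhcos-example.qcow2.gz
digest=$(sha256sum /tmp/rhcos-example.qcow2.gz | awk '{print $1}')

# Placeholder URL; the real value points at the caching webserver above.
echo "bootstrapOSImage: http://<cache-host>:8080/rhcos-example.qcow2.gz?sha256=${digest}"
```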

modules/ipi-install-deploying-routers-on-worker-nodes.adoc

Lines changed: 2 additions & 2 deletions

@@ -21,7 +21,7 @@ If the cluster has no worker nodes, the installer deploys the two routers on the
 
 .Procedure

-. Create a `router-replicas.yaml` file.
+. Create a `router-replicas.yaml` file:
 +
 [source,yaml]
 ----
@@ -45,7 +45,7 @@ spec:
 Replace `<num-of-router-pods>` with an appropriate value. If working with just one worker node, set `replicas:` to `1`. If working with more than 3 worker nodes, you can increase `replicas:` from the default value `2` as appropriate.
 ====

-. Save and copy the `router-replicas.yaml` file to the `clusterconfigs/openshift` directory.
+. Save and copy the `router-replicas.yaml` file to the `clusterconfigs/openshift` directory:
 +
 [source,terminal]
 ----
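The `router-replicas.yaml` contents are truncated in this diff. Based on the surrounding text (a tunable `replicas:` with a default of `2`), a plausible sketch is an IngressController resource like the following; treat every field here as an assumption, not the file's actual contents:

```yaml
# Sketch only: the real file contents are truncated in this diff.
# The surrounding text implies an IngressController with a tunable
# replica count placed on worker nodes.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: <num-of-router-pods>   # for example, 1 with a single worker node
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""
```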

modules/ipi-install-extracting-the-openshift-installer.adoc

Lines changed: 17 additions & 0 deletions

@@ -15,10 +15,19 @@ After retrieving the installer, the next step is to extract it.
 [source,terminal]
 ----
 $ export cmd=openshift-baremetal-install
+----
+
+[source,terminal]
+----
 $ export pullsecret_file=~/pull-secret.txt
+----
+
+[source,terminal]
+----
 $ export extract_dir=$(pwd)
 ----

+
 . Get the `oc` binary:
 +
 [source,terminal]
@@ -31,6 +40,14 @@ $ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/ope
 [source,terminal]
 ----
 $ sudo cp oc /usr/local/bin
+----
+
+[source,terminal]
+----
 $ oc adm release extract --registry-config "${pullsecret_file}" --command=$cmd --to "${extract_dir}" ${RELEASE_IMAGE}
+----
+
+[source,terminal]
+----
 $ sudo cp openshift-baremetal-install /usr/local/bin
 ----

modules/ipi-install-following-the-installation.adoc

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@
 [id="ipi-install-troubleshooting-following-the-installation_{context}"]
 = Following the installation

-During the deployment process, you can check the installation's overall status by issuing the `tail` command to the `.openshift_install.log` log file in the install directory folder.
+During the deployment process, you can check the installation's overall status by issuing the `tail` command to the `.openshift_install.log` log file in the install directory folder:

 [source,terminal]
 ----
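The `tail` command itself is truncated in this diff. Against a real cluster the log lives in the install directory (for example `~/clusterconfigs/.openshift_install.log`, assuming the directory created earlier); the sketch below uses a temporary stand-in log so it is self-contained, and the log lines are illustrative:

```shell
# Sketch: check installer progress by tailing .openshift_install.log.
# A temporary stand-in log keeps this example self-contained; against a
# real cluster you would tail ~/clusterconfigs/.openshift_install.log.
install_dir=$(mktemp -d)
printf 'level=info msg="Waiting up to 40m0s for bootstrapping to complete"\nlevel=info msg="Install complete!"\n' \
  > "$install_dir/.openshift_install.log"

# Show the most recent log line.
tail -n 1 "$install_dir/.openshift_install.log"
```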

modules/ipi-install-installing-rhel-on-the-provisioner-node.adoc

Lines changed: 1 addition & 1 deletion

@@ -7,4 +7,4 @@
 [id="installing-rhel-on-the-provisioner-node_{context}"]
 = Installing {op-system-base} on the provisioner node

-With the networking configuration complete, the next step is to install {op-system-base} {op-system-version} on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the {product-title} cluster. For the purposes of this document, installing {op-system-base} on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media.
+With the configuration of the prerequisites complete, the next step is to install {op-system-base} {op-system-version} on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the {product-title} cluster. For the purposes of this document, installing {op-system-base} on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media.

modules/ipi-install-network-requirements.adoc

Lines changed: 3 additions & 3 deletions

@@ -14,7 +14,7 @@ image::210_OpenShift_Baremetal_IPI_Deployment_updates_0122_2.png[Installer-provi
 [id="network-requirements-increase-mtu_{context}"]
 == Increase the network MTU

-Before deploying {product-title}, increase the network maximum transmission unit (MTU) to 1500 or more. If the MTU is lower than 1500, the Ironic image that is used to boot the node might fail to communicate with the Ironic inspector pod, and inspection will fail. If this occurs, installation stops because the nodes are not available for installation.
+Before deploying {product-title}, increase the network maximum transmission unit (MTU) to 1500 or more. If the MTU is lower than 1500, the Ironic image that is used to boot the node might fail to communicate with the Ironic inspector pod, and inspection will fail. If this occurs, installation stops because the nodes are not available for installation.

 [id='network-requirements-config-nics_{context}']
 == Configuring NICs
@@ -70,9 +70,9 @@ Installer-provisioned installation includes functionality that uses cluster memb
 |Record
 |Description

-.2+a|Kubernetes API
+|Kubernetes API
 |`api.<cluster_name>.<base_domain>.`
-|An A/AAAA record, and a PTR record, identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
+|An A/AAAA record and a PTR record identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

 |Routes
 |`*.apps.<cluster_name>.<base_domain>.`

modules/ipi-install-node-requirements.adoc

Lines changed: 1 addition & 1 deletion

@@ -36,7 +36,7 @@ Do not deploy a cluster with only one worker node, because the cluster will depl

 * *Network interfaces:* Each node must have at least one network interface for the routable `baremetal` network. Each node must have one network interface for a `provisioning` network when using the `provisioning` network for deployment. Using the `provisioning` network is the default configuration.

-* *Unified Extensible Firmware Interface (UEFI):* Installer-provisioned installation requires UEFI boot on all {product-title} nodes when using IPv6 addressing on the `provisioning` network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the `provisioning` network NIC, but omitting the `provisioning` network removes this requirement.
+* *Unified extensible firmware interface (UEFI):* Installer-provisioned installation requires UEFI boot on all {product-title} nodes when using IPv6 addressing on the `provisioning` network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the `provisioning` network NIC, but omitting the `provisioning` network removes this requirement.

 * *Secure Boot:* Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. You may deploy with Secure Boot manually or managed.
 +
