
Commit 83b0136

Merge pull request #42991 from ktothill/TELCODOCS-324
TELCODOCS-324: IPI Expanding Cluster Without Provisioner Node
2 parents b64218c + e8df944 commit 83b0136

10 files changed (+46, -45 lines)

installing/installing_bare_metal_ipi/ipi-install-overview.adoc

Lines changed: 5 additions & 3 deletions
@@ -6,15 +6,17 @@ include::_attributes/common-attributes.adoc[]
 
 Installer-provisioned installation provides support for installing {product-title} on bare metal nodes. This guide provides a methodology to achieving a successful installation.
 
-During installer-provisioned installation on bare metal, the installer on the bare metal node labeled as `provisioner` creates a bootstrap virtual machine (VM). The role of the bootstrap VM is to assist in the process of deploying an {product-title} cluster. The bootstrap VM connects to the `baremetal` network and to the `provisioning` network, if present, via the network bridges.
+During installer-provisioned installation on bare metal, the installation program on the bare metal node labeled as `provisioner` creates a bootstrap virtual machine (VM). The role of the bootstrap VM is to assist in the process of deploying an {product-title} cluster. The bootstrap VM connects to the `baremetal` network and to the `provisioning` network, if present, via the network bridges.
 
 image::210_OpenShift_Baremetal_IPI_Deployment_updates_0122_1.png[Deployment phase one]
 
-When the installation of {product-title} is complete and fully operational, the installer destroys the bootstrap VM automatically and moves the virtual IP addresses (VIPs) to the appropriate nodes accordingly. The API VIP moves to the control plane nodes and the Ingress VIP moves to the worker nodes.
+The provisioning node can be removed after the installation.
+
+When the installation of {product-title} is complete and fully operational, the installation program destroys the bootstrap VM automatically and moves the virtual IP addresses (VIPs) to the appropriate nodes accordingly. The API VIP moves to the control plane nodes and the Ingress VIP moves to the worker nodes.
 
 image::210_OpenShift_Baremetal_IPI_Deployment_updates_0122_2.png[Deployment phase two]
 
 [IMPORTANT]
 ====
 The `provisioning` network is optional, but it is required for PXE booting. If you deploy without a `provisioning` network, you must use a virtual media BMC addressing option such as `redfish-virtualmedia` or `idrac-virtualmedia`.
-====
+====

installing/installing_bare_metal_ipi/ipi-install-post-installation-configuration.adoc

Lines changed: 1 addition & 1 deletion
@@ -12,4 +12,4 @@ include::modules/ipi-install-configuring-ntp-for-disconnected-clusters.adoc[leve
 
 include::modules/nw-enabling-a-provisioning-network-after-installation.adoc[leveloffset=+1]
 
-include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+1]
+include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+1]

installing/installing_bare_metal_ipi/ipi-install-prerequisites.adoc

Lines changed: 2 additions & 2 deletions
@@ -8,8 +8,8 @@ toc::[]
 
 Installer-provisioned installation of {product-title} requires:
 
-ifdef::openshift-origin[. One provisioner node with {op-system-first} installed.]
-ifndef::openshift-origin[. One provisioner node with {op-system-base-full} 8.x installed.]
+ifdef::openshift-origin[. One provisioner node with {op-system-first} installed. The provisioning node can be removed after installation.]
+ifndef::openshift-origin[. One provisioner node with {op-system-base-full} 8.x installed. The provisioning node can be removed after installation.]
 . Three control plane nodes.
 . Baseboard Management Controller (BMC) access to each node.
 ifeval::[{product-version} > 4.5]

modules/ipi-install-preparing-the-bare-metal-node.adoc

Lines changed: 10 additions & 10 deletions
@@ -22,33 +22,33 @@ Preparing the bare metal node requires executing the following procedure from th
 +
 [source,terminal]
 ----
-[kni@provisioner ~]$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux-$VERSION.tar.gz | tar zxvf - oc
+$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux-$VERSION.tar.gz | tar zxvf - oc
 ----
 +
 [source,terminal]
 ----
-[kni@provisioner ~]$ sudo cp oc /usr/local/bin
+$ sudo cp oc /usr/local/bin
 ----
 
-. Power off the bare metal node through the baseboard management controller and ensure it is off.
+. Power off the bare metal node by using the baseboard management controller, and ensure it is off.
 
-. Retrieve the user name and password of the bare metal node's baseboard management controller. Then, create `base64` strings from the user name and password. In the following example, the user name is `root` and the password is `password`.
+. Retrieve the user name and password of the bare metal node's baseboard management controller. Then, create `base64` strings from the user name and password:
 +
-[source,terminal]
+[source,terminal,subs="+quotes"]
 ----
-[kni@provisioner ~]$ echo -ne "root" | base64
+$ echo -ne "root" | base64
 ----
 +
 [source,terminal]
 ----
-[kni@provisioner ~]$ echo -ne "password" | base64
+$ echo -ne "password" | base64
 ----
 
 . Create a configuration file for the bare metal node.
 +
 [source,terminal]
 ----
-[kni@provisioner ~]$ vim bmh.yaml
+$ vim bmh.yaml
 ----
 +
 [source,yaml]
@@ -109,7 +109,7 @@ If the MAC address of an existing bare metal node matches the MAC address of a b
 +
 [source,terminal]
 ----
-[kni@provisioner ~]$ oc -n openshift-machine-api create -f bmh.yaml
+$ oc -n openshift-machine-api create -f bmh.yaml
 ----
 +
 .Example output
@@ -125,7 +125,7 @@ Where `<num>` will be the worker number.
 +
 [source,terminal]
 ----
-[kni@provisioner ~]$ oc -n openshift-machine-api get bmh openshift-worker-<num>
+$ oc -n openshift-machine-api get bmh openshift-worker-<num>
 ----
 +
 Where `<num>` is the worker node number.
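The `base64` steps in the diff above can be checked in any POSIX shell. A minimal sketch, using the doc's placeholder credentials `root` and `password` (not real values); `printf '%s'` is used here in place of `echo -ne` for portability:

```shell
# Encode example BMC credentials as base64 strings, as the BareMetalHost
# configuration expects. printf '%s' emits no trailing newline, so the
# newline is not accidentally encoded into the string.
username_b64=$(printf '%s' 'root' | base64)
password_b64=$(printf '%s' 'password' | base64)

echo "$username_b64"   # cm9vdA==
echo "$password_b64"   # cGFzc3dvcmQ=
```

Decoding with `base64 -d` recovers the original values, which is a quick way to confirm nothing extra was encoded.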

modules/ipi-install-provisioning-the-bare-metal-node.adoc

Lines changed: 1 addition & 1 deletion
@@ -117,4 +117,4 @@ $ ssh openshift-worker-<num>
 [source,terminal]
 ----
 [kni@openshift-worker-<num>]$ journalctl -fu kubelet
-----
+----

modules/ipi-install-troubleshooting-bootstrap-vm-cannot-boot.adoc

Lines changed: 5 additions & 5 deletions
@@ -8,7 +8,7 @@
 During the deployment, it is possible for the bootstrap VM to fail to boot the cluster nodes, which prevents the VM from provisioning the nodes with the {op-system} image. This scenario can arise due to:
 
 * A problem with the `install-config.yaml` file.
-* Issues with out-of-band network access via the baremetal network.
+* Issues with out-of-band network access when using the baremetal network.
 
 To verify the issue, there are three containers related to `ironic`:
 
@@ -20,14 +20,14 @@ To verify the issue, there are three containers related to `ironic`:
 
 . Log in to the bootstrap VM:
 +
-[source,bash]
+[source,terminal]
 ----
 
 ----
 
 . To check the container logs, execute the following:
 +
-[source,bash]
+[source,terminal]
 ----
 [core@localhost ~]$ sudo podman logs -f <container-name>
 ----
@@ -41,7 +41,7 @@ The cluster nodes might be in the `ON` state when deployment started.
 Power off the {product-title} cluster nodes before you begin the
 installation over IPMI:
 
-[source,bash]
+[source,terminal]
 ----
 $ ipmitool -I lanplus -U root -P <password> -H <out-of-band-ip> power off
-----
+----
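The `ipmitool` power-off step above can be scripted across several nodes. A hedged sketch, not part of the original procedure: the `power_off` helper and the `DRY_RUN` guard are illustrative additions, and the BMC addresses are placeholders. The dry-run output deliberately omits the `-P` password flag so the password is never printed:

```shell
# Power off one BMC; with DRY_RUN=1 (the default here) the command is
# only printed, so the sketch is safe to run without ipmitool installed.
DRY_RUN="${DRY_RUN:-1}"
IPMI_USER="${IPMI_USER:-root}"

power_off() {
    bmc_ip="$1"
    if [ "$DRY_RUN" = "1" ]; then
        # Print the command that would run; -P <password> is omitted
        # from the printed form on purpose.
        echo "ipmitool -I lanplus -U $IPMI_USER -H $bmc_ip power off"
    else
        ipmitool -I lanplus -U "$IPMI_USER" -P "$IPMI_PASSWORD" -H "$bmc_ip" power off
    fi
}

# Placeholder out-of-band addresses for two cluster nodes.
power_off 192.0.2.10
power_off 192.0.2.11
```

Setting `DRY_RUN=0` and exporting `IPMI_PASSWORD` would execute the real commands, assuming `ipmitool` is available on the provisioner host.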

modules/ipi-install-troubleshooting-bootstrap-vm-inspecting-logs.adoc

Lines changed: 1 addition & 1 deletion
@@ -66,4 +66,4 @@ If the bootstrap VM cannot access the URL to the images, use the `curl` command
 [source,terminal]
 ----
 [core@localhost ~]$ sudo podman logs <ironic-api>
-----
+----

modules/ipi-install-troubleshooting-bootstrap-vm.adoc

Lines changed: 11 additions & 12 deletions
@@ -6,18 +6,18 @@
 
 = Bootstrap VM issues
 
-The {product-title} installer spawns a bootstrap node virtual machine, which handles provisioning the {product-title} cluster nodes.
+The {product-title} installation program spawns a bootstrap node virtual machine, which handles provisioning the {product-title} cluster nodes.
 
 .Procedure
 
-. About 10 to 15 minutes after triggering the installer, check to ensure the bootstrap VM is operational using the `virsh` command:
+. About 10 to 15 minutes after triggering the installation program, check to ensure the bootstrap VM is operational using the `virsh` command:
 +
-[source,bash]
+[source,terminal]
 ----
 $ sudo virsh list
 ----
 +
-[source,bash]
+[source,terminal]
 ----
 Id Name State
 --------------------------------------------
@@ -33,12 +33,12 @@ If the bootstrap VM is not running after 10-15 minutes, troubleshoot why it is n
 
 . Verify `libvirtd` is running on the system:
 +
-[source,bash]
+[source,terminal]
 ----
 $ systemctl status libvirtd
 ----
 +
-[source,bash]
+[source,terminal]
 ----
 ● libvirtd.service - Virtualization daemon
 Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
@@ -56,16 +56,15 @@ If the bootstrap VM is operational, log in to it.
 
 . Use the `virsh console` command to find the IP address of the bootstrap VM:
 +
-[source,bash]
+[source,terminal]
 ----
 $ sudo virsh console example.com
 ----
 +
-[source,bash]
+[source,terminal]
 ----
 Connected to domain example.com
 Escape character is ^]
-
 Red Hat Enterprise Linux CoreOS 43.81.202001142154.0 (Ootpa) 4.3
 SSH host key: SHA256:BRWJktXZgQQRY5zjuAV0IKZ4WM7i4TiUyMVanqu9Pqg (ED25519)
 SSH host key: SHA256:7+iKGA7VtG5szmk2jB5gl/5EZ+SNcJ3a2g23o0lnIio (ECDSA)
@@ -88,7 +87,7 @@ When deploying an {product-title} cluster without the `provisioning` network, yo
 In the console output of the previous step, you can use the IPv6 IP address provided by `ens3` or the IPv4 IP provided by `ens4`.
 ====
 +
-[source,bash]
+[source,terminal]
 ----
 
 ----
@@ -97,11 +96,11 @@ If you are not successful logging in to the bootstrap VM, you have likely encoun
 
 * You cannot reach the `172.22.0.0/24` network. Verify network connectivity on the provisioner host specifically around the `provisioning` network bridge. This will not be the issue if you are not using the `provisioning` network.
 
-* You cannot reach the bootstrap VM via the public network. When attempting
+* You cannot reach the bootstrap VM through the public network. When attempting
 to SSH via `baremetal` network, verify connectivity on the
 `provisioner` host specifically around the `baremetal` network bridge.
 
 * You encountered `Permission denied (publickey,password,keyboard-interactive)`. When
 attempting to access the bootstrap VM, a `Permission denied` error
 might occur. Verify that the SSH key for the user attempting to log
-into the VM is set within the `install-config.yaml` file.
+into the VM is set within the `install-config.yaml` file.
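The "is the bootstrap VM operational" check described in this module can be reduced to a grep over `virsh list` output. A sketch under stated assumptions: the sample text below stands in for live `sudo virsh list` output, and the domain name `example-cluster-bootstrap` is hypothetical, not taken from the original document:

```shell
# Sample `virsh list` output; in practice this would come from
# `sudo virsh list` on the provisioner host.
virsh_out=' Id    Name                        State
--------------------------------------------
 2     example-cluster-bootstrap   running'

# A bootstrap domain in the "running" state means provisioning can proceed;
# otherwise, check libvirtd and the installation program logs.
if printf '%s\n' "$virsh_out" | grep -q 'bootstrap.*running'; then
    echo "bootstrap VM is running"
else
    echo "bootstrap VM not running; check libvirtd and the install logs"
fi
```

This only parses text; it does not talk to libvirt, so it is safe to adapt into a polling loop for the 10-to-15-minute window the module mentions.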

modules/ipi-install-troubleshooting-cleaning-up-previous-installations.adoc

Lines changed: 6 additions & 6 deletions
@@ -12,15 +12,15 @@ In the event of a previous failed deployment, remove the artifacts from the fail
 
 . Power off all bare metal nodes prior to installing the {product-title} cluster:
 +
-[source,bash]
+[source,terminal]
 ----
-$ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off
+$ ipmitool -I lanplus -U _<user>_ -P _<password>_ -H _<management-server-ip>_ power off
 ----
 
 ifeval::[{product-version} >= 4.6]
 . Remove all old bootstrap resources if any are left over from a previous deployment attempt:
 +
-[source,bash]
+[source,terminal]
 ----
 for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print $2'});
 do
@@ -37,7 +37,7 @@ endif::[]
 ifeval::[{product-version} < 4.6]
 . Remove all old bootstrap resources if any are left over from a previous deployment attempt:
 +
-[source,bash]
+[source,terminal]
 ----
 for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print $2'});
 do
@@ -51,7 +51,7 @@ endif::[]
 
 . Remove the following from the `clusterconfigs` directory to prevent Terraform from failing:
 +
-[source,bash]
+[source,terminal]
 ----
 $ rm -rf ~/clusterconfigs/auth ~/clusterconfigs/terraform* ~/clusterconfigs/tls ~/clusterconfigs/metadata.json
-----
+----
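The cleanup loop in this module builds its list of stale domains with `virsh list | tail -n +3 | grep bootstrap | awk '{print $2}'`: skip the two header lines, keep rows mentioning "bootstrap", and take the second column (the domain name). A sketch of just that pipeline, run against sample text instead of live `sudo virsh list` output (the domain names are illustrative), with the destructive `virsh destroy`/`undefine` calls replaced by an echo:

```shell
# Sample `virsh list` output: a header, a separator, then one stale
# bootstrap domain and one unrelated VM.
virsh_list=' Id    Name                     State
--------------------------------------------
 2     ocp-demo-bootstrap       running
 3     some-other-vm            running'

# tail -n +3 drops the two header lines; grep/awk select the names of
# bootstrap domains only.
names=$(printf '%s\n' "$virsh_list" | tail -n +3 | grep bootstrap | awk '{print $2}')

for name in $names; do
    # Real procedure: sudo virsh destroy "$name"; sudo virsh undefine "$name"
    echo "would remove: $name"
done
```

Running the sketch prints `would remove: ocp-demo-bootstrap` and leaves the unrelated VM untouched, which is exactly the selectivity the documented loop relies on.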

modules/ipi-install-troubleshooting-reviewing-the-installation.adoc

Lines changed: 4 additions & 4 deletions
@@ -12,12 +12,12 @@ After installation, ensure the installer deployed the nodes and pods successfull
 
 . When the {product-title} cluster nodes are installed appropriately, the following `Ready` state is seen within the `STATUS` column:
 +
-[source,bash]
+[source,terminal]
 ----
 $ oc get nodes
 ----
 +
-[source,bash]
+[source,terminal]
 ----
 NAME STATUS ROLES AGE VERSION
 master-0.example.com Ready master,worker 4h v1.23.0
@@ -28,7 +28,7 @@ master-2.example.com Ready master,worker 4h v1.23.0
 . Confirm the installer deployed all pods successfully. The following command
 removes any pods that are still running or have completed as part of the output.
 +
-[source,bash]
+[source,terminal]
 ----
 $ oc get pods --all-namespaces | grep -iv running | grep -iv complete
 ----
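The final `grep -iv running | grep -iv complete` filter works by dropping every line that mentions "running" or "complete" (case-insensitively), so only pods needing attention remain, along with the header row. A sketch against sample `oc get pods` output; the namespace and pod names are illustrative, not from the original document:

```shell
# Sample `oc get pods --all-namespaces` output with one healthy pod,
# one finished job, and one failing pod.
pods='NAMESPACE   NAME     READY   STATUS             RESTARTS
demo        app-1    1/1     Running            0
demo        job-1    0/1     Completed          0
demo        app-2    0/1     CrashLoopBackOff   7'

# Drop Running and Completed rows; the header survives because it
# contains neither word, which mirrors the real command's output.
printf '%s\n' "$pods" | grep -iv running | grep -iv complete
```

An empty result (apart from the header) after a real installation indicates every pod is either `Running` or `Completed`, which is the success condition the module checks for.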
