Commit 42383bb

OSDOCS-5324: Multi-arch on bare metal docs TP

1 parent ee39eb0 commit 42383bb

4 files changed: +133 −11 lines changed
modules/machine-user-infra-machines-iso.adoc

Lines changed: 32 additions & 1 deletion

@@ -2,19 +2,46 @@
 //
 // * machine_management/user_infra/adding-bare-metal-compute-user-infra.adoc
 // * post_installation_configuration/node-tasks.adoc
+ifeval::["{context}" == "multi-architecture-configuration"]
+:multi:
+endif::[]
 
 :_content-type: PROCEDURE
 [id="machine-user-infra-machines-iso_{context}"]
-= Creating more {op-system} machines using an ISO image
+= Creating {op-system} machines using an ISO image
 
 You can create more {op-system-first} compute machines for your bare metal cluster by using an ISO image to create the machines.
 
 .Prerequisites
 
 * Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
+* You must have the OpenShift CLI (`oc`) installed.
 
 .Procedure
 
+. Extract the Ignition config file from the cluster by running the following command:
++
+[source,terminal]
+----
+$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
+----
+
+. Upload the `worker.ign` Ignition config file that you exported from your cluster to your HTTP server. Note the URL of this file.
+
+. Validate that the Ignition config file is available at the URL. The following example retrieves the Ignition config file for the compute node:
++
+[source,terminal]
+----
+$ curl -k http://<HTTP_server>/worker.ign
+----
+
+. You can retrieve the URL of the ISO image for booting your new machine by running the following command:
++
+[source,terminal]
+----
+RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')
+----
+
 . Use the ISO file to install {op-system} on more compute machines. Use the same method that you used when you created machines before you installed the cluster:
 ** Burn the ISO image to a disk and boot it directly.
 ** Use ISO redirection with a LOM interface.
@@ -55,3 +82,7 @@ Ensure that the installation is successful on each node before commencing with t
 ====
 
 . Continue to create more compute machines for your cluster.
+
+ifeval::["{context}" == "multi-architecture-configuration"]
+:!multi:
+endif::[]
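As a quick sanity check of the `jq` filter used in the ISO retrieval step above, the same path expression can be exercised against a trimmed, hypothetical copy of the `coreos-bootimages` stream JSON. The URL and values below are illustrative, not real release data:

```shell
# Minimal sketch of the jq filter from the ISO step above, run against a
# trimmed, hypothetical sample of the coreos-bootimages "stream" JSON.
STREAM_JSON='{"architectures":{"aarch64":{"artifacts":{"metal":{"formats":{"iso":{"disk":{"location":"http://example.com/rhcos-live.aarch64.iso"}}}}}}}}'

# Substitute a concrete architecture for the <architecture> placeholder.
ARCH=aarch64
ISO_URL=$(printf '%s' "$STREAM_JSON" | jq -r ".architectures.${ARCH}.artifacts.metal.formats.iso.disk.location")
echo "$ISO_URL"
```

Running the filter against a local sample like this confirms the path expression before you point it at the live config map.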

modules/machine-user-infra-machines-pxe.adoc

Lines changed: 37 additions & 10 deletions

@@ -2,10 +2,11 @@
 //
 // * machine_management/user_infra/adding-bare-metal-compute-user-infra.adoc
 // * post_installation_configuration/node-tasks.adoc
+// * post_installation_configuration/multi-architecture-configuration.adoc
 
 :_content-type: PROCEDURE
 [id="machine-user-infra-machines-pxe_{context}"]
-= Creating more {op-system} machines by PXE or iPXE booting
+= Creating {op-system} machines by PXE or iPXE booting
 
 You can create more {op-system-first} compute machines for your bare metal cluster by using PXE or iPXE booting.
 
@@ -33,25 +34,51 @@ LABEL pxeboot
 <1> Specify the location of the live `kernel` file that you uploaded to your HTTP server.
 <2> Specify locations of the {op-system} files that you uploaded to your HTTP server. The `initrd` parameter value is the location of the live `initramfs` file, the `coreos.inst.ignition_url` parameter value is the location of the worker Ignition config file, and the `coreos.live.rootfs_url` parameter value is the location of the live `rootfs` file. The `coreos.inst.ignition_url` and `coreos.live.rootfs_url` parameters only support HTTP and HTTPS.
 +
-+
 [NOTE]
 ====
 This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more `console=` arguments to the `APPEND` line. For example, add `console=tty0 console=ttyS0` to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see link:https://access.redhat.com/articles/7212[How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?].
 ====
 
-** For iPXE:
+** For iPXE (`x86_64` + `aarch64`):
 +
 ----
-kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img <1>
-initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img <2>
+kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign <1> <2>
+initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img <3>
+boot
 ----
-<1> Specify locations of the {op-system} files that you uploaded to your HTTP server. The `kernel` parameter value is the location of the `kernel` file, the `initrd=main` argument is needed for booting on UEFI systems, the `coreos.inst.ignition_url` parameter value is the location of the worker Ignition config file, and the `coreos.live.rootfs_url` parameter value is the location of the live `rootfs` file. The `coreos.inst.ignition_url` and `coreos.live.rootfs_url` parameters only support HTTP and HTTPS.
-<2> Specify the location of the `initramfs` file that you uploaded to your HTTP server.
+<1> Specify the locations of the {op-system} files that you uploaded to your
+HTTP server. The `kernel` parameter value is the location of the `kernel` file,
+the `initrd=main` argument is needed for booting on UEFI systems,
+the `coreos.live.rootfs_url` parameter value is the location of the `rootfs` file,
+and the `coreos.inst.ignition_url` parameter value is the
+location of the worker Ignition config file.
+<2> If you use multiple NICs, specify a single interface in the `ip` option.
+For example, to use DHCP on a NIC that is named `eno1`, set `ip=eno1:dhcp`.
+<3> Specify the location of the `initramfs` file that you uploaded to your HTTP server.
 +
+[NOTE]
+====
+This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more `console=` arguments to the `kernel` line. For example, add `console=tty0 console=ttyS0` to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see link:https://access.redhat.com/articles/7212[How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?] and "Enabling the serial console for PXE and ISO installation" in the "Advanced {op-system} installation configuration" section.
+====
 +
 [NOTE]
 ====
-This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more `console=` arguments to the `kernel` line. For example, add `console=tty0 console=ttyS0` to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see link:https://access.redhat.com/articles/7212[How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?].
+To network boot the CoreOS `kernel` on `aarch64` architecture, you need to use a version of iPXE built with the `IMAGE_GZIP` option enabled. See link:https://ipxe.org/buildcfg/image_gzip[`IMAGE_GZIP` option in iPXE].
 ====
 
-. Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
+** For PXE (with UEFI and GRUB as second stage) on `aarch64`:
++
+----
+menuentry 'Install CoreOS' {
+	linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign <1> <2>
+	initrd rhcos-<version>-live-initramfs.<architecture>.img <3>
+}
+----
+<1> Specify the locations of the {op-system} files that you uploaded to your
+HTTP/TFTP server. The `kernel` parameter value is the location of the `kernel` file on your TFTP server.
+The `coreos.live.rootfs_url` parameter value is the location of the `rootfs` file, and the `coreos.inst.ignition_url` parameter value is the location of the worker Ignition config file on your HTTP server.
+<2> If you use multiple NICs, specify a single interface in the `ip` option.
+For example, to use DHCP on a NIC that is named `eno1`, set `ip=eno1:dhcp`.
+<3> Specify the location of the `initramfs` file that you uploaded to your TFTP server.
+
+. Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
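To avoid hand-editing the placeholders, the iPXE stanza above can be rendered from shell variables. The server address, version, and architecture below are hypothetical values for illustration:

```shell
# Sketch: render the iPXE script shown above from variables. The HTTP
# server address, RHCOS version, and architecture are hypothetical.
HTTP_SERVER=192.168.1.100
VERSION=4.13.0
ARCH=aarch64

# Write the boot script; the heredoc expands the variables at render time.
cat > worker.ipxe <<EOF
kernel http://${HTTP_SERVER}/rhcos-${VERSION}-live-kernel-${ARCH} initrd=main coreos.live.rootfs_url=http://${HTTP_SERVER}/rhcos-${VERSION}-live-rootfs.${ARCH}.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://${HTTP_SERVER}/worker.ign
initrd --name main http://${HTTP_SERVER}/rhcos-${VERSION}-live-initramfs.${ARCH}.img
boot
EOF
```

Serving the rendered `worker.ipxe` from the same HTTP server keeps the boot script and the images it references in one place.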
modules/multi-architecture-verifying-cluster-compatibility.adoc

Lines changed: 41 additions & 0 deletions

@@ -0,0 +1,41 @@
+// Module included in the following assemblies:
+
+// * post_installation_configuration/multi-architecture-configuration.adoc
+
+:_content-type: PROCEDURE
+[id="multi-architecture-verifying-cluster-compatibility_{context}"]
+
+= Verifying cluster compatibility
+
+Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
+
+.Prerequisites
+
+* You installed the OpenShift CLI (`oc`).
+
+.Procedure
+
+* Check that your cluster uses the multi-architecture payload by running the following command:
++
+[source,terminal]
+----
+$ oc adm release info -o json | jq .metadata.metadata
+----
+
+.Verification
+
+. If you see the following output, your cluster uses the multi-architecture payload:
++
+[source,terminal]
+----
+"release.openshift.io/architecture": "multi"
+----
++
+You can then begin adding multi-architecture compute nodes to your cluster.
+
+. If you see the following output, your cluster does not use the multi-architecture payload:
++
+[source,terminal]
+----
+null
+----
++
+To migrate your cluster to one that supports multi-architecture compute machines, follow the procedure in "Migrating to a cluster with multi-architecture compute machines".
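The decision logic of this verification step can be mimicked locally. The two JSON documents below are hypothetical stand-ins for `oc adm release info -o json` output in the multi-architecture and single-architecture cases:

```shell
# Sketch of the verification logic: .metadata.metadata carries
# "release.openshift.io/architecture": "multi" on a multi-arch payload,
# and is null otherwise. The sample JSON is illustrative, not real output.
MULTI_JSON='{"metadata":{"metadata":{"release.openshift.io/architecture":"multi"}}}'
SINGLE_JSON='{"metadata":{"metadata":null}}'

check_payload() {
  # Prints "multi" for a multi-architecture payload, "null" otherwise.
  jq -r '.metadata.metadata["release.openshift.io/architecture"] // "null"' <<<"$1"
}

check_payload "$MULTI_JSON"    # multi
check_payload "$SINGLE_JSON"   # null
```

The `//` alternative operator in `jq` maps both a missing key and a `null` object to the same "not multi-architecture" answer, matching the two outcomes described above.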

post_installation_configuration/multi-architecture-configuration.adoc

Lines changed: 23 additions & 0 deletions

@@ -10,6 +10,12 @@ An {product-title} cluster with multi-architecture compute machines is a cluster
 
 For information on migrating your single-architecture cluster to a cluster that supports multi-architecture compute machines, see xref:../updating/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
 
+include::modules/multi-architecture-verifying-cluster-compatibility.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+* xref:../updating/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines]
+
 == Creating a cluster with multi-architecture compute machines on Azure
 
 To deploy an Azure cluster with multi-architecture compute machines, you must first create a single-architecture Azure installer-provisioned cluster that uses the multi-architecture installer binary. For more information on Azure installations, see xref:../installing/installing_azure/installing-azure-customizations.adoc[Installing a cluster on Azure with customizations]. You can then add an `arm64` compute machine set to your cluster to create a cluster with multi-architecture compute machines.
@@ -34,4 +40,21 @@ include::modules/multi-architecture-modify-machine-set-aws.adoc[leveloffset=+2]
 .Additional resources
 * xref:../installing/installing_aws/installing-aws-customizations.adoc#installation-aws-arm-tested-machine-types_installing-aws-customizations[Tested instance types for AWS 64-bit ARM]
 
+== Creating a cluster with multi-architecture compute machines on bare metal (Technology Preview)
+
+To create a cluster with multi-architecture compute machines on bare metal, you must have an existing single-architecture bare metal cluster. For more information on bare metal installations, see xref:../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[Installing a user-provisioned cluster on bare metal]. You can then add 64-bit ARM compute machines to your {product-title} cluster on bare metal.
+
+Before you can add 64-bit ARM nodes to your bare metal cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see xref:../updating/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
+
+The following procedures explain how to create an {op-system} compute machine by using an ISO image or network PXE booting. This allows you to add 64-bit ARM nodes to your bare metal cluster and deploy a cluster with multi-architecture compute machines.
+
+:FeatureName: Clusters with multi-architecture compute machines on bare metal user-provisioned installations
+include::snippets/technology-preview.adoc[leveloffset=+2]
+
+include::modules/machine-user-infra-machines-iso.adoc[leveloffset=+2]
+
+include::modules/machine-user-infra-machines-pxe.adoc[leveloffset=+2]
+
+include::modules/installation-approve-csrs.adoc[leveloffset=+2]
+
 include::modules/multi-architecture-import-imagestreams.adoc[leveloffset=+1]
