Commit 6072138

Merge pull request #64357 from alishaIBM/mac
[MIXEDARCH-275] Creating a cluster with multi-architecture compute machine on IBM Power
2 parents 558d7d1 + 86c7382 commit 6072138

File tree

7 files changed: +119, -3 lines changed

_topic_maps/_topic_map.yml

Lines changed: 2 additions & 0 deletions
@@ -550,6 +550,8 @@ Topics:
   File: creating-multi-arch-compute-nodes-ibm-z
 - Name: Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with RHEL KVM
   File: creating-multi-arch-compute-nodes-ibm-z-kvm
+- Name: Creating a cluster with multi-architecture compute machines on IBM Power
+  File: creating-multi-arch-compute-nodes-ibm-power
 - Name: Managing your cluster with multi-architecture compute machines
   File: multi-architecture-compute-managing
 - Name: Enabling encryption on a vSphere cluster

modules/installation-approve-csrs.adoc

Lines changed: 24 additions & 0 deletions
@@ -23,6 +23,7 @@
 // * installing/installing_ibm_z/installing-ibm-power.adoc
 // * installing/installing_ibm_z/installing-restricted-networks-ibm-power.adoc
 // * installing/installing_azure/installing-restricted-networks-azure-user-provisioned.adoc
+// * post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-power.adoc


 ifeval::["{context}" == "installing-ibm-z"]
@@ -31,6 +32,9 @@ endif::[]
 ifeval::["{context}" == "installing-ibm-z-kvm"]
 :ibm-z-kvm:
 endif::[]
+ifeval::["{context}" == "creating-multi-arch-compute-nodes-ibm-power"]
+:ibm-power:
+endif::[]

 :_content-type: PROCEDURE
 [id="installation-approve-csrs_{context}"]
@@ -170,18 +174,35 @@ $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}
 +
 [source,terminal]
 ----
+ifndef::ibm-power[]
 $ oc get nodes
+endif::ibm-power[]
+ifdef::ibm-power[]
+$ oc get nodes -o wide
+endif::ibm-power[]
 ----
 +
 .Example output
 [source,terminal]
 ----
+ifndef::ibm-power[]
 NAME      STATUS   ROLES    AGE   VERSION
 master-0  Ready    master   73m   v1.27.3
 master-1  Ready    master   73m   v1.27.3
 master-2  Ready    master   74m   v1.27.3
 worker-0  Ready    worker   11m   v1.27.3
 worker-1  Ready    worker   11m   v1.27.3
+endif::ibm-power[]
+ifdef::ibm-power[]
+NAME              STATUS   ROLES                  AGE   VERSION           INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                                                        KERNEL-VERSION                  CONTAINER-RUNTIME
+worker-0-ppc64le  Ready    worker                 42d   v1.28.2+e3ba6d9   192.168.200.21   <none>        Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow)   5.14.0-284.34.1.el9_2.ppc64le   cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9
+worker-1-ppc64le  Ready    worker                 42d   v1.28.2+e3ba6d9   192.168.200.20   <none>        Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow)   5.14.0-284.34.1.el9_2.ppc64le   cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9
+master-0-x86      Ready    control-plane,master   75d   v1.28.2+e3ba6d9   10.248.0.38      10.248.0.38   Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow)   5.14.0-284.34.1.el9_2.x86_64    cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9
+master-1-x86      Ready    control-plane,master   75d   v1.28.2+e3ba6d9   10.248.0.39      10.248.0.39   Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow)   5.14.0-284.34.1.el9_2.x86_64    cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9
+master-2-x86      Ready    control-plane,master   75d   v1.28.2+e3ba6d9   10.248.0.40      10.248.0.40   Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow)   5.14.0-284.34.1.el9_2.x86_64    cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9
+worker-0-x86      Ready    worker                 75d   v1.28.2+e3ba6d9   10.248.0.43      10.248.0.43   Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow)   5.14.0-284.34.1.el9_2.x86_64    cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9
+worker-1-x86      Ready    worker                 75d   v1.28.2+e3ba6d9   10.248.0.44      10.248.0.44   Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow)   5.14.0-284.34.1.el9_2.x86_64    cri-o://1.28.1-3.rhaos4.15.gitb36169e.el9
+endif::ibm-power[]
 ----
 +
 [NOTE]
@@ -198,3 +219,6 @@ endif::[]
 ifeval::["{context}" == "installing-ibm-z-kvm"]
 :!ibm-z-kvm:
 endif::[]
+ifeval::["{context}" == "creating-multi-arch-compute-nodes-ibm-power"]
+:!ibm-power:
+endif::[]
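These hunks conditionalize only the node-listing step of the module; the CSR approval commands earlier in the module (see the truncated `oc get csr -o go-template=...` hunk context above) are unchanged. As a quick orientation sketch, not part of this diff, checking and approving pending requests generally looks like the following, where `<csr_name>` is a placeholder:

----
$ oc get csr                             # list pending certificate signing requests
$ oc adm certificate approve <csr_name>  # approve a pending client or serving CSR
----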

modules/machine-user-infra-machines-iso.adoc

Lines changed: 13 additions & 0 deletions
@@ -2,15 +2,25 @@
 //
 // * machine_management/user_infra/adding-bare-metal-compute-user-infra.adoc
 // * post_installation_configuration/node-tasks.adoc
+// * post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-power.adoc
+
 ifeval::["{context}" == "multi-architecture-configuration"]
 :multi:
 endif::[]
+ifeval::["{context}" == "creating-multi-arch-compute-nodes-ibm-power"]
+:ibm-power:
+endif::[]

 :_content-type: PROCEDURE
 [id="machine-user-infra-machines-iso_{context}"]
 = Creating {op-system} machines using an ISO image

+ifndef::ibm-power[]
 You can create more {op-system-first} compute machines for your bare metal cluster by using an ISO image to create the machines.
+endif::ibm-power[]
+ifdef::ibm-power[]
+You can create more {op-system-first} compute machines for your cluster by using an ISO image to create the machines.
+endif::ibm-power[]

 .Prerequisites

@@ -85,4 +95,7 @@ Ensure that the installation is successful on each node before commencing with t

 ifeval::["{context}" == "multi-architecture-configuration"]
 :!multi:
+endif::[]
+ifeval::["{context}" == "creating-multi-arch-compute-nodes-ibm-power"]
+:!ibm-power:
 endif::[]
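The module's download and boot steps fall outside the hunks shown here. As a hedged sketch (assuming the standard `openshift-install` CoreOS stream metadata layout, not content from this PR), the live ISO locations, including the `ppc64le` build, can be listed with:

----
$ openshift-install coreos print-stream-json | grep '\.iso[^.]'
----

The output includes one ISO URL per architecture; the `ppc64le` entry is the one relevant to {ibmpowerProductName} compute machines.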

modules/machine-user-infra-machines-pxe.adoc

Lines changed: 25 additions & 1 deletion
@@ -3,6 +3,11 @@
 // * machine_management/user_infra/adding-bare-metal-compute-user-infra.adoc
 // * post_installation_configuration/node-tasks.adoc
 // * post_installation_configuration/multi-architecture-configuration.adoc
+// * post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-power.adoc
+
+ifeval::["{context}" == "creating-multi-arch-compute-nodes-ibm-power"]
+:ibm-power:
+endif::[]

 :_content-type: PROCEDURE
 [id="machine-user-infra-machines-pxe_{context}"]
@@ -39,7 +44,12 @@ LABEL pxeboot
 This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more `console=` arguments to the `APPEND` line. For example, add `console=tty0 console=ttyS0` to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see link:https://access.redhat.com/articles/7212[How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?].
 ====

+ifndef::ibm-power[]
 ** For iPXE (`x86_64` + `aarch64`):
+endif::ibm-power[]
+ifdef::ibm-power[]
+** For iPXE (`x86_64` + `ppc64le`):
+endif::ibm-power[]
 +
 ----
 kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign <1> <2>
@@ -63,10 +73,20 @@ This configuration does not enable serial console access on machines with a grap
 +
 [NOTE]
 ====
+ifndef::ibm-power[]
 To network boot the CoreOS `kernel` on `aarch64` architecture, you need to use a version of iPXE build with the `IMAGE_GZIP` option enabled. See link:https://ipxe.org/buildcfg/image_gzip[`IMAGE_GZIP` option in iPXE].
+endif::ibm-power[]
+ifdef::ibm-power[]
+To network boot the CoreOS `kernel` on `ppc64le` architecture, you need to use a version of iPXE build with the `IMAGE_GZIP` option enabled. See link:https://ipxe.org/buildcfg/image_gzip[`IMAGE_GZIP` option in iPXE].
+endif::ibm-power[]
 ====

+ifndef::ibm-power[]
 ** For PXE (with UEFI and GRUB as second stage) on `aarch64`:
+endif::ibm-power[]
+ifdef::ibm-power[]
+** For PXE (with UEFI and GRUB as second stage) on `ppc64le`:
+endif::ibm-power[]
 +
 ----
 menuentry 'Install CoreOS' {
@@ -81,4 +101,8 @@ The `coreos.live.rootfs_url` parameter value is the location of the `rootfs` fil
 For example, to use DHCP on a NIC that is named `eno1`, set `ip=eno1:dhcp`.
 <3> Specify the location of the `initramfs` file that you uploaded to your TFTP server.

-. Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
+. Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
+
+ifeval::["{context}" == "creating-multi-arch-compute-nodes-ibm-power"]
+:!ibm-power:
+endif::[]
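Only the opening `menuentry 'Install CoreOS' {` line of the GRUB stanza appears in the hunk above. As an illustrative sketch with placeholder values (not the verbatim module content), a second-stage GRUB entry for network booting typically continues along these lines:

----
menuentry 'Install CoreOS' {
    linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign
    initrd rhcos-<version>-live-initramfs.<architecture>.img
}
----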

modules/multi-architecture-verifying-cluster-compatibility.adoc

Lines changed: 29 additions & 2 deletions
@@ -6,6 +6,11 @@
 // * post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-gcp.adoc
 // * post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z-kvm.adoc
 // * post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z.adoc
+// * post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-power.adoc
+
+ifeval::["{context}" == "creating-multi-arch-compute-nodes-ibm-power"]
+:ibm-power:
+endif::[]

 :_content-type: PROCEDURE
 [id="multi-architecture-verifying-cluster-compatibility_{context}"]
@@ -18,6 +23,18 @@ Before you can start adding compute nodes of different architectures to your clu

 * You installed the OpenShift CLI (`oc`)

+ifdef::ibm-power[]
+[NOTE]
+====
+When using multiple architectures, hosts for {product-title} nodes must share the same storage layer. If they do not have the same storage layer, use a storage provider such as `nfs-provisioner`.
+====
+
+[NOTE]
+====
+You should limit the number of network hops between the compute and control plane as much as possible.
+====
+endif::ibm-power[]
+
 .Procedure

 * You can check that your cluster uses the architecture payload by running the following command:
@@ -41,6 +58,16 @@ You can then begin adding multi-arch compute nodes to your cluster.
 +
 [source,terminal]
 ----
-$ null
+{
+"url": "https://access.redhat.com/errata/RHSA-2023:4671"
+}
 ----
-To migrate your cluster to one that supports multi-architecture compute machines, follow the procedure in "Migrating to a cluster with multi-architecture compute machines".
++
+[IMPORTANT]
+====
+To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in xref:../../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
+====
+
+ifeval::["{context}" == "creating-multi-arch-compute-nodes-ibm-power"]
+:!ibm-power:
+endif::[]
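For context, the corrected example output above (the placeholder `$ null` replaced with the errata JSON) is what a single-architecture payload returns from the release-image metadata query that this module's procedure refers to. Assuming the standard command from the OpenShift docs (not shown in these hunks), the check looks like this, and a multi-architecture payload additionally reports `release.openshift.io/architecture: multi`:

----
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
----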

post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-power.adoc

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
+:_content-type: ASSEMBLY
+:context: creating-multi-arch-compute-nodes-ibm-power
+[id="creating-multi-arch-compute-nodes-ibm-power"]
+= Creating a cluster with multi-architecture compute machines on {ibmpowerProductName}
+include::_attributes/common-attributes.adoc[]
+
+toc::[]
+
+To create a cluster with multi-architecture compute machines on {ibmpowerProductName} (`ppc64le`), you must have an existing single-architecture (`x86_64`) cluster. You can then add `ppc64le` compute machines to your {product-title} cluster.
+
+[IMPORTANT]
+====
+Before you can add `ppc64le` nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see xref:../../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
+====
+
+The following procedures explain how to create a {op-system} compute machine using an ISO image or network PXE booting. This will allow you to add `ppc64le` nodes to your cluster and deploy a cluster with multi-architecture compute machines.
+
+include::modules/multi-architecture-verifying-cluster-compatibility.adoc[leveloffset=+1]
+
+include::modules/machine-user-infra-machines-iso.adoc[leveloffset=+1]
+
+include::modules/machine-user-infra-machines-pxe.adoc[leveloffset=+1]
+
+include::modules/installation-approve-csrs.adoc[leveloffset=+1]
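After the assembly's procedures are complete, a quick way to confirm that the cluster now mixes `x86_64` and `ppc64le` nodes (a convenience check, not part of the new assembly) is to print each node's reported architecture:

----
$ oc get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
----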

post_installation_configuration/configuring-multi-arch-compute-machines/multi-architecture-configuration.adoc

Lines changed: 2 additions & 0 deletions
@@ -36,6 +36,8 @@ To create a cluster with multi-architecture compute machines for various platfor

 * xref:../../post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z-kvm.adoc#creating-multi-arch-compute-nodes-ibm-z-kvm[Creating a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} with {op-system-base} KVM]

+* xref:../../post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-power.adoc#creating-multi-arch-compute-nodes-ibm-power[Creating a cluster with multi-architecture compute machines on {ibmpowerProductName}]
+
 [IMPORTANT]
 ====
 Autoscaling from zero is currently not supported on Google Cloud Platform (GCP).
