
Commit 346fd71

Merge pull request #63855 from SNiemann15/multiarch_compute_z
2 parents 94d416c + 80c0c99 commit 346fd71

6 files changed: 297 additions, 3 deletions

_topic_maps/_topic_map.yml

Lines changed: 4 additions & 0 deletions

@@ -539,6 +539,10 @@ Topics:
     File: creating-multi-arch-compute-nodes-aws
   - Name: Creating a cluster with multi-architecture compute machines on bare metal
     File: creating-multi-arch-compute-nodes-bare-metal
+  - Name: Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with z/VM
+    File: creating-multi-arch-compute-nodes-ibm-z
+  - Name: Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with RHEL KVM
+    File: creating-multi-arch-compute-nodes-ibm-z-kvm
   - Name: Managing your cluster with multi-architecture compute machines
     File: multi-architecture-compute-managing
   - Name: Enabling encryption on a vSphere cluster
Lines changed: 100 additions & 0 deletions

@@ -0,0 +1,100 @@
// Module included in the following assemblies:
//
// * post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z-kvm.adoc

:_content-type: PROCEDURE
[id="machine-user-infra-machines-ibm-z-kvm_{context}"]
= Creating {op-system} machines using `virt-install`

You can create more {op-system-first} compute machines for your cluster by using `virt-install`.

.Prerequisites

* You have at least one LPAR running on {op-system-base} 8.7 or later with KVM, referred to as the {op-system-base} KVM host in this procedure.
* The KVM/QEMU hypervisor is installed on the {op-system-base} KVM host.
* You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes.
* An HTTP or HTTPS server is set up.

.Procedure

. Extract the Ignition config file from the cluster by running the following command:
+
[source,terminal]
----
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
----

. Upload the `worker.ign` Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file.
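+
The following is an editorial sketch rather than part of the original module. It assumes you have SSH access to the HTTP server and that the server uses an Apache-style document root at `/var/www/html`; both the user and the path are hypothetical placeholders:
+
[source,terminal]
----
$ scp worker.ign <user>@<HTTP_server>:/var/www/html/worker.ign
----
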
. Validate that the Ignition config file is available on the URL. The following example gets the Ignition config file for the compute node:
+
[source,terminal]
----
$ curl -k http://<HTTP_server>/worker.ign
----

. Download the {op-system-base} live `kernel`, `initramfs`, and `rootfs` files by running the following commands:
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')
----
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')
----
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')
----

. Move the downloaded {op-system-base} live `kernel`, `initramfs`, and `rootfs` files to an HTTP or HTTPS server before you launch `virt-install`.
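+
If no web server is already running on the {op-system-base} KVM host, one quick option is sketched below. This example is not part of the original module; it assumes Python 3 is available and serves the current working directory, which must contain the downloaded artifacts, on port 8080. Any HTTP or HTTPS server works equally well:
+
[source,terminal]
----
$ python3 -m http.server 8080
----
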
. Create the new KVM guest nodes using the {op-system-base} `kernel`, `initramfs`, and Ignition files; the new disk image; and adjusted parm line arguments.
+
--
[source,terminal]
----
$ virt-install \
--connect qemu:///system \
--name {vn_name} \
--autostart \
--os-variant rhel9.2 \ <1>
--cpu host \
--vcpus {vcpus} \
--memory {memory_mb} \
--disk {vn_name}.qcow2,size={image_size | default(100,true)} \
--network network={virt_network_parm} \
--location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} \ <2>
--extra-args "rd.neednet=1" \
--extra-args "coreos.inst.install_dev=/dev/vda" \
--extra-args "coreos.inst.ignition_url={worker_ign}" \ <3>
--extra-args "coreos.live.rootfs_url={rhcos_rootfs}" \ <4>
--extra-args "ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}::none:{MTU}" \
--extra-args "nameserver={dns}" \
--extra-args "console=ttysclp0" \
--noautoconsole \
--wait
----
<1> For `os-variant`, specify the {op-system-base} version for the {op-system} compute machine. `rhel9.2` is the recommended version. To query the supported {op-system-base} version of your operating system, run the following command:
+
[source,terminal]
----
$ osinfo-query os -f short-id
----
+
[NOTE]
====
The `os-variant` is case sensitive.
====
+
<2> For `--location`, specify the location of the kernel/initrd on the HTTP or HTTPS server.
<3> For `coreos.inst.ignition_url=`, specify the `worker.ign` Ignition file for the machine role. Only HTTP and HTTPS protocols are supported.
<4> For `coreos.live.rootfs_url=`, specify the matching rootfs artifact for the `kernel` and `initramfs` you are booting. Only HTTP and HTTPS protocols are supported.
--
. Continue to create more compute machines for your cluster.
Lines changed: 148 additions & 0 deletions

@@ -0,0 +1,148 @@
// Module included in the following assemblies:
//
// * post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z.adoc

:_content-type: PROCEDURE
[id="machine-user-infra-machines-ibm-z_{context}"]
= Creating {op-system} machines on {ibmzProductName} with z/VM

You can create more {op-system-first} compute machines running on {ibmzProductName} with z/VM and attach them to your existing cluster.

.Prerequisites

* You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes.
* You have an HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create.

.Procedure

. Extract the Ignition config file from the cluster by running the following command:
+
[source,terminal]
----
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
----

. Upload the `worker.ign` Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file.

. Validate that the Ignition config file is available on the URL. The following example gets the Ignition config file for the compute node:
+
[source,terminal]
----
$ curl -k http://<HTTP_server>/worker.ign
----

. Download the {op-system-base} live `kernel`, `initramfs`, and `rootfs` files by running the following commands:
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')
----
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')
----
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')
----

. Move the downloaded {op-system-base} live `kernel`, `initramfs`, and `rootfs` files to an HTTP or HTTPS server that is accessible from the z/VM guest you want to add.
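+
As a quick check that is not part of the original module, you can send a HEAD request for each artifact from a machine on the same network as the z/VM guest. The file names below are hypothetical placeholders for the artifacts you downloaded:
+
[source,terminal]
----
$ curl -I http://<HTTP_server>/rhcos-live-kernel-s390x
$ curl -I http://<HTTP_server>/rhcos-live-initramfs.s390x.img
$ curl -I http://<HTTP_server>/rhcos-live-rootfs.s390x.img
----
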
. Create a parameter file for the z/VM guest. The following parameters are specific for the virtual machine:
** Optional: To specify a static IP address, add an `ip=` parameter with the following entries, with each separated by a colon:
... The IP address for the machine.
... An empty string.
... The gateway.
... The netmask.
... The machine host and domain name in the form `hostname.domainname`. Omit this value to let {op-system} decide.
... The network interface name. Omit this value to let {op-system} decide.
... The value `none`.
** For `coreos.inst.ignition_url=`, specify the URL to the `worker.ign` file. Only HTTP and HTTPS protocols are supported.
** For `coreos.live.rootfs_url=`, specify the matching rootfs artifact for the `kernel` and `initramfs` you are booting. Only HTTP and HTTPS protocols are supported.

** For installations on DASD-type disks, complete the following tasks:
... For `coreos.inst.install_dev=`, specify `/dev/dasda`.
... Use `rd.dasd=` to specify the DASD where {op-system} is to be installed.
... Leave all other parameters unchanged.
+
The following is an example parameter file, `additional-worker-dasd.parm`:
+
[source,terminal]
----
rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/dasda \
coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \
coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \
ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \
rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \
zfcp.allow_lun_scan=0 \
rd.dasd=0.0.3490
----
+
Write all options in the parameter file as a single line and make sure that you have no newline characters.

** For installations on FCP-type disks, complete the following tasks:
... Use `rd.zfcp=<adapter>,<wwpn>,<lun>` to specify the FCP disk where {op-system} is to be installed. For multipathing, repeat this step for each additional path.
+
[NOTE]
====
When you install with multiple paths, you must enable multipathing directly after the installation, because enabling it at a later point in time can cause problems.
====
... Set the install device as: `coreos.inst.install_dev=/dev/sda`.
+
[NOTE]
====
If additional LUNs are configured with NPIV, FCP requires `zfcp.allow_lun_scan=0`. If you must enable `zfcp.allow_lun_scan=1` because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node.
====
... Leave all other parameters unchanged.
+
[IMPORTANT]
====
Additional post-installation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on {op-system}" in _Post-installation machine configuration tasks_.
====
// Add xref once it's allowed.
+
The following is an example parameter file, `additional-worker-fcp.parm`, for a worker node with multipathing:
+
[source,terminal]
----
rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/sda \
coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \
coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \
ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \
rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \
zfcp.allow_lun_scan=0 \
rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \
rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \
rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \
rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000
----
+
Write all options in the parameter file as a single line and make sure that you have no newline characters.

. Transfer the `initramfs`, `kernel`, parameter files, and {op-system} images to z/VM, for example, by using FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/installation_guide/sect-installing-zvm-s390[Installing under z/VM].
. Punch the files to the virtual reader of the z/VM guest virtual machine.
+
See link:https://www.ibm.com/docs/en/zvm/latest?topic=commands-punch[PUNCH] in IBM Documentation.
+
[TIP]
====
You can use the CP PUNCH command or, if you use Linux, the `vmur` command to transfer files between two z/VM guest virtual machines, as shown in the sketch after this tip.
====
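+
The following sketch is not part of the original module. It shows how the `vmur` command might be used from a Linux guest that runs under the same z/VM instance; the guest ID `LNXWRK1`, the local file names, and the spool file names are hypothetical. The kernel is punched first, then the parameter file, then the initramfs, because that is the order in which the files are read when you IPL from the reader:
+
[source,terminal]
----
$ vmur punch -r -u LNXWRK1 -N kernel.img kernel.img
$ vmur punch -r -u LNXWRK1 -N worker.parm additional-worker-dasd.parm
$ vmur punch -r -u LNXWRK1 -N initrd.img initrd.img
----
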
. Log in to CMS on the z/VM guest virtual machine that will become the compute machine.
. IPL the machine from the reader by running the following command:
+
----
$ ipl c
----
+
See link:https://www.ibm.com/docs/en/zvm/latest?topic=commands-ipl[IPL] in IBM Documentation.
Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
:_content-type: ASSEMBLY
:context: creating-multi-arch-compute-nodes-ibm-z-kvm
[id="creating-multi-arch-compute-nodes-ibm-z-kvm"]
= Creating a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} with {op-system-base} KVM
include::_attributes/common-attributes.adoc[]

toc::[]

To create a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} (`s390x`) with {op-system-base} KVM, you must have an existing single-architecture `x86_64` cluster. You can then add `s390x` compute machines to your {product-title} cluster.

Before you can add `s390x` nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see xref:../../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
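
As an aside that is not part of the original assembly, one way to check whether a cluster already uses the multi-architecture payload is to inspect the release image metadata; on a multi-architecture payload the output includes `"release.openshift.io/architecture":"multi"`. A sketch of that check:

[source,terminal]
----
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
----
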
The following procedures explain how to create a {op-system} compute machine using a {op-system-base} KVM instance. This allows you to add `s390x` nodes to your cluster and deploy a cluster with multi-architecture compute machines.

include::modules/multi-architecture-verifying-cluster-compatibility.adoc[leveloffset=+1]

include::modules/machine-user-infra-machines-ibm-z-kvm.adoc[leveloffset=+1]

include::modules/installation-approve-csrs.adoc[leveloffset=+1]
Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
:_content-type: ASSEMBLY
:context: creating-multi-arch-compute-nodes-ibm-z
[id="creating-multi-arch-compute-nodes-ibm-z"]
= Creating a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} with z/VM
include::_attributes/common-attributes.adoc[]

toc::[]

To create a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} (`s390x`) with z/VM, you must have an existing single-architecture `x86_64` cluster. You can then add `s390x` compute machines to your {product-title} cluster.

Before you can add `s390x` nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see xref:../../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].

The following procedures explain how to create a {op-system} compute machine using a z/VM instance. This allows you to add `s390x` nodes to your cluster and deploy a cluster with multi-architecture compute machines.

include::modules/multi-architecture-verifying-cluster-compatibility.adoc[leveloffset=+1]

include::modules/machine-user-infra-machines-ibm-z.adoc[leveloffset=+1]

include::modules/installation-approve-csrs.adoc[leveloffset=+1]

post_installation_configuration/configuring-multi-arch-compute-machines/multi-architecture-configuration.adoc

Lines changed: 7 additions & 3 deletions

@@ -1,12 +1,12 @@
 :_content-type: CONCEPT
 :context: multi-architecture-configuration
 [id="post-install-multi-architecture-configuration"]
-= About clusters with multi-architecture compute machines
+= About clusters with multi-architecture compute machines
 include::_attributes/common-attributes.adoc[]

 toc::[]

-An {product-title} cluster with multi-architecture compute machines is a cluster that supports compute machines with different architectures. Clusters with multi-architecture compute machines are available only on AWS or Azure installer-provisioned infrastructures and bare metal user-provisioned infrastructures with x86_64 control plane machines.
+An {product-title} cluster with multi-architecture compute machines is a cluster that supports compute machines with different architectures. Clusters with multi-architecture compute machines are available only on Amazon Web Services (AWS) or Microsoft Azure installer-provisioned infrastructures and bare metal, {ibmpowerProductName}, and {ibmzProductName} user-provisioned infrastructures with x86_64 control plane machines.

 [NOTE]
 ====
@@ -20,7 +20,7 @@ The Cluster Samples Operator is not supported on clusters with multi-architectur

 For information on migrating your single-architecture cluster to a cluster that supports multi-architecture compute machines, see xref:../../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].

-== Configuring your cluster with multi-architecture compute machines
+== Configuring your cluster with multi-architecture compute machines

 To create a cluster with multi-architecture compute machines for various platforms, you can use the documentation in the following sections:

@@ -29,3 +29,7 @@ To create a cluster with multi-architecture compute machines for various platfor
 * xref:../../post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-aws.adoc#creating-multi-arch-compute-nodes-aws[Creating a cluster with multi-architecture compute machines on AWS]

 * xref:../../post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-bare-metal.adoc#creating-multi-arch-compute-nodes-bare-metal[Creating a cluster with multi-architecture compute machines on bare metal]
+
+* xref:../../post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z.adoc#creating-multi-arch-compute-nodes-ibm-z[Creating a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} with z/VM]
+
+* xref:../../post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z-kvm.adoc#creating-multi-arch-compute-nodes-ibm-z-kvm[Creating a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} with {op-system-base} KVM]
