// Module included in the following assemblies:
//
// * post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z.adoc

:_content-type: PROCEDURE
[id="machine-user-infra-machines-ibm-z_{context}"]
= Creating {op-system} machines on {ibmzProductName} with z/VM

You can create more {op-system-first} compute machines running on {ibmzProductName} with z/VM and attach them to your existing cluster.

.Prerequisites

* You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes.
* You have an HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create.

.Procedure

. Extract the Ignition config file from the cluster by running the following command:
+
[source,terminal]
----
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
----
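+
Optionally, you can sanity-check that the extracted file parses as Ignition JSON before serving it. The following is a self-contained sketch that assumes `jq` is available; the inline `example.ign` content is a stand-in for your real `worker.ign`:
+
[source,terminal]
----
echo '{"ignition":{"version":"3.2.0"}}' > example.ign  # stand-in for your extracted worker.ign
jq -r '.ignition.version' example.ign                  # prints the Ignition spec version on success
----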

. Upload the `worker.ign` Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file.

. Validate that the Ignition config file is available at the URL. The following example fetches the Ignition config file for the compute node:
+
[source,terminal]
----
$ curl -k http://<HTTP_server>/worker.ign
----
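+
If you want to rehearse this check before your real HTTP server is in place, the following self-contained sketch serves a dummy Ignition file from a temporary local web server and verifies that it returns HTTP 200. The port `8099` and the file contents are illustrative, and the sketch assumes `python3` is available on the provisioning machine:
+
[source,terminal]
----
tmpdir=$(mktemp -d)
echo '{"ignition":{"version":"3.2.0"}}' > "$tmpdir/worker.ign"
# serve the directory in the background; exec so the PID is the server's
( cd "$tmpdir" && exec python3 -m http.server 8099 >/dev/null 2>&1 ) &
server_pid=$!
sleep 1
curl -s -o /dev/null -w 'HTTP status: %{http_code}\n' http://localhost:8099/worker.ign
kill "$server_pid" 2>/dev/null
----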

. Download the {op-system-base} live `kernel`, `initramfs`, and `rootfs` files by running the following commands:
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')
----
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')
----
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')
----
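+
The three `jq` queries above differ only in the final key. As a sketch, you can list all three artifact URLs in a single pass; the `stream.json` file below is a minimal stand-in for the output of the `oc ... get configmap/coreos-bootimages` command, with illustrative URLs:
+
[source,terminal]
----
cat > stream.json <<'EOF'
{"architectures":{"s390x":{"artifacts":{"metal":{"formats":{"pxe":{
 "kernel":{"location":"https://example.com/rhcos-live-kernel-s390x"},
 "initramfs":{"location":"https://example.com/rhcos-live-initramfs.s390x.img"},
 "rootfs":{"location":"https://example.com/rhcos-live-rootfs.s390x.img"}}}}}}}}
EOF
# print the kernel, initramfs, and rootfs locations, one per line
jq -r '.architectures.s390x.artifacts.metal.formats.pxe
  | .kernel.location, .initramfs.location, .rootfs.location' stream.json
----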

. Move the downloaded {op-system-base} live `kernel`, `initramfs`, and `rootfs` files to an HTTP or HTTPS server that is accessible from the z/VM guest you want to add.

. Create a parameter file for the z/VM guest. The following parameters are specific to the virtual machine:
** Optional: To specify a static IP address, add an `ip=` parameter with the following entries, with each separated by a colon:
... The IP address for the machine.
... An empty string.
... The gateway.
... The netmask.
... The machine host and domain name in the form `hostname.domainname`. Omit this value to let {op-system} decide.
... The network interface name. Omit this value to let {op-system} decide.
... The value `none`.
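+
As a sketch, the colon-separated value can be assembled from its fields as follows. The addresses are the same illustrative ones used in the example parameter files below; the empty variables are the fields you omit to let {op-system} decide:
+
[source,terminal]
----
IP=172.18.78.2; GATEWAY=172.18.78.1; NETMASK=255.255.255.0
HOSTNAME=""; IFNAME=""    # omitted fields
echo "ip=${IP}::${GATEWAY}:${NETMASK}:${HOSTNAME}:${IFNAME}:none"
----
+
This prints `ip=172.18.78.2::172.18.78.1:255.255.255.0:::none`, matching the `ip=` entry in the example parameter files.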
** For `coreos.inst.ignition_url=`, specify the URL to the `worker.ign` file. Only HTTP and HTTPS protocols are supported.
** For `coreos.live.rootfs_url=`, specify the matching rootfs artifact for the `kernel` and `initramfs` you are booting. Only HTTP and HTTPS protocols are supported.

** For installations on DASD-type disks, complete the following tasks:
... For `coreos.inst.install_dev=`, specify `/dev/dasda`.
... Use `rd.dasd=` to specify the DASD where {op-system} is to be installed.
... Leave all other parameters unchanged.
+
The following is an example parameter file, `additional-worker-dasd.parm`:
+
[source,terminal]
----
rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/dasda \
coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \
coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \
ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \
rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \
zfcp.allow_lun_scan=0 \
rd.dasd=0.0.3490
----
+
Write all options in the parameter file as a single line and make sure that you have no newline characters.
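+
The example above is wrapped with backslashes for readability, but the final PARM file must be one line. If you keep the readable, backslash-continued version in a separate file, you can generate the single-line file by stripping the backslashes and newlines with `tr`. The following self-contained sketch uses a shortened, illustrative subset of the options and a hypothetical `additional-worker-dasd.txt` input file:
+
[source,terminal]
----
cat > additional-worker-dasd.txt <<'EOF'
rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/dasda \
rd.dasd=0.0.3490
EOF
# delete the continuation backslashes and the newlines to get one line
tr -d '\\\n' < additional-worker-dasd.txt > additional-worker-dasd.parm
cat additional-worker-dasd.parm
----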

** For installations on FCP-type disks, complete the following tasks:
... Use `rd.zfcp=<adapter>,<wwpn>,<lun>` to specify the FCP disk where {op-system} is to be installed. For multipathing, repeat this step for each additional path.
+
[NOTE]
====
When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, because enabling it later can cause problems.
====
... Set the install device as: `coreos.inst.install_dev=/dev/sda`.
+
[NOTE]
====
If additional LUNs are configured with NPIV, FCP requires `zfcp.allow_lun_scan=0`. If you must enable `zfcp.allow_lun_scan=1`, for example because you use a CSI driver, you must configure your NPIV so that each node cannot access the boot partition of another node.
====
... Leave all other parameters unchanged.
+
[IMPORTANT]
====
Additional post-installation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on {op-system}" in _Post-installation machine configuration tasks_.
====
// Add xref once it's allowed.
+
The following is an example parameter file, `additional-worker-fcp.parm`, for a worker node with multipathing:
+
[source,terminal]
----
rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/sda \
coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \
coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \
ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \
rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \
zfcp.allow_lun_scan=0 \
rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \
rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \
rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \
rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000
----
+
Write all options in the parameter file as a single line and make sure that you have no newline characters.
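+
As a quick sanity check, you can confirm that all four `rd.zfcp=` paths survived the conversion to a single line. The following self-contained sketch runs the check against a stand-in single-line file named `check.parm`, using the path entries from the example above:
+
[source,terminal]
----
printf '%s' 'coreos.inst.install_dev=/dev/sda rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000' > check.parm
# count the rd.zfcp= entries; expect one per configured path
grep -o 'rd\.zfcp=[^ ]*' check.parm | wc -l
----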

. Transfer the `initramfs`, `kernel`, parameter files, and {op-system} images to z/VM, for example, by using FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/installation_guide/sect-installing-zvm-s390[Installing under z/VM].
. Punch the files to the virtual reader of the z/VM guest virtual machine.
+
See link:https://www.ibm.com/docs/en/zvm/latest?topic=commands-punch[PUNCH] in IBM Documentation.
+
[TIP]
====
You can use the CP PUNCH command or, if you use Linux, the `vmur` command to transfer files between two z/VM guest virtual machines.
====
. Log in to CMS on the guest virtual machine.
. IPL the machine from the reader by running the following command:
+
[source,terminal]
----
$ ipl c
----
+
See link:https://www.ibm.com/docs/en/zvm/latest?topic=commands-ipl[IPL] in IBM Documentation.