Commit 05a9225

Merge pull request #77425 from SNiemann15/ibmz_sno_lpar
[OCPBUGS-35431] Add SNO installation method for LPAR IBM Z
2 parents 62b1136 + a6afd4b commit 05a9225

4 files changed, +235 -9 lines changed

installing/installing_sno/install-sno-installing-sno.adoc

Lines changed: 7 additions & 7 deletions
@@ -139,7 +139,11 @@ include::modules/creating-custom-live-rhcos-iso.adoc[leveloffset=+1]
 [id="install-sno-with-ibmz"]
 == Installing {sno} with {ibm-z-title} and {ibm-linuxone-title}
 
-Installing a single-node cluster on {ibm-z-name} and {ibm-linuxone-name} requires user-provisioned installation using either the "Installing a cluster with {op-system-base} KVM on {ibm-z-name} and {ibm-linuxone-name}" or the "Installing a cluster with z/VM on {ibm-z-name} and {ibm-linuxone-name}" procedure.
+Installing a single-node cluster on {ibm-z-name} and {ibm-linuxone-name} requires user-provisioned installation using one of the following procedures:
+
+* xref:../../installing/installing_ibm_z/installing-ibm-z.adoc#installing-ibm-z[Installing a cluster with z/VM on {ibm-z-name} and {ibm-linuxone-name}]
+* xref:../../installing/installing_ibm_z/installing-ibm-z-kvm.adoc#installing-ibm-z-kvm[Installing a cluster with {op-system-base} KVM on {ibm-z-name} and {ibm-linuxone-name}]
+* xref:../../installing/installing_ibm_z/installing-ibm-z-lpar.adoc#installing-ibm-z-lpar[Installing a cluster in an LPAR on {ibm-z-name} and {ibm-linuxone-name}]
 
 [NOTE]
 ====
@@ -157,16 +161,12 @@ Installing a single-node cluster on {ibm-z-name} simplifies installation for dev
 You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of {ibm-z-name}. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every {product-title} cluster.
 ====
 
-[role="_additional-resources"]
-.Additional resources
-
-* xref:../../installing/installing_ibm_z/installing-ibm-z.adoc#installing-ibm-z[Installing a cluster with z/VM on {ibm-z-name} and {ibm-linuxone-name}]
-* xref:../../installing/installing_ibm_z/installing-ibm-z-kvm.adoc#installing-ibm-z-kvm[Installing a cluster with {op-system-base} KVM on {ibm-z-name} and{ibm-linuxone-name}]
-
 include::modules/install-sno-ibm-z.adoc[leveloffset=+2]
 
 include::modules/install-sno-ibm-z-kvm.adoc[leveloffset=+2]
 
+include::modules/install-sno-ibm-z-lpar.adoc[leveloffset=+2]
+
 [id="installing-sno-with-ibmpower"]
 == Installing {sno} with {ibm-power-title}
 
modules/install-sno-ibm-z-kvm.adoc

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@
 $ OCP_VERSION=<ocp_version> <1>
 ----
 +
-<1> Replace `<ocp_version>` with the current version, for example, `latest-{product-version}`
+<1> Replace `<ocp_version>` with the current version. For example, `latest-{product-version}`.
 
 . Set the host architecture by running the following command:
 +

modules/install-sno-ibm-z-lpar.adoc

Lines changed: 226 additions & 0 deletions
@@ -0,0 +1,226 @@
+// This is included in the following assemblies:
+//
+// installing_sno/install-sno-installing-sno.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="installing-sno-on-ibm-z-lpar_{context}"]
+= Installing {sno} in an LPAR on {ibm-z-title} and {ibm-linuxone-title}
+
+.Prerequisites
+
+* If you are deploying a single-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In single-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the _Load balancing requirements for user-provisioned infrastructure_ section for more information.
+
+.Procedure
+
+. Set the {product-title} version by running the following command:
++
+[source,terminal]
+----
+$ OCP_VERSION=<ocp_version> <1>
+----
++
+<1> Replace `<ocp_version>` with the current version. For example, `latest-{product-version}`.
+
+. Set the host architecture by running the following command:
++
+[source,terminal]
+----
+$ ARCH=<architecture> <1>
+----
+<1> Replace `<architecture>` with the target host architecture, `s390x`.
+
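As an illustrative aside, a concrete pair of assignments might look like the following; the version value here is only a placeholder, so substitute the release you are targeting:

[source,terminal]
----
$ OCP_VERSION=latest-4.16
$ ARCH=s390x
----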
+. Download the {product-title} client (`oc`) and make it available for use by entering the following commands:
++
+[source,terminal]
+----
+$ curl -k https://mirror.openshift.com/pub/openshift-v4/${ARCH}/clients/ocp/${OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz
+----
++
+[source,terminal]
+----
+$ tar zxvf oc.tar.gz
+----
++
+[source,terminal]
+----
+$ chmod +x oc
+----
+
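A quick sanity check, assuming the binary was extracted into the current directory, is to print the client version:

[source,terminal]
----
$ ./oc version --client
----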
+. Download the {product-title} installer and make it available for use by entering the following commands:
++
+[source,terminal]
+----
+$ curl -k https://mirror.openshift.com/pub/openshift-v4/${ARCH}/clients/ocp/${OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz
+----
++
+[source,terminal]
+----
+$ tar zxvf openshift-install-linux.tar.gz
+----
++
+[source,terminal]
+----
+$ chmod +x openshift-install
+----
+
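Similarly, you can confirm that the installer unpacked correctly by printing its version:

[source,terminal]
----
$ ./openshift-install version
----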
+. Prepare the `install-config.yaml` file:
++
+[source,yaml]
+----
+apiVersion: v1
+baseDomain: <domain> <1>
+compute:
+- name: worker
+  replicas: 0 <2>
+controlPlane:
+  name: master
+  replicas: 1 <3>
+metadata:
+  name: <name> <4>
+networking: <5>
+  clusterNetwork:
+  - cidr: 10.128.0.0/14
+    hostPrefix: 23
+  machineNetwork:
+  - cidr: 10.0.0.0/16 <6>
+  networkType: OVNKubernetes
+  serviceNetwork:
+  - 172.30.0.0/16
+platform:
+  none: {}
+pullSecret: '<pull_secret>' <7>
+sshKey: |
+  <ssh_key> <8>
+----
+<1> Add the cluster domain name.
+<2> Set the `compute` replicas to `0`. This makes the control plane node schedulable.
+<3> Set the `controlPlane` replicas to `1`. In conjunction with the previous `compute` setting, this setting ensures the cluster runs on a single node.
+<4> Set the `metadata` name to the cluster name.
+<5> Set the `networking` details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters.
+<6> Set the `cidr` value to match the subnet of the {sno} cluster.
+<7> Copy the {cluster-manager-url-pull} and add the contents to this configuration setting.
+<8> Add the public SSH key from the administration host so that you can log in to the cluster after installation.
+
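For callout <8>, if the administration host does not already have a key pair, one way to generate one is sketched below; the key file name is a hypothetical example:

[source,terminal]
----
$ ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_sno
$ cat ~/.ssh/id_sno.pub
----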
+. Generate {product-title} assets by running the following commands:
++
+[source,terminal]
+----
+$ mkdir ocp
+----
++
+[source,terminal]
+----
+$ cp install-config.yaml ocp
+----
+
+. Change to the directory that contains the {product-title} installation program and generate the Kubernetes manifests for the cluster:
++
+[source,terminal]
+----
+$ ./openshift-install create manifests --dir <installation_directory> <1>
+----
++
+<1> For `<installation_directory>`, specify the installation directory that contains the `install-config.yaml` file you created.
+
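Continuing the earlier sketch, where the installation directory was named `ocp`, the invocation would be:

[source,terminal]
----
$ ./openshift-install create manifests --dir ocp
----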
+. Check that the `mastersSchedulable` parameter in the `<installation_directory>/manifests/cluster-scheduler-02-config.yml` Kubernetes manifest file is set to `true`.
++
+--
+.. Open the `<installation_directory>/manifests/cluster-scheduler-02-config.yml` file.
+.. Locate the `mastersSchedulable` parameter and ensure that it is set to `true` as shown in the following `spec` stanza:
++
+[source,yaml]
+----
+spec:
+  mastersSchedulable: true
+status: {}
+----
+.. Save and exit the file.
+--
+
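A quick way to verify the setting without opening an editor, again assuming the hypothetical `ocp` installation directory:

[source,terminal]
----
$ grep mastersSchedulable ocp/manifests/cluster-scheduler-02-config.yml
----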
+. Create the Ignition configuration files by running the following command from the directory that contains the installation program:
++
+[source,terminal]
+----
+$ ./openshift-install create ignition-configs --dir <installation_directory> <1>
+----
+<1> For `<installation_directory>`, specify the same installation directory.
+
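If the command succeeds, a listing of the installation directory, shown here with the `ocp` example, would typically include `bootstrap.ign`, `master.ign`, and `worker.ign`:

[source,terminal]
----
$ ls ocp/*.ign
----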
+. Obtain the {op-system-base} `kernel`, `initramfs`, and `rootfs` artifacts from the link:https://access.redhat.com/downloads/content/290[Product Downloads] page on the Red Hat Customer Portal or from the link:https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/[{op-system} image mirror] page.
++
+[IMPORTANT]
+====
+The {op-system} images might not change with every release of {product-title}. You must download images with the highest version that is less than or equal to the {product-title} version that you install. Only use the appropriate `kernel`, `initramfs`, and `rootfs` artifacts described in the following procedure.
+====
++
+The file names contain the {product-title} version number. They resemble the following examples:
++
+`kernel`:: `rhcos-<version>-live-kernel-<architecture>`
+`initramfs`:: `rhcos-<version>-live-initramfs.<architecture>.img`
+`rootfs`:: `rhcos-<version>-live-rootfs.<architecture>.img`
++
+[NOTE]
+====
+The `rootfs` image is the same for FCP and DASD.
+====
+
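As a sketch of fetching the artifacts from the image mirror, with `<version>` standing in for the exact file names published on that page:

[source,terminal]
----
$ curl -LO https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/rhcos-<version>-live-kernel-s390x
$ curl -LO https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/rhcos-<version>-live-initramfs.s390x.img
$ curl -LO https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/rhcos-<version>-live-rootfs.s390x.img
----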
+. Move the following artifacts and files to an HTTP or HTTPS server:
+
+** Downloaded {op-system-base} live `kernel`, `initramfs`, and `rootfs` artifacts
+** Ignition files
+
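For a test or lab setup, a minimal way to serve the files is sketched below, assuming a hypothetical directory that holds the artifacts; production environments typically use a hardened web server:

[source,terminal]
----
$ cd /srv/artifacts
$ python3 -m http.server 8080
----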
+. Create a parameter file for the bootstrap in an LPAR:
++
+.Example parameter file for the bootstrap machine
++
+[source,terminal]
+----
+cio_ignore=all,!condev rd.neednet=1 \
+console=ttysclp0 \
+coreos.inst.install_dev=/dev/<block_device> \// <1>
+coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \// <2>
+coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \// <3>
+ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \// <4>
+rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 \
+rd.dasd=0.0.4411 \// <5>
+rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \// <6>
+zfcp.allow_lun_scan=0
+----
+<1> Specify the block device on the system to install to. For installations on DASD-type disks, use `dasda`. For installations on FCP-type disks, use `sda`.
+<2> Specify the location of the `bootstrap.ign` config file. Only HTTP and HTTPS protocols are supported.
+<3> For the `coreos.live.rootfs_url=` artifact, specify the matching `rootfs` artifact for the `kernel` and `initramfs` you are booting. Only HTTP and HTTPS protocols are supported.
+<4> For the `ip=` parameter, assign the IP address manually as described in "Installing a cluster in an LPAR on {ibm-z-name} and {ibm-linuxone-name}".
+<5> For installations on DASD-type disks, use `rd.dasd=` to specify the DASD where {op-system} is to be installed. Omit this entry for FCP-type disks.
+<6> For installations on FCP-type disks, use `rd.zfcp=<adapter>,<wwpn>,<lun>` to specify the FCP disk where {op-system} is to be installed. Omit this entry for DASD-type disks.
++
+You can adjust further parameters if required.
+
+. Create a parameter file for the control plane in an LPAR:
++
+.Example parameter file for the control plane machine
++
+[source,terminal]
+----
+cio_ignore=all,!condev rd.neednet=1 \
+console=ttysclp0 \
+coreos.inst.install_dev=/dev/<block_device> \
+coreos.inst.ignition_url=http://<http_server>/master.ign \// <1>
+coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \
+ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \
+rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 \
+rd.dasd=0.0.4411 \
+rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \
+zfcp.allow_lun_scan=0
+----
+<1> Specify the location of the `master.ign` config file. Only HTTP and HTTPS protocols are supported.
+
+. Transfer the following artifacts, files, and images to the LPAR, for example, by using FTP:
+
+** `kernel` and `initramfs` artifacts
+** Parameter files
+** {op-system} images
++
+For details about how to transfer the files with FTP and boot, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/performing_a_standard_rhel_9_installation/assembly_installing-on-64-bit-ibm-z_installing-rhel#installing-in-an-lpar_installing-in-an-lpar[Installing in an LPAR].
+
+. Boot the bootstrap machine.
+
+. Boot the control plane machine.

modules/install-sno-ibm-z.adoc

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@
 $ OCP_VERSION=<ocp_version> <1>
 ----
 +
-<1> Replace `<ocp_version>` with the current version, for example, `latest-{product-version}`
+<1> Replace `<ocp_version>` with the current version. For example, `latest-{product-version}`.
 
 . Set the host architecture by running the following command:
 +
