Commit a2812ed

IBM Z add sno support
1 parent ffd2a72 commit a2812ed

5 files changed: +483 −5 lines

installing/installing_sno/install-sno-installing-sno.adoc

Lines changed: 31 additions & 0 deletions
@@ -75,3 +75,34 @@ include::modules/install-sno-installing-with-usb-media.adoc[leveloffset=+1]
include::modules/install-booting-from-an-iso-over-http-redfish.adoc[leveloffset=+1]

include::modules/creating-custom-live-rhcos-iso.adoc[leveloffset=+1]

== Installing {sno} with {ibmzProductName} and {linuxoneProductName}

Installing a single-node cluster on {ibmzProductName} and {linuxoneProductName} requires user-provisioned installation using either the "Installing a cluster with {op-system-base} KVM on {ibmzProductName} and {linuxoneProductName}" or the "Installing a cluster with z/VM on {ibmzProductName} and {linuxoneProductName}" procedure.

[NOTE]
====
Installing a single-node cluster on {ibmzProductName} simplifies installation for development and test environments and requires fewer resources at entry level.
====

[discrete]
=== Hardware requirements

* The equivalent of two Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster.
* At least one network connection to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster.

[NOTE]
====
You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of {ibmzProductName}. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every {product-title} cluster.
====

[role="_additional-resources"]
.Additional resources

* xref:../../installing/installing_ibm_z/installing-ibm-z.adoc#installing-ibm-z[Installing a cluster with z/VM on {ibmzProductName} and {linuxoneProductName}]

* xref:../../installing/installing_ibm_z/installing-ibm-z-kvm.adoc#installing-ibm-z-kvm[Installing a cluster with {op-system-base} KVM on {ibmzProductName} and {linuxoneProductName}]

include::modules/install-sno-ibm-z.adoc[leveloffset=+2]

include::modules/install-sno-ibm-z-kvm.adoc[leveloffset=+2]

modules/install-sno-about-installing-on-a-single-node.adoc

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@
 [id="install-sno-about-installing-on-a-single-node_{context}"]
 = About OpenShift on a single node
 
-You can create a single-node cluster with standard installation methods. {product-title} on a single node is a specialized installation that requires the creation of a special ignition configuration ISO. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability.
+You can create a single-node cluster with standard installation methods. {product-title} on a single node is a specialized installation that requires the creation of a special Ignition configuration file. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability.
 
 [IMPORTANT]
 ====

modules/install-sno-ibm-z-kvm.adoc

Lines changed: 173 additions & 0 deletions
@@ -0,0 +1,173 @@
// This is included in the following assemblies:
//
// installing_sno/install-sno-installing-sno.adoc

:_content-type: PROCEDURE
[id="installing-sno-on-ibm-z-kvm_{context}"]
= Installing {sno} with {op-system-base} KVM on {ibmzProductName} and {linuxoneProductName}

.Prerequisites

* You have installed `podman`.

.Procedure

. Set the {product-title} version by running the following command:
+
[source,terminal]
----
$ OCP_VERSION=<ocp_version> <1>
----
+
<1> Replace `<ocp_version>` with the current version, for example, `latest-{product-version}`.

. Set the host architecture by running the following command:
+
[source,terminal]
----
$ ARCH=<architecture> <1>
----
<1> Replace `<architecture>` with the target host architecture, `s390x`.
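The two environment variables set above feed the download URLs in the following steps. As a sketch with a hypothetical version string, this is how they compose:

```shell
# Sketch only: hypothetical values showing how OCP_VERSION and ARCH
# compose the mirror URLs used in the remaining steps
OCP_VERSION="latest-4.12"   # hypothetical example value
ARCH="s390x"
CLIENT_URL="https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${OCP_VERSION}/openshift-client-linux.tar.gz"
RHCOS_MIRROR="https://mirror.openshift.com/pub/openshift-v4/${ARCH}/dependencies/rhcos/latest/"
echo "${CLIENT_URL}"
echo "${RHCOS_MIRROR}"
```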
. Download the {product-title} client (`oc`) and make it available for use by entering the following commands:
+
[source,terminal]
----
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz
----
+
[source,terminal]
----
$ tar zxf oc.tar.gz
----
+
[source,terminal]
----
$ chmod +x oc
----

. Download the {product-title} installer and make it available for use by entering the following commands:
+
[source,terminal]
----
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz
----
+
[source,terminal]
----
$ tar zxvf openshift-install-linux.tar.gz
----
+
[source,terminal]
----
$ chmod +x openshift-install
----

. Prepare the `install-config.yaml` file:
+
[source,yaml]
----
apiVersion: v1
baseDomain: <domain> <1>
compute:
- name: worker
  replicas: 0 <2>
controlPlane:
  name: master
  replicas: 1 <3>
metadata:
  name: <name> <4>
networking: <5>
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16 <6>
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/disk/by-id/<disk_id> <7>
pullSecret: '<pull_secret>' <8>
sshKey: |
  <ssh_key> <9>
----
<1> Add the cluster domain name.
<2> Set the `compute` replicas to `0`. This makes the control plane node schedulable.
<3> Set the `controlPlane` replicas to `1`. In conjunction with the previous `compute` setting, this setting ensures that the cluster runs on a single node.
<4> Set the `metadata` name to the cluster name.
<5> Set the `networking` details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters.
<6> Set the `cidr` value to match the subnet of the {sno} cluster.
<7> Set the path to the installation disk drive, for example, `/dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2`.
<8> Copy the {cluster-manager-url-pull} and add the contents to this configuration setting.
<9> Add the public SSH key from the administration host so that you can log in to the cluster after installation.
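If you keep a template copy of `install-config.yaml`, the angle-bracket placeholders can be stamped in with `sed`. A minimal sketch with hypothetical values, using a two-line stand-in for the full template:

```shell
# Hedged sketch: stamping hypothetical values into a template copy of
# install-config.yaml; the placeholder names mirror the example above.
# A two-line stub stands in for the full template so this is self-contained.
printf 'baseDomain: <domain>\nmetadata:\n  name: <name>\n' > /tmp/template.yaml
sed -e 's/<domain>/example.com/' -e 's/<name>/sno-cluster/' /tmp/template.yaml > /tmp/install-config.yaml
grep 'baseDomain' /tmp/install-config.yaml
```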
. Generate {product-title} assets by running the following commands:
+
[source,terminal]
----
$ mkdir ocp
----
+
[source,terminal]
----
$ cp install-config.yaml ocp
----
+
[source,terminal]
----
$ ./openshift-install --dir=ocp create single-node-ignition-config
----
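The `single-node-ignition-config` target writes an Ignition file (named `bootstrap-in-place-for-live-iso.ign`, to the best of our reading) into the asset directory. A hedged sketch of sanity-checking its JSON before serving it, with a stub file standing in so the snippet is self-contained:

```shell
# Hedged sketch: validating the Ignition JSON before serving it over HTTP.
# A stub stands in for ocp/bootstrap-in-place-for-live-iso.ign here.
printf '{"ignition":{"version":"3.2.0"}}' > /tmp/stub.ign
IGN_VERSION=$(python3 -c 'import json; print(json.load(open("/tmp/stub.ign"))["ignition"]["version"])')
echo "Ignition spec version: ${IGN_VERSION}"
```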
. Obtain the {op-system} `kernel`, `initramfs`, and `rootfs` artifacts from the link:https://access.redhat.com/downloads/content/290[Product Downloads] page on the Red Hat Customer Portal or from the link:https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/[{op-system} image mirror] page.
+
[IMPORTANT]
====
The {op-system} images might not change with every release of {product-title}. You must download images with the highest version that is less than or equal to the {product-title} version that you install. Only use the appropriate `kernel`, `initramfs`, and `rootfs` artifacts described in the following procedure.
====
+
The file names contain the {product-title} version number. They resemble the following examples:
+
`kernel`:: `rhcos-<version>-live-kernel-<architecture>`
`initramfs`:: `rhcos-<version>-live-initramfs.<architecture>.img`
`rootfs`:: `rhcos-<version>-live-rootfs.<architecture>.img`

. Before you launch `virt-install`, move the following files and artifacts to an HTTP or HTTPS server:

** Downloaded {op-system} live `kernel`, `initramfs`, and `rootfs` artifacts
** Ignition files
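For a throwaway setup, something like `python3 -m http.server` can serve the artifacts. A sketch (host, port, and exact file names are hypothetical) of the URLs the parm line will then reference:

```shell
# Sketch with hypothetical host/port and file names: the URLs that the
# virt-install parm line will reference once the artifacts are on a web server
HTTP_SERVER="http://192.168.122.1:8080"   # hypothetical web server address
RHCOS_ROOTFS_URL="${HTTP_SERVER}/rhcos-live-rootfs.s390x.img"
RHCOS_IGN_URL="${HTTP_SERVER}/bootstrap-in-place-for-live-iso.ign"
echo "${RHCOS_ROOTFS_URL}"
echo "${RHCOS_IGN_URL}"
```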
. Create the KVM guest nodes by using the following components:

** {op-system} `kernel` and `initramfs` artifacts
** Ignition files
** The new disk image
** Adjusted parm line arguments
+
[source,terminal]
----
$ virt-install \
   --name {vn_name} \
   --autostart \
   --memory={memory_mb} \
   --cpu host \
   --vcpus {vcpus} \
   --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} \ <1>
   --disk size=100 \
   --network network={virt_network_parm} \
   --graphics none \
   --noautoconsole \
   --extra-args "ip=${IP}::${GATEWAY}:${MASK}:${VM_NAME}::none" \
   --extra-args "nameserver=${NAME_SERVER}" \
   --extra-args "ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot" \
   --extra-args "coreos.live.rootfs_url={rhcos_liveos}" \ <2>
   --extra-args "ignition.config.url={rhcos_ign}" \ <3>
   --extra-args "random.trust_cpu=on rd.luks.options=discard" \
   --extra-args "console=tty1 console=ttyS1,115200n8" \
   --wait
----
<1> For the `--location` parameter, specify the location of the `kernel` and `initrd` on the HTTP or HTTPS server.
<2> For the `coreos.live.rootfs_url=` artifact, specify the matching `rootfs` artifact for the `kernel` and `initramfs` that you are booting. Only HTTP and HTTPS protocols are supported.
<3> For the `ignition.config.url=` parameter, specify the Ignition file for the machine role. Only HTTP and HTTPS protocols are supported.