Commit b99e242

Single Node Cluster Support (SNO) for P

1 parent 00f3994 commit b99e242

5 files changed: 272 additions, 5 deletions

installing/installing_sno/install-sno-installing-sno.adoc

Lines changed: 33 additions & 2 deletions
@@ -10,6 +10,7 @@ You can install {sno} using the web-based Assisted Installer and a discovery ISO
 
 ifndef::openshift-origin[]
 
+[id="installing-sno-assisted-installer"]
 == Installing {sno} using the Assisted Installer
 
 To install {product-title} on a single node, use the web-based Assisted Installer wizard to guide you through the process and manage the installation.
@@ -78,7 +79,7 @@ include::modules/creating-custom-live-rhcos-iso.adoc[leveloffset=+1]
 
 == Installing {sno} with {ibmzProductName} and {linuxoneProductName}
 
-Installing a single node cluster on {ibmzProductName} and {linuxoneProductName} requires user-provisioned installation using either the "Installing a cluster with {op-system-base} KVM on {ibmzProductName} and {linuxoneProductName}" or the "Installing a cluster with z/VM on {ibmzProductName} and {linuxoneProductName}" procedure.
+Installing a single-node cluster on {ibmzProductName} and {linuxoneProductName} requires user-provisioned installation using either the "Installing a cluster with {op-system-base} KVM on {ibmzProductName} and {linuxoneProductName}" or the "Installing a cluster with z/VM on {ibmzProductName} and {linuxoneProductName}" procedure.
 
 [NOTE]
 ====
@@ -105,4 +106,34 @@ You can use dedicated or shared IFLs to assign sufficient compute resources. Res
 
 include::modules/install-sno-ibm-z.adoc[leveloffset=+2]
 
-include::modules/install-sno-ibm-z-kvm.adoc[leveloffset=+2]
+include::modules/install-sno-ibm-z-kvm.adoc[leveloffset=+2]
+
+[id="installing-sno-with-ibmpower"]
+== Installing {sno} with {ibmpowerProductName}
+
+Installing a single-node cluster on {ibmpowerProductName} requires user-provisioned installation using the "Installing a cluster on {ibmpowerProductName}" procedure.
+
+[NOTE]
+====
+Installing a single-node cluster on {ibmpowerProductName} simplifies installation for development and test environments and has lower entry-level resource requirements.
+====
+
+[discrete]
+=== Hardware requirements
+
+* The equivalent of two Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster.
+* At least one network connection to connect to the `LoadBalancer` service and to serve data for traffic outside of the cluster.
+
+[NOTE]
+====
+You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of {ibmpowerProductName}. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every {product-title} cluster.
+====
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../installing/installing_ibm_power/installing-ibm-power.adoc#installing-ibm-power[Installing a cluster on {ibmpowerProductName}]
+
+include::modules/setting-up-bastion-for-sno.adoc[leveloffset=+2]
+
+include::modules/install-sno-ibm-power.adoc[leveloffset=+2]

modules/install-sno-ibm-power.adoc

Lines changed: 49 additions & 0 deletions
@@ -0,0 +1,49 @@
// This is included in the following assemblies:
//
// installing_sno/install-sno-installing-sno.adoc

:_content-type: PROCEDURE
[id="installing-sno-on-ibm-power_{context}"]
= Installing {sno} with {ibmpowerProductName}

.Prerequisites

* You have set up bastion.

.Procedure

Installing the {sno} cluster involves two steps. First, boot the {sno} logical partition (LPAR) with PXE. Then, monitor the installation progress.

. Use the following command to boot the PowerVM with netboot:
+
[source,terminal]
----
$ lpar_netboot -i -D -f -t ent -m <sno_mac> -s auto -d auto -S <server_ip> -C <sno_ip> -G <gateway> <lpar_name> default_profile <cec_name>
----
+
where:
+
--
sno_mac:: Specifies the MAC address of the {sno} cluster.
sno_ip:: Specifies the IP address of the {sno} cluster.
server_ip:: Specifies the IP address of the bastion (PXE server).
gateway:: Specifies the network gateway IP address.
lpar_name:: Specifies the {sno} LPAR name in the HMC.
cec_name:: Specifies the system name where the sno_lpar resides.
--
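As an illustration, the placeholders above can be filled in and the command assembled in a small script. Every value here is a made-up example, and the script only prints the command so that it can be reviewed before being run from the HMC:

```shell
#!/bin/sh
# All values below are hypothetical examples; replace them with your own.
SNO_MAC="fa:b0:45:27:43:20"   # MAC address of the SNO node
SNO_IP="192.168.10.10"        # IP address of the SNO node
SERVER_IP="192.168.10.5"      # bastion (PXE server) IP address
GATEWAY="192.168.10.1"        # network gateway IP address
LPAR_NAME="sno-lpar"          # LPAR name as defined in the HMC
CEC_NAME="Server-9009-22A"    # managed system (CEC) name
# Assemble the netboot command and print it for review; run the printed
# command from the HMC once the values are correct.
cmd="lpar_netboot -i -D -f -t ent -m $SNO_MAC -s auto -d auto -S $SERVER_IP -C $SNO_IP -G $GATEWAY $LPAR_NAME default_profile $CEC_NAME"
echo "$cmd"
```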

. After the {sno} LPAR boots up with PXE, use the `openshift-install` command to monitor the progress of the installation:

.. Run the following command after the bootstrap is complete:
+
[source,terminal]
----
$ ./openshift-install wait-for bootstrap-complete
----

.. Run the following command after the previous command returns successfully:
+
[source,terminal]
----
$ ./openshift-install wait-for install-complete
----

modules/install-sno-requirements-for-installing-on-a-single-node.adoc

Lines changed: 6 additions & 2 deletions
@@ -8,7 +8,11 @@
 
 Installing {product-title} on a single node alleviates some of the requirements for high availability and large scale clusters. However, you must address the following requirements:
 
-* *Administration host:* You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation.
+* *Administration host:* You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation.
+[NOTE]
+====
+For the `ppc64le` platform, the host prepares the ISO, but does not need to create the USB boot drive. The ISO can be mounted to the PowerVM directly.
+====
 
 [NOTE]
 ====
@@ -41,7 +45,7 @@ The server must have a Baseboard Management Controller (BMC) when booting with v
 
 [NOTE]
 ====
-BMC is not supported on {ibmzProductName}.
+BMC is not supported on {ibmzProductName} or {ibmpowerProductName}.
 ====
 
 * *Networking:* The server must have access to the internet or access to a local registry if it is not connected to a routable network. The server must have a DHCP reservation or a static IP address for the Kubernetes API, ingress route, and cluster node domain names. You must configure the DNS to resolve the IP address to each of the following fully qualified domain names (FQDN):

modules/installation-aws_con_installing-sno-on-aws.adoc

Lines changed: 1 addition & 1 deletion
@@ -6,4 +6,4 @@
 [id="installing-sno-on-aws_{context}"]
 = Installing {sno} on AWS
 
-Installing a single node cluster on AWS requires installer-provisioned installation using the "Installing a cluster on AWS with customizations" procedure.
+Installing a single-node cluster on AWS requires installer-provisioned installation using the "Installing a cluster on AWS with customizations" procedure.
modules/setting-up-bastion-for-sno.adoc

Lines changed: 183 additions & 0 deletions
@@ -0,0 +1,183 @@
// This module is included in the following assemblies:
//
// installing_sno/install-sno-installing-sno.adoc

:_content-type: PROCEDURE
[id="setting-up-bastion-for-sno_{context}"]
= Setting up bastion for {sno} with {ibmpowerProductName}

Prior to installing {sno} on {ibmpowerProductName}, you must set up a bastion server. Setting up a bastion server for {sno} on {ibmpowerProductName} requires the configuration of the following services:

* PXE is used for the {sno} cluster installation. PXE requires the following services to be configured and running:
** DNS to define api, api-int, and *.apps
** DHCP service to enable PXE and assign an IP address to the {sno} node
** HTTP to provide the ignition file and the {op-system} rootfs image
** TFTP to enable PXE
* You must install `dnsmasq` to support DNS, DHCP, and PXE, and `httpd` for HTTP.

Use the following procedure to configure a bastion server that meets these requirements.
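Taken together, the DNS, DHCP, and TFTP pieces listed above can be sketched in a single `dnsmasq` configuration. The following fragment is illustrative only and is not part of the procedure: the interface name, domain, IP addresses, MAC address, and boot-file path are hypothetical and must match your environment, and `httpd` is configured separately.

```
# /etc/dnsmasq.d/sno.conf -- hypothetical example
interface=net0                              # bastion-facing network interface
domain=sno.example.com                      # cluster base domain
# DNS: api, api-int, and the *.apps wildcard all resolve to the SNO node
address=/api.sno.example.com/192.168.10.10
address=/api-int.sno.example.com/192.168.10.10
address=/apps.sno.example.com/192.168.10.10
# DHCP: fixed lease for the SNO node's MAC address, plus the PXE boot file
dhcp-range=192.168.10.10,192.168.10.10
dhcp-host=fa:b0:45:27:43:20,192.168.10.10
dhcp-boot=boot/grub2/powerpc-ieee1275/core.elf
# TFTP: serve the grub2 netboot tree created later in this procedure
enable-tftp
tftp-root=/var/lib/tftpboot
```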

.Procedure

. Use the following command to install `grub2`, which is required to enable PXE for PowerVM:
+
[source,terminal]
----
$ grub2-mknetdir --net-directory=/var/lib/tftpboot
----
+
.Example `/var/lib/tftpboot/boot/grub2/grub.cfg` file
[source,terminal]
----
default=0
fallback=1
timeout=1
if [ ${net_default_mac} == fa:b0:45:27:43:20 ]; then
menuentry "CoreOS (BIOS)" {
echo "Loading kernel"
linux "/rhcos/kernel" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://192.168.10.5:8000/install/rootfs.img ignition.config.url=http://192.168.10.5:8000/ignition/sno.ign
echo "Loading initrd"
initrd "/rhcos/initramfs.img"
}
fi
----

. Use the following commands to download the {op-system} image files from the mirror repository for PXE.

.. Enter the following command to assign the following 4.12 URL to the `RHCOS_URL` variable:
+
[source,terminal]
----
$ export RHCOS_URL=https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/4.12/latest/
----

.. Enter the following command to navigate to the `/var/lib/tftpboot/rhcos` directory:
+
[source,terminal]
----
$ cd /var/lib/tftpboot/rhcos
----

.. Enter the following command to download the {op-system} kernel file from the URL stored in the `RHCOS_URL` variable:
+
[source,terminal]
----
$ wget ${RHCOS_URL}/rhcos-live-kernel-ppc64le -O kernel
----

.. Enter the following command to download the {op-system} `initramfs` file from the URL stored in the `RHCOS_URL` variable:
+
[source,terminal]
----
$ wget ${RHCOS_URL}/rhcos-live-initramfs.ppc64le.img -O initramfs.img
----

.. Enter the following command to navigate to the `/var/www/html/install/` directory:
+
[source,terminal]
----
$ cd /var/www/html/install/
----

.. Enter the following command to download and save the {op-system} root filesystem image file from the URL stored in the `RHCOS_URL` variable:
+
[source,terminal]
----
$ wget ${RHCOS_URL}/rhcos-live-rootfs.ppc64le.img -O rootfs.img
----
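The three download steps above can also be expressed as one loop. This is a sketch under the same assumptions as the procedure (the kernel and `initramfs` land in `/var/lib/tftpboot/rhcos`, the rootfs in `/var/www/html/install`), written as a dry run that only prints each `wget` command for review:

```shell
#!/bin/sh
# Dry-run sketch of the three RHCOS downloads: builds and prints the wget
# commands instead of running them; paths mirror the procedure above.
RHCOS_URL="https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/4.12/latest"
plan=""
for pair in \
  "rhcos-live-kernel-ppc64le /var/lib/tftpboot/rhcos/kernel" \
  "rhcos-live-initramfs.ppc64le.img /var/lib/tftpboot/rhcos/initramfs.img" \
  "rhcos-live-rootfs.ppc64le.img /var/www/html/install/rootfs.img"; do
  src=${pair%% *}                 # remote file name
  dest=${pair#* }                 # local destination path
  plan="${plan}wget ${RHCOS_URL}/${src} -O ${dest}
"
done
# Print the planned downloads; run each printed command to download.
printf "%s" "$plan"
```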
. To create the ignition file for a {sno} cluster, you must create the `install-config.yaml` file.

.. Enter the following command to create the work directory that holds the file:
+
[source,terminal]
----
$ mkdir -p ~/sno-work
----

.. Enter the following command to navigate to the `~/sno-work` directory:
+
[source,terminal]
----
$ cd ~/sno-work
----

.. Use the following sample file to create the required `install-config.yaml` file in the `~/sno-work` directory:
+
[source,yaml]
----
apiVersion: v1
baseDomain: <domain> <1>
compute:
- name: worker
  replicas: 0 <2>
controlPlane:
  name: master
  replicas: 1 <3>
metadata:
  name: <name> <4>
networking: <5>
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16 <6>
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/disk/by-id/<disk_id> <7>
pullSecret: '<pull_secret>' <8>
sshKey: |
  <ssh_key> <9>
----
<1> Add the cluster domain name.
<2> Set the `compute` replicas to `0`. This makes the control plane node schedulable.
<3> Set the `controlPlane` replicas to `1`. In conjunction with the previous `compute` setting, this setting ensures that the cluster runs on a single node.
<4> Set the `metadata` name to the cluster name.
<5> Set the `networking` details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters.
<6> Set the `cidr` value to match the subnet of the {sno} cluster.
<7> Set the path to the installation disk drive, for example, `/dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2`.
<8> Copy the {cluster-manager-url-pull} and add the contents to this configuration setting.
<9> Add the public SSH key from the administration host so that you can log in to the cluster after installation.

. Download the `openshift-install` binary to create the ignition file, and copy the ignition file to the `http` directory.

.. Enter the following command to download the `openshift-install-linux-4.12.0` .tar file:
+
[source,terminal]
----
$ wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.12.0/openshift-install-linux-4.12.0.tar.gz
----

.. Enter the following command to unpack the `openshift-install-linux-4.12.0.tar.gz` archive:
+
[source,terminal]
----
$ tar xzvf openshift-install-linux-4.12.0.tar.gz
----

.. Enter the following command to create the single-node ignition configuration:
+
[source,terminal]
----
$ ./openshift-install --dir=~/sno-work create single-node-ignition-config
----

.. Enter the following command to copy the ignition file to the `http` directory:
+
[source,terminal]
----
$ cp ~/sno-work/single-node-ignition-config.ign /var/www/html/ignition/sno.ign
----

.. Enter the following command to restore the SELinux context for the `/var/www/html` directory:
+
[source,terminal]
----
$ restorecon -vR /var/www/html || true
----
+
Bastion now has all the required files and is properly configured to install {sno}.
