# Adding a Host to an Assisted Installer OpenShift Cluster on Oracle Cloud (OCI)

This guide provides detailed instructions on adding a host to an OpenShift cluster installed via the Assisted Installer, specifically on Oracle Cloud Infrastructure (OCI). The process includes generating a discovery ISO, creating a custom image, configuring OCI load balancers, launching a new instance, and approving the host in the OpenShift console.

---

## Prerequisites

Before starting, ensure the following:

- A functioning OpenShift cluster installed via the Assisted Installer on OCI.
- You have access to the OpenShift Assisted Cluster and the OpenShift console.
- You have privileges to manage instances and load balancers within OCI.

## Steps

### 1. Create the Add Host Discovery ISO

1. Log in to the **OpenShift Console** (https://console.redhat.com/openshift/), go to your cluster list, and select your cluster.
2. Navigate to the **Add Hosts** tab.

<img src="files/1. clusteroverview.png" width=300x align="top">|<img src="files/2. addHost1.png" width=300x align="top">

3. Click the **Add Host** button.
4. Follow the wizard to configure and generate the **Discovery ISO**. You can add an SSH public key to this ISO if you later require direct SSH access.
5. Once the ISO is generated, download it locally.

<img src="files/3. DownloadISO.png" width=300x align="top">

### 2. Create a Custom Image in Oracle Cloud (OCI) Based on the Add Host Discovery ISO

OCI requires a custom image to boot a new instance with the Discovery ISO embedded. This is a different ISO than the one used to create the initial cluster!

#### Create the Custom Image Using OCI Commands or the Console

1. **Upload the ISO to an Object Storage Bucket**:
   - Upload the discovery ISO to an OCI Object Storage bucket in your tenancy.
   - <img src="files/4. uploadISO.png" width=300x align="top">

2. **Create a Custom Image from the Discovery ISO**:
   - Go to **Compute > Custom Images** in the OCI Console.
   - Click **Import image** and give the image a name, for example `openshift-discovery-image`.
   - **Operating System**: RHEL
   - **Bucket / Object Name**: Select the bucket to which you uploaded the ISO file, and select the ISO file as the object name.
   - Set the **Image Type** to: QCOW2
   - Set the **Launch Mode** to: Paravirtualized mode
   - Click **Import image**. (A CLI alternative for the upload and import is sketched after this list.)
   - <img src="files/5. importImage.png" width=300x align="top">

3. **Modify Image Capabilities**
   - After the custom image is created, click **Edit image capabilities**.
   - Set the available firmware options to **UEFI_64** only.
   - <img src="files/6. editImageCapabilities.png" width=300x align="top">
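
If you prefer the CLI, the upload and import can also be scripted. The following is a minimal sketch using the OCI CLI; the bucket name, compartment OCID, ISO file name, and image name are placeholder assumptions you must replace with your own values.

```bash
# Look up your Object Storage namespace.
OS_NAMESPACE=$(oci os ns get --query data --raw-output)

# Upload the discovery ISO to an existing bucket (bucket and file names are placeholders).
oci os object put \
  --bucket-name my-openshift-bucket \
  --file ./discovery_image_mycluster.iso \
  --name discovery_image_mycluster.iso

# Import the uploaded object as a custom image (QCOW2, paravirtualized launch mode).
oci compute image import from-object \
  --compartment-id "ocid1.compartment.oc1..example" \
  --namespace "$OS_NAMESPACE" \
  --bucket-name my-openshift-bucket \
  --name discovery_image_mycluster.iso \
  --display-name openshift-discovery-image \
  --operating-system RHEL \
  --launch-mode PARAVIRTUALIZED \
  --source-image-type QCOW2
```

Restricting the firmware capability to UEFI_64 (step 3) is still done in the console as described above.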


### 3. Modify the Oracle Cloud Infrastructure (OCI) Load Balancer

To allow the new host to communicate with the OpenShift cluster, you need to modify your OCI OpenShift APP load balancer to allow traffic on port **22624**. This port is used for Machine Config Server (MCS) communication. By default, only the internal API load balancer is configured for this.

1. Navigate to **Networking > Load Balancer**.
2. Select the **api apps** load balancer used by your OpenShift cluster.
3. Create a new backend set and set the health check to:
   - Protocol: HTTP
   - Port: 22624
   - Interval: 10000 ms
   - Timeout: 3000 ms
   - Number of retries: 3
   - Status code: 200
   - URL path: /healthz
   - Response: .*
   - <img src="files/7. CreateBackend.png" width=300x align="top">

4. Add the control plane (management) nodes to this backend set by clicking the **Add Backends** option. Set the port for each backend to 22624.

<img src="files/8. AddBackenNodes.png" width=400x align="top">


5. Wait until your backend set is in a healthy state, then, under the **Listeners** menu, add a listener to:
   - Allow incoming traffic on port **22624** (Machine Configuration).
   - Ensure the listener forwards this traffic to the newly created backend set.

<img src="files/9. CreateListener.png" width=400x align="top">

6. Modify the NSG (Network Security Group) assigned to this load balancer to allow incoming traffic on TCP/22624 from the internal VCN network (a CLI alternative is sketched after this list).
   - Go to the main page of the load balancer.
   - In the Load Balancer Information section you will see the assigned NSG. Click on this NSG.
   - Add a rule for incoming (ingress) traffic. Set the source CIDR range to the CIDR range of your VCN, the protocol to TCP, and the destination port to 22624.
   - <img src="files/10. NSG-LB.png" width=500x align="top">
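
For reference, here is a minimal sketch of adding the same ingress rule with the OCI CLI. The NSG OCID and VCN CIDR are placeholder assumptions; replace them with the values from your tenancy.

```bash
# Placeholders: the NSG assigned to your load balancer and your VCN CIDR.
NSG_OCID="ocid1.networksecuritygroup.oc1..example"
VCN_CIDR="10.0.0.0/16"

# Allow ingress TCP/22624 from inside the VCN to the load balancer.
oci network nsg rules add --nsg-id "$NSG_OCID" --security-rules '[
  {
    "direction": "INGRESS",
    "protocol": "6",
    "isStateless": false,
    "source": "'"$VCN_CIDR"'",
    "sourceType": "CIDR_BLOCK",
    "tcpOptions": { "destinationPortRange": { "min": 22624, "max": 22624 } }
  }
]'
```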


### 4. Launch a New Instance Using the Custom Image

Once the custom image is created and the load balancer is configured, you can launch a new instance as a worker node that will register with the OpenShift cluster.

1. In the **OCI Console**, go to **Compute > Instances**.
2. Click **Create Instance** and configure the instance (a CLI alternative is sketched after this list):
   - **Image**: Choose the custom image you created (`openshift-discovery-image`).
   - **Shape**: Select an appropriate shape (e.g., VM.Standard.E4.Flex).
   - **Network**: Attach the instance to the correct VCN and subnet that the OpenShift cluster uses. Use the private subnet for your instance.
   - It is recommended that an OpenShift worker node has at least 4 cores and 16 GB of RAM.
   - The worker node needs at least a 100 GB boot disk, and it is recommended to assign 30 VPUs to it.
   - Click **Create** to launch the instance.

<img src="files/11. AddNode1.png" width=300x align="top"> | <img src="files/12. AddNode2.png" width=300x align="top">

3. **NSG Assignment**: While the node is being created, you can click the **Edit** link behind the Network Security Groups entry in the primary VNIC section.
   - Set the NSG to the NSG for the OpenShift worker nodes (cluster-compute-nsg).
   - <img src="files/13. NodeSetNSG.png" width=300x align="top">

4. **Set the correct tag**: After the instance is up and running (green: Running state), add the correct OpenShift tag.
   - Navigate to **[More Actions]** on the main page of the instance.
   - Click on **Add Tags**.
   - Select the tag namespace used for this OpenShift cluster, which is likely the name of your cluster.
   - Set the tag key to **compute**.
   - <img src="files/14. AddTag.png" width=300x align="top">
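
If you prefer to script the instance creation, the following OCI CLI sketch shows an equivalent launch call. All OCIDs, the availability domain, and the display name are placeholder assumptions; the NSG can be passed at launch time, while the tagging from step 4 is still done in the console as described above.

```bash
# Placeholders: replace every OCID and the availability domain with your own values.
oci compute instance launch \
  --compartment-id "ocid1.compartment.oc1..example" \
  --availability-domain "AD-1" \
  --display-name "ocp-worker-1" \
  --shape "VM.Standard.E4.Flex" \
  --shape-config '{"ocpus": 4, "memoryInGBs": 16}' \
  --image-id "ocid1.image.oc1..example" \
  --subnet-id "ocid1.subnet.oc1..example" \
  --assign-public-ip false \
  --boot-volume-size-in-gbs 100 \
  --nsg-ids '["ocid1.networksecuritygroup.oc1..example"]'
```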

### 5. Install Ready Nodes in the OpenShift Console (https://console.redhat.com/openshift/)

It will take a few minutes, but at some point your new node should show up on the **Add hosts** tab. Wait for the host to reach the **Ready** state. It will likely show **Insufficient** first; just be patient.

When the node is in the Ready state, click **[Install Ready Nodes]**.

<img src="files/15. WaitNodeReady.png" width=400x align="top">

This will take some time. When the node reaches the **Installed** state, it will reboot and after a few minutes should show up as a node in your cluster console.
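
If you have CLI access to the cluster, you can also watch for the node to register from a terminal (a sketch; assumes `oc` is logged in with cluster-admin rights):

```bash
# Watch the node list; the new worker appears once it has rebooted and joined the cluster.
oc get nodes --watch
```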

### 6. Approve the Host in the OpenShift Cluster Console

After the new instance boots and registers with the OpenShift cluster, it must be approved from the OpenShift console.

1. Log in to the **OpenShift Web Console** of your cluster.
2. Go to **Compute > Nodes**.
3. Your new worker node should appear here.

<img src="files/16. NewWorker.png" width=400x align="top">

4. Click on the **Discovered** link and **Approve** the node so that it is added to your cluster.

<img src="files/17. NewWorkerApprove.png" width=400x align="top">

5. As a final step, you will likely also need to approve the new node's certificate. Click on the **Not Ready** link and approve the certificate signing requests (a CLI alternative is shown below the screenshot).

<img src="files/18. NewWorkerApprove2.png" width=400x align="top">
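
The same certificate approval can also be done with `oc` from a terminal (a sketch; assumes you are logged in as a cluster administrator):

```bash
# List certificate signing requests; the new node's CSRs show as Pending.
oc get csr

# Approve all pending CSRs. Run this twice: the node requests a client CSR first
# and a serving CSR shortly after the first one is approved.
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{" "}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
```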

Your node will now be accepted as a new worker node in your OpenShift cluster, and you will automatically start seeing pods running on this new node.

---

## Conclusion

Following this guide, you will successfully add a new host to your OpenShift cluster on Oracle Cloud Infrastructure (OCI). The new host will be automatically configured and integrated into the cluster after it is approved via the OpenShift web console.


# License
Copyright (c) 2024 Oracle and/or its affiliates.
Licensed under the Universal Permissive License (UPL), Version 1.0.
See [LICENSE](https://github.com/oracle-devrel/technology-engineering/blob/main/LICENSE) for more details.