# Install OpenShift Cluster in OCI Using the Agent-Based Installation Method (1/2)

This guide walks you through the detailed steps for installing an OpenShift cluster on Oracle Cloud Infrastructure (OCI) using the agent-based installation method. This approach is referred to as a connected installation, meaning that both the OpenShift worker and control plane nodes require outbound internet access during the installation process.

This is Part 1 of the two-part how-to guide.

Reviewed: 27.01.2025

## High-Level Steps

The following outlines the key steps required to perform the OpenShift agent-based connected installation in Oracle Cloud Infrastructure (OCI):

<img src="images/High-Level Installation Steps.jpg" width="500" align="top">

## Prerequisites

Before starting, ensure the following:

- You have access to an OCI tenancy with the required privileges to provision the necessary OCI infrastructure resources.
- You have the OpenShift installer and client binaries (downloaded in Step 1).

## Steps

### 1. Download the OpenShift Client, Installer and Pull Secret

1. Log in to the [Red Hat OpenShift Console](https://console.redhat.com/openshift/), go to your cluster list and click **Create cluster**.
2. Under the **Cloud** option, select **Oracle Cloud Infrastructure (virtual machines)**.
3. Select the **Local Agent-based** installation method.

   <img src="images/2.jpg" width="500" align="top">

4. Download the OpenShift installer, pull secret and command-line interface (CLI) for your client operating system. In this example, macOS is selected for the installer and the CLI.

   <img src="images/3.jpg" width="500" align="top">


### 2. Extract the Binaries to the Client Machine

Extract the OpenShift installer and CLI, and move them to a directory on your executable path.

1. Extract the OpenShift client archive and move the **kubectl** and **oc** binaries to **/usr/local/bin** or another directory on your machine's executable path.

2. Extract the OpenShift installer archive and move **openshift-install** to your executable path; in this example, it is **/usr/local/bin**.

   <img src="images/4.jpg" width="500" align="top">

3. Validate the version of the OpenShift installer by running the following command:

   ```
   openshift-install version
   ```

   <img src="images/5.jpg" width="500" align="top">
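Taken together, the extraction steps above look roughly like this on macOS. The archive names are assumptions based on typical downloads from the Red Hat console; adjust them to match the files you actually downloaded:

```
# Extract the client archive and move oc/kubectl onto the PATH
tar -xvzf openshift-client-mac.tar.gz
sudo mv oc kubectl /usr/local/bin/

# Extract the installer archive and move it onto the PATH
tar -xvzf openshift-install-mac.tar.gz
sudo mv openshift-install /usr/local/bin/

# Confirm the binary is reachable and check its version
openshift-install version
```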

You are all set to start the installation.

### 3. Create OCI Resources for OpenShift Installation

1. Create the necessary OCI resources by following the instructions in the documentation below:

   - [OpenShift Installation Prerequisites](https://docs.oracle.com/en-us/iaas/Content/openshift-on-oci/install-prereq.htm)

   - [OpenShift Agent Prerequisites](https://docs.oracle.com/en-us/iaas/Content/openshift-on-oci/agent-prereq.htm)

2. Alternatively, you can use the Terraform script below to create these resources through an OCI Resource Manager stack. The script can be downloaded from the same GitHub repository:

   **oci-openshift-agent-provision-resources.zip**

   1. Log in to the OCI console, then select your region and the compartment where you would like the resources to be created.
   2. Navigate to the Hamburger Menu -> Developer Services -> Resource Manager -> Stacks.
   3. Click **Create Stack** and upload the zip file **oci-openshift-agent-provision-resources.zip**.
   4. Fill in the details to create the OCI resources required for the agent-based installation as follows:
      1. cluster_name - name of the OCP cluster
      2. compartment_ocid - automatically populated
      3. enable_private_dns - Enable
      4. load_balancer_shape_details_maximum_bandwidth_in_mbps - use the default or specify the size
      5. load_balancer_shape_details_minimum_bandwidth_in_mbps - use the default or specify the size
      6. private_cidr - private subnet for the OCP cluster nodes
      7. public_subnet - public subnet for the OCP cluster nodes
      8. region - should auto-populate; specify otherwise
      9. tenancy_ocid - should auto-populate
      10. vcn_cidr - IPv4 CIDR block for the VCN of your OCP cluster
      11. vcn_dns_label - DNS label for the VCN (optional)
      12. zone_dns - name of the cluster's DNS zone. The zone_dns value must be a valid hostname.
   5. Click **Run Apply** to create the resources.
   6. Once the job succeeds, obtain the outputs by navigating to Stacks -> Jobs -> Outputs.

Refer to the screenshots below.

<img src="images/33.jpg" width="500" align="top">
<img src="images/34.jpg" width="500" align="top">

**oci_ccm_config** output:

<img src="images/35.jpg" width="500" align="top">
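If you prefer to run the stack's Terraform locally instead of through the console form, the same inputs can be captured in a `terraform.tfvars` file. The variable names mirror the stack form above; every value here is a placeholder for illustration, not a value to copy:

```
cluster_name       = "ocpdemo"
compartment_ocid   = "ocid1.compartment.oc1..aaaa..."   # placeholder OCID
tenancy_ocid       = "ocid1.tenancy.oc1..aaaa..."       # placeholder OCID
region             = "us-ashburn-1"                     # example region
vcn_cidr           = "10.50.0.0/16"
private_cidr       = "10.50.16.0/20"
public_subnet      = "10.50.0.0/20"
zone_dns           = "ocpdemo.local"
enable_private_dns = true
```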

### 4. Prepare the agent-config.yaml and install-config.yaml Files

Before you begin, create a directory and maintain the folder structure shown in the image below. Create this folder, and the machine config files inside it, in the same location where you extracted the OpenShift installer, client and kubectl.

In the example below, a folder named "**demo**" is created, which contains all the necessary config files.

<img src="images/6.jpg" width="500" align="top">
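The working-directory layout described above can be sketched as follows. The placement of the OCI CCM/CSI manifests under an `openshift/` subdirectory is an assumption based on common agent-based install layouts; match whatever the screenshot shows for your version:

```shell
# Illustrative sketch: create the working directory next to the extracted
# binaries. The openshift/ subdirectory for the CCM/CSI manifests is an
# assumption -- adjust to match your environment.
mkdir -p demo/openshift
touch demo/install-config.yaml demo/agent-config.yaml
touch demo/openshift/oci-ccm.yml demo/openshift/oci-csi.yml
ls -R demo
```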

Now, we will create and update the install-config and agent-config files.

1. Create and update **install-config.yaml**. Copy the contents below and update the details according to your environment. You can also use the sample **config-files.zip** from the GitHub repo.

```
apiVersion: v1
metadata:
  name: ocpdemo                  # <-- Replace with the name of the OCP cluster
baseDomain: ocpdemo.local        # <-- Domain name
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 10.50.0.0/16           # <-- OCI VCN CIDR
  serviceNetwork:
  - 172.30.0.0/16
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  replicas: 3                    # <-- Number of control plane nodes
platform:
  external:
    platformName: oci            # <-- Platform name
    cloudControllerManager: External
sshKey: 'ssh-rsa AAAAB3NzaC1yc.......'  # <-- SSH public key
pullSecret: '{"authsbmNfpreGdyYUp3ZElWYU1FZjAzZjhmTENKUW52MHpDWUJrSDRpUHBQY19aUGtTNWNYQTNmSE9sSnJ0cHRad2xvWHp'  # <-- Insert the pull secret obtained in Step 1.4
```

2. Update **agent-config.yaml**:

```
apiVersion: v1alpha1
metadata:
  name: ocpdemo
  namespace: ocpdemo
rendezvousIP: 10.55.16.20   # <-- A free IP from the VCN CIDR; it will be assigned to the first control plane node.
```
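Because the rendezvous IP must be assignable to the first control plane node, it has to fall inside the machineNetwork CIDR from install-config.yaml. A minimal sanity-check sketch with a hypothetical `in_cidr` helper (the IPs below are examples, not values to copy):

```shell
# Hypothetical helper: check that an IP lies inside a CIDR block.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}
# usage: in_cidr <ip> <network-address> <prefix-length>
in_cidr() {
  ip=$(ip_to_int "$1"); net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}
in_cidr 10.50.16.20 10.50.0.0 16 && echo "inside machine network"
```

Note that the literal samples in this guide come from an example environment; in your own configuration the rendezvousIP and the machineNetwork CIDR must agree.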

3. Update **oci-ccm.yml**:

- You need to update the **oci-ccm-04-cloud-controller-manager-config.yaml** section in the file. Refer to the example screenshot below.

  <img src="images/7.jpg" width="500" align="top">

- If the resources were created manually, obtain the OCID values of the compartment, VCN, load balancer subnet and security lists for the resources created in Step 3.
- If you used the Resource Manager Terraform script to create the resources, obtain the values in the OCI console by navigating to Hamburger Menu -> Developer Services -> Resource Manager -> Stacks -> open the stack -> Outputs, then copy the value of the **oci_ccm_config** key.
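For orientation, the cloud-provider section you are editing typically has the following shape, taken from the upstream oci-cloud-controller-manager configuration format. Treat this as an illustrative sketch, not the authoritative contents of the file; all OCIDs are placeholders that you replace with the values from Step 3 (or from the **oci_ccm_config** stack output):

```
# Illustrative cloud-provider config shape (placeholder OCIDs)
useInstancePrincipals: true
compartment: ocid1.compartment.oc1..example
vcn: ocid1.vcn.oc1..example
loadBalancer:
  subnet1: ocid1.subnet.oc1..example
  securityListManagementMode: Frontend
  securityLists:
    ocid1.subnet.oc1..example: ocid1.securitylist.oc1..example
```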

4. Update **oci-csi.yml**:

- You need to update the **oci-csi-01-config.yaml** section in the file. Refer to the example screenshot below.

  <img src="images/8.jpg" width="500" align="top">

- If the resources were created manually, obtain the OCID values of the compartment, VCN, load balancer subnet and security lists for the resources created in Step 3.
- If you used the Resource Manager Terraform script to create the resources, obtain the values in the OCI console by navigating to Hamburger Menu -> Developer Services -> Resource Manager -> Stacks -> open the stack -> Outputs, then copy the value of the **oci_ccm_config** key.

### 5. Generate the Minimal Installation ISO

To generate the agent ISO file for the OpenShift installation, follow these steps:

- Navigate to the directory that contains your configuration folder. In this example, the files are saved in the "demo" folder.

Run the following command to generate the minimal agent ISO file:

```
openshift-install agent create image --dir ./demo --log-level debug
```

- Replace ./demo with the actual path where your configuration files are stored. This command creates the agent ISO file using the provided configurations.
Once the command completes successfully, the agent ISO file will be generated in the specified directory.

Refer to the openshift-install agent command-line output in the sample screenshot below.

<img src="images/9.jpg" width="500" align="top">

The command generates the agent ISO along with an **auth** directory, which contains the following important files:

- **kubeconfig** – used to authenticate and configure access to the OpenShift cluster.
- **kubeadmin-password** – contains the initial password for the kubeadmin user to access the cluster.
- **rendezvousIP** – contains the IP address specified in the agent-config.yaml file, which is used for agent communication.

<img src="images/36.jpg" width="200" align="top">

### 6. Create a Custom Image in OCI

1. Log in to the OCI console and upload the ISO file to an OCI Object Storage bucket.
2. Once the upload succeeds, create a custom image using the **Import image** option. Select the **RHEL** operating system and **QCOW2** image type while importing the ISO. Leave the default selection of **Paravirtualized mode** for the launch mode. Refer to the screenshot below.

<img src="images/10.jpg" width="500" align="top">

3. Once the image is successfully imported, click **Edit image capabilities** and deselect the **BIOS** option. Leave the remaining options at their defaults.

<img src="images/16.jpg" width="500" align="top">

### 7. Provision Control Plane Nodes in OCI

1. Create the first OCI instance using the custom image created in the previous step. Make sure to use the following settings for the control plane VM:
   - Select a flex shape and assign the recommended resources for OCPUs and memory.
   - Select the VCN and private subnet created in Step 3.
   - Under Primary VNIC IP addresses, select the **Manually assign private IPv4 address** option and provide the **rendezvousIP** address supplied in the **agent-config.yaml** file. Refer to the sample screenshot below.

     <img src="images/11.jpg" width="500" align="top">

   - Click **Use network security groups to control traffic** under Advanced options and select the control plane NSG. Refer to the sample screenshot below.

     <img src="images/12.jpg" width="500" align="top">

   - Do not add an SSH key, as the key supplied in **install-config.yaml** is already embedded in the ISO.
   - Modify the boot volume size and VPUs based on the Red Hat recommended guidelines. Refer to the example screenshot below.

     <img src="images/22.jpg" width="500" align="top">

   - Modify the tag namespace on the Management tab under Advanced options. Use the control-plane-specific tag namespace, key and value. Refer to the sample screenshot below.

     <img src="images/13.jpg" width="500" align="top">

### 8. Add the Control Plane Node to the API Apps and Int Load Balancers

1. In this step, you will update the OCI Apps load balancer with the new control plane node to ensure successful communication with the API listener. To do that, perform the following tasks:
- Navigate to Hamburger Menu -> Networking -> Virtual cloud networks -> Load balancers and select the **####-openshift_api_apps_lb** load balancer.
- Select the API backend set -> Backends, click **Add backends** and add the first control plane node provisioned in the previous step. Refer to the sample screenshot below.

<img src="images/24.jpg" width="500" align="top">

- Repeat the steps for the HTTP and HTTPS backend sets.

2. In this step, you will update the OCI API internal load balancer with the new control plane node. Follow the same procedure as described above and update the **####-openshift_api_int_lb** load balancer.
3. Create and update the **api_backend**, **infra-mcs** and **infra-mcs_2** backend sets with ports **6443**, **22623** and **22624** respectively.

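Assuming the OCI CLI is configured, each backend registration above can also be scripted with `oci lb backend create`. The load balancer OCID is a placeholder, and the IP shown is the sample rendezvous IP from this guide; repeat the call per backend set with its matching port (6443, 22623, 22624):

```
oci lb backend create \
  --load-balancer-id <api_int_lb_ocid> \
  --backend-set-name api_backend \
  --ip-address 10.55.16.20 \
  --port 6443
```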
### 9. Install the OpenShift Cluster

In this step, you will install OpenShift from the client machine where you extracted the **openshift-install** binary along with the **pull secret** and **kubectl** (Step 2).

1. Run the following command to install the OpenShift cluster, replacing ./demo with the path to your configuration directory:

```
openshift-install agent wait-for install-complete --dir ./demo --log-level debug
```

See the sample screenshot of the command output below.

<img src="images/27.jpg" width="500" align="top">

2. Now, log in to the control plane node using the **rendezvous IP** and run the following command to check the logs:

```
journalctl -u assisted-service.service
```

**OR**

```
journalctl -l
```

<img src="images/28-1.jpg" width="500" align="top">

<img src="images/29.jpg" width="500" align="top">

3. Create two additional control plane VMs by following the same procedure you used for control plane VM 1.

**Note**: You can use DHCP-assigned IP addresses for the additional control plane VMs.

4. Once all the control plane VMs are up, add them to the API Apps and API Int load balancers.
5. Monitor the progress of the **journalctl** command output and the logs from the openshift-install command.
6. Once the installation is successful, you will see the **Installation Completed** message in the journalctl output. Refer to the screenshot below.

<img src="images/30.jpg" width="500" align="top">

### 10. Validate the OpenShift Installation

1. Navigate to your working directory and locate the **auth** directory, which contains the two files **kubeadmin-password** and **kubeconfig**.
2. Open a browser and access the OpenShift console using the load balancer DNS name. Refer to the sample screenshot below. The username is **kubeadmin**, and the password can be obtained from the **kubeadmin-password** file.

<img src="images/31.jpg" width="500" align="top">

3. Validate the status panel. The status should be green and there should not be any alerts.
4. Navigate to **Compute** -> **Nodes**. The three control plane nodes should appear in the **Ready** state without any error messages. Refer to the screenshot below.

<img src="images/32.jpg" width="500" align="top">
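The same checks can be performed from the CLI using the kubeconfig generated alongside the ISO. This is a sketch with standard `oc` commands; adjust the kubeconfig path to your working directory:

```
# Point oc at the cluster credentials generated by the installer
export KUBECONFIG=$PWD/demo/auth/kubeconfig

# Nodes should report Ready; cluster operators should be Available
oc get nodes
oc get clusteroperators
```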

**Note:** Follow Part 2 of this tutorial series to complete the OpenShift installation.