// Module included in the following assemblies:
//
// * nodes/nodes/nodes-sno-worker-nodes.adoc

:_content-type: PROCEDURE
[id="ai-adding-worker-nodes-to-cluster_{context}"]
= Adding worker nodes using the Assisted Installer REST API

You can add worker nodes to clusters using the Assisted Installer REST API.

.Prerequisites

* Install the OpenShift Cluster Manager CLI (`ocm`).

* Log in to link:https://console.redhat.com/openshift/assisted-installer/clusters[{cluster-manager}] as a user with cluster creation privileges.

* Install `jq`.

* Ensure that all the required DNS records exist for the cluster that you are adding the worker node to.

.Procedure

. Authenticate against the Assisted Installer REST API and generate a JSON web token (JWT) for your session. The generated JWT is valid for 15 minutes only.
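+
One possible way to generate the token, assuming you are already logged in with the `ocm` CLI (this command is an illustration, not a requirement of the API), is:
+
[source,terminal]
----
$ export JWT_TOKEN=$(ocm token)
----
+
Because the token expires quickly, it can help to check its `exp` claim before a long-running step. A JWT carries its claims in the base64-encoded second dot-separated field; the sketch below decodes a hard-coded sample payload, not a real token, so that it runs standalone:
+
[source,terminal]
----
$ SAMPLE_JWT='xxxxx.eyJleHAiOjE2NTYwMjYzNzF9.yyyyy'
$ echo "$SAMPLE_JWT" | cut -d. -f2 | base64 -d | jq -r '.exp'
----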

. Set the `$API_URL` variable by running the following command:
+
[source,terminal]
----
$ export API_URL=<api_url> <1>
----
<1> Replace `<api_url>` with the Assisted Installer API URL, for example, `https://api.openshift.com`.

. Import the {sno} cluster by running the following commands:

.. Set the `$OPENSHIFT_CLUSTER_ID` variable. Log in to the cluster and run the following command:
+
[source,terminal]
----
$ export OPENSHIFT_CLUSTER_ID=$(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')
----

.. Set the `$CLUSTER_REQUEST` variable that is used to import the cluster:
+
[source,terminal]
----
$ export CLUSTER_REQUEST=$(jq --null-input --arg openshift_cluster_id "$OPENSHIFT_CLUSTER_ID" '{
  "api_vip_dnsname": "<api_vip>", <1>
  "openshift_cluster_id": $openshift_cluster_id,
  "name": "<openshift_cluster_name>" <2>
}')
----
<1> Replace `<api_vip>` with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node that the worker node can reach. For example, `api.compute-1.example.com`.
<2> Replace `<openshift_cluster_name>` with the plain text name for the cluster. The cluster name must match the cluster name that was set during the Day 1 cluster installation.
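+
As an illustration only, with hypothetical sample values substituted for the placeholders, the `jq` invocation renders a JSON document ready to POST:
+
[source,terminal]
----
$ jq --null-input --compact-output --arg openshift_cluster_id "41b91e72-c33e-42ee-b80f-b5c5bbf6431a" '{
  "api_vip_dnsname": "api.compute-1.example.com",
  "openshift_cluster_id": $openshift_cluster_id,
  "name": "compute-1"
}'
----
+
.Example output
[source,terminal]
----
{"api_vip_dnsname":"api.compute-1.example.com","openshift_cluster_id":"41b91e72-c33e-42ee-b80f-b5c5bbf6431a","name":"compute-1"}
----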

.. Import the cluster and set the `$CLUSTER_ID` variable. Run the following command:
+
[source,terminal]
----
$ CLUSTER_ID=$(curl "$API_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer ${JWT_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \
  -d "$CLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id')
----

. Generate the `InfraEnv` resource for the cluster and set the `$INFRA_ENV_ID` variable by running the following commands:

.. Download the pull secret file from Red Hat OpenShift Cluster Manager at link:https://console.redhat.com/openshift/install/pull-secret[console.redhat.com].

.. Set the `$INFRA_ENV_REQUEST` variable:
+
[source,terminal]
----
$ export INFRA_ENV_REQUEST=$(jq --null-input \
  --slurpfile pull_secret <path_to_pull_secret_file> \ <1>
  --arg ssh_pub_key "$(cat <path_to_ssh_pub_key>)" \ <2>
  --arg cluster_id "$CLUSTER_ID" '{
  "name": "<infraenv_name>", <3>
  "pull_secret": $pull_secret[0] | tojson,
  "cluster_id": $cluster_id,
  "ssh_authorized_key": $ssh_pub_key,
  "image_type": "<iso_image_type>" <4>
}')
----
<1> Replace `<path_to_pull_secret_file>` with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at link:https://console.redhat.com/openshift/install/pull-secret[console.redhat.com].
<2> Replace `<path_to_ssh_pub_key>` with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode.
<3> Replace `<infraenv_name>` with the plain text name for the `InfraEnv` resource.
<4> Replace `<iso_image_type>` with the ISO image type, either `full-iso` or `minimal-iso`.
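+
To preview what this pipeline produces, you can exercise it with throwaway data: a dummy pull secret written to a temporary file and made-up names (every value below is hypothetical). Note how `--slurpfile` reads the pull secret as JSON and `tojson` re-embeds it as a string field:
+
[source,terminal]
----
$ PULL_SECRET_FILE=$(mktemp)
$ echo '{"auths":{"registry.example.com":{"auth":"ZHVtbXk="}}}' > "$PULL_SECRET_FILE"
$ jq --null-input \
  --slurpfile pull_secret "$PULL_SECRET_FILE" \
  --arg ssh_pub_key "ssh-ed25519 AAAAexample user@example.com" \
  --arg cluster_id "8f721322-419d-4eed-aa5b-61b50ea586ae" '{
  "name": "compute-1-infraenv",
  "pull_secret": $pull_secret[0] | tojson,
  "cluster_id": $cluster_id,
  "ssh_authorized_key": $ssh_pub_key,
  "image_type": "minimal-iso"
}' | jq -r '.image_type'
----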

.. Post the `$INFRA_ENV_REQUEST` to the link:https://api.openshift.com/?urls.primaryName=assisted-service%20service#/installer/RegisterInfraEnv[/v2/infra-envs] API and set the `$INFRA_ENV_ID` variable:
+
[source,terminal]
----
$ INFRA_ENV_ID=$(curl "$API_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer ${JWT_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' -d "$INFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id')
----

. Get the URL of the discovery ISO for the cluster worker node by running the following command:
+
[source,terminal]
----
$ curl -s "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID" -H "Authorization: Bearer ${JWT_TOKEN}" | jq -r '.download_url'
----
+
.Example output
[source,terminal]
----
https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.11
----

. Download the ISO:
+
[source,terminal]
----
$ curl -L -s '<iso_url>' --output rhcos-live-minimal.iso <1>
----
<1> Replace `<iso_url>` with the URL for the ISO from the previous step.

. Boot the new worker host from the downloaded `rhcos-live-minimal.iso`.

. Get the list of hosts in the cluster that are _not_ installed. Keep running the following command until the new host shows up:
+
[source,terminal]
----
$ curl -s "$API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID" -H "Authorization: Bearer ${JWT_TOKEN}" | jq -r '.hosts[] | select(.status != "installed").id'
----
+
.Example output
[source,terminal]
----
2294ba03-c264-4f11-ac08-2f1bb2f8c296
----
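+
You can try the `jq` filter above offline against a small sample payload (the host IDs here are hypothetical). Only hosts whose status is not `installed` are printed:
+
[source,terminal]
----
$ RESPONSE='{"hosts":[{"id":"a1c52dde-3432-4f59-b2ae-0a530c851480","status":"installed"},{"id":"2294ba03-c264-4f11-ac08-2f1bb2f8c296","status":"insufficient"}]}'
$ echo "$RESPONSE" | jq -r '.hosts[] | select(.status != "installed").id'
----
+
.Example output
[source,terminal]
----
2294ba03-c264-4f11-ac08-2f1bb2f8c296
----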

. Set the `$HOST_ID` variable for the new worker node, for example:
+
[source,terminal]
----
$ HOST_ID=<host_id> <1>
----
<1> Replace `<host_id>` with the host ID from the previous step.

. Check that the host is ready to install by running the following command:
+
[NOTE]
====
Ensure that you copy the entire command including the complete `jq` expression.
====
+
[source,terminal]
----
$ curl -s $API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID -H "Authorization: Bearer ${JWT_TOKEN}" | jq '
def host_name($host):
  if (.suggested_hostname // "") == "" then
    if (.inventory // "") == "" then
      "Unknown hostname, please wait"
    else
      .inventory | fromjson | .hostname
    end
  else
    .suggested_hostname
  end;

def is_notable($validation):
  ["failure", "pending", "error"] | any(. == $validation.status);

def notable_validations($validations_info):
  [
    $validations_info // "{}"
    | fromjson
    | to_entries[].value[]
    | select(is_notable(.))
  ];

{
  "Hosts validations": {
    "Hosts": [
      .hosts[]
      | select(.status != "installed")
      | {
          "id": .id,
          "name": host_name(.),
          "status": .status,
          "notable_validations": notable_validations(.validations_info)
        }
    ]
  },
  "Cluster validations info": {
    "notable_validations": notable_validations(.validations_info)
  }
}
' -r
----
+
.Example output
[source,terminal]
----
{
  "Hosts validations": {
    "Hosts": [
      {
        "id": "97ec378c-3568-460c-bc22-df54534ff08f",
        "name": "localhost.localdomain",
        "status": "insufficient",
        "notable_validations": [
          {
            "id": "ntp-synced",
            "status": "failure",
            "message": "Host couldn't synchronize with any NTP server"
          },
          {
            "id": "api-domain-name-resolved-correctly",
            "status": "error",
            "message": "Parse error for domain name resolutions result"
          },
          {
            "id": "api-int-domain-name-resolved-correctly",
            "status": "error",
            "message": "Parse error for domain name resolutions result"
          },
          {
            "id": "apps-domain-name-resolved-correctly",
            "status": "error",
            "message": "Parse error for domain name resolutions result"
          }
        ]
      }
    ]
  },
  "Cluster validations info": {
    "notable_validations": []
  }
}
----
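+
The heart of this query is `is_notable`, which keeps only validations whose status is `failure`, `pending`, or `error`. The definition can be exercised on its own against a made-up `validations_info` value (which, as in the API response, is JSON encoded inside a string):
+
[source,terminal]
----
$ VALIDATIONS_INFO='{"network":[{"id":"ntp-synced","status":"failure"},{"id":"connected","status":"success"}]}'
$ jq --null-input --compact-output --arg v "$VALIDATIONS_INFO" '
def is_notable($validation):
  ["failure", "pending", "error"] | any(. == $validation.status);
[ $v | fromjson | to_entries[].value[] | select(is_notable(.)) | .id ]
'
----
+
.Example output
[source,terminal]
----
["ntp-synced"]
----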

. When the previous command shows that the host is ready, start the installation using the link:https://api.openshift.com/?urls.primaryName=assisted-service%20service#/installer/v2InstallHost[/v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install] API by running the following command:
+
[source,terminal]
----
$ curl -X POST -s "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts/$HOST_ID/actions/install" -H "Authorization: Bearer ${JWT_TOKEN}"
----

. As the installation proceeds, it generates pending certificate signing requests (CSRs) for the worker node.
+
[IMPORTANT]
====
You must approve the CSRs to complete the installation.
====
+
Keep running the following API call to monitor the cluster installation:
+
[source,terminal]
----
$ curl -s "$API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID" -H "Authorization: Bearer ${JWT_TOKEN}" | jq '{
  "Cluster day-2 hosts":
    [
      .hosts[]
      | select(.status != "installed")
      | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at}
    ]
}'
----
+
.Example output
[source,terminal]
----
{
  "Cluster day-2 hosts": [
    {
      "id": "a1c52dde-3432-4f59-b2ae-0a530c851480",
      "requested_hostname": "control-plane-1",
      "status": "added-to-existing-cluster",
      "status_info": "Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs",
      "progress": {
        "current_stage": "Done",
        "installation_percentage": 100,
        "stage_started_at": "2022-07-08T10:56:20.476Z",
        "stage_updated_at": "2022-07-08T10:56:20.476Z"
      },
      "status_updated_at": "2022-07-08T10:56:20.476Z",
      "updated_at": "2022-07-08T10:57:15.306369Z",
      "infra_env_id": "b74ec0c3-d5b5-4717-a866-5b6854791bd3",
      "cluster_id": "8f721322-419d-4eed-aa5b-61b50ea586ae",
      "created_at": "2022-07-06T22:54:57.161614Z"
    }
  ]
}
----

. Optional: Run the following command to see all the events for the cluster:
+
[source,terminal]
----
$ curl -s "$API_URL/api/assisted-install/v2/events?cluster_id=$CLUSTER_ID" -H "Authorization: Bearer ${JWT_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}'
----
+
.Example output
[source,terminal]
----
{"severity":"info","message":"Host compute-0: updated status from insufficient to known (Host is ready to be installed)","event_time":"2022-07-08T11:21:46.346Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
{"severity":"info","message":"Host compute-0: updated status from known to installing (Installation is in progress)","event_time":"2022-07-08T11:28:28.647Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
{"severity":"info","message":"Host compute-0: updated status from installing to installing-in-progress (Starting installation)","event_time":"2022-07-08T11:28:52.068Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
{"severity":"info","message":"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae","event_time":"2022-07-08T11:29:47.802Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
{"severity":"info","message":"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)","event_time":"2022-07-08T11:29:48.259Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
{"severity":"info","message":"Host: compute-0, reached installation stage Rebooting","event_time":"2022-07-08T11:29:48.261Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
----
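+
If the event stream gets long, the same `select` pattern can narrow it, for example to non-info events. The command below runs against a hypothetical two-event sample rather than the live API:
+
[source,terminal]
----
$ EVENTS='[{"severity":"info","message":"Host compute-0: updated status from known to installing"},{"severity":"warning","message":"Host compute-0: failed to sync NTP"}]'
$ echo "$EVENTS" | jq -c '.[] | select(.severity != "info") | {severity, message}'
----
+
.Example output
[source,terminal]
----
{"severity":"warning","message":"Host compute-0: failed to sync NTP"}
----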

. Log in to the cluster and approve the pending CSRs to complete the installation.

.Verification

* Check that the new worker node was successfully added to the cluster with a status of `Ready`:
+
[source,terminal]
----
$ oc get nodes
----
+
.Example output
[source,terminal]
----
NAME                          STATUS   ROLES           AGE   VERSION
control-plane-1.example.com   Ready    master,worker   56m   v1.24.0+beaaed6
compute-1.example.com         Ready    worker          11m   v1.24.0+beaaed6
----