= Adding worker nodes to {sno} clusters with GitOps ZTP

You can add one or more worker nodes to existing {sno} clusters to increase available CPU resources in the cluster.
.Prerequisites

* Install and configure {rh-rhacm} 2.6 or later in an {product-title} 4.11 or later bare-metal hub cluster
* Install {cgu-operator-full} in the hub cluster
* Install {gitops-title} in the hub cluster
* Use the GitOps ZTP `ztp-site-generate` container image version 4.12 or later
* Deploy a managed {sno} cluster with GitOps ZTP
* Configure Central Infrastructure Management as described in the {rh-rhacm} documentation
* Configure the DNS serving the cluster to resolve the internal API endpoint `api-int.<cluster_name>.<base_domain>`
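+
A quick way to confirm resolution, assuming a hypothetical cluster named `example-sno` with the base domain `example.com`:
+
[source,terminal]
----
$ dig +short api-int.example-sno.example.com # hypothetical cluster name and base domain; substitute your own
----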
.Procedure

. If you deployed your cluster by using the `example-sno.yaml` `SiteConfig` manifest, add your new worker node to the `spec.clusters['example-sno'].nodes` list:
+
[source,yaml]
----
nodes:
- hostName: "example-node2.example.com"
  role: "worker"
  bmcAddress: "idrac-virtualmedia+https://<bmc_host>/redfish/v1/Systems/System.Embedded.1"
  bmcCredentialsName:
    name: "example-node2-bmh-secret"
  bootMACAddress: "AA:BB:CC:DD:EE:11"
  bootMode: "UEFI"
  nodeNetwork:
    interfaces:
    - name: eno1
      macAddress: "AA:BB:CC:DD:EE:11"
----

. Create a BMC authentication secret for the new host, as referenced by the `bmcCredentialsName` field in your `SiteConfig` file:
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: example-node2-bmh-secret
  namespace: example-sno
data:
  password: <base64_password>
  username: <base64_username>
type: Opaque
----
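+
The `data` values must be base64-encoded; for example, assuming a hypothetical credential value:
+
[source,terminal]
----
$ echo -n 'changeme' | base64 # hypothetical credential value
----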

. Commit the changes in Git, and then push to the Git repository that is being monitored by the GitOps ZTP ArgoCD application.
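+
For example, assuming hypothetical file and branch names:
+
[source,terminal]
----
$ git add site-configs/example-sno.yaml # hypothetical path to your SiteConfig file
$ git commit -m "Add worker node to example-sno"
$ git push origin main # hypothetical branch name
----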
+
When the ArgoCD `cluster` application synchronizes, two new manifests, generated by the ZTP plugin, appear on the hub cluster:
+
* `BareMetalHost`
* `NMStateConfig`
+
[IMPORTANT]
====
The `cpuset` field should not be configured for the worker node. Workload partitioning for worker nodes is added through management policies after the node installation is complete.
====

.Verification

You can monitor the installation process in several ways.

* Check if the preprovisioning images are created by running the following command:
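+
A sketch of the check, assuming the hypothetical `example-sno` namespace from the example `SiteConfig`:
+
[source,terminal]
----
$ oc get ppimg -n example-sno # namespace is hypothetical; use your cluster namespace
----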

* When the worker node installation is finished, the worker node certificates are approved automatically. At this point, the worker appears in the `ManagedClusterInfo` status. Run the following command to see the status:
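+
A sketch of the query, assuming the hypothetical `example-sno` managed cluster; the `jsonpath` expression prints each node with its conditions and labels:
+
[source,terminal]
----
$ oc get managedclusterinfo example-sno -n example-sno \
  -o jsonpath='{range .status.nodeList[*]}{.name}{"\t"}{.conditions}{"\t"}{.labels}{"\n"}{end}' # hypothetical names
----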
You can expand {sno} clusters with GitOps ZTP. When you add worker nodes to {sno} clusters, the original {sno} cluster retains the control plane node role.
[NOTE]
====
Although there is no specified limit on the number of worker nodes that you can add to a {sno} cluster, you must re-evaluate the reserved CPU allocation on the control plane node for the additional worker nodes.
====

If you require workload partitioning on the worker node, you must deploy and remediate the managed cluster policies on the hub cluster before installing the node. This way, the workload partitioning `MachineConfig` objects are rendered and associated with the `worker` `MachineConfig` pool before the GitOps ZTP workflow applies the `MachineConfig` ignition file to the worker node.

The recommended order is to remediate the policies first, and then install the worker node.
If you create the workload partitioning manifests after node installation, you must manually drain the node and delete all the pods managed by daemon sets. When the managing daemon sets create the new pods, the new pods undergo the workload partitioning process.

:FeatureName: Adding worker nodes to {sno} clusters with GitOps ZTP

You can configure the additional worker node with a DU profile.

You can apply a RAN distributed unit (DU) profile to the worker node cluster using `PolicyGenTemplate` CRs, including the following files:

* `ns.yaml`
* `kustomization.yaml`

Configuring the DU profile on the worker node is considered an upgrade. To initiate the upgrade flow, you must update the existing policies or create additional ones. Then, you must create a `ClusterGroupUpgrade` CR to reconcile the policies in the group of clusters.
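
A minimal `ClusterGroupUpgrade` sketch, assuming a hypothetical `example-sno` cluster and a hypothetical policy name; substitute the policies that configure your DU profile:

[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-worker-du-profile # hypothetical name
  namespace: ztp-install
spec:
  clusters:
  - example-sno # hypothetical managed cluster name
  managedPolicies:
  - group-du-sno-config-policy # hypothetical policy that applies the DU profile
  remediationStrategy:
    maxConcurrency: 1
  enable: true
----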
The PTP configuration resources and SR-IOV network node policies use `node-role.kubernetes.io/master: ""` as the node selector. If the additional worker nodes have the same NIC configuration as the control plane node, the policies used to configure the control plane node can be reused for the worker nodes. However, the node selector must be changed to select both node types, for example with the `"node-role.kubernetes.io/worker"` label.
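
For example, a minimal `SriovNetworkNodePolicy` sketch that selects both node types; all names and NIC details are illustrative:

[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-nnp-du-fh # hypothetical name
  namespace: openshift-sriov-network-operator
spec:
  deviceType: vfio-pci
  resourceName: du_fh # hypothetical resource name
  numVfs: 8
  nicSelector:
    pfNames: ["ens5f0"] # hypothetical physical function
  nodeSelector:
    node-role.kubernetes.io/worker: "" # the control plane node in a single-node cluster also has the worker label, so this matches both node types
----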

You can add one or more worker nodes to an existing {sno} cluster with GitOps ZTP. Adding worker nodes does not require any downtime for the existing {sno} cluster.