
Commit 3ad5f89

Merge pull request #55356 from aireilly/OCPBUGS-6802
OCPBUGS-6802 - Remedy unclear topic titles for GitOps ZTP worker nodes
2 parents 454b586 + e69b0eb commit 3ad5f89

7 files changed: +42 -47 lines

modules/ztp-adding-worker-nodes.adoc

Lines changed: 21 additions & 24 deletions
@@ -1,27 +1,26 @@
 // Module included in the following assemblies:
-// Epic CNF-5335 (4.11), Story TELCODOCS-643
-// scalability_and_performance/ztp-deploying-disconnected.adoc
+//
+// * scalability_and_performance/ztp_far_edge/ztp-sno-additional-worker-node.adoc

 :_content-type: PROCEDURE
 [id="ztp-additional-worker-sno-proc_{context}"]
-= Adding worker nodes to {sno} clusters
-include::../_attributes/common-attributes.adoc[]
+= Adding worker nodes to {sno} clusters with GitOps ZTP

-You can add one or more worker nodes to existing {sno} clusters to increase CPU resources.
+You can add one or more worker nodes to existing {sno} clusters to increase available CPU resources in the cluster.

 .Prerequisites

-* Install and configure {rh-rhacm} 2.6 or later running on {product-title} 4.11 or later on a bare-metal cluster
-* Install {cgu-operator-full}
-* Install OpenShift GitOps Operator
-* Run {product-title} 4.12 or later in the zero touch provisioning (ZTP) container
-* Deploy an {sno} cluster through ZTP
+* Install and configure {rh-rhacm} 2.6 or later in an {product-title} 4.11 or later bare-metal hub cluster
+* Install {cgu-operator-full} in the hub cluster
+* Install {gitops-title} in the hub cluster
+* Use the GitOps ZTP `ztp-site-generate` container image version 4.12 or later
+* Deploy a managed {sno} cluster with GitOps ZTP
 * Configure the Central Infrastructure Management as described in the {rh-rhacm} documentation
 * Configure the DNS serving the cluster to resolve the internal API endpoint `api-int.<cluster_name>.<base_domain>`

 .Procedure

-. If you deployed your cluster using the `example-sno.yaml` `SiteConfig` manifest, add your new worker node to the `spec.clusters['example-sno'].nodes` list:
+. If you deployed your cluster by using the `example-sno.yaml` `SiteConfig` manifest, add your new worker node to the `spec.clusters['example-sno'].nodes` list:
 +
 [source,yaml]
 ----
@@ -79,13 +78,13 @@ metadata:
 type: Opaque
 ----

-. Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application.
-
+. Commit the changes in Git, and then push to the Git repository that is being monitored by the GitOps ZTP ArgoCD application.
++
 When the ArgoCD `cluster` application synchronizes, two new manifests appear on the hub cluster generated by the ZTP plugin:
-
++
 * `BareMetalHost`
 * `NMStateConfig`
-
++
 [IMPORTANT]
 ====
 The `cpuset` field should not be configured for the worker node. Workload partitioning for worker nodes is added through management policies after the node installation is complete.
@@ -95,31 +94,29 @@ The `cpuset` field should not be configured for the worker node. Workload partit

 You can monitor the installation process in several ways.

-. Check if the preprovisioning images are created by running the following command:
+* Check if the preprovisioning images are created by running the following command:
 +
 [source,terminal]
 ----
 $ oc get ppimg -n example-sno
 ----
 +
 .Example output
-+
 [source,terminal]
 ----
 NAMESPACE NAME READY REASON
 example-sno example-sno True ImageCreated
 example-sno example-node2 True ImageCreated
 ----

-. Check the state of the bare-metal hosts:
+* Check the state of the bare-metal hosts:
 +
 [source,terminal]
 ----
 $ oc get bmh -n example-sno
 ----
 +
 .Example output
-+
 [source,terminal]
 ----
 NAME STATE CONSUMER ONLINE ERROR AGE
@@ -128,15 +125,16 @@ example-node2 provisioning true 4m50s <1>
 ----
 <1> The `provisioning` state indicates that node booting from the installation media is in progress.

-. Continuously monitor the installation process:
+* Continuously monitor the installation process:
+
+.. Watch the agent install process by running the following command:
 +
 [source,terminal]
 ----
 $ oc get agent -n example-sno --watch
 ----
 +
 .Example output
-+
 [source,terminal]
 ----
 NAME CLUSTER APPROVED ROLE STAGE
@@ -152,7 +150,7 @@ NAME CLUSTER APPROVED ROLE STAGE
 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Done
 ----

-. When the worker node installation completes, its certificates are approved automatically. At this point, the worker appears in the `ManagedClusterInfo` status:
+.. When the worker node installation is finished, the worker node certificates are approved automatically. At this point, the worker appears in the `ManagedClusterInfo` status. Run the following command to see the status:
 +
 [source,terminal]
 ----
@@ -161,9 +159,8 @@ jsonpath='{range .status.nodeList[*]}{.name}{"\t"}{.conditions}{"\t"}{.labels}{"
 ----
 +
 .Example output
-+
 [source,terminal]
 ----
 example-sno [{"status":"True","type":"Ready"}] {"node-role.kubernetes.io/master":"","node-role.kubernetes.io/worker":""}
 example-node2 [{"status":"True","type":"Ready"}] {"node-role.kubernetes.io/worker":""}
-----
+----
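
The yaml body of the `SiteConfig` step above is elided between the hunks. For reference, a minimal sketch of the kind of `nodes` entry that step adds (field names follow the GitOps ZTP `SiteConfig` schema; the host name, BMC address, MAC address, and secret name here are hypothetical):

[source,yaml]
----
nodes:
- hostName: "example-node2.example.com"  # hypothetical worker host
  role: "worker"
  bmcAddress: "idrac-virtualmedia+https://192.0.2.10/redfish/v1/Systems/System.Embedded.1"
  bmcCredentialsName:
    name: "example-node2-bmh-secret"     # Secret that holds the BMC credentials
  bootMACAddress: "AA:BB:CC:DD:EE:11"
  bootMode: "UEFI"
----
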
Lines changed: 8 additions & 9 deletions
@@ -1,24 +1,23 @@
 // Module included in the following assemblies:
-// Epic CNF-5335 (4.11), Story TELCODOCS-643
-// scalability_and_performance/ztp-deploying-disconnected.adoc
+//
+// * scalability_and_performance/ztp_far_edge/ztp-sno-additional-worker-node.adoc
+

 :_content-type: CONCEPT
 [id="ztp-additional-worker-sno_{context}"]
-= {sno-caps} cluster expansion with worker nodes
-include::../_attributes/common-attributes.adoc[]
+= Expanding {sno} clusters with GitOps ZTP

-When you add worker nodes to increase available CPU resources, the original {sno} cluster retains the control plane node role.
+You can expand {sno} clusters with GitOps ZTP. When you add worker nodes to {sno} clusters, the original {sno} cluster retains the control plane node role.

 [NOTE]
 ====
-Although there is no specified limit on the number of worker nodes that you can add, you must revaluate the reserved CPU allocation on the control plane node for the additional worker nodes.
+Although there is no specified limit on the number of worker nodes that you can add to a {sno} cluster, you must reevaluate the reserved CPU allocation on the control plane node for the additional worker nodes.
 ====

-If workload partitioning is required on the worker node, the policies configuring the worker node must be deployed and remediated before installing the node. This way, the workload partitioning `MachineConfig` objects are rendered and associated with the `worker` `MachineConfig` pool before the `MachineConfig` ignition is downloaded by the installing worker node.
+If you require workload partitioning on the worker node, you must deploy and remediate the managed cluster policies on the hub cluster before installing the node. This way, the workload partitioning `MachineConfig` objects are rendered and associated with the `worker` `MachineConfig` pool before the GitOps ZTP workflow applies the `MachineConfig` ignition file to the worker node.

 The recommended procedure order is remediating policies, then installing the worker node.
 If you create the workload partitioning manifests after node installation, you must manually drain the node and delete all the pods managed by daemonsets. When the managing daemonsets create the new pods, the new pods undergo the workload partitioning process.

-:FeatureName: Adding worker nodes to {sno} clusters
-
+:FeatureName: Adding worker nodes to {sno} clusters with GitOps ZTP
 include::snippets/technology-preview.adoc[]
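
A skeletal sketch of how a workload partitioning `MachineConfig` object associates with the `worker` pool, as described above (the object name and base64-encoded file contents are placeholders; the role label is what binds the config to the `worker` `MachineConfig` pool):

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker  # binds this config to the worker MachineConfig pool
  name: 02-worker-workload-partitioning             # placeholder name
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,<base64_crio_config>  # CRI-O CPU pinning config (placeholder)
        mode: 420
        overwrite: true
        path: /etc/crio/crio.conf.d/01-workload-partitioning
      - contents:
          source: data:text/plain;charset=utf-8;base64,<base64_kubelet_pinning>  # kubelet workload pinning config (placeholder)
        mode: 420
        overwrite: true
        path: /etc/kubernetes/openshift-workload-pinning
----
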
Lines changed: 3 additions & 4 deletions
@@ -1,11 +1,10 @@
 // Module included in the following assemblies:
-// Epic CNF-5335 (4.11), Story TELCODOCS-643
-// scalability_and_performance/ztp-deploying-disconnected.adoc
+//
+// * scalability_and_performance/ztp_far_edge/ztp-sno-additional-worker-node.adoc

 :_content-type: CONCEPT
 [id="ztp-additional-worker-apply-du-profile_{context}"]
 = Applying profiles to the worker node
-include::../_attributes/common-attributes.adoc[]

 You can configure the additional worker node with a DU profile.

@@ -17,4 +16,4 @@ You can apply a RAN distributed unit (DU) profile to the worker node cluster usi
 * `ns.yaml`
 * `kustomization.yaml`

-Configuring the DU profile on the worker node is considered an upgrade. To initiate the upgrade flow, you must update the existing policies or create additional ones. Then, you must create a `ClusterGroupUpgrade` CR to reconcile the policies in the group of clusters.
+Configuring the DU profile on the worker node is considered an upgrade. To initiate the upgrade flow, you must update the existing policies or create additional ones. Then, you must create a `ClusterGroupUpgrade` CR to reconcile the policies in the group of clusters.
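
A minimal sketch of the kind of `ClusterGroupUpgrade` CR described above (the CR name, namespace, and policy names are illustrative; the fields follow the {cgu-operator-full} `ClusterGroupUpgrade` schema):

[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: example-sno-worker-policies   # illustrative name
  namespace: default
spec:
  clusters:                           # managed clusters to reconcile
  - example-sno
  enable: true
  managedPolicies:                    # illustrative policy names to remediate
  - group-du-sno-config-policy
  - group-du-sno-validator-du-policy
  remediationStrategy:
    maxConcurrency: 1
    timeout: 240
----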

modules/ztp-worker-node-daemon-selector-compatibility.adoc

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
-// Epic CNF-5335 (4.11), Story TELCODOCS-643
-// scalability_and_performance/ztp-deploying-disconnected.adoc
+//
+// * scalability_and_performance/ztp_far_edge/ztp-sno-additional-worker-node.adoc

 :_content-type: PROCEDURE
 [id="ztp-additional-worker-daemon-selector-comp_{context}"]
@@ -65,4 +65,4 @@ spec:
 Changing the `daemonNodeSelector` field causes temporary PTP synchronization loss and SR-IOV connectivity loss.
 ====

-. Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application.
+. Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application.
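
A minimal sketch of the `daemonNodeSelector` setting that this procedure changes, assuming the change lands on the default PTP Operator configuration (the resource kind and field are from the PTP Operator API; the selector value is the worker-role example used in this module):

[source,yaml]
----
apiVersion: ptp.openshift.io/v1
kind: PtpOperatorConfig
metadata:
  name: default
  namespace: openshift-ptp
spec:
  daemonNodeSelector:
    node-role.kubernetes.io/worker: ""  # also matches the control plane node, which carries the worker label
----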
Lines changed: 3 additions & 3 deletions
@@ -1,9 +1,9 @@
 // Module included in the following assemblies:
-// Epic CNF-5335 (4.11), Story TELCODOCS-643
-// scalability_and_performance/ztp-deploying-disconnected.adoc
+//
+// * scalability_and_performance/ztp_far_edge/ztp-sno-additional-worker-node.adoc

 :_content-type: CONCEPT
 [id="ztp-additional-worker-node-selector-comp_{context}"]
 = PTP and SR-IOV node selector compatibility

-The PTP configuration resources and SR-IOV network node policies use `node-role.kubernetes.io/master: ""` as the node selector. If the additional worker nodes have the same NIC configuration as the control plane node, the policies used to configure the control plane node can be reused for the worker nodes. However, the node selector must be changed to select both node types, for example with the `"node-role.kubernetes.io/worker"` label.
+The PTP configuration resources and SR-IOV network node policies use `node-role.kubernetes.io/master: ""` as the node selector. If the additional worker nodes have the same NIC configuration as the control plane node, the policies used to configure the control plane node can be reused for the worker nodes. However, the node selector must be changed to select both node types, for example with the `"node-role.kubernetes.io/worker"` label.
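
As one illustration of the selector change described above, an SR-IOV network node policy with a worker-role selector (the resource and field names follow the SR-IOV Network Operator API; the policy name, resource name, NIC name, and VF count are hypothetical):

[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-nnp-du-mh                  # hypothetical policy name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: du_mh                    # hypothetical resource name
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # selects the workers and the control plane node, which also carries the worker label
  numVfs: 8
  nicSelector:
    pfNames: ["ens1f0"]                  # hypothetical physical function
  deviceType: netdevice
----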

modules/ztp-worker-node-preparing-policies.adoc

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
-// Epic CNF-5335 (4.11), Story TELCODOCS-643
-// scalability_and_performance/ztp-deploying-disconnected.adoc
+//
+// * scalability_and_performance/ztp_far_edge/ztp-sno-additional-worker-node.adoc

 :_content-type: PROCEDURE
 [id="ztp-additional-worker-policies_{context}"]
@@ -125,4 +125,4 @@ spec:
   remediationStrategy:
     maxConcurrency: 1
 EOF
-----
+----
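
After a `ClusterGroupUpgrade` CR like the one whose tail is shown above is applied, one way to watch remediation progress (assuming the `cgu` short name that {cgu-operator-full} registers for the resource):

[source,terminal]
----
$ oc get cgu -A --watch
----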

scalability_and_performance/ztp_far_edge/ztp-sno-additional-worker-node.adoc

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

-You can add one or more worker nodes to an existing {sno} cluster to increase CPU resources used by the original {sno} control plane node. The addition of worker nodes does not require any downtime for the existing {sno}.
+You can add one or more worker nodes to an existing {sno} cluster with GitOps ZTP. Adding worker nodes does not require any downtime for the existing {sno}.

 [role="_additional-resources"]
 .Additional resources
