
Commit 83ae845

new node tuning topic for hosted control planes

1 parent 42b6692 commit 83ae845
File tree

3 files changed (+299, -0 lines)
modules/advanced-node-tuning-hosted-cluster.adoc

Lines changed: 155 additions & 0 deletions

@@ -0,0 +1,155 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/using-node-tuning-operator.adoc

:_content-type: PROCEDURE
[id="advanced-node-tuning-hosted-cluster_{context}"]
= Advanced node tuning for hosted clusters by setting kernel boot parameters

:FeatureName: Hosted control planes
include::snippets/technology-preview.adoc[]

For more advanced tuning in hosted control planes that requires setting kernel boot parameters, you can also use the Node Tuning Operator. The following example shows how to create a node pool with huge pages reserved.

.Procedure

. Create a `ConfigMap` object that contains a `Tuned` object manifest for creating 50 huge pages that are 2 MB in size. Save this `ConfigMap` manifest in a file named `tuned-hugepages.yaml`:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: tuned-hugepages
  namespace: clusters
data:
  tuning: |
    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: hugepages
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - data: |
          [main]
          summary=Boot time configuration for hugepages
          include=openshift-node
          [bootloader]
          cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50
        name: openshift-node-hugepages
      recommend:
      - priority: 20
        profile: openshift-node-hugepages
----
+
[NOTE]
====
The `.spec.recommend.match` field is intentionally left blank. In this case, this `Tuned` object is applied to all nodes in the node pool where this `ConfigMap` object is referenced. Group nodes with the same hardware configuration into the same node pool. Otherwise, TuneD operands can calculate conflicting kernel parameters for two or more nodes that share the same node pool.
====
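+
If you do want to restrict a profile to specific nodes, you can populate the `match` field instead. The following fragment is an illustrative sketch only; `tuned-node-label` is a hypothetical node label, and the rest of the entry matches the previous example:
+
[source,yaml]
----
recommend:
- match:                      # label-based matching; intentionally omitted in the example above
  - label: tuned-node-label   # hypothetical node label key
    value: tuned-value        # omit "value" to match the label with any value
  priority: 20
  profile: openshift-node-hugepages
----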
. Create the `ConfigMap` object in the management cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f tuned-hugepages.yaml
----
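+
Optionally, confirm that the `ConfigMap` object exists before you reference it:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get configmap tuned-hugepages -n clusters
----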
. Create a `NodePool` manifest YAML file, customize the upgrade type of the `NodePool`, and reference the `ConfigMap` object that you created in the `spec.tuningConfig` section. Create the `NodePool` manifest and save it in a file named `hugepages-nodepool.yaml` by using the `hypershift` CLI:
+
[source,terminal]
----
NODEPOOL_NAME=hugepages-example
INSTANCE_TYPE=m5.2xlarge
NODEPOOL_REPLICAS=2

hypershift create nodepool aws \
  --cluster-name $CLUSTER_NAME \
  --name $NODEPOOL_NAME \
  --node-count $NODEPOOL_REPLICAS \
  --instance-type $INSTANCE_TYPE \
  --render > hugepages-nodepool.yaml
----
. In the `hugepages-nodepool.yaml` file, set `.spec.management.upgradeType` to `InPlace`, and set `.spec.tuningConfig` to reference the `tuned-hugepages` `ConfigMap` object that you created:
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
  name: hugepages-nodepool
  namespace: clusters
...
spec:
  management:
    ...
    upgradeType: InPlace
  ...
  tuningConfig:
  - name: tuned-hugepages
----
+
[NOTE]
====
To avoid the unnecessary re-creation of nodes when you apply the new `MachineConfig` objects, set `.spec.management.upgradeType` to `InPlace`. If you use the `Replace` upgrade type, nodes are fully deleted and replaced with new nodes when you apply the new kernel boot parameters that the TuneD operand calculated.
====
. Create the `NodePool` in the management cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f hugepages-nodepool.yaml
----
.Verification

After the nodes are available, the containerized TuneD daemon calculates the required kernel boot parameters based on the applied TuneD profile. After the nodes are ready and reboot once to apply the generated `MachineConfig` object, you can verify that the TuneD profile is applied and that the kernel boot parameters are set.
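To watch the node pool while the nodes reboot, one option (a sketch that assumes the `hugepages-nodepool` and `clusters` names from the earlier steps) is to poll the `NodePool` object from the management cluster:

[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get nodepool/hugepages-nodepool -n clusters -w
----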
. List the `Tuned` objects in the hosted cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get Tuneds -n openshift-cluster-node-tuning-operator
----
+
.Example output
[source,terminal]
----
NAME                 AGE
default              123m
hugepages-8dfb1fed   1m23s
rendered             123m
----
. List the `Profile` objects in the hosted cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get Profiles -n openshift-cluster-node-tuning-operator
----
+
.Example output
[source,terminal]
----
NAME                          TUNED                      APPLIED   DEGRADED   AGE
nodepool-1-worker-1           openshift-node             True      False      132m
nodepool-1-worker-2           openshift-node             True      False      131m
hugepages-nodepool-worker-1   openshift-node-hugepages   True      False      4m8s
hugepages-nodepool-worker-2   openshift-node-hugepages   True      False      3m57s
----
+
Both of the worker nodes in the new `NodePool` have the `openshift-node-hugepages` profile applied.
. To confirm that the tuning was applied correctly, start a debug shell on one of the nodes in the new node pool and check `/proc/cmdline`:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" debug node/hugepages-nodepool-worker-1 -- chroot /host cat /proc/cmdline
----
+
.Example output
[source,terminal]
----
BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-... hugepagesz=2M hugepages=50
----
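+
As an additional check, you can confirm that the kernel reserved the pages by reading `/proc/meminfo` on the same node (a sketch that reuses the node name from the previous step):
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" debug node/hugepages-nodepool-worker-1 -- chroot /host grep -i hugepages /proc/meminfo
----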
modules/node-tuning-hosted-cluster.adoc

Lines changed: 135 additions & 0 deletions

@@ -0,0 +1,135 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/using-node-tuning-operator.adoc

:_content-type: PROCEDURE
[id="node-tuning-hosted-cluster_{context}"]
= Configuring node tuning in a hosted cluster

//# Manage node-level tuning with the Node Tuning Operator

:FeatureName: Hosted control planes
include::snippets/technology-preview.adoc[]

To set node-level tuning on the nodes in your hosted cluster, you can use the Node Tuning Operator. In hosted control planes, you can configure node tuning by creating config maps that contain `Tuned` objects and referencing those config maps in your node pools.

.Procedure

. Create a config map that contains a valid tuned manifest, and reference the manifest in a node pool. In the following example, a `Tuned` manifest defines a profile that sets `vm.dirty_ratio` to 55 on the nodes in any node pool that references this config map. Save the following `ConfigMap` manifest in a file named `tuned-1.yaml`:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: tuned-1
  namespace: clusters
data:
  tuning: |
    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: tuned-1
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - data: |
          [main]
          summary=Custom OpenShift profile
          include=openshift-node
          [sysctl]
          vm.dirty_ratio="55"
        name: tuned-1-profile
      recommend:
      - priority: 20
        profile: tuned-1-profile
----
+
[NOTE]
====
If you do not add any labels to an entry in the `spec.recommend` section of the `Tuned` spec, node-pool-based matching is assumed, so the highest priority profile in the `spec.recommend` section is applied to nodes in the pool. Although you can achieve more fine-grained node-label-based matching by setting a label value in the Tuned `.spec.recommend.match` section, node labels will not persist during an upgrade unless you set the `.spec.management.upgradeType` value of the node pool to `InPlace`.
====
. Create the `ConfigMap` object in the management cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f tuned-1.yaml
----
. Reference the `ConfigMap` object in the `spec.tuningConfig` field of the node pool, either by editing a node pool or creating one. A patch-based alternative to editing the manifest directly follows the note below. In this example, assume that you have only one `NodePool`, named `nodepool-1`, that contains two nodes.
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
  ...
  name: nodepool-1
  namespace: clusters
  ...
spec:
  ...
  tuningConfig:
  - name: tuned-1
status:
  ...
----
+
[NOTE]
====
You can reference the same config map in multiple node pools. In hosted control planes, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the `Tuned` CRs to distinguish them. Outside of this case, do not create multiple TuneD profiles of the same name in different `Tuned` CRs for the same hosted cluster.
====
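+
As a sketch of the patch-based alternative, assuming the `nodepool-1` node pool in the `clusters` namespace from this example, you can add the reference with a merge patch instead of editing the manifest:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" patch nodepool/nodepool-1 -n clusters --type merge -p '{"spec":{"tuningConfig":[{"name":"tuned-1"}]}}'
----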
.Verification

Now that you have created the `ConfigMap` object that contains a `Tuned` manifest and referenced it in a `NodePool`, the Node Tuning Operator syncs the `Tuned` objects into the hosted cluster. You can verify which `Tuned` objects are defined and which TuneD profiles are applied to each node.
. List the `Tuned` objects in the hosted cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get Tuneds -n openshift-cluster-node-tuning-operator
----
+
.Example output
[source,terminal]
----
NAME       AGE
default    7m36s
rendered   7m36s
tuned-1    65s
----
. List the `Profile` objects in the hosted cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get Profiles -n openshift-cluster-node-tuning-operator
----
+
.Example output
[source,terminal]
----
NAME                  TUNED             APPLIED   DEGRADED   AGE
nodepool-1-worker-1   tuned-1-profile   True      False      7m43s
nodepool-1-worker-2   tuned-1-profile   True      False      7m14s
----
+
[NOTE]
====
If no custom profiles are created, the `openshift-node` profile is applied by default.
====
. To confirm that the tuning was applied correctly, start a debug shell on a node and check the sysctl values:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio
----
+
.Example output
[source,terminal]
----
vm.dirty_ratio = 55
----

scalability_and_performance/using-node-tuning-operator.adoc

Lines changed: 9 additions & 0 deletions
@@ -22,3 +22,12 @@ include::modules/custom-tuning-specification.adoc[leveloffset=+1]
include::modules/custom-tuning-example.adoc[leveloffset=+1]

include::modules/node-tuning-operator-supported-tuned-daemon-plug-ins.adoc[leveloffset=+1]

include::modules/node-tuning-hosted-cluster.adoc[leveloffset=+1]

include::modules/advanced-node-tuning-hosted-cluster.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

For more information about hosted control planes, see link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/multicluster_engine/multicluster_engine_overview#hosted-control-planes-intro[Using hosted control plane clusters (Technology Preview)].