// Module included in the following assemblies:
//
// * virt/virtual_machines/advanced_vm_management/virt-configuring-cluster-realtime-workloads.adoc

:_mod-docs-content-type: PROCEDURE
[id="virt-configuring-cluster-real-time_{context}"]
= Configuring a cluster for real-time workloads

You can configure an {product-title} cluster to run real-time workloads.

.Prerequisites
* You have access to the cluster as a user with `cluster-admin` permissions.
* You have installed the OpenShift CLI (`oc`).
* You have installed the Node Tuning Operator.

.Procedure

. Label a subset of the compute nodes with a custom role, for example, `worker-realtime`:
+
[source,terminal]
----
$ oc label node <node_name> node-role.kubernetes.io/worker-realtime=""
----
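+
Optionally, you can confirm that the label was applied by listing the nodes that carry it, for example:
+
[source,terminal]
----
$ oc get nodes -l node-role.kubernetes.io/worker-realtime
----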
+
[NOTE]
====
You must use the default `master` role for {sno} and compact clusters.
====

. Create a new `MachineConfigPool` manifest that contains the `worker-realtime` label in the `spec.machineConfigSelector` object:
+
.Example `MachineConfigPool` manifest
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-realtime
  labels:
    machineconfiguration.openshift.io/role: worker-realtime
spec:
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values:
          - worker
          - worker-realtime
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-realtime: ""
----
+
[NOTE]
====
You do not need to create a new `MachineConfigPool` manifest for {sno} and compact clusters.
====

. If you created a new `MachineConfigPool` manifest in step 2, apply it to the cluster by using the following command:
+
[source,terminal]
----
$ oc apply -f <real_time_mcp>.yaml
----
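+
Optionally, you can confirm that the new machine config pool exists and has selected the labeled nodes, for example:
+
[source,terminal]
----
$ oc get mcp worker-realtime
----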

. Create a `PerformanceProfile` manifest that applies to the labeled nodes and the machine config pool that you created in the previous steps. Adjust the `isolated` and `reserved` CPU sets to match the CPU topology of the target nodes:
+
.Example `PerformanceProfile` manifest
[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: profile-1
spec:
  cpu:
    isolated: 4-39,44-79
    reserved: 0-3,40-43
  globallyDisableIrqLoadBalancing: true
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - count: 8
        size: 1G
  realTimeKernel:
    enabled: true
  workloadHints:
    highPowerConsumption: true
    realTime: true
  nodeSelector:
    node-role.kubernetes.io/worker-realtime: ""
  numa:
    topologyPolicy: single-numa-node
----

. Apply the `PerformanceProfile` manifest:
+
[source,terminal]
----
$ oc apply -f <real_time_pp>.yaml
----
+
[NOTE]
====
The compute nodes automatically reboot twice after you apply the `MachineConfigPool` and `PerformanceProfile` manifests. This process might take a long time.
====
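+
You can track the progress of the node updates by watching the machine config pool status until the `UPDATED` column reports `True`, for example:
+
[source,terminal]
----
$ oc get mcp worker-realtime -w
----
+
For {sno} and compact clusters, watch the `master` pool instead.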

. Retrieve the name of the generated `RuntimeClass` resource from the `status.runtimeClass` field of the `PerformanceProfile` object:
+
[source,terminal]
----
$ oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{"\n"}'
----
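+
The runtime class name is derived from the performance profile name. For the `profile-1` example in this procedure, the output is similar to the following:
+
.Example output
[source,terminal]
----
performance-profile-1
----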

. Set the `RuntimeClass` name that you retrieved in the previous step as the default container runtime class for the `virt-launcher` pods by patching the `HyperConverged` custom resource (CR):
+
[source,terminal,subs="attributes+"]
----
$ oc patch hyperconverged kubevirt-hyperconverged -n {CNVNamespace} \
  --type='json' -p='[{"op": "add", "path": "/spec/defaultRuntimeClass", "value":"<runtimeclass_name>"}]'
----
+
[NOTE]
====
Editing the `HyperConverged` CR changes a global setting that affects all VMs that are created after the change is applied.
====
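+
Optionally, you can verify the new default by reading the value back from the CR, for example:
+
[source,terminal,subs="attributes+"]
----
$ oc get hyperconverged kubevirt-hyperconverged -n {CNVNamespace} -o jsonpath='{.spec.defaultRuntimeClass}{"\n"}'
----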

. If your real-time-enabled compute nodes use simultaneous multithreading (SMT), enable the `alignCPUs` feature gate by patching the `HyperConverged` CR:
+
[source,terminal,subs="attributes+"]
----
$ oc patch hyperconverged kubevirt-hyperconverged -n {CNVNamespace} \
  --type='json' -p='[{"op": "replace", "path": "/spec/featureGates/alignCPUs", "value": true}]'
----
+
[NOTE]
====
Enabling `alignCPUs` allows {VirtProductName} to request up to two additional dedicated CPUs to bring the total CPU count to an even parity when using emulator thread isolation.
====
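+
Optionally, you can confirm that the feature gate is enabled by reading it back from the CR, for example:
+
[source,terminal,subs="attributes+"]
----
$ oc get hyperconverged kubevirt-hyperconverged -n {CNVNamespace} -o jsonpath='{.spec.featureGates.alignCPUs}{"\n"}'
----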