Applying a performance profile allows you to make use of the workload partitioning feature. An appropriately configured performance profile specifies the `isolated` and `reserved` CPUs. The recommended way to create a performance profile is to use the Performance Profile Creator (PPC) tool.
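
For reference, a minimal performance profile that sets the `isolated` and `reserved` CPUs might look like the following sketch. The profile name and CPU ranges are illustrative assumptions only; generate values appropriate for your hardware with the PPC tool:

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-performanceprofile # illustrative name
spec:
  cpu:
    isolated: "4-15" # example range: CPUs left for user workloads
    reserved: "0-3"  # example range: CPUs reserved for platform and management pods
  nodeSelector:
    node-role.kubernetes.io/worker: ""
----
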
With workload partitioning, cluster management pods are annotated so that they are pinned to the specified CPU affinity. These pods operate normally within the minimum size CPU configuration specified by the `reserved` value in the performance profile. Additional Day 2 Operators that make use of workload partitioning should be taken into account when calculating how many reserved CPU cores to set aside for the platform.
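
As an illustrative sketch, the partitioning relies on pod and namespace annotations: platform pods carry the `target.workload.openshift.io/management` annotation, and their namespace is annotated with `workload.openshift.io/allowed: management`. The pod name, namespace, and image below are hypothetical:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example-platform-pod   # hypothetical name
  namespace: openshift-example # the namespace itself must carry workload.openshift.io/allowed: management
  annotations:
    target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
spec:
  containers:
  - name: example
    image: registry.example.com/example:latest
    resources:
      requests:
        cpu: 100m # rewritten to management.workload.openshift.io/cores when partitioning is enabled
        memory: 100Mi
----
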
Workload partitioning isolates user workloads from platform workloads using standard Kubernetes scheduling capabilities.
[NOTE]
====
Workload partitioning can only be enabled during cluster installation. You cannot disable workload partitioning postinstallation.
====
Use this procedure to enable workload partitioning cluster-wide:
.Procedure
* In the `install-config.yaml` file, add the additional field `cpuPartitioningMode` and set it to `AllNodes`:
+
[source,yaml]
----
apiVersion: v1
baseDomain: devcluster.openshift.com
cpuPartitioningMode: AllNodes <1>
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
----
<1> Sets up a cluster for CPU partitioning at install time. The default value is `None`.
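
.Verification
After installation completes, one way to confirm that CPU partitioning took effect is to check that nodes advertise the `management.workload.openshift.io/cores` resource, for example:

[source,terminal]
----
$ oc describe node <node_name> | grep management.workload.openshift.io/cores
----
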
In resource-constrained environments, you can use workload partitioning to isolate {product-title} services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs.
Workload partitioning separates compute node CPU resources into distinct CPU sets. The primary objective is to keep platform pods on the specified cores to avoid interrupting the CPUs the customer workloads are running on.
Workload partitioning isolates {product-title} services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. This ensures that the remaining CPUs in the cluster deployment are untouched and available exclusively for non-platform workloads. The minimum number of reserved CPUs required for the cluster management is four CPU Hyper-Threads (HTs).
When workload partitioning is enabled, a node admission webhook prevents nodes that are not configured correctly from joining the cluster. The machine config pools for the control plane and worker are supplied with configurations for nodes to use, and adding new nodes to these pools ensures that they are correctly configured before they join the cluster.
Currently, nodes must have uniform configurations per machine config pool to ensure that correct CPU affinity is set across all nodes within that pool. After admission, nodes within the cluster identify themselves as supporting a new resource type called `management.workload.openshift.io/cores` and accurately report their CPU capacity. Workload partitioning can be enabled during cluster installation only by adding the additional field `cpuPartitioningMode` to the `install-config.yaml` file.
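
For illustration, the extended resource then appears in the node status alongside `cpu` and `memory`. The following excerpt is hypothetical, for a node with 8 CPUs:

[source,yaml]
----
# Hypothetical excerpt from the output of `oc get node <node_name> -o yaml`
status:
  capacity:
    cpu: "8"
    management.workload.openshift.io/cores: "8000" # hypothetical value advertised for the management partition
    memory: 32Gi
----
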
When workload partitioning is enabled, the `management.workload.openshift.io/cores` resource allows the scheduler to correctly assign pods based on the `cpushares` capacity of the host, not just the default `cpuset`. This ensures more precise allocation of resources for workload partitioning scenarios.
Workload partitioning ensures that CPU requests and limits specified in the pod's configuration are respected. In {product-title} 4.16 or later, accurate CPU usage limits are set for platform pods through CPU partitioning. Because workload partitioning uses the extended resource type `management.workload.openshift.io/cores`, the values for requests and limits are the same, due to a Kubernetes requirement for extended resources. However, the annotations modified by workload partitioning correctly reflect the desired limits.
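
To make this concrete, the following is a hypothetical container spec for a platform pod after mutation: the CPU request has been replaced by the extended resource with equal request and limit, and the original CPU value is recorded in a `resources.workload.openshift.io/<container_name>` annotation. The container name and values are illustrative:

[source,yaml]
----
metadata:
  annotations:
    # Annotation added by workload partitioning; the cpushares value is illustrative
    resources.workload.openshift.io/example-container: '{"cpushares": 400}'
spec:
  containers:
  - name: example-container
    resources:
      requests:
        management.workload.openshift.io/cores: "400"
        memory: 100Mi
      limits:
        management.workload.openshift.io/cores: "400" # must equal the request for extended resources
        memory: 100Mi
----
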
[NOTE]
====
Extended resources cannot be overcommitted, so request and limit must be equal if both are present in a container spec.
====
Workload partitioning introduces an extended resource type, `management.workload.openshift.io/cores`, for platform pods. The kubelet advertises this resource, and the CPU requests of pods allocated to the management pool are accounted for within it.

.Additional resources
* For the recommended workload partitioning configuration for {sno} clusters, see xref:../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-sno-du-enabling-workload-partitioning_sno-configure-for-vdu[Workload partitioning].