Commit 57cd24e

Merge pull request #75078 from kquinn1204/TELCODOCS-1771
TELCODOCS-1771: Extend workload partitioning to support CPU limits
2 parents fed5de3 + c564cc7

File tree: 3 files changed, +77 −39 lines changed
modules/create-perf-profile-workload-partitioning.adoc

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/enabling-workload-partitioning.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="performance-profile-workload-partitioning_{context}"]
+= Performance profiles and workload partitioning
+
+Applying a performance profile allows you to make use of the workload partitioning feature. An appropriately configured performance profile specifies the `isolated` and `reserved` CPUs. The recommended way to create a performance profile is to use the Performance Profile Creator (PPC) tool.
+
+
+
+
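The `isolated` and `reserved` CPU sets that this module references are specified under `spec.cpu` in the profile. As a minimal illustrative sketch (not part of this commit; the profile name and CPU ranges are assumptions for an 8-CPU worker node):

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-workload-partitioning-profile # hypothetical name
spec:
  cpu:
    # Platform and cluster management pods run on the reserved set.
    reserved: "0-3"
    # The remaining CPUs are kept free for user workloads.
    isolated: "4-7"
  nodeSelector:
    node-role.kubernetes.io/worker: ""
----

In practice, the PPC tool derives these values from must-gather data for the target cluster rather than you choosing them by hand.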
modules/enabling-workload-partitioning.adoc

Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/enabling-workload-partitioning.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="enabling-workload-partitioning_{context}"]
+= Enabling workload partitioning
+
+With workload partitioning, cluster management pods are annotated so that they are correctly partitioned into the specified CPU affinity. These pods operate normally within the minimum size CPU configuration specified by the `reserved` value in the performance profile. Take any additional Day 2 Operators that use workload partitioning into account when you calculate how many reserved CPU cores to set aside for the platform.
+
+Workload partitioning isolates user workloads from platform workloads by using standard Kubernetes scheduling capabilities.
+
+[NOTE]
+====
+Workload partitioning can be enabled only during cluster installation. You cannot disable workload partitioning postinstallation.
+====
+
+Use this procedure to enable workload partitioning cluster-wide:
+
+.Procedure
+
+* In the `install-config.yaml` file, add the additional field `cpuPartitioningMode` and set it to `AllNodes`.
++
+[source,yaml]
+----
+apiVersion: v1
+baseDomain: devcluster.openshift.com
+cpuPartitioningMode: AllNodes <1>
+compute:
+- architecture: amd64
+  hyperthreading: Enabled
+  name: worker
+  platform: {}
+  replicas: 3
+controlPlane:
+  architecture: amd64
+  hyperthreading: Enabled
+  name: master
+  platform: {}
+  replicas: 3
+----
+<1> Sets up a cluster for CPU partitioning at install time. The default value is `None`.
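For context on the annotations this module mentions: platform pods are steered into the reserved CPU set through a namespace annotation and a pod annotation. The following sketch is illustrative and not part of this commit; the namespace, pod, and image names are hypothetical, and the annotation keys are those used by {product-title} for workload partitioning:

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: example-operator-namespace # hypothetical
  annotations:
    # Marks the namespace as eligible for the management partition.
    workload.openshift.io/allowed: management
---
apiVersion: v1
kind: Pod
metadata:
  name: example-operator-pod # hypothetical
  namespace: example-operator-namespace
  annotations:
    # Asks the scheduler to place this pod on the reserved (management) CPUs.
    target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
spec:
  containers:
  - name: example
    image: registry.example.com/example:latest # hypothetical
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
----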

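As a possible postinstallation check (not part of this commit, and assuming the `cpuPartitioning` status field that recent {product-title} releases expose on the cluster `Infrastructure` resource), the configured mode is reflected in the cluster infrastructure status:

[source,yaml]
----
# Abridged output of: oc get infrastructures cluster -o yaml
apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
  name: cluster
status:
  # AllNodes indicates CPU partitioning is enabled cluster-wide; None is the default.
  cpuPartitioning: AllNodes
----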
scalability_and_performance/enabling-workload-partitioning.adoc

Lines changed: 22 additions & 39 deletions
@@ -6,59 +6,42 @@ include::_attributes/common-attributes.adoc[]
 
 toc::[]
 
-In resource-constrained environments, you can use workload partitioning to isolate {product-title} services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs.
+Workload partitioning separates compute node CPU resources into distinct CPU sets. The primary objective is to keep platform pods on their designated cores so that they do not interrupt the CPUs that customer workloads run on.
 
-The minimum number of reserved CPUs required for the cluster management is four CPU Hyper-Threads (HTs).
-With workload partitioning, you annotate the set of cluster management pods and a set of typical add-on Operators for inclusion in the cluster management workload partition.
-These pods operate normally within the minimum size CPU configuration.
-Additional Operators or workloads outside of the set of minimum cluster management pods require additional CPUs to be added to the workload partition.
+Workload partitioning isolates {product-title} services, cluster management workloads, and infrastructure pods so that they run on a reserved set of CPUs. This ensures that the remaining CPUs in the cluster deployment are untouched and available exclusively for non-platform workloads. The minimum number of reserved CPUs required for cluster management is four CPU Hyper-Threads (HTs).
 
-Workload partitioning isolates user workloads from platform workloads using standard Kubernetes scheduling capabilities.
+Nodes that are not configured correctly for workload partitioning are not permitted to join the cluster: a node admission webhook rejects them. When the workload partitioning feature is enabled, the machine config pools for the control plane and workers are supplied with configurations for the nodes to use, so new nodes added to these pools are correctly configured before they join the cluster.
 
-The following changes are required for workload partitioning:
+Currently, nodes must have uniform configurations per machine config pool to ensure that the correct CPU affinity is set across all nodes within that pool. After admission, nodes within the cluster identify themselves as supporting a new resource type called `management.workload.openshift.io/cores` and accurately report their CPU capacity. You can enable workload partitioning only during cluster installation, by adding the `cpuPartitioningMode` field to the `install-config.yaml` file.
+
+When workload partitioning is enabled, the `management.workload.openshift.io/cores` resource allows the scheduler to correctly assign pods based on the `cpushares` capacity of the host, not just the default `cpuset`. This ensures more precise allocation of resources for workload partitioning scenarios.
+
+Workload partitioning ensures that the CPU requests and limits specified in a pod's configuration are respected. In {product-title} 4.16 or later, accurate CPU usage limits are set for platform pods through CPU partitioning. Because workload partitioning uses the extended resource type `management.workload.openshift.io/cores`, Kubernetes requires the values for requests and limits to be equal. However, the annotations modified by workload partitioning correctly reflect the desired limits.
 
-. In the `install-config.yaml` file, add the additional field: `cpuPartitioningMode`.
-+
-[source,yaml]
-----
-apiVersion: v1
-baseDomain: devcluster.openshift.com
-cpuPartitioningMode: AllNodes <1>
-compute:
-- architecture: amd64
-  hyperthreading: Enabled
-  name: worker
-  platform: {}
-  replicas: 3
-controlPlane:
-  architecture: amd64
-  hyperthreading: Enabled
-  name: master
-  platform: {}
-  replicas: 3
-----
-<1> Sets up a cluster for CPU partitioning at install time. The default value is `None`.
-+
 [NOTE]
 ====
-Workload partitioning can only be enabled during cluster installation. You cannot disable workload partitioning postinstallation.
+Extended resources cannot be overcommitted, so request and limit must be equal if both are present in a container spec.
 ====
 

-. In the performance profile, specify the `isolated` and `reserved` CPUs.
-+
-.Recommended performance profile configuration
+include::modules/enabling-workload-partitioning.adoc[leveloffset=+1]
+
+include::modules/create-perf-profile-workload-partitioning.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-about-the-profile-creator-tool_cnf-low-latency-perf-profile[About the Performance Profile Creator]
+
+
+== Sample performance profile configuration
 [source,yaml]
 ----
 include::snippets/ztp_PerformanceProfile.yaml[]
 ----
-+
-include::snippets/performance-profile-workload-partitioning.adoc[]
 
-Workload partitioning introduces an extended `management.workload.openshift.io/cores` resource type for platform pods.
-kubelet advertises the resources and CPU requests by pods allocated to the pool within the corresponding resource.
-When workload partitioning is enabled, the `management.workload.openshift.io/cores` resource allows the scheduler to correctly assign pods based on the `cpushares` capacity of the host, not just the default `cpuset`.
+include::snippets/performance-profile-workload-partitioning.adoc[]
 
 [role="_additional-resources"]
 .Additional resources
 
-* For the recommended workload partitioning configuration for {sno} clusters, see xref:../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-sno-du-enabling-workload-partitioning_sno-configure-for-vdu[Workload partitioning].
+* xref:../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-sno-du-enabling-workload-partitioning_sno-configure-for-vdu[Recommended single-node OpenShift cluster configuration for vDU application workloads -> Workload partitioning]
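To make the requests-and-limits behavior described in the new overview text concrete, the following sketch shows roughly what a platform container looks like after workload partitioning rewrites its CPU resources. It is illustrative and not part of this commit: the container name is hypothetical, and the exact value encodings (the `cpushares`/`cpulimit` annotation payload and the `cores` quantity) are assumptions about the webhook's behavior rather than guaranteed output:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  annotations:
    # The original CPU request and limit are preserved here, so the desired
    # limits stay visible even though the container spec now carries only
    # the extended resource.
    resources.workload.openshift.io/example-container: '{"cpushares": 102, "cpulimit": 200}'
spec:
  containers:
  - name: example-container
    resources:
      requests:
        # Extended resources cannot be overcommitted, so the request and
        # limit for management.workload.openshift.io/cores must be equal.
        management.workload.openshift.io/cores: "1"
        memory: 100Mi
      limits:
        management.workload.openshift.io/cores: "1"
        memory: 100Mi
----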
