Commit 088316d
Merge pull request #33927 from StephenJamesSmith/telcodocs-157
TELCODOCS-157: workload partitioning
2 parents dae529f + 52fa709 commit 088316d

10 files changed: +680 −1 lines changed

modules/cnf-deploying-the-du-infrastructure-profile.adoc

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 //
 // *cnf-provisioning-and-deploying-a-distributed-unit.adoc
 
-[id="scalability_and_performance/cnf-deploying-the-du-infrastructure-profile_{context}"]
+[id="cnf-deploying-the-du-infrastructure-profile_{context}"]
 = Deploying the DU infrastructure profile
 
 [id="cnf-creating-the-performance-addon-operator-and-du-performance-profile_{context}"]
Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/cnf-provisioning-and-installing-a-distributed-unit.adoc

[id="cnf-du-configuring-a-performance-profile-to-support-workload-partitioning_{context}"]
= Configuring a performance profile to support workload partitioning

After you configure workload partitioning, ensure that the Performance Addon Operator is installed and that a performance profile is configured.

The reserved CPU IDs in the performance profile must match the workload partitioning CPU IDs.
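For example, a performance profile whose `reserved` set matches the workload partitioning CPU IDs used in this change might look like the following sketch. The API version, profile name, and isolated CPU range are illustrative assumptions, not values taken from this commit:

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: workload-partitioning-profile   # illustrative name
spec:
  cpu:
    reserved: "0-1,52-53"    # must match the workload partitioning cpuset
    isolated: "2-51,54-103"  # remaining CPUs; adjust to the hardware
  nodeSelector:
    node-role.kubernetes.io/master: ""
----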
Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/cnf-provisioning-and-installing-a-distributed-unit.adoc

[id="cnf-du-configuring-workload-partitioning_{context}"]
= Configuring workload partitioning

The following procedure outlines a high-level, end-to-end workflow that installs a cluster with workload partitioning enabled and with pods correctly scheduled to run on the management CPU partition.

. Create a machine config manifest to configure CRI-O to partition management workloads. The `cpuset` that you specify must match the reserved `cpuset` that you specified in the Performance Addon Operator profile.

. Create another machine config manifest to write a configuration file for the kubelet to enable the same workload partition. The file is readable only by the kubelet.

. Run `openshift-install` to create the standard manifests, add the extra manifests from steps 1 and 2, and then create the cluster.

. For pods and namespaces that are correctly annotated, the CPU request values are zeroed out and converted to `<workload-type>.workload.openshift.io/cores`. This modified resource allows the pods to be constrained to the restricted CPUs.

. The single-node cluster starts with management components constrained to a subset of the available CPUs.
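The annotations referred to in step 4 might look like the following sketch. The pod annotation key `target.workload.openshift.io/management` matches the CRI-O `activation_annotation` in this change; the namespace annotation, the annotation value, and all names are illustrative assumptions:

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: example-managed   # illustrative name
  annotations:
    workload.openshift.io/allowed: management   # assumed namespace opt-in annotation
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod   # illustrative name
  namespace: example-managed
  annotations:
    # Matches the activation_annotation configured for CRI-O; the value is assumed.
    target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    resources:
      requests:
        cpu: 250m      # zeroed out and converted to management.workload.openshift.io/cores
        memory: 100Mi
----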
Lines changed: 81 additions & 0 deletions
@@ -0,0 +1,81 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/cnf-provisioning-and-installing-a-distributed-unit.adoc

[id="cnf-du-creating-a-machine-config-manifest-for-workload-partitioning_{context}"]
= Creating a machine config manifest for workload partitioning

Part of configuring workload partitioning requires you to provide a `MachineConfig` manifest during installation to configure CRI-O and kubelet for the workload types.

The manifest, without the encoded file content, looks like this:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 02-master-workload-partitioning
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,<01-workload-partitioning-content>
        mode: 420
        overwrite: true
        path: /etc/crio/crio.conf.d/01-workload-partitioning
        user:
          name: root
      - contents:
          source: data:text/plain;charset=utf-8;base64,<openshift-workload-pinning content>
        mode: 420
        overwrite: true
        path: /etc/kubernetes/openshift-workload-pinning
        user:
          name: root
----

The contents of `/etc/crio/crio.conf.d/01-workload-partitioning` should look like this:

[source,yaml]
----
[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { "cpushares" = 0, "cpuset" = "0-1,52-53" } <1>
----
<1> The `cpuset` value varies based on the installation. If hyperthreading is enabled, specify both threads of each core. The `cpuset` value must match the reserved CPU set specified in the performance profile.

This content must be base64 encoded and provided in the `01-workload-partitioning-content` in the preceding manifest.

The contents of `/etc/kubernetes/openshift-workload-pinning` should look like this:

[source,json]
----
{
  "management": {
    "cpuset": "0-1,52-53" <1>
  }
}
----
<1> The `cpuset` must match the value in `/etc/crio/crio.conf.d/01-workload-partitioning`.

This content must be base64 encoded and provided in the `openshift-workload-pinning-content` in the preceding manifest.

[NOTE]
====
The `cpuset` specified must match the reserved `cpuset` specified in the Performance Addon Operator profile.
====

[NOTE]
====
In this release, you must enable workload partitioning during installation for it to work correctly.
After it is enabled, changes to the machine configs that enable the feature are not supported.
====
Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/cnf-provisioning-and-deploying-a-distributed-unit.adoc

[id="cnf-du-crio-configuration-for-workload-partitioning_{context}"]
= CRI-O configuration for workload partitioning

To support workload partitioning, CRI-O provides new configuration settings. The configuration file is delivered to a host as part of a machine config.

[source,terminal]
----
[crio.runtime.workloads.{workload-type}]
activation_annotation = "target.workload.openshift.io/<workload-type>" <1>
annotation_prefix = "resources.workload.openshift.io" <2>
resources = { "cpushares" = 0, "cpuset" = "0-1,52-53" } <3>
----
<1> Use the `activation_annotation` field to match pods that should be treated as having the workload type. The annotation key on the pod is compared for an exact match against the value specified in the configuration file. In this release, the only supported workload type is `management`.

<2> The `annotation_prefix` is the start of the annotation key that passes settings from the admission hook down to CRI-O.

<3> The `resources` map associates annotation suffixes with default values. CRI-O defines a well-known set of resources; other values are not allowed. The `cpuset` value must match the kubelet configuration file and the reserved `cpuset` in the applied `PerformanceProfile`.

For the management workload, the configuration is as follows:

[source,terminal]
----
[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { "cpushares" = 0, "cpuset" = "0-1,52-53" }
----

Pods that have the `target.workload.openshift.io/management` annotation have their `cpuset` configured to the value from the matching workload configuration. The CPU shares for each container in the pod are configured according to the `management.workload.openshift.io/cores` resource limit, which ensures that the pod's CPU shares are enforced.
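The resource conversion described above can be sketched as a before-and-after view of one container's `resources` stanza. The shape of the rewrite follows the text of this module; the converted values shown are assumptions for illustration, not taken from this commit:

[source,yaml]
----
# As authored by the user:
resources:
  requests:
    cpu: 250m
    memory: 100Mi

# After the admission hook rewrites the container (shape only; the converted
# cores value is an assumption, not specified by this commit):
resources:
  requests:
    cpu: "0"                                     # CPU request zeroed out
    management.workload.openshift.io/cores: "1"  # converted workload resource
    memory: 100Mi
  limits:
    management.workload.openshift.io/cores: "1"
----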
