Commit 2af5bd4

Merge pull request #43406 from StephenJamesSmith/TELCODOCS-299-workload-partitioning
TELCODOCS-299-workload-partitioning
2 parents e0f3fbc + 1e735c7 commit 2af5bd4

File tree

3 files changed: +97 -0 lines changed

_topic_maps/_topic_map.yml

Lines changed: 3 additions & 0 deletions

@@ -2195,6 +2195,9 @@ Topics:
     - Name: Provisioning and deploying a distributed unit (DU)
       File: cnf-provisioning-and-deploying-a-distributed-unit
       Distros: openshift-webscale
+    - Name: Workload partitioning on single node OpenShift
+      File: sno-du-enabling-workload-partitioning-on-single-node-openshift
+      Distros: openshift-origin,openshift-enterprise
     - Name: Deploying distributed units at scale in a disconnected environment
       File: ztp-deploying-disconnected
       Distros: openshift-origin,openshift-enterprise
modules/sno-du-enabling-workload-partitioning.adoc

Lines changed: 68 additions & 0 deletions

@@ -0,0 +1,68 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-enabling-workload-partitioning-on-single-node-openshift.adoc

:_content-type: PROCEDURE
[id="sno-du-enabling-workload-partitioning_{context}"]
= Enabling workload partitioning

Use the following procedure to enable workload partitioning for your single node deployments.

.Procedure

. To enable workload partitioning, provide a `MachineConfig` manifest during installation that configures CRI-O and the kubelet to know about the workload types. The following example shows a manifest without the encoded file content:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 02-master-workload-partitioning
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,encoded-content-here
        mode: 420
        overwrite: true
        path: /etc/crio/crio.conf.d/01-workload-partitioning
        user:
          name: root
      - contents:
          source: data:text/plain;charset=utf-8;base64,encoded-content-here
        mode: 420
        overwrite: true
        path: /etc/kubernetes/openshift-workload-pinning
        user:
          name: root
----
. Provide the contents of `/etc/crio/crio.conf.d/01-workload-partitioning` as the workload partitioning encoded content. The `cpuset` value varies based on the deployment:
+
[source,terminal]
----
cat <<EOF | base64 -w0
[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { "cpushares" = 0, "cpuset" = "0-1,52-53" }
EOF
----

. Provide the contents of `/etc/kubernetes/openshift-workload-pinning` as the workload pinning encoded content. The `cpuset` value varies based on the deployment:
+
[source,terminal]
----
cat <<EOF | base64 -w0
{
  "management": {
    "cpuset": "0-1,52-53"
  }
}
EOF
----
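The two encoding steps above can be sketched end to end as one script that produces both base64 payloads for the `MachineConfig` data URLs. This is an illustrative sketch, not part of the commit: the `CPUSET` value reuses the example's `0-1,52-53` and must be adapted to your hardware, and the variable names are hypothetical.

```shell
#!/usr/bin/env bash
# Sketch: generate the base64 payloads referenced by the MachineConfig above.
# CPUSET reuses the docs example "0-1,52-53"; adjust it for your CPU topology.
set -euo pipefail

CPUSET="0-1,52-53"

# CRI-O workload configuration destined for
# /etc/crio/crio.conf.d/01-workload-partitioning
crio_b64=$(cat <<EOF | base64 -w0
[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { "cpushares" = 0, "cpuset" = "${CPUSET}" }
EOF
)

# Kubelet pinning configuration destined for
# /etc/kubernetes/openshift-workload-pinning
kubelet_b64=$(cat <<EOF | base64 -w0
{
  "management": {
    "cpuset": "${CPUSET}"
  }
}
EOF
)

# Substitute these into the manifest's "source:" fields:
echo "data:text/plain;charset=utf-8;base64,${crio_b64}"
echo "data:text/plain;charset=utf-8;base64,${kubelet_b64}"
```

Decoding either payload with `base64 -d` should round-trip to the plain-text configuration shown in the steps above.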
scalability_and_performance/sno-du-enabling-workload-partitioning-on-single-node-openshift.adoc

Lines changed: 26 additions & 0 deletions

@@ -0,0 +1,26 @@
:_content-type: ASSEMBLY
[id="sno-du-enabling-workload-partitioning-on-single-node-openshift"]
= Workload partitioning on single node OpenShift
include::modules/common-attributes.adoc[]
:context: sno-du-enabling-workload-partitioning-on-single-node-openshift

toc::[]

In resource-constrained environments, such as single node production deployments, it is advantageous to reserve most of the CPU resources for your own workloads and to configure {product-title} to run on a fixed number of CPUs within the host. In these environments, management workloads, including the control plane, must be configured to use fewer resources than they would by default in normal clusters. You can isolate the {product-title} services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs.

When you use workload partitioning, the CPU resources that {product-title} uses for cluster management are isolated to a partitioned set of CPU resources on a single node cluster. This partitioning restricts cluster management functions to the defined number of CPUs, and all cluster management functions operate solely within that `cpuset` configuration.

The minimum number of reserved CPUs required for the management partition of a single node cluster is four CPU hyper-threads (HTs). The set of pods that make up the baseline {product-title} installation, plus a set of typical add-on Operators, are annotated for inclusion in the management workload partition. These pods operate normally within the minimum-size `cpuset` configuration. Including Operators or workloads outside the set of accepted management pods requires adding CPU HTs to that partition.

Workload partitioning isolates user workloads from platform workloads by using the normal scheduling capabilities of Kubernetes to manage the number of pods that can be placed onto those cores, and it avoids mixing cluster management workloads and user workloads.

When you use workload partitioning, you must install the Performance Addon Operator and apply the performance profile:

* Workload partitioning pins the {product-title} infrastructure pods to a defined `cpuset` configuration.
* The Performance Addon Operator performance profile pins the systemd services to a defined `cpuset` configuration.
* These `cpuset` configurations must match.
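For illustration, the matching requirement in the list above might look like the following Performance Addon Operator profile, where the `reserved` CPU set equals the workload partitioning `cpuset` from the enablement procedure. The profile name and the `isolated` range are assumptions for this example, not part of the commit:

```yaml
# Assumed example: a PerformanceProfile whose reserved CPUs match the
# workload partitioning cpuset ("0-1,52-53" in the docs example).
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-performanceprofile    # hypothetical name
spec:
  cpu:
    reserved: "0-1,52-53"    # must match the workload partitioning cpuset
    isolated: "2-51,54-103"  # remaining CPUs for user workloads (example topology)
  nodeSelector:
    node-role.kubernetes.io/master: ""
```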
Workload partitioning introduces a new extended resource of `<workload-type>.workload.openshift.io/cores` for each defined CPU pool, or workload type. The kubelet advertises these new resources, and CPU requests by pods allocated to the pool are accounted for within the corresponding resource rather than the typical `cpu` resource. When workload partitioning is enabled, the `<workload-type>.workload.openshift.io/cores` resource allows access to the CPU capacity of the host, not just the default CPU pool.

include::modules/sno-du-enabling-workload-partitioning.adoc[leveloffset=+1]
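To illustrate how a pod opts into the management partition described above, a management pod carries the activation annotation that the CRI-O configuration in this commit names (`target.workload.openshift.io/management`); its `cpu` request is then accounted against the `management.workload.openshift.io/cores` extended resource. The pod name, namespace, image, and annotation value shown here are hypothetical examples, not part of the commit:

```yaml
# Hypothetical example: a pod annotated for the management workload partition.
apiVersion: v1
kind: Pod
metadata:
  name: example-management-pod    # hypothetical
  namespace: openshift-example    # hypothetical
  annotations:
    # Matches the CRI-O activation_annotation from the enablement procedure;
    # the JSON value shown is an assumed example.
    target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest    # placeholder image
    resources:
      requests:
        cpu: 100m       # accounted against management.workload.openshift.io/cores
        memory: 100Mi
```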
