Commit c44f248

TELCODOCS-1013-fixes
1 parent ca530dd commit c44f248

2 files changed (+22, −14)

modules/ztp-sno-du-enabling-workload-partitioning.adoc (3 additions & 4 deletions)

@@ -26,9 +26,8 @@ activation_annotation = "target.workload.openshift.io/management"
 annotation_prefix = "resources.workload.openshift.io"
 resources = { "cpushares" = 0, "cpuset" = "0-1,52-53" } <1>
 ----
-<1> The `CPUs` value varies based on the installation.
-+
-If Hyper-Threading is enabled, specify both threads for each core. The `CPUs` value must match the reserved CPU set specified in the performance profile.
+<1> The `cpuset` value varies based on the installation.
+If Hyper-Threading is enabled, specify both threads for each core. The `cpuset` value must match the reserved CPUs that you define in the `spec.cpu.reserved` field in the performance profile.
 
 * When configured in the cluster, the contents of `/etc/kubernetes/openshift-workload-pinning` should look like this:
 +

@@ -40,7 +39,7 @@ If Hyper-Threading is enabled, specify both threads for each core. The `CPUs` va
 }
 }
 ----
-<1> The `cpuset` must match the `CPUs` value in `/etc/crio/crio.conf.d/01-workload-partitioning`.
+<1> The `cpuset` must match the `cpuset` value in `/etc/crio/crio.conf.d/01-workload-partitioning`.
 
 .Verification
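The `cpuset` values in this patch use the standard Linux cpuset list syntax (comma-separated CPU IDs and ranges, such as `0-1,52-53`). The following minimal Python sketch is illustrative only and is not part of either file in this commit; the `reserved` value is a hypothetical `spec.cpu.reserved` setting. It shows how the list syntax expands to individual CPUs and how the required match between the CRI-O `cpuset` and the performance profile's reserved CPUs can be checked:

```python
def expand_cpuset(spec: str) -> set[int]:
    """Expand a Linux cpuset list string such as "0-1,52-53" into a set of CPU IDs."""
    cpus: set[int] = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus


# The CRI-O workload partitioning cpuset and the performance profile's
# spec.cpu.reserved value must expand to the same set of CPUs.
crio_cpuset = "0-1,52-53"  # from /etc/crio/crio.conf.d/01-workload-partitioning
reserved = "0,1,52,53"     # hypothetical spec.cpu.reserved value for illustration
assert expand_cpuset(crio_cpuset) == expand_cpuset(reserved)
```

With Hyper-Threading enabled, both sibling threads of each reserved core must appear in the expanded set, which is why `0-1,52-53` pairs CPUs 0 and 1 with their sibling threads 52 and 53 on this particular host layout.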

scalability_and_performance/sno-du-enabling-workload-partitioning-on-single-node-openshift.adoc (19 additions & 10 deletions)

@@ -6,22 +6,31 @@ include::_attributes/common-attributes.adoc[]
 
 toc::[]
 
-In resource-constrained environments, such as {sno} deployments, it is advantageous to reserve most of the CPU resources for your own workloads and configure {product-title} to run on a fixed number of CPUs within the host. In these environments, management workloads, including the control plane, need to be configured to use fewer resources than they might by default in normal clusters. You can isolate the {product-title} services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs.
+In resource-constrained environments, such as {sno} deployments, use workload partitioning to isolate {product-title} services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs.
 
-When you use workload partitioning, the CPU resources used by {product-title} for cluster management are isolated to a partitioned set of CPU resources on a single-node cluster. This partitioning isolates cluster management functions to the defined number of CPUs. All cluster management functions operate solely on that `cpuset` configuration.
+The minimum number of reserved CPUs required for cluster management in {sno} is four CPU Hyper-Threads (HTs).
+With workload partitioning, you annotate the set of cluster management pods and a set of typical add-on Operators for inclusion in the cluster management workload partition.
+These pods operate normally within the minimum size CPU configuration.
+Additional Operators or workloads outside of the set of minimum cluster management pods require additional CPUs to be added to the workload partition.
 
-The minimum number of reserved CPUs required for the management partition for a single-node cluster is four CPU Hyper threads (HTs). The set of pods that make up the baseline {product-title} installation and a set of typical add-on Operators are annotated for inclusion in the management workload partition. These pods operate normally within the minimum size `cpuset` configuration. Inclusion of Operators or workloads outside of the set of accepted management pods requires additional CPU HTs to be added to that partition.
+Workload partitioning isolates user workloads from platform workloads using standard Kubernetes scheduling capabilities.
 
-Workload partitioning isolates the user workloads away from the platform workloads using the normal scheduling capabilities of Kubernetes to manage the number of pods that can be placed onto those cores, and avoids mixing cluster management workloads and user workloads.
+The following is an overview of the configurations required for workload partitioning:
 
-When applying workload partitioning, use the Node Tuning Operator to implement the performance profile:
+* Workload partitioning that uses `/etc/crio/crio.conf.d/01-workload-partitioning` pins the {product-title} infrastructure pods to a defined `cpuset` configuration.
 
-* Workload partitioning pins the {product-title} infrastructure pods to a defined `cpuset` configuration.
-* The performance profile pins the systemd services to a defined `cpuset` configuration.
-* This `cpuset` configuration must match.
+* The performance profile pins cluster services such as systemd and kubelet to the CPUs that are defined in the `spec.cpu.reserved` field.
++
+[NOTE]
+====
+Using the Node Tuning Operator, you can configure the performance profile to also pin system-level apps for a complete workload partitioning configuration on the node.
+====
 
-Workload partitioning introduces a new extended resource of `<workload-type>.workload.openshift.io/cores`
-for each defined CPU pool, or workload-type. Kubelet advertises these new resources and CPU requests by pods allocated to the pool are accounted for within the corresponding resource rather than the typical `cpu` resource. When workload partitioning is enabled, the `<workload-type>.workload.openshift.io/cores` resource allows access to the CPU capacity of the host, not just the default CPU pool.
+* The CPUs that you specify in the performance profile `spec.cpu.reserved` field and the workload partitioning `cpuset` field must match.
+
+Workload partitioning introduces an extended `<workload-type>.workload.openshift.io/cores` resource for each defined CPU pool, or _workload type_.
+Kubelet advertises the resources, and CPU requests by pods allocated to the pool are accounted for within the corresponding resource rather than the typical `cpu` resource.
+When workload partitioning is enabled, the `<workload-type>.workload.openshift.io/cores` resource allows access to the CPU capacity of the host, not just the default CPU pool.
 
 [role="_additional-resources"]
 .Additional resources
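For context on the `spec.cpu.reserved` field that the rewritten text references, a performance profile shaped roughly like the following sketch would pair with the CRI-O configuration in the first file. This is an illustrative assumption, not part of the patch; the metadata name and the `isolated` range are hypothetical values for a 104-HT host:

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: workload-partitioning-profile  # illustrative name
spec:
  cpu:
    # Must match the "cpuset" value in
    # /etc/crio/crio.conf.d/01-workload-partitioning
    reserved: "0-1,52-53"
    # Remaining CPUs, left for user workloads (hypothetical host layout)
    isolated: "2-51,54-103"
```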
