
Commit e12884a

Merge pull request #95111 from amolnar-rh/OCPBUGS-54188
OCPBUGS-54188: Update Pod interactions with Topology Manager policies
2 parents ba0fb2b + 26eafe2 commit e12884a

4 files changed: +20 additions, -14 deletions


modules/pod-interactions-with-topology-manager.adoc

Lines changed: 7 additions & 5 deletions
@@ -5,7 +5,7 @@
 [id="pod-interactions-with-topology-manager_{context}"]
 = Pod interactions with Topology Manager policies

-The example `Pod` specs below help illustrate pod interactions with Topology Manager.
+The example `Pod` specs illustrate pod interactions with Topology Manager.

 The following pod runs in the `BestEffort` QoS class because no resource requests or limits are specified.

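For reference, the `BestEffort` spec that this paragraph refers to sits outside the hunk. A minimal sketch of a pod in that class, assuming an illustrative container name and image (neither appears in the diff):

[source,yaml]
----
spec:
  containers:
  - name: nginx   # illustrative container name
    image: nginx  # no resources stanza at all, so the pod is classified as BestEffort
----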
@@ -32,9 +32,11 @@ spec:
       memory: "100Mi"
 ----

-If the selected policy is anything other than `none`, Topology Manager would not consider either of these `Pod` specifications.
+If the selected policy is anything other than `none`, Topology Manager processes all pods but enforces resource alignment only for the `Guaranteed` QoS `Pod` specification.
+When the Topology Manager policy is set to `none`, the relevant containers are pinned to any available CPU without considering NUMA affinity. This is the default behavior and does not optimize for performance-sensitive workloads.
+Other values enable the use of topology awareness information from device plugins and core resources, such as CPU and memory. The Topology Manager attempts to align the CPU, memory, and device allocations according to the topology of the node when the policy is set to a value other than `none`. For more information about the available values, see _Topology Manager policies_.

-The last example pod below runs in the Guaranteed QoS class because requests are equal to limits.
+The following example pod runs in the `Guaranteed` QoS class because requests are equal to limits.

 [source,yaml]
 ----
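The `memory: "100Mi"` context line above closes the module's second example, which runs in the `Burstable` QoS class because its requests are lower than its limits. A sketch of such a spec, assuming illustrative values (only the `100Mi` request is visible in the diff):

[source,yaml]
----
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"  # limit higher than the request, so the pod is Burstable, not Guaranteed
      requests:
        memory: "100Mi"
----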
@@ -53,6 +55,6 @@ spec:
       example.com/device: "1"
 ----

-Topology Manager would consider this pod. The Topology Manager would consult the hint providers, which are CPU Manager and Device Manager, to get topology hints for the pod.
+Topology Manager would consider this pod. The Topology Manager would consult the Hint Providers, which are the CPU Manager, the Device Manager, and the Memory Manager, to get topology hints for the pod.

-Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage.
+Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage.
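The `example.com/device: "1"` context line belongs to the `Guaranteed` example that these paragraphs describe. A sketch of a spec in that class, assuming illustrative CPU and memory values (only the device request is visible in the diff):

[source,yaml]
----
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "2"
        example.com/device: "1"
      requests:                  # requests equal limits, so the pod is Guaranteed
        memory: "200Mi"
        cpu: "2"
        example.com/device: "1"
----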

modules/topology-manager-policies.adoc

Lines changed: 6 additions & 7 deletions
@@ -3,7 +3,7 @@
 // * scaling_and_performance/using-topology-manager.adoc
 // * post_installation_configuration/node-tasks.adoc

-[id="topology_manager_policies_{context}"]
+[id="topology-manager-policies_{context}"]
 = Topology Manager policies

 Topology Manager aligns `Pod` resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the `Pod` resources.
@@ -16,15 +16,14 @@ This is the default policy and does not perform any topology alignment.

 `best-effort` policy::

-For each container in a pod with the `best-effort` topology management policy, kubelet calls each Hint Provider to discover their resource
-availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node.
+For each container in a pod with the `best-effort` topology management policy, kubelet tries to align all the required resources on a NUMA node according to the preferred NUMA node affinity for that container. Even if the allocation is not possible due to insufficient resources, the Topology Manager still admits the pod, but the allocation is spread across other NUMA nodes.

 `restricted` policy::

-For each container in a pod with the `restricted` topology management policy, kubelet calls each Hint Provider to discover their resource
-availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not
-preferred, Topology Manager rejects this pod from the node, resulting in a pod in a `Terminated` state with a pod admission failure.
+For each container in a pod with the `restricted` topology management policy, kubelet determines the theoretical minimum number of NUMA nodes that can fulfill the request. If the actual allocation requires more than that number of NUMA nodes, the Topology Manager rejects the admission, placing the pod in a `Terminated` state. If the allocation can be satisfied with that minimum number of NUMA nodes, the Topology Manager admits the pod and the pod starts running.

 `single-numa-node` policy::

-For each container in a pod with the `single-numa-node` topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure.
+For each container in a pod with the `single-numa-node` topology management policy, kubelet admits the pod if all the resources required by the pod can be allocated on the same NUMA node. If a single NUMA node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a `Terminated` state with a pod admission failure.
+
+

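The policy itself is selected through the kubelet configuration covered by the setting-up-topology-manager module that using-cpu-manager.adoc includes below. A sketch of the relevant `KubeletConfig` fragment, assuming the `single-numa-node` policy; the resource name and the pool selector label are illustrative:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpumanager-enabled                  # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled    # illustrative pool label
  kubeletConfig:
    cpuManagerPolicy: static                # static CPU Manager policy is required for exclusive CPU pinning
    cpuManagerReconcilePeriod: 5s
    topologyManagerPolicy: single-numa-node # none | best-effort | restricted | single-numa-node
----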
scalability_and_performance/telco-core-rds.adoc

Lines changed: 1 addition & 1 deletion
@@ -174,7 +174,7 @@ include::modules/telco-core-scheduling.adoc[leveloffset=+2]

 * xref:../scalability_and_performance/cnf-numa-aware-scheduling.adoc#cnf-numa-aware-scheduling[Scheduling NUMA-aware workloads]

-* xref:../scalability_and_performance/using-cpu-manager.adoc#topology_manager_policies_using-cpu-manager-and-topology_manager[Topology Manager policies]
+* xref:../scalability_and_performance/using-cpu-manager.adoc#topology-manager-policies_using-cpu-manager-and-topology-manager[Topology Manager policies]

 include::modules/telco-core-node-configuration.adoc[leveloffset=+2]

scalability_and_performance/using-cpu-manager.adoc

Lines changed: 6 additions & 1 deletion
@@ -2,7 +2,7 @@
 [id='using-cpu-manager']
 = Using CPU Manager and Topology Manager
 include::_attributes/common-attributes.adoc[]
-:context: using-cpu-manager-and-topology_manager
+:context: using-cpu-manager-and-topology-manager

 toc::[]

@@ -31,3 +31,8 @@ include::modules/topology-manager-policies.adoc[leveloffset=+1]
 include::modules/setting-up-topology-manager.adoc[leveloffset=+1]

 include::modules/pod-interactions-with-topology-manager.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../scalability_and_performance/using-cpu-manager.adoc#topology-manager-policies_using-cpu-manager-and-topology-manager[Topology Manager policies]
