content/en/docs/concepts/policy/node-resource-managers.md (60 additions, 47 deletions)

@@ -9,16 +9,18 @@ weight: 50
<!-- overview -->

In order to support latency-critical and high-throughput workloads, Kubernetes offers a suite of
Resource Managers. The managers aim to co-ordinate and optimise the alignment of the node's
resources for pods configured with a specific requirement for CPUs, devices, and memory
(hugepages) resources.

<!-- body -->
## Hardware topology alignment policies

_Topology Manager_ is a kubelet component that aims to coordinate the set of components that are
responsible for these optimizations. The overall resource management process is governed using
the policy you specify. To learn more, read
[Control Topology Management Policies on a Node](/docs/tasks/administer-cluster/topology-manager/).

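As a sketch of how this policy can be specified, assuming the kubelet is configured through a
`KubeletConfiguration` file, the Topology Manager policy and scope are set with the
`topologyManagerPolicy` and `topologyManagerScope` fields; the particular values below are
illustrative rather than a recommendation:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Require the resources assigned to a pod to come from a single NUMA node.
topologyManagerPolicy: "single-numa-node"
# Evaluate alignment for the pod as a whole rather than per container.
topologyManagerScope: "pod"
```
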
## Policies for assigning CPUs to Pods
@@ -29,27 +31,30 @@ hardware (for example, sharing CPUs across multiple Pods) or allocate hardware b
resource (for example, assigning one or more CPUs for a Pod's exclusive use).

By default, the kubelet uses [CFS quota](https://en.wikipedia.org/wiki/Completely_Fair_Scheduler)
to enforce pod CPU limits. When the node runs many CPU-bound pods, the workload can move to
different CPU cores depending on whether the pod is throttled and which CPU cores are available
at scheduling time. Many workloads are not sensitive to this migration and thus
work fine without any intervention.
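
For instance, a container with a fractional CPU limit is simply throttled by CFS quota under
this default behaviour; it is not pinned to any particular core. The fragment below is
illustrative and not taken from this page:

```yaml
spec:
  containers:
  - name: nginx        # name and image are illustrative
    image: nginx
    resources:
      limits:
        cpu: "500m"    # enforced as CFS quota: at most half a CPU of time per period
```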

However, in workloads where CPU cache affinity and scheduling latency significantly affect
workload performance, the kubelet allows alternative CPU
management policies to determine some placement preferences on the node.
This is implemented using the _CPU Manager_ and its policy.
There are two available policies:

- `none`: the `none` policy explicitly enables the existing default CPU
  affinity scheme, providing no affinity beyond what the OS scheduler does
  automatically. Limits on CPU usage for
  [Guaranteed pods](/docs/concepts/workloads/pods/pod-qos/) and
  [Burstable pods](/docs/concepts/workloads/pods/pod-qos/) are enforced using CFS quota.
- `static`: the `static` policy allows containers in `Guaranteed` pods with integer CPU
  `requests` access to exclusive CPUs on the node. This exclusivity is enforced
  using the [cpuset cgroup controller](https://www.kernel.org/doc/Documentation/cgroup-v2.txt).

{{< note >}}
System services such as the container runtime and the kubelet itself can continue to run on
these exclusive CPUs. The exclusivity only extends to other pods.
{{< /note >}}
CPU Manager doesn't support offlining and onlining of CPUs at runtime.
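
As a sketch, assuming the kubelet is configured through a `KubeletConfiguration` file, the CPU
Manager policy is selected with the `cpuManagerPolicy` field:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# "none" is the default; "static" enables exclusive CPU assignment for
# containers in Guaranteed pods with integer CPU requests.
cpuManagerPolicy: "static"
```

The static policy additionally requires a CPU reservation greater than zero, as described below.
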
@@ -64,12 +69,12 @@ CPUs reserved by these options are taken, in integer quantity, from the initial
core ID. This shared pool is the set of CPUs on which any containers in
`BestEffort` and `Burstable` pods run. Containers in `Guaranteed` pods with fractional
CPU `requests` also run on CPUs in the shared pool. Only containers that are
part of a `Guaranteed` pod and have integer CPU `requests` are assigned
exclusive CPUs.

{{< note >}}
The kubelet requires a CPU reservation greater than zero when the static policy is enabled.
This is because a zero CPU reservation would allow the shared pool to become empty.
{{< /note >}}

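A minimal sketch of such a reservation, again assuming a `KubeletConfiguration` file and using
illustrative values; a non-zero reservation keeps at least one CPU permanently in the shared pool:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: "static"
# Reserve CPU time for system daemons and for Kubernetes daemons
# (the values are illustrative, not a recommendation).
systemReserved:
  cpu: "500m"
kubeReserved:
  cpu: "500m"
# Alternatively, pin the reservation to explicit CPU IDs:
# reservedSystemCPUs: "0,1"
```
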
As `Guaranteed` pods whose containers fit the requirements for being statically
@@ -144,7 +149,6 @@ The pod above runs in the `Guaranteed` QoS class because `requests` are equal to
And the container's resource limit for the CPU resource is an integer greater than
or equal to one. The `nginx` container is granted 2 exclusive CPUs.
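
A manifest matching this description might look like the following sketch; the container name,
image, and memory values are illustrative, and only the pod `spec` is shown, as in the other
examples on this page. What matters is that `requests` equal `limits` and that the CPU value is
the integer `2`:

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "2"
      requests:
        memory: "200Mi"
        cpu: "2"
```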

```yaml
spec:
  containers:
```
@@ -163,7 +167,6 @@ The pod above runs in the `Guaranteed` QoS class because `requests` are equal to
But the container's resource limit for the CPU resource is a fraction. It runs in
the shared pool.
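
A manifest for this fractional case might look like the following sketch; the container name,
image, and memory values are again illustrative, and only the pod `spec` is shown. With a CPU
value such as `1.5` the pod is still `Guaranteed`, but its container keeps running in the
shared pool:

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "1.5"
      requests:
        memory: "200Mi"
        cpu: "1.5"
```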

```yaml
spec:
  containers:
```
@@ -182,27 +185,38 @@ equal to one. The `nginx` container is granted 2 exclusive CPUs.