<!-- body -->

## Hardware topology alignment policies

_Topology Manager_ is a kubelet component that aims to coordinate the set of components that are
responsible for these optimizations. The overall resource management process is governed using
the policy you specify.
To learn more, read [Control Topology Management Policies on a Node](/docs/tasks/administer-cluster/topology-manager/).
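
The alignment policy is selected through the kubelet configuration. As a minimal sketch (the
policy and scope values shown here are illustrative choices, not recommendations), a
`KubeletConfiguration` that enables topology alignment might look like this:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Example values only: choose the policy and scope that suit your workloads.
topologyManagerPolicy: "single-numa-node"
topologyManagerScope: "pod"
```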

- `static`: the `static` policy allows containers in `Guaranteed` pods with integer CPU
  `requests` access to exclusive CPUs on the node. This exclusivity is enforced
  using the [cpuset cgroup controller](https://www.kernel.org/doc/Documentation/cgroup-v2.txt).

{{< note >}}
System services such as the container runtime and the kubelet itself can continue to run on these exclusive CPUs. The exclusivity only extends to other pods.
{{< /note >}}

CPU Manager doesn't support offlining and onlining of CPUs at runtime.

### Static policy

The static policy enables finer-grained CPU management and exclusive CPU assignment.
This policy manages a shared pool of CPUs that initially contains all CPUs in the
node. The amount of exclusively allocatable CPUs is equal to the total
number of CPUs in the node minus any CPU reservations set by the kubelet configuration.
CPUs reserved by these options are taken, in integer quantity, from the initial shared pool in ascending order by physical
core ID. This shared pool is the set of CPUs on which any containers in
`BestEffort` and `Burstable` pods run. Containers in `Guaranteed` pods with fractional
CPU `requests` also run on CPUs in the shared pool. Only containers that are
both part of a `Guaranteed` pod and have integer CPU `requests` are assigned
exclusive CPUs.
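
For illustration only (the numbers here are hypothetical): on a node with 16 CPUs where the
kubelet reserves 2 CPUs, at most 16 - 2 = 14 CPUs can be allocated exclusively, and the 2
reserved CPUs are never granted exclusively to any container.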

{{< note >}}
The kubelet requires a CPU reservation greater than zero when the static policy is enabled.
This is because zero CPU reservation would allow the shared pool to become empty.
{{< /note >}}
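
To illustrate that requirement, here is a minimal, hypothetical `KubeletConfiguration` fragment
that enables the static policy and reserves two CPUs; `reservedSystemCPUs` is one way to express
the reservation, and CPU quantities under `kubeReserved` or `systemReserved` are an alternative:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: "static"
# Hypothetical choice: keep CPUs 0 and 1 out of exclusive allocation.
reservedSystemCPUs: "0,1"
```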

As `Guaranteed` pods whose containers fit the requirements for being statically
assigned are scheduled to the node, CPUs are removed from the shared pool and
placed in the cpuset for the container. CFS quota is not used to bound
the CPU usage of these containers as their usage is bound by the scheduling domain
itself. In other words, the number of CPUs in the container cpuset is equal to the integer
CPU `limit` specified in the pod spec. This static assignment increases CPU
affinity and decreases context switches due to throttling for the CPU-bound
workload.

Consider the containers in the following pod specs:

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
```

The pod above runs in the `BestEffort` QoS class because no resource `requests` or
`limits` are specified. It runs in the shared pool.
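
If you want to double-check the class that was assigned, the pod status reports it; for example,
`kubectl get pod <pod-name> -o jsonpath='{.status.qosClass}'` (with a placeholder pod name)
prints `BestEffort` for a spec like the one above.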

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
```

The pod above runs in the `Burstable` QoS class because resource `requests` do not
equal `limits` and the `cpu` quantity is not specified. It runs in the shared
pool.

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "2"
      requests:
        memory: "100Mi"
        cpu: "1"
```

The pod above runs in the `Burstable` QoS class because resource `requests` do not
equal `limits`. It runs in the shared pool.

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "2"
      requests:
        memory: "200Mi"
        cpu: "2"
```

The pod above runs in the `Guaranteed` QoS class because `requests` are equal to `limits`.
And the container's resource limit for the CPU resource is an integer greater than
or equal to one. The `nginx` container is granted 2 exclusive CPUs.
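
As a rough way to verify the assignment from inside the pod (assuming the static policy is active
and the pod is named `nginx`), `kubectl exec nginx -- grep Cpus_allowed_list /proc/self/status`
should report a set of exactly two CPUs for this container.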

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "1.5"
      requests:
        memory: "200Mi"
        cpu: "1.5"
```

The pod above runs in the `Guaranteed` QoS class because `requests` are equal to `limits`.
But the container's resource limit for the CPU resource is a fraction. It runs in
the shared pool.

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "2"
```

The pod above runs in the `Guaranteed` QoS class because only `limits` are specified
and `requests` are set equal to `limits` when not explicitly specified. And the
container's resource limit for the CPU resource is an integer greater than or
equal to one. The `nginx` container is granted 2 exclusive CPUs.
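
For a node-wide view, the CPU Manager keeps a checkpoint of its exclusive assignments in a state
file on the node, by default `/var/lib/kubelet/cpu_manager_state` (the exact location follows the
kubelet's root directory), which can help when auditing which CPUs have been granted to which
containers.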