An increasing number of systems leverage a combination of CPUs and hardware accelerators to support latency-critical execution and high-throughput parallel computation. These include workloads in fields such as telecommunications, scientific computing, machine learning, financial services and data analytics. Such hybrid systems comprise a high performance environment.

In order to extract the best performance, optimizations related to CPU isolation, memory and device locality are required.

_Topology Manager_ is a Kubelet component that aims to co-ordinate the set of components that are responsible for these optimizations.
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
<!-- steps -->
The hint is then stored in the Topology Manager for use by the *Hint Providers* when they make the resource allocation decisions.

### Enable the Topology Manager feature

Support for the Topology Manager requires the `TopologyManager` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. It is enabled by default starting with Kubernetes 1.18.

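For illustration only (this sketch is not part of the original page): on a cluster where the gate is not already on, it can be enabled through the kubelet's `featureGates` configuration, for example in a `KubeletConfiguration` file:

```yaml
# Minimal KubeletConfiguration sketch (illustrative); on Kubernetes 1.18+
# the TopologyManager feature gate is already enabled by default.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  TopologyManager: true
```

The same gate can also be set with the kubelet flag `--feature-gates=TopologyManager=true`.
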
### Topology Manager Policies

The Topology Manager currently:

- Aligns Pods of all QoS classes.
- Aligns the requested resources that Hint Providers provide topology hints for.

If these conditions are met, Topology Manager will align the requested resources.

{{< note >}}
To align CPU resources with other requested resources in a Pod Spec, the CPU Manager should be enabled and a proper CPU Manager policy should be configured on the Node. See [control CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
{{< /note >}}

Topology Manager supports four allocation policies. You can set a policy via a Kubelet flag, `--topology-manager-policy`.

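As a hedged sketch (not taken from this page), the policy can also be set declaratively through the `topologyManagerPolicy` field of the kubelet configuration. Pairing it with the `static` CPU Manager policy, which the note above requires for CPU alignment, might look like this:

```yaml
# Illustrative KubeletConfiguration fragment; the policy values shown are examples.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static                   # needed so exclusive-CPU topology hints are generated
topologyManagerPolicy: single-numa-node    # other options include none, best-effort and restricted
```

Note that the `static` CPU Manager policy additionally expects some CPUs to be reserved for system daemons, for example via `kubeReserved`, `systemReserved` or `reservedSystemCPUs`.
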
### restricted policy {#policy-restricted}

For each container in a Guaranteed Pod, kubelet, with the `restricted` topology management policy, calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager will reject this pod from the node. This will result in a pod in a `Terminated` state with a pod admission failure.

Once the pod is in a `Terminated` state, the Kubernetes scheduler will **not** attempt to reschedule the pod. It is recommended to use a ReplicaSet or Deployment to trigger a redeploy of the pod. An external control loop could also be implemented to trigger a redeployment of pods that have the `Topology Affinity` error.

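As an illustrative sketch (the name and image below are placeholders, not taken from the original page), wrapping the workload in a Deployment means that a pod rejected with a `Topology Affinity` error is simply replaced by its ReplicaSet and goes through scheduling again:

```yaml
# Hypothetical Deployment; "numa-sensitive-app" and its image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: numa-sensitive-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: numa-sensitive-app
  template:
    metadata:
      labels:
        app: numa-sensitive-app
    spec:
      containers:
      - name: app
        image: registry.example.com/numa-app:1.0   # placeholder image
        resources:
          requests:
            cpu: "2"
            memory: "200Mi"
          limits:
            cpu: "2"
            memory: "200Mi"
```
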
### single-numa-node policy {#policy-single-numa-node}

For each container in a Guaranteed Pod, kubelet, with the `single-numa-node` topology management policy, calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, Topology Manager will store this and the *Hint Providers* can then use this information when making the resource allocation decision. If, however, this is not possible, then the Topology Manager will reject the pod from the node. This will result in a pod in a `Terminated` state with a pod admission failure.

Once the pod is in a `Terminated` state, the Kubernetes scheduler will **not** attempt to reschedule the pod. It is recommended to use a ReplicaSet or Deployment to trigger a redeploy of the pod. An external control loop could also be implemented to trigger a redeployment of pods that have the `Topology Affinity` error.

### Pod Interactions with Topology Manager Policies

Consider the containers in the following pod specs:

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
```

This pod runs in the `BestEffort` QoS class because no resource `requests` or `limits` are specified.

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
```

This pod runs in the `Burstable` QoS class because requests are less than limits.

If the selected policy is anything other than `none`, Topology Manager would consider these Pod specifications. The Topology Manager would consult the Hint Providers to get topology hints. In the case of the `static` CPU Manager policy, the default topology hint would be returned, because these Pods do not explicitly request CPU resources.

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "2"
        example.com/device: "1"
      requests:
        memory: "200Mi"
        cpu: "2"
        example.com/device: "1"
```

This pod with integer CPU request runs in the `Guaranteed` QoS class because `requests` are equal to `limits`.

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        example.com/deviceA: "1"
        example.com/deviceB: "1"
      requests:
        example.com/deviceA: "1"
        example.com/deviceB: "1"
```

This pod runs in the `BestEffort` QoS class because there are no CPU and memory requests.

The Topology Manager would consider both of the above pods. The Topology Manager would consult the Hint Providers, which are the CPU Manager and the Device Manager, to get topology hints for the pods.

In the case of the `Guaranteed` pod with integer CPU request, the `static` CPU Manager policy would return hints relating to the CPU request and the Device Manager would send back hints for the requested device.

In the case of the `Guaranteed` pod with sharing CPU request, the `static` CPU Manager policy would return the default topology hint, as there is no exclusive CPU request, and the Device Manager would send back hints for the requested device.

In the above two cases of the `Guaranteed` pod, the `none` CPU Manager policy would return the default topology hint.

In the case of the `BestEffort` pod, the `static` CPU Manager policy would send back the default topology hint, as there is no CPU request, and the Device Manager would send back hints for each of the requested devices.

Using this information the Topology Manager calculates the optimal hint for the pod and stores this information, which will be used by the Hint Providers when they are making their resource assignments.

### Known Limitations

1. The maximum number of NUMA nodes that Topology Manager allows is 8. With more than 8 NUMA nodes there will be a state explosion when trying to enumerate the possible NUMA affinities and generating their hints.

2. The scheduler is not topology-aware, so it is possible to be scheduled on a node and then fail on the node due to the Topology Manager.

3. The Device Manager and the CPU Manager are the only components that adopt the Topology Manager's `HintProvider` interface. This means that NUMA alignment can only be achieved for resources managed by the CPU Manager and the Device Manager. Memory or Hugepages are not considered by the Topology Manager for NUMA alignment.