content/zh/docs/tasks/administer-cluster/topology-manager.md
从 Kubernetes 1.18 版本开始,这一特性默认是启用的。
<!--
### Topology Manager Scopes and Policies
The Topology Manager currently:
- Aligns Pods of all QoS classes.
- Aligns the requested resources that Hint Provider provides topology hints for.
-->
### 拓扑管理器作用域和策略
拓扑管理器目前:
- 对所有 QoS 类的 Pod 执行对齐操作
- 针对建议提供者所提供的拓扑建议,对请求的资源进行对齐
<!--
If these conditions are met, the Topology Manager will align the requested resources.
In order to customise how this alignment is carried out, the Topology Manager provides two distinct knobs: `scope` and `policy`.
-->
如果满足这些条件,则拓扑管理器将对齐请求的资源。
为了定制如何进行对齐,拓扑管理器提供了两种不同的方式:`scope` 和 `policy`。
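<!--
For illustration only: both knobs can also be set through the kubelet configuration file. The sketch below assumes the `KubeletConfiguration` API and uses example values; it is not a complete kubelet configuration:
-->
仅作示意:这两个选项也可以通过 kubelet 配置文件来设置。下面的片段假设使用 `KubeletConfiguration` API,字段取值仅为示例,并非完整的 kubelet 配置:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# scope:对齐的粒度(container 或 pod)
topologyManagerScope: container
# policy:实际使用的对齐策略
topologyManagerPolicy: best-effort
```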
<!--
The `scope` defines the granularity at which you would like resource alignment to be performed (e.g. at the `pod` or `container` level). And the `policy` defines the actual strategy used to carry out the alignment (e.g. `best-effort`, `restricted`, `single-numa-node`, etc.).
Details on the various `scopes` and `policies` available today can be found below.
To align CPU resources with other requested resources in a Pod Spec, the CPU Manager should be enabled and proper CPU Manager policy should be configured on a Node. See [control CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
-->
参看[控制 CPU 管理策略](/zh/docs/tasks/administer-cluster/cpu-management-policies/)。
{{< /note >}}
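<!--
A sketch, assuming the `KubeletConfiguration` API: enabling a non-default CPU Manager policy so that the CPU Manager can produce topology hints:
-->
示意(假设使用 `KubeletConfiguration` API):启用非默认的 CPU 管理器策略,使 CPU 管理器能够产生拓扑提示:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# static 策略允许为 Guaranteed QoS 的 Pod 分配独占 CPU
cpuManagerPolicy: static
```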
<!--
### Topology Manager Scopes
The Topology Manager can deal with the alignment of resources in a couple of distinct scopes:
* `container` (default)
* `pod`
Either option can be selected at kubelet startup, with the `--topology-manager-scope` flag.
Within this scope, the Topology Manager performs a number of sequential resource alignments, i.e., for each container (in a pod) a separate alignment is computed. In other words, there is no notion of grouping the containers to a specific set of NUMA nodes, for this particular scope. In effect, the Topology Manager performs an arbitrary alignment of individual containers to NUMA nodes.
-->
在该作用域内,拓扑管理器依次进行一系列的资源对齐,
也就是,对每一个容器(包含在一个 Pod 里)计算单独的对齐。
换句话说,在该特定的作用域内,没有根据特定的 NUMA 节点集来把容器分组的概念。
实际上,拓扑管理器会把单个容器任意地对齐到 NUMA 节点上。
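<!--
To illustrate (a sketch; the pod name and image are hypothetical): under the `container` scope, each container below gets its own, independent alignment, with no guarantee that both land on the same NUMA node:
-->
举例说明(示意;Pod 名字和镜像均为假设):在 `container` 作用域下,下面的每个容器都会得到各自独立的对齐结果,不保证两个容器落在同一个 NUMA 节点上:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: container-scope-demo      # 假设的名字,仅用于说明
spec:
  containers:
  - name: app-a
    image: registry.k8s.io/pause  # 示意镜像
    resources:
      requests:
        cpu: "2"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "1Gi"
  - name: app-b
    image: registry.k8s.io/pause
    resources:
      requests:
        cpu: "2"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "1Gi"
```

app-a 与 app-b 分别计算对齐;在该作用域下没有把两者绑定到同一 NUMA 节点集的保证。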
<!--
The notion of grouping containers is deliberately implemented in the following scope, namely the `pod` scope.
-->
容器分组的概念是在以下的作用域内特别实现的,也就是 `pod` 作用域。
<!--
### pod scope
To select the `pod` scope, start the kubelet with the command line option `--topology-manager-scope=pod`.
This scope allows for grouping all containers in a pod to a common set of NUMA nodes. That is, the Topology Manager treats a pod as a whole and attempts to allocate the entire pod (all containers) to either a single NUMA node or a common set of NUMA nodes. The following examples illustrate the alignments produced by the Topology Manager on different occasions:
-->
该作用域允许把一个 Pod 里的所有容器作为一个分组,分配到一个共同的 NUMA 节点集。
也就是,拓扑管理器会把一个 Pod 当成一个整体,
并且试图把整个 Pod(所有容器)分配到一个单个的 NUMA 节点或者一个共同的 NUMA 节点集。
以下的例子说明了拓扑管理器在不同的场景下使用的对齐方式:
<!--
* all containers can be and are allocated to a single NUMA node;
* all containers can be and are allocated to a shared set of NUMA nodes.
-->
* 所有容器可以被分配到一个单一的 NUMA 节点;
* 所有容器可以被分配到一个共享的 NUMA 节点集。
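<!--
A sketch: selecting the `pod` scope through the kubelet configuration (equivalent to the `--topology-manager-scope=pod` flag):
-->
示意:通过 kubelet 配置选择 `pod` 作用域(与 `--topology-manager-scope=pod` 标志等价):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# 把 Pod 里的所有容器作为一个整体对齐到共同的 NUMA 节点集
topologyManagerScope: pod
```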
<!--
The total amount of particular resource demanded for the entire pod is calculated according to [effective requests/limits](/docs/concepts/workloads/pods/init-containers/#resources) formula, and thus, this total value is equal to the maximum of:
Using the `pod` scope in tandem with `single-numa-node` Topology Manager policy is specifically valuable for workloads that are latency sensitive or for high-throughput applications that perform IPC. By combining both options, you are able to place all containers in a pod onto a single NUMA node; hence, the inter-NUMA communication overhead can be eliminated for that pod.
-->
`pod` 作用域与 `single-numa-node` 拓扑管理器策略一起使用,
对于延时敏感的工作负载,或者对于进行 IPC 的高吞吐量应用程序,都是特别有价值的。
把这两个选项组合起来,你可以把一个 Pod 里的所有容器都放到一个单个的 NUMA 节点,
使得该 Pod 消除了 NUMA 之间的通信开销。
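<!--
A configuration sketch of the combination described above, assuming the `KubeletConfiguration` API:
-->
上述组合的配置示意(假设使用 `KubeletConfiguration` API):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
topologyManagerScope: pod
topologyManagerPolicy: single-numa-node
```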
<!--
In the case of `single-numa-node` policy, a pod is accepted only if a suitable set of NUMA nodes is present among possible allocations. Reconsider the example above:
-->
在 `single-numa-node` 策略下,只有当可能的分配方案中存在合适的 NUMA 节点集时,Pod 才会被接受。
重新考虑上述的例子:
<!--
* a set containing only a single NUMA node - it leads to pod being admitted,
* whereas a set containing more NUMA nodes - it results in pod rejection (because instead of one NUMA node, two or more NUMA nodes are required to satisfy the allocation).
-->
* 节点集只包含单个 NUMA 节点时,Pod 就会被接受,
* 然而,节点集包含多个 NUMA 节点时,Pod 就会被拒绝
(因为满足该分配方案需要两个或以上的 NUMA 节点,而不是单个 NUMA 节点)。
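<!--
A sketch, assuming a hypothetical node with two NUMA nodes and 4 allocatable CPUs per NUMA node: the pod below requests 6 exclusive CPUs, which no single NUMA node can satisfy, so under the `single-numa-node` policy it would be rejected at admission:
-->
示意(假设节点有两个 NUMA 节点、每个 NUMA 节点可分配 4 个 CPU):下面的 Pod 请求 6 个独占 CPU,单个 NUMA 节点无法满足,因此在 `single-numa-node` 策略下会在准入时被拒绝:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: single-numa-rejected-demo   # 假设的名字
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause    # 示意镜像
    resources:
      requests:
        cpu: "6"        # 超过单个 NUMA 节点的 4 个 CPU
        memory: "2Gi"
      limits:
        cpu: "6"        # 与 requests 相等,满足 Guaranteed QoS
        memory: "2Gi"
```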
<!--
To recap, Topology Manager first computes a set of NUMA nodes and then tests it against Topology Manager policy, which either leads to the rejection or admission of the pod.
-->
简要地说,拓扑管理器首先计算出 NUMA 节点集,然后使用拓扑管理器策略来测试该集合,
从而决定拒绝或者接受 Pod。
<!--
### Topology Manager Policies
-->
### 拓扑管理器策略
<!--
Topology Manager supports four allocation policies. You can set a policy via a Kubelet flag, `--topology-manager-policy`.
There are four supported policies:
* `restricted`
* `single-numa-node`
<!--
{{< note >}}
If Topology Manager is configured with the **pod** scope, the container considered by the policy reflects the requirements of the entire pod, and thus each container in the pod receives **the same** topology alignment decision.