
Commit fa2b8c8

Merge pull request #26363 from chenxuc/topo-manager
[zh] Sync web page for topo manager
2 parents 6db9ea6 + e54d16c commit fa2b8c8

File tree

1 file changed: +139 −5 lines


content/zh/docs/tasks/administer-cluster/topology-manager.md

Lines changed: 139 additions & 5 deletions
@@ -92,25 +92,37 @@ Support for the Topology Manager requires `TopologyManager` [feature gate](/docs
Starting with Kubernetes 1.18, this feature is enabled by default.

<!--
### Topology Manager Scopes and Policies

The Topology Manager currently:
- Aligns Pods of all QoS classes.
- Aligns the requested resources that Hint Providers provide topology hints for.
-->
### Topology Manager Scopes and Policies

The Topology Manager currently:
- Aligns Pods of all QoS classes.
- Aligns the requested resources that Hint Providers provide topology hints for.

<!--
If these conditions are met, the Topology Manager will align the requested resources.

In order to customise how this alignment is carried out, the Topology Manager provides two distinct knobs: `scope` and `policy`.
-->
If these conditions are met, the Topology Manager will align the requested resources.

To customize how this alignment is carried out, the Topology Manager provides two distinct knobs: `scope` and `policy`.

<!--
The `scope` defines the granularity at which you would like resource alignment to be performed (e.g. at the `pod` or `container` level), and the `policy` defines the actual strategy used to carry out the alignment (e.g. `best-effort`, `restricted`, `single-numa-node`, etc.).

Details on the various `scopes` and `policies` available today can be found below.
-->
The `scope` defines the granularity at which you would like resource alignment to be performed (for example, at the `pod` or `container` level).
The `policy` defines the actual strategy used to carry out the alignment (for example, `best-effort`, `restricted`, `single-numa-node`, and so on).

Details on the various `scopes` and `policies` available today can be found below.

<!--
To align CPU resources with other requested resources in a Pod Spec, the CPU Manager should be enabled and proper CPU Manager policy should be configured on a Node. See [control CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
-->
@@ -120,6 +132,117 @@ To align CPU resources with other requested resources in a Pod Spec, the CPU Man
See [Control CPU Management Policies](/zh/docs/tasks/administer-cluster/cpu-management-policies/).
{{< /note >}}

<!--
### Topology Manager Scopes

The Topology Manager can deal with the alignment of resources in a couple of distinct scopes:

* `container` (default)
* `pod`

Either option can be selected at kubelet startup, with the `--topology-manager-scope` flag.
-->
### Topology Manager Scopes

The Topology Manager can perform resource alignment in a couple of distinct scopes:

* `container` (default)
* `pod`

Either option can be selected at kubelet startup, with the `--topology-manager-scope` flag.
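The scope selection above can be sketched as a small shell fragment. This is a minimal sketch, assuming a bash shell: only the `--topology-manager-scope` flag and its two values (`container`, `pod`) come from this page; the validation wrapper itself is purely illustrative.

```shell
# Minimal sketch: validate a Topology Manager scope before handing it to the
# kubelet. Only the flag name and its two values ("container", "pod") come
# from this page; this wrapper is illustrative, not part of any tooling.
scope="pod"   # or "container", the default

case "$scope" in
  container|pod)
    flag="--topology-manager-scope=${scope}"
    echo "${flag}"
    ;;
  *)
    echo "unsupported topology manager scope: ${scope}" >&2
    exit 1
    ;;
esac
```

In a real deployment the echoed flag would be appended to the kubelet invocation (for example through a systemd drop-in), which is installation-specific and not covered here.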
<!--
### container scope

The `container` scope is used by default.
-->
### container scope

The `container` scope is used by default.

<!--
Within this scope, the Topology Manager performs a number of sequential resource alignments, i.e., for each container (in a pod) a separate alignment is computed. In other words, there is no notion of grouping the containers to a specific set of NUMA nodes, for this particular scope. In effect, the Topology Manager performs an arbitrary alignment of individual containers to NUMA nodes.
-->
Within this scope, the Topology Manager performs a number of sequential resource alignments;
that is, a separate alignment is computed for each container (in a pod).
In other words, for this particular scope there is no notion of grouping the containers to a specific set of NUMA nodes.
In effect, the Topology Manager performs an arbitrary alignment of individual containers to NUMA nodes.

<!--
The notion of grouping the containers is deliberately implemented in the following scope, namely the `pod` scope.
-->
The notion of grouping containers is deliberately implemented in the following scope, namely the `pod` scope.
<!--
### pod scope

To select the `pod` scope, start the kubelet with the command line option `--topology-manager-scope=pod`.
-->
### pod scope

To select the `pod` scope, start the kubelet with the command line option `--topology-manager-scope=pod`.
<!--
This scope allows for grouping all containers in a pod to a common set of NUMA nodes. That is, the Topology Manager treats a pod as a whole and attempts to allocate the entire pod (all containers) to either a single NUMA node or a common set of NUMA nodes. The following examples illustrate the alignments produced by the Topology Manager on different occasions:
-->
This scope allows all containers in a pod to be grouped to a common set of NUMA nodes.
That is, the Topology Manager treats a pod as a whole and attempts to allocate the entire pod (all containers)
to either a single NUMA node or a common set of NUMA nodes.
The following examples illustrate the alignments produced by the Topology Manager on different occasions:

<!--
* all containers can be and are allocated to a single NUMA node;
* all containers can be and are allocated to a shared set of NUMA nodes.
-->
* all containers can be and are allocated to a single NUMA node;
* all containers can be and are allocated to a shared set of NUMA nodes.
<!--
The total amount of a particular resource demanded for the entire pod is calculated according to the [effective requests/limits](/docs/concepts/workloads/pods/init-containers/#resources) formula; thus, for each resource, this total value is equal to the maximum of:
* the sum of all app container requests,
* the maximum of init container requests.
-->
The total amount of a particular resource demanded for the entire pod is calculated according to the
[effective requests/limits](/zh/docs/concepts/workloads/pods/init-containers/#resources) formula;
thus, for each resource, this total value is equal to the maximum of:
* the sum of all app container requests,
* the maximum of init container requests.

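As a worked example of the effective request/limit formula: suppose a hypothetical pod has two app containers requesting 2 CPUs each and one init container requesting 3 CPUs (all numbers invented for illustration). The pod-level value is the maximum of the app-container sum and the largest init-container request:

```shell
# Hypothetical pod: two app containers requesting 2 CPUs each,
# one init container requesting 3 CPUs (numbers are invented).
app_sum=$((2 + 2))    # sum of all app container requests = 4
init_max=3            # maximum of init container requests

# Effective pod request = max(app_sum, init_max)
effective=$(( app_sum > init_max ? app_sum : init_max ))
echo "effective CPU request: ${effective} CPUs"   # prints: effective CPU request: 4 CPUs
```

So the Topology Manager tries to place this pod as if it requested 4 CPUs, not the 7 CPUs a naive sum over all containers would give.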
<!--
Using the `pod` scope in tandem with the `single-numa-node` Topology Manager policy is especially valuable for workloads that are latency sensitive, or for high-throughput applications that perform IPC. By combining both options, you are able to place all containers in a pod onto a single NUMA node; hence, the inter-NUMA communication overhead can be eliminated for that pod.
-->
Using the `pod` scope in tandem with the `single-numa-node` Topology Manager policy is especially valuable
for workloads that are latency sensitive, or for high-throughput applications that perform IPC.
By combining both options, you are able to place all containers in a pod onto a single NUMA node;
hence, the inter-NUMA communication overhead can be eliminated for that pod.

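The flag combination for such latency-sensitive workloads might look as follows. This is a sketch of a kubelet command line only: the flag names and values come from this page, while passing them through a shell variable (and how the flags reach the kubelet in your installation, for example via a systemd unit) is an assumption for illustration.

```shell
# Sketch: combining the pod scope with the single-numa-node policy.
# Flag names/values are from this page; the variable is illustrative.
kubelet_flags="--topology-manager-scope=pod --topology-manager-policy=single-numa-node"
echo "kubelet ${kubelet_flags}"
```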
<!--
In the case of the `single-numa-node` policy, a pod is accepted only if a suitable set of NUMA nodes is present among the possible allocations. Reconsider the example above:
-->
Under the `single-numa-node` policy, a pod is accepted only if a suitable set of NUMA nodes is present among the possible allocations.
Reconsider the example above:

<!--
* a set containing only a single NUMA node - it leads to the pod being admitted,
* whereas a set containing more NUMA nodes - it results in pod rejection (because instead of one NUMA node, two or more NUMA nodes are required to satisfy the allocation).
-->
* a set containing only a single NUMA node leads to the pod being admitted,
* whereas a set containing more than one NUMA node results in the pod being rejected
  (because two or more NUMA nodes, instead of one, are required to satisfy the allocation).

<!--
To recap, the Topology Manager first computes a set of NUMA nodes and then tests it against the Topology Manager policy, which either leads to the rejection or admission of the pod.
-->
To recap, the Topology Manager first computes a set of NUMA nodes and then tests it against the
Topology Manager policy, which leads to either the rejection or the admission of the pod.

<!--
### Topology Manager Policies
-->
### Topology Manager Policies
<!--
Topology Manager supports four allocation policies. You can set a policy via a Kubelet flag, `--topology-manager-policy`.
There are four supported policies:
@@ -138,6 +261,17 @@ There are four supported policies:
* `restricted`
* `single-numa-node`

<!--
{{< note >}}
If the Topology Manager is configured with the **pod** scope, the container considered by the policy reflects the requirements of the entire pod, and thus each container in the pod will receive **the same** topology alignment decision.
{{< /note >}}
-->
{{< note >}}
If the Topology Manager is configured with the **pod** scope, the container considered by the policy
reflects the requirements of the entire pod, and thus each container in the pod will receive
**the same** topology alignment decision.
{{< /note >}}

<!--
### none policy {#policy-none}
