Commit 615431c

Author: zhuzhenghao (committed)
[zh] Resync memory-manager.md
1 parent 3f51242 commit 615431c

File tree: 1 file changed

content/zh-cn/docs/tasks/administer-cluster/memory-manager.md

Lines changed: 35 additions & 35 deletions
@@ -2,6 +2,7 @@
 title: Utilizing the NUMA-aware Memory Manager
 content_type: task
 min-kubernetes-server-version: v1.21
+weight: 410
 ---
 
 <!--
@@ -13,6 +14,7 @@ reviewers:
 title: Utilizing the NUMA-aware Memory Manager
 content_type: task
 min-kubernetes-server-version: v1.21
+weight: 410
 -->
 
 <!-- overview -->
@@ -37,9 +39,8 @@ The Kubernetes Memory Manager provides guaranteed memory allocation for pods in the `Guaranteed`
 the pod is either accepted or rejected by a node.
 
 <!--
-Moreover, the Memory Manager ensures that the memory which a pod
-requests is allocated from
-a minimum number of NUMA nodes.
+Moreover, the Memory Manager ensures that the memory which a pod requests
+is allocated from a minimum number of NUMA nodes.
 
 The Memory Manager is only pertinent to Linux based hosts.
 -->
@@ -52,7 +53,7 @@ The Memory Manager is only pertinent to Linux based hosts.
 {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
 
 <!--
-To align memory resources with other requested resources in a Pod Spec:
+To align memory resources with other requested resources in a Pod spec:
 
 - the CPU Manager should be enabled and proper CPU Manager policy should be configured on a Node.
   See [control CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/);
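For orientation, a minimal sketch of kubelet flags that satisfy these prerequisites (the specific policy values are illustrative assumptions; the tasks linked in the hunk above are authoritative):

```shell
# Illustrative kubelet configuration for the prerequisites above:
--cpu-manager-policy=static                  # a non-default CPU Manager policy
--topology-manager-policy=single-numa-node   # a Topology Manager policy other than none
--memory-manager-policy=Static               # the Memory Manager policy discussed below
```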
@@ -123,7 +124,7 @@ The complete flow diagram concerning pod admission and deployment process is illustrated in
 
 <!--
 During this process, the Memory Manager updates its internal counters stored in
-[Node Map and Memory Maps][2] to manage guaranteed memory allocation.
+[Node Map and Memory Maps][2] to manage guaranteed memory allocation.
 
 The Memory Manager updates the Node Map during the startup and runtime as follows.
 -->
@@ -158,8 +159,8 @@ The administrator must provide `--reserved-memory` flag when `Static` policy is
 ### Runtime {#runtime}
 
 <!--
-Reference [Memory Manager KEP: Memory Maps at runtime (with examples)][6]
-illustrates how a successful pod deployment affects the Node Map, and it also relates to
+Reference [Memory Manager KEP: Memory Maps at runtime (with examples)][6] illustrates
+how a successful pod deployment affects the Node Map, and it also relates to
 how potential Out-of-Memory (OOM) situations are handled further by Kubernetes or operating system.
 -->
 The reference [Memory Manager KEP: Memory Maps at runtime (with examples)][6]
@@ -173,7 +174,7 @@ attempts to create a group that comprises several NUMA nodes and features extended
 The problem has been solved as elaborated in
 [Memory Manager KEP: How to enable the guaranteed memory allocation over many NUMA nodes?][3].
 Also, reference [Memory Manager KEP: Simulation - how the Memory Manager works? (by examples)][1]
-illustrates how the management of groups occurs.
+illustrates how the management of groups occurs.
 -->
 In the context of the Memory Manager's operation, an important topic is the management of NUMA groups.
 Whenever a pod's memory request exceeds the capacity of a single NUMA node, the Memory Manager attempts to create a group that comprises several
@@ -199,7 +200,7 @@ node stability (section [Reserved memory flag](#reserved-memory-flag)).
 the [Reserved memory flag](#reserved-memory-flag) section).
 
 <!--
-### Policies
+### Policies
 -->
 ### Policies {#policies}
 
@@ -222,7 +223,7 @@ This is the default policy and does not affect the memory allocation in any way.
 It acts the same as if the Memory Manager is not present at all.
 
 The `None` policy returns default topology hint. This special hint denotes that Hint Provider
-(Memory Manger in this case) has no preference for NUMA affinity with any resource.
+(Memory Manager in this case) has no preference for NUMA affinity with any resource.
 -->
 #### None policy {#policy-none}
 
@@ -234,7 +235,7 @@ The `None` policy returns default topology hint. This special hint denotes that
 <!--
 #### Static policy {#policy-static}
 
-In the case of the `Guaranteed` pod, the `Static` Memory Manger policy returns topology hints
+In the case of the `Guaranteed` pod, the `Static` Memory Manager policy returns topology hints
 relating to the set of NUMA nodes where the memory can be guaranteed,
 and reserves the memory through updating the internal [NodeMap][2] object.
@@ -275,7 +276,7 @@ The foregoing flags include `--kube-reserved`, `--system-reserved` and `--eviction-threshold`.
 The sum of their values will account for the total amount of reserved memory.
 
 A new `--reserved-memory` flag was added to Memory Manager to allow for this total reserved memory
-to be split (by a node administrator) and accordingly reserved across many NUMA nodes.
+to be split (by a node administrator) and accordingly reserved across many NUMA nodes.
 -->
 The Kubernetes scheduler takes "allocatable" memory into account when optimizing the pod scheduling process.
 The flags mentioned above include `--kube-reserved`, `--system-reserved`, and `--eviction-threshold`.
@@ -287,7 +288,7 @@ The Kubernetes scheduler takes "allocatable" memory into account when optimizing
 <!--
 The flag specifies a comma-separated list of memory reservations of different memory types per NUMA node.
 Memory reservations across multiple NUMA nodes can be specified using semicolon as separator.
-This parameter is only useful in the context of the Memory Manager feature.
+This parameter is only useful in the context of the Memory Manager feature.
 The Memory Manager will not use this reserved memory for the allocation of container workloads.
 
 For example, if you have a NUMA node "NUMA0" with `10Gi` of memory available, and
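The example sentence above is cut off at the hunk boundary; a sketch of the arithmetic it introduces (the `10Gi`/`1Gi` figures come from the visible context, the conclusion is an inference):

```shell
# Reserving 1Gi of conventional memory on NUMA node 0:
--reserved-memory '0:memory=1Gi'
# The Memory Manager then treats 10Gi - 1Gi = 9Gi on NUMA0 as available to workloads.
```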
@@ -336,7 +337,7 @@ Also, avoid the following configurations:
 (hugepages of the particular `<size>` must also exist).
 
 <!--
-Syntax:
+Syntax:
 -->
 Syntax:
 
@@ -346,7 +347,7 @@ Syntax:
 * `N` (integer) - NUMA node index, e.g. `0`
 * `memory-type` (string) - represents memory type:
   * `memory` - conventional memory
-  * `hugepages-2Mi` or `hugepages-1Gi` - hugepages
+  * `hugepages-2Mi` or `hugepages-1Gi` - hugepages
 * `value` (string) - the quantity of reserved memory, e.g. `1Gi`
 -->
 * `N` (integer) - NUMA node index, e.g. `0`
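To make the syntax concrete, here is a sketch that combines both separators (the quantities are illustrative):

```shell
# Comma separates reservations within one NUMA node;
# semicolon separates reservations across NUMA nodes:
--reserved-memory '0:memory=1Gi,hugepages-2Mi=512Mi;1:memory=2Gi'
```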
@@ -378,11 +379,11 @@ or
 <!--
 When you specify values for `--reserved-memory` flag, you must comply with the setting that
 you prior provided via Node Allocatable Feature flags.
-That is, the following rule must be obeyed for each memory type:
+That is, the following rule must be obeyed for each memory type:
 
-`sum(reserved-memory(i)) = kube-reserved + system-reserved + eviction-threshold`,
+`sum(reserved-memory(i)) = kube-reserved + system-reserved + eviction-threshold`,
 
-where `i` is an index of a NUMA node.
+where `i` is an index of a NUMA node.
 -->
 When you specify values for the `--reserved-memory` flag, you must comply with the settings that you previously provided via the Node Allocatable feature flags.
 In other words, the following rule must be obeyed for each memory type:
@@ -395,7 +396,7 @@ where `i` is an index of a NUMA node.
 If you do not follow the formula above, the Memory Manager will show an error on startup.
 
 In other words, the example above illustrates that for the conventional memory (`type=memory`),
-we reserve `3Gi` in total, i.e.:
+we reserve `3Gi` in total, i.e.:
 -->
 If you do not follow the formula above, the Memory Manager will print an error on startup.
 
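The `3Gi` example itself lies outside this hunk, so here is a hypothetical worked instance of the rule (all quantities invented for illustration):

```shell
# Suppose: --kube-reserved=memory=2Gi --system-reserved=memory=512Mi
#          --eviction-hard=memory.available<100Mi
# Then sum(reserved-memory(i)) must equal 2048Mi + 512Mi + 100Mi = 2660Mi, e.g.:
--reserved-memory '0:memory=1660Mi;1:memory=1000Mi'   # 1660Mi + 1000Mi = 2660Mi
```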
@@ -412,12 +413,12 @@ An example of kubelet command-line arguments relevant to the node Allocatable configuration:
 * `--system-reserved=cpu=123m,memory=333Mi`
 * `--eviction-hard=memory.available<500Mi`
 
-{{< note >}}
+{{< note >}}
 <!--
 The default hard eviction threshold is 100MiB, and **not** zero.
 Remember to increase the quantity of memory that you reserve by setting `--reserved-memory`
 by that hard eviction threshold. Otherwise, the kubelet will not start Memory Manager and
-display an error.
+display an error.
 -->
 The default hard eviction threshold is 100MiB, **not** zero.
 Remember to increase the quantity of memory you reserve with `--reserved-memory` by that hard eviction threshold.
@@ -430,10 +431,10 @@ Here is an example of a correct configuration:
 Here is an example of a correct configuration:
 
 ```shell
---feature-gates=MemoryManager=true
---kube-reserved=cpu=4,memory=4Gi
---system-reserved=cpu=1,memory=1Gi
---memory-manager-policy=Static
+--feature-gates=MemoryManager=true
+--kube-reserved=cpu=4,memory=4Gi
+--system-reserved=cpu=1,memory=1Gi
+--memory-manager-policy=Static
 --reserved-memory '0:memory=3Gi;1:memory=2148Mi'
 ```
 
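A quick arithmetic check that this configuration satisfies the rule above (the default hard eviction threshold of 100MiB applies, since `--eviction-hard` is not set here):

```shell
# kube-reserved + system-reserved + eviction-hard = 4096Mi + 1024Mi + 100Mi = 5220Mi
# reserved-memory(0) + reserved-memory(1)         = 3072Mi + 2148Mi         = 5220Mi
```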
@@ -527,7 +528,7 @@ became rejected at a node:
 - pod status - indicates topology affinity errors
 - system logs - include valuable information for debugging, e.g., about generated hints
 - state file - the dump of internal state of the Memory Manager
-  (includes [Node Map and Memory Maps][2])
+  (includes [Node Map and Memory Maps][2])
 - starting from v1.22, the [device plugin resource API](#device-plugin-resource-api) can be used
   to retrieve information about the memory reserved for containers
 -->
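For the last item, the API is served over a local gRPC socket. A sketch of querying it with `grpcurl` (the socket path is the conventional default, and because the kubelet does not serve gRPC reflection you must supply the proto definition yourself; both details are assumptions to verify against your cluster):

```shell
# Hypothetical invocation: api.proto is the PodResources API definition
# copied from the k8s.io/kubelet staging repository.
grpcurl -plaintext -unix \
  -proto api.proto \
  /var/lib/kubelet/pod-resources/kubelet.sock \
  v1.PodResourcesLister/List
```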
@@ -543,7 +544,7 @@ became rejected at a node:
 This error typically occurs in the following situations:
 
 * a node has not enough resources available to satisfy the pod's request
-* the pod's request is rejected due to particular Topology Manager policy constraints
+* the pod's request is rejected due to particular Topology Manager policy constraints
 
 The error appears in the status of a pod:
 -->
@@ -579,7 +580,7 @@ Warning TopologyAffinityError 10m kubelet, dell8 Resources cannot be allocated
 
 Search system logs with respect to a particular pod.
 
-The set of hints that Memory Manager generated for the pod can be found in the logs.
+The set of hints that Memory Manager generated for the pod can be found in the logs.
 Also, the set of hints generated by CPU Manager should be present in the logs.
 -->
 ### System logs {#system-logs}
@@ -595,7 +596,7 @@ The best hint should be also present in the logs.
 
 The best hint indicates where to allocate all the resources.
 Topology Manager tests this hint against its current policy, and based on the verdict,
-it either admits the pod to the node or rejects it.
+it either admits the pod to the node or rejects it.
 
 Also, search the logs for occurrences associated with the Memory Manager,
 e.g. to find out information about `cgroups` and `cpuset.mems` updates.
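Those `cpuset.mems` updates can also be observed directly on the node. A sketch, assuming a cgroup v1 hierarchy and a hypothetical pod cgroup path (the actual layout depends on your cgroup driver and the pod's QoS class):

```shell
# Show the NUMA nodes a container's memory allocations are confined to
# (path is hypothetical; adjust to your node's cgroup layout):
cat /sys/fs/cgroup/cpuset/kubepods/pod<pod-uid>/<container-id>/cpuset.mems
```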
@@ -636,7 +637,7 @@ spec:
         cpu: "2"
         memory: 150Gi
       command: ["sleep","infinity"]
-```
+```
 
 <!--
 Next, let us log into the node where it was deployed and examine the state file in
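The target path is cut off at the hunk boundary. On a default setup the Memory Manager keeps its checkpoint under the kubelet root directory; the exact filename below is an assumption to verify on your node:

```shell
# Assumed default location of the Memory Manager state (checkpoint) file:
cat /var/lib/kubelet/memory_manager_state
```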
@@ -724,14 +725,14 @@ It can be deduced from the state file that the pod was pinned to both NUMA nodes
            0,
            1
         ],
-```
+```
 
 <!--
 Pinned term means that pod's memory consumption is constrained (through `cgroups` configuration)
 to these NUMA nodes.
 
 This automatically implies that Memory Manager instantiated a new group that
-comprises these two NUMA nodes, i.e. `0` and `1` indexed NUMA nodes.
+comprises these two NUMA nodes, i.e. `0` and `1` indexed NUMA nodes.
 -->
 The term pinned means that the pod's memory consumption is constrained (through the `cgroups` configuration) to these NUMA nodes.
 
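Assuming the state file is JSON shaped as in the excerpt above (pod entries containing memory blocks with a `numaAffinity` list; the surrounding field names and the file path are assumptions), the pinning can be listed directly:

```shell
# Print the NUMA affinity of every memory block recorded in the state file
# (file path and field layout are assumptions):
jq '.entries[][][].numaAffinity' /var/lib/kubelet/memory_manager_state
```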
@@ -743,7 +744,7 @@ Notice that the management of groups is handled in a relatively complex manner, and
 further elaboration is provided in Memory Manager KEP in [this][1] and [this][3] sections.
 
 In order to analyse memory resources available in a group, the corresponding entries from
-NUMA nodes belonging to the group must be added up.
+NUMA nodes belonging to the group must be added up.
 -->
 Note that the management of NUMA groups is handled in a relatively complex manner;
 further elaboration of the relevant logic is provided in the Memory Manager KEP, in [this][1] and [this][3] sections.
@@ -791,7 +792,7 @@ The kubelet provides a `PodResourceLister` gRPC service to enable discovery of resources and associated metadata.
 
 The following are all English design documents, so their titles are left untranslated.
 -->
-- [Memory Manager KEP: Design Overview][4]
+- [Memory Manager KEP: Design Overview][4]
 - [Memory Manager KEP: Memory Maps at start-up (with examples)][5]
 - [Memory Manager KEP: Memory Maps at runtime (with examples)][6]
 - [Memory Manager KEP: Simulation - how the Memory Manager works? (by examples)][1]
@@ -804,4 +805,3 @@ The kubelet provides a `PodResourceLister` gRPC service to enable discovery of resources and associated metadata.
 [4]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1769-memory-manager#design-overview
 [5]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1769-memory-manager#memory-maps-at-start-up-with-examples
 [6]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1769-memory-manager#memory-maps-at-runtime-with-examples
-