Commit a048e3f

[zh] Sync administer-cluster/memory-manager.md
1 parent f1ddfbf commit a048e3f

2 files changed: +102 −18 lines changed

2 files changed

+102
-18
lines changed
Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
+---
+title: WindowsCPUAndMemoryAffinity
+content_type: feature_gate
+
+_build:
+  list: never
+  render: false
+
+stages:
+  - stage: alpha
+    defaultValue: false
+    fromVersion: "1.32"
+---
+
+<!--
+Add CPU and Memory Affinity support to Windows nodes with [CPUManager](/docs/tasks/administer-cluster/cpu-management-policies/#windows-support),
+[MemoryManager](/docs/tasks/administer-cluster/memory-manager/#windows-support)
+and topology manager.
+-->
+使用 [CPUManager](/zh-cn/docs/tasks/administer-cluster/cpu-management-policies/#windows-support)、
+[MemoryManager](/zh-cn/docs/tasks/administer-cluster/memory-manager/#windows-support)
+和拓扑管理器,为 Windows 节点提供 CPU 和内存亲和性支持。
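For illustration only: the gate introduced above is alpha and disabled by default, so a Windows node's kubelet has to opt in explicitly. A minimal sketch of the relevant kubelet flags follows; pairing the gate with the `BestEffort` policy is an assumption drawn from the memory-manager changes later in this commit, not something the committed files prescribe.

```shell
# Sketch only: opt a Windows node's kubelet into the alpha feature gate and
# select the BestEffort memory manager policy documented as Windows-only below.
--feature-gates=WindowsCPUAndMemoryAffinity=true
--memory-manager-policy=BestEffort
```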

content/zh-cn/docs/tasks/administer-cluster/memory-manager.md

Lines changed: 80 additions & 18 deletions
@@ -1,7 +1,7 @@
 ---
 title: 使用 NUMA 感知的内存管理器
 content_type: task
-min-kubernetes-server-version: v1.21
+min-kubernetes-server-version: v1.32
 weight: 410
 ---
 
@@ -13,7 +13,7 @@ reviewers:
 - derekwaynecarr
 
 content_type: task
-min-kubernetes-server-version: v1.21
+min-kubernetes-server-version: v1.32
 weight: 410
 -->
 
@@ -64,8 +64,8 @@ To align memory resources with other requested resources in a Pod spec:
 
 - CPU 管理器应该被启用,并且在节点(Node)上要配置合适的 CPU 管理器策略,
   参见[控制 CPU 管理策略](/zh-cn/docs/tasks/administer-cluster/cpu-management-policies/)
-- 拓扑管理器要被启用,并且要在节点上配置合适的拓扑管理器策略,参见
-  [控制拓扑管理器策略](/zh-cn/docs/tasks/administer-cluster/topology-manager/)
+- 拓扑管理器要被启用,并且要在节点上配置合适的拓扑管理器策略,
+  参见[控制拓扑管理器策略](/zh-cn/docs/tasks/administer-cluster/topology-manager/)
 
 <!--
 Starting from v1.22, the Memory Manager is enabled by default through `MemoryManager`
@@ -87,9 +87,9 @@ in order to enable the Memory Manager feature.
 这样内存管理器特性才会被启用。
 
 <!--
-## How Memory Manager Operates?
+## How does the Memory Manager Operate?
 -->
-## 内存管理器如何运作?
+## 内存管理器如何运作? {#how-does-the-memory-manager-operate}
 
 <!--
 The Memory Manager currently offers the guaranteed memory (and hugepages) allocation
@@ -101,19 +101,19 @@ prepare and deploy a `Guaranteed` pod as illustrated in the section
 -->
 内存管理器目前为 Guaranteed QoS 类中的 Pod 提供可保证的内存(和大页面)分配能力。
 若要立即将内存管理器启用,可参照[内存管理器配置](#memory-manager-configuration)节中的指南,
-之后按[将 Pod 放入 Guaranteed QoS 类](#placing-a-pod-in-the-guaranteed-qos-class)
-节中所展示的,准备并部署一个 `Guaranteed` Pod。
+之后按[将 Pod 放入 Guaranteed QoS 类](#placing-a-pod-in-the-guaranteed-qos-class)节中所展示的,
+准备并部署一个 `Guaranteed` Pod。
 
 <!--
 The Memory Manager is a Hint Provider, and it provides topology hints for
 the Topology Manager which then aligns the requested resources according to these topology hints.
-It also enforces `cgroups` (i.e. `cpuset.mems`) for pods.
+On Linux, it also enforces `cgroups` (i.e. `cpuset.mems`) for pods.
 The complete flow diagram concerning pod admission and deployment process is illustrated in
 [Memory Manager KEP: Design Overview][4] and below:
 -->
 内存管理器是一个提示驱动组件(Hint Provider),负责为拓扑管理器提供拓扑提示,
 后者根据这些拓扑提示对所请求的资源执行对齐操作。
-内存管理器也会为 Pods 应用 `cgroups` 设置(即 `cpuset.mems`)。
+在 Linux 上,内存管理器也会为 Pod 应用 `cgroups` 设置(即 `cpuset.mems`)。
 与 Pod 准入和部署流程相关的完整流程图在 [Memory Manager KEP: Design Overview][4]
 下面也有说明。
 
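As a side note to the `cpuset.mems` enforcement mentioned in the hunk above, the effect can be observed directly in a pod's cgroup on a Linux node. The path below is only a sketch that assumes cgroup v2 with the systemd driver and uses a placeholder pod UID; actual paths vary by node configuration.

```shell
# Sketch only: show which NUMA nodes a pod's memory is pinned to (Linux, cgroup v2).
# <POD_UID> is a placeholder; the exact slice layout depends on the cgroup driver.
cat /sys/fs/cgroup/kubepods.slice/kubepods-pod<POD_UID>.slice/cpuset.mems
```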
@@ -183,6 +183,21 @@ NUMA 节点的分组,从而扩展内存容量。解决这个问题的详细描
 中。同时,关于 NUMA 分组是如何管理的,你还可以参考文档
 [Memory Manager KEP: Simulation - how the Memory Manager works? (by examples)][1]
 
+<!--
+### Windows Support
+-->
+### Windows 支持 {#windows-support}
+
+{{< feature-state feature_gate_name="WindowsCPUAndMemoryAffinity" >}}
+
+<!--
+Windows support can be enabled via the `WindowsCPUAndMemoryAffinity` feature gate
+and it requires support in the container runtime.
+Only the [BestEffort Policy](#policy-best-effort) is supported on Windows.
+-->
+Windows 支持可以通过 `WindowsCPUAndMemoryAffinity` 特性门控来启用,
+并且需要容器运行时的支持。在 Windows 上,仅支持 [BestEffort 策略](#policy-best-effort)。
+
 <!--
 ## Memory Manager configuration
 -->
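To see which memory manager policy a node's kubelet actually ended up with after enabling the gate, the kubelet configuration can be read through the API server proxy. This is a sketch; `<node-name>` is a placeholder and the `grep` is only a convenience.

```shell
# Sketch only: dump the kubelet's effective configuration for one node and pick
# out the memoryManagerPolicy field (expected to be BestEffort on a Windows node).
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz" | grep -o '"memoryManagerPolicy":"[^"]*"'
```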
@@ -208,13 +223,15 @@ node stability (section [Reserved memory flag](#reserved-memory-flag)).
 Memory Manager supports two policies. You can select a policy via a `kubelet` flag `--memory-manager-policy`:
 
 * `None` (default)
-* `Static`
+* `Static` (Linux only)
+* `BestEffort` (Windows Only)
 -->
 内存管理器支持两种策略。你可以通过 `kubelet` 标志 `--memory-manager-policy`
 来选择一种策略:
 
-* `None` (默认)
-* `Static`
+* `None`(默认)
+* `Static`(仅 Linux)
+* `BestEffort`(仅 Windows)
 
 <!--
 #### None policy {#policy-none}
@@ -252,6 +269,38 @@ and does not reserve the memory in the internal [NodeMap][2] object.
 对 `BestEffort` 或 `Burstable` Pod 而言,因为不存在对有保障的内存资源的请求,
 `Static` 内存管理器策略会返回默认的拓扑提示,并且不会通过内部的[节点映射][2]对象来预留内存。
 
+<!--
+#### BestEffort policy {#policy-best-effort}
+-->
+#### BestEffort 策略 {#policy-best-effort}
+
+{{< feature-state feature_gate_name="WindowsCPUAndMemoryAffinity" >}}
+
+<!--
+This policy is only supported on Windows.
+
+On Windows, NUMA node assignment works differently than Linux.
+There is no mechanism to ensure that Memory access only comes from a specific NUMA node.
+Instead the Windows scheduler will select the most optimal NUMA node based on the CPU(s) assignments.
+It is possible that Windows might use other NUMA nodes if deemed optimal by the Windows scheduler.
+-->
+此策略仅在 Windows 上支持。
+
+在 Windows 上,NUMA 节点分配方式与 Linux 上不同。
+没有机制确保内存访问仅来自特定的 NUMA 节点。
+相反,Windows 调度器将基于 CPU 的分配来选择最优的 NUMA 节点。
+如果 Windows 调度器认为其他 NUMA 节点更优,Windows 可能会使用其他节点。
+
+<!--
+The policy does track the amount of memory available and requested through the internal [NodeMap][2].
+The memory manager will make a best effort at ensuring that enough memory is available on
+a NUMA node before making the assignment.
+This means that in most cases memory assignment should function as expected.
+-->
+此策略会通过内部的 [NodeMap][2] 跟踪可用和请求的内存量。
+内存管理器将尽力确保在进行分配之前,NUMA 节点上有足够的内存可用。
+这意味着在大多数情况下,内存分配的工作模式是符合预期的。
+
 <!--
 ### Reserved memory flag
 -->
@@ -431,22 +480,35 @@ Here is an example of a correct configuration:
 下面是一个正确配置的示例:
 
 ```shell
---feature-gates=MemoryManager=true
 --kube-reserved=cpu=4,memory=4Gi
 --system-reserved=cpu=1,memory=1Gi
 --memory-manager-policy=Static
 --reserved-memory '0:memory=3Gi;1:memory=2148Mi'
 ```
 
+<!--
+Prior to Kubernetes 1.32, you also need to add
+-->
+在 Kubernetes 1.32 之前,你还需要添加:
+
+```shell
+--feature-gates=MemoryManager=true
+```
+
 <!--
 Let us validate the configuration above:
+
+1. `kube-reserved + system-reserved + eviction-hard(default) = reserved-memory(0) + reserved-memory(1)`
+1. `4GiB + 1GiB + 100MiB = 3GiB + 2148MiB`
+1. `5120MiB + 100MiB = 3072MiB + 2148MiB`
+1. `5220MiB = 5220MiB` (which is correct)
 -->
 我们对上面的配置做一个检查:
 
 1. `kube-reserved + system-reserved + eviction-hard(default) = reserved-memory(0) + reserved-memory(1)`
 1. `4GiB + 1GiB + 100MiB = 3GiB + 2148MiB`
 1. `5120MiB + 100MiB = 3072MiB + 2148MiB`
-1. `5220MiB = 5220MiB` (这是对的)
+1. `5220MiB = 5220MiB`(这是对的)
 
 <!--
 ## Placing a Pod in the Guaranteed QoS class
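The validation list in this hunk is plain unit arithmetic; the sketch below redoes the same check in shell, assuming the default `eviction-hard` value of 100MiB that the list already relies on.

```shell
# Sketch only: re-check that kube-reserved + system-reserved + eviction-hard(default)
# equals the sum of the per-NUMA --reserved-memory values, all in MiB.
kube_reserved=$((4 * 1024))    # --kube-reserved memory=4Gi
system_reserved=$((1 * 1024))  # --system-reserved memory=1Gi
eviction_hard=100              # default eviction-hard memory.available=100Mi
reserved_numa0=$((3 * 1024))   # --reserved-memory '0:memory=3Gi'
reserved_numa1=2148            # --reserved-memory '1:memory=2148Mi'
echo "$((kube_reserved + system_reserved + eviction_hard)) MiB = $((reserved_numa0 + reserved_numa1)) MiB"
# Prints: 5220 MiB = 5220 MiB
```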
@@ -457,7 +519,7 @@ The Memory Manager provides specific topology hints to the Topology Manager for
 For pods in a QoS class other than `Guaranteed`, the Memory Manager provides default topology hints
 to the Topology Manager.
 -->
-## 将 Pod 放入 Guaranteed QoS 类 {#placing-a-pod-in-the-guaranteed-qos-class}
+## 将 Pod 放入 Guaranteed QoS 类 {#placing-a-pod-in-the-guaranteed-qos-class}
 
 若所选择的策略不是 `None`,则内存管理器会辨识处于 `Guaranteed` QoS 类中的 Pod。
 内存管理器为每个 `Guaranteed` Pod 向拓扑管理器提供拓扑提示信息。
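Whether a pod really landed in the `Guaranteed` QoS class, and therefore receives specific rather than default topology hints, can be confirmed from its status. A one-line sketch with a placeholder pod name:

```shell
# Sketch only: print the QoS class assigned to the pod (expect "Guaranteed").
kubectl get pod <pod-name> -o jsonpath='{.status.qosClass}'
```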
@@ -535,7 +597,7 @@ became rejected at a node:
 - Pod 状态 - 可表明拓扑亲和性错误
 - 系统日志 - 包含用来调试的有价值的信息,例如,关于所生成的提示信息
 - 状态文件 - 其中包含内存管理器内部状态的转储(包含[节点映射和内存映射][2])
-- 从 v1.22 开始,[设备插件资源 API](#device-plugin-resource-api)
+- 从 v1.22 开始,[设备插件资源 API](#device-plugin-resource-api)
   可以用来检索关于为容器预留的内存的信息
 
 <!--
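For the "state file" item in the troubleshooting list above, the Memory Manager checkpoints its internal state (node map and memory maps) on the node. A sketch for inspecting it is below; the path assumes the default kubelet root directory, so adjust it if `--root-dir` is set differently.

```shell
# Sketch only: dump the Memory Manager checkpoint on a node.
cat /var/lib/kubelet/memory_manager_state
```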
@@ -758,7 +820,7 @@ i.e., in the `"memory"` section of NUMA node `0` (`"free":0`) and NUMA node `1`
 So, the total amount of free "conventional" memory in this group is equal to `0 + 103739236352` bytes.
 -->
 例如,NUMA 分组中空闲的“常规”内存的总量可以通过将分组内所有 NUMA
-节点上空闲内存加和来计算,即将 NUMA 节点 `0` 和 NUMA 节点 `1` 的 `"memory"` 节
+节点上空闲内存加和来计算,即将 NUMA 节点 `0` 和 NUMA 节点 `1` 的 `"memory"` 节
 (分别是 `"free":0` 和 `"free": 103739236352`)相加,得到此分组中空闲的“常规”
 内存总量为 `0 + 103739236352` 字节。
 