
Commit b5472b7

Merge pull request #33529 from mengjiao-liu/sync-1.24-pod-overhead
[zh]Sync pod-overhead.md
2 parents 0239efe + 51cb915 commit b5472b7

File tree: 1 file changed (+52, -50 lines)

content/zh/docs/concepts/scheduling-eviction/pod-overhead.md

Lines changed: 52 additions & 50 deletions
@@ -18,17 +18,17 @@ weight: 30

<!-- overview -->

-{{< feature-state for_k8s_version="v1.18" state="beta" >}}
+{{< feature-state for_k8s_version="v1.24" state="stable" >}}

<!--
When you run a Pod on a Node, the Pod itself takes an amount of system resources. These
resources are additional to the resources needed to run the container(s) inside the Pod.
-_Pod Overhead_ is a feature for accounting for the resources consumed by the Pod infrastructure
-on top of the container requests & limits.
+In Kubernetes, _Pod Overhead_ is a way to account for the resources consumed by the Pod
+infrastructure on top of the container requests & limits.
-->

When you run a Pod on a node, the Pod itself takes up a significant amount of system resources. These are resources beyond those needed to run the container(s) inside the Pod.
-_Pod Overhead_ is a feature used to account for the resources that the Pod infrastructure consumes on top of the container requests and limits.
+In Kubernetes, _Pod Overhead_ is a way to account for the resources that the Pod infrastructure consumes on top of the container requests and limits.

<!-- body -->

@@ -53,43 +53,39 @@ the Pod cgroup, and when carrying out Pod eviction ranking.
Similarly, the kubelet also takes Pod overhead into account when sizing Pod cgroups and when carrying out Pod eviction ranking.

<!--
-## Enabling Pod Overhead {#set-up}
+## Configuring Pod overhead {#set-up}
-->
-## Enabling Pod overhead {#set-up}
+## Configuring Pod overhead {#set-up}

<!--
-You need to make sure that the `PodOverhead`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled (it is on by default as of 1.18)
-across your cluster, and a `RuntimeClass` is utilized which defines the `overhead` field.
+You need to make sure a `RuntimeClass` is utilized which defines the `overhead` field.
-->
-You need to make sure that the `PodOverhead` [feature gate](/zh/docs/reference/command-line-tools-reference/feature-gates/) is enabled across your cluster
-(it is on by default as of 1.18), and that a `RuntimeClass` that defines the `overhead` field is used.
+You need to make sure that a `RuntimeClass` that defines the `overhead` field is used.

<!--
## Usage example
-->
## Usage example

<!--
-To use the PodOverhead feature, you need a RuntimeClass that defines the `overhead` field. As
-an example, you could use the following RuntimeClass definition with a virtualizing container runtime
-that uses around 120MiB per Pod for the virtual machine and the guest OS:
+To work with Pod overhead, you need a RuntimeClass that defines the `overhead` field. As
+an example, you could use the following RuntimeClass definition with a virtualization container
+runtime that uses around 120MiB per Pod for the virtual machine and the guest OS:
-->
-To use the PodOverhead feature, you need a RuntimeClass that defines the `overhead` field.
+To work with Pod overhead, you need a RuntimeClass that defines the `overhead` field.
As an example, the following RuntimeClass definition involves a container runtime used for virtualization;
in this RuntimeClass, each Pod uses around 120MiB to run the virtual machine and the guest OS:

```yaml
----
-kind: RuntimeClass
apiVersion: node.k8s.io/v1
+kind: RuntimeClass
metadata:
-    name: kata-fc
+  name: kata-fc
handler: kata-fc
overhead:
-    podFixed:
-        memory: "120Mi"
-        cpu: "250m"
+  podFixed:
+    memory: "120Mi"
+    cpu: "250m"
```

<!--
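This hunk only shows the RuntimeClass; the checks further down this page query a workload named `test-pod`, whose manifest is outside the changed lines. As a rough sketch of such a workload (container names and images here are illustrative assumptions), a Pod that selects the `kata-fc` RuntimeClass could look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  runtimeClassName: kata-fc   # select the RuntimeClass that declares the overhead
  containers:
  - name: busybox-ctr         # illustrative container name/image
    image: busybox:1.28
    resources:
      limits:
        cpu: 500m             # container limits sum to 2000m CPU / 200Mi memory,
        memory: 100Mi         # matching the totals queried later on this page
  - name: nginx-ctr           # illustrative container name/image
    image: nginx
    resources:
      limits:
        cpu: 1500m
        memory: 100Mi
```

With limits of 500m/100Mi and 1500m/100Mi, the container totals line up with the 2000m CPU and 200MiB of memory referenced further down.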
@@ -141,8 +137,7 @@ the `overhead` defined in the RuntimeClass. If the field is already defined in the PodSpec,
<!--
After the RuntimeClass admission controller, you can check the updated PodSpec:
-->
-After the RuntimeClass admission controller runs, you can check the updated PodSpec:
+After the RuntimeClass admission controller has made its changes, you can check the updated PodSpec:
```bash
kubectl get pod test-pod -o jsonpath='{.spec.overhead}'
```
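The output of that query is not part of the changed lines; given the `overhead` declared in the `kata-fc` RuntimeClass above, it would plausibly look like:

```
map[cpu:250m memory:120Mi]
```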
@@ -171,8 +166,10 @@ requests and the overhead, then looks for a node that has 2.25 CPU and 320 MiB o
and then looks for a node that has 2.25 CPU and 320 MiB of memory available.

<!--
-Once a Pod is scheduled to a node, the kubelet on that node creates a new {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}
-for the Pod. It is within this pod that the underlying container runtime will create containers. -->
+Once a Pod is scheduled to a node, the kubelet on that node creates a new {{< glossary_tooltip
+text="cgroup" term_id="cgroup" >}} for the Pod. It is within this pod that the underlying
+container runtime will create containers.
+-->
Once a Pod is scheduled to a node, the kubelet on that node creates a new
{{< glossary_tooltip text="cgroup" term_id="cgroup" >}} for the Pod. It is within this pod that the underlying
container runtime will create containers.
@@ -189,8 +186,8 @@ Burstable QoS), the kubelet sets an upper limit for the Pod cgroup associated with that resource (such as `cpu.cfs_quota_us` for
CPU). That upper limit is based on the sum of the container limits defined in the PodSpec plus the `overhead`.

<!--
-For CPU, if the Pod is Guaranteed or Burstable QoS, the kubelet will set `cpu.shares` based on the sum of container
-requests plus the `overhead` defined in the PodSpec.
+For CPU, if the Pod is Guaranteed or Burstable QoS, the kubelet will set `cpu.shares` based on the
+sum of container requests plus the `overhead` defined in the PodSpec.
-->
For CPU, if the Pod's QoS is Guaranteed or Burstable, the kubelet sets `cpu.shares` based on the sum of the container
requests plus the `overhead` defined in the PodSpec.
@@ -199,6 +196,7 @@ plus the `overhead` defined in the PodSpec to set `cpu.shares`.
Looking at our example, verify the container requests for the workload:
-->
Looking at this example, verify the container requests for the workload:
+
```bash
kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'
```
@@ -207,6 +205,7 @@ kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'
The total container requests are 2000m CPU and 200MiB of memory:
-->
The total container requests are 2000m CPU and 200MiB of memory:
+
```
map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi]
```
@@ -215,18 +214,19 @@ map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi]
Check this against what is observed by the node:
-->
Check this against what is observed by the node:
+
```bash
kubectl describe node | grep test-pod -B2
```

<!--
-The output shows 2250m CPU and 320MiB of memory are requested, which includes PodOverhead:
--->
-The output shows that 2250m CPU and 320MiB of memory are requested, which includes PodOverhead.
+The output shows requests for 2250m CPU, and for 320MiB of memory. The requests include Pod overhead:
+-->
+The output shows requests for 2250m CPU and for 320MiB of memory. The requests include the Pod overhead:
```
Namespace    Name       CPU Requests  CPU Limits   Memory Requests  Memory Limits  AGE
---------    ----       ------------  ----------   ---------------  -------------  ---
default      test-pod   2250m (56%)   2250m (56%)  320Mi (1%)       320Mi (1%)     36m
```

<!--
@@ -235,17 +235,18 @@ The output shows 2250m CPU and 320MiB of memory are requested, which includes Po
## Verify Pod cgroup limits

<!--
-Check the Pod's memory cgroups on the node where the workload is running. In the following example, [`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)
+Check the Pod's memory cgroups on the node where the workload is running. In the following example,
+[`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)
is used on the node, which provides a CLI for CRI-compatible container runtimes. This is an
-advanced example to show PodOverhead behavior, and it is not expected that users should need to check
+advanced example to show Pod overhead behavior, and it is not expected that users should need to check
cgroups directly on the node.

First, on the particular node, determine the Pod identifier:
-->
Check the Pod's memory cgroups on the node where the workload is running. In the following example,
the command-line tool for CRI-compatible container runtimes,
[`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md), is used on that node.
-This is an advanced example to show PodOverhead behavior; users are not expected to need to check cgroups directly on the node.
+This is an advanced example to show Pod overhead behavior; users are not expected to need to check cgroups directly on the node.
First, on the particular node, determine the Pod identifier:

<!--
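The shell step that actually captures the Pod identifier sits between this hunk and the next and is untouched by the commit; on the node it would look roughly like this, assuming the workload is still named `test-pod`:

```bash
# Run on the node where test-pod was scheduled; `crictl pods -q` prints only the Pod sandbox ID.
POD_ID="$(sudo crictl pods --name test-pod -q)"
```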
@@ -275,13 +276,15 @@ sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath
The resulting cgroup path includes the Pod's `pause` container. The Pod level cgroup is one directory above.
-->
The resulting cgroup path includes the Pod's `pause` container. The Pod level cgroup is one directory above.
+
```
"cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a"
```

<!--
-In this specific case, the pod cgroup path is `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. Verify the Pod level cgroup setting for memory:
--->
+In this specific case, the pod cgroup path is `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`.
+Verify the Pod level cgroup setting for memory:
+-->
In this example, the Pod's cgroup path is `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`.
Verify the Pod level cgroup setting for memory:
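The command that produces the value shown in the next hunk also falls between the hunks and is unchanged by this commit; on a cgroup v1 node it is roughly:

```bash
# Read the memory limit set on the Pod-level cgroup identified above (cgroup v1 layout assumed).
cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes
```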

@@ -300,6 +303,7 @@ In this specific case, the pod cgroup path is `kubepods/podd7f4b509-cf94-4951-94
This is 320 MiB, as expected:
-->
As expected, this is 320 MiB:
+
```
335544320
```
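For reference, 335544320 bytes is exactly 320 × 1024 × 1024, that is, the 200MiB of container limits plus the 120MiB of memory overhead.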
@@ -310,24 +314,22 @@ This is 320 MiB, as expected:
### Observability

<!--
-A `kube_pod_overhead` metric is available in [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
-to help identify when PodOverhead is being utilized and to help observe stability of workloads
-running with a defined Overhead. This functionality is not available in the 1.9 release of
-kube-state-metrics, but is expected in a following release. Users will need to build kube-state-metrics
-from source in the meantime.
+Some `kube_pod_overhead_*` metrics are available in [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
+to help identify when Pod overhead is being utilized and to help observe stability of workloads
+running with a defined overhead.
-->
In [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics), you can use the
-`kube_pod_overhead` metric to help identify when PodOverhead is being used,
+`kube_pod_overhead_*` metrics to help identify when Pod overhead is being used,
as well as to help observe the stability of workloads running with a defined overhead.
This functionality is not available in the 1.9 release of kube-state-metrics, but is expected in a following release.
Until then, users will need to build kube-state-metrics from source.
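The page does not show how to read these metrics. A minimal sketch, assuming kube-state-metrics is deployed as a `kube-state-metrics` Service in `kube-system` and serves metrics on its default port 8080 (all of these are assumptions; adjust for your deployment), might be:

```bash
# Forward the kube-state-metrics metrics port locally, then filter for the Pod overhead series.
kubectl -n kube-system port-forward svc/kube-state-metrics 8080:8080 &
curl -s http://localhost:8080/metrics | grep '^kube_pod_overhead'
```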

## {{% heading "whatsnext" %}}

<!--
-* [RuntimeClass](/docs/concepts/containers/runtime-class/)
-* [PodOverhead Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)
+* Learn more about [RuntimeClass](/docs/concepts/containers/runtime-class/)
+* Read the [PodOverhead Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)
+enhancement proposal for extra context
-->
-* [RuntimeClass](/zh/docs/concepts/containers/runtime-class/)
-* [PodOverhead design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)
+* Learn more about [RuntimeClass](/zh/docs/concepts/containers/runtime-class/)
+* Read the [PodOverhead design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead) enhancement proposal for extra context
