Commit 3b23a9a

Merge pull request #36094 from mengjiao-liu/fix-manage-resources-containers-page-zh

[zh-cn] Resync manage-resources-containers page

2 parents d3d497b + d03daa8
File tree: 1 file changed (+49 −47 lines)
content/zh-cn/docs/concepts/configuration/manage-resources-containers.md

Lines changed: 49 additions & 47 deletions
@@ -14,7 +14,7 @@ title: Resource Management for Pods and Containers
 content_type: concept
 weight: 40
 feature:
-  title: Automatic binpacking
+  title: Automatic bin packing
   description: >
     Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability.
     Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
@@ -155,7 +155,7 @@ Kubernetes API 服务器读取和修改的对象。
 For each container, you can specify resource limits and requests,
 including the following:
 -->
-## Pod 和 容器的资源请求和约束
+## Pod 和 容器的资源请求和约束 {#resource-requests-and-limits-of-pod-and-container}

 针对每个容器,你都可以指定其资源约束和请求,包括如下选项:

@@ -170,7 +170,6 @@ including the following:
 Although you can only specify requests and limits for individual containers,
 it is also useful to think about the overall resource requests and limits for
 a Pod.
-A
 For a particular resource, a *Pod resource request/limit* is the sum of the
 resource requests/limits of that type for each container in the Pod.
 -->
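The hunk above drops a stray "A" from the passage defining a Pod-level resource request/limit as the per-resource sum over the Pod's containers. That summing rule can be sketched as follows (the two container specs are hypothetical, for illustration only):

```python
# Pod-level request/limit for a resource = sum over the Pod's containers.
# Hypothetical container values, in CPU millicores and MiB of memory.
containers = [
    {"requests": {"cpu_m": 250, "memory_mi": 64},
     "limits":   {"cpu_m": 500, "memory_mi": 128}},
    {"requests": {"cpu_m": 250, "memory_mi": 64},
     "limits":   {"cpu_m": 500, "memory_mi": 128}},
]

def pod_total(containers, kind, resource):
    """Sum one resource's requests or limits across all containers in a Pod."""
    return sum(c[kind][resource] for c in containers)

print(pod_total(containers, "requests", "cpu_m"))    # 500 (millicores)
print(pod_total(containers, "limits", "memory_mi"))  # 256 (MiB)
```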
@@ -184,7 +183,7 @@ resource requests/limits of that type for each container in the Pod.

 Limits and requests for CPU resources are measured in *cpu* units.
 In Kubernetes, 1 CPU unit is equivalent to **1 physical CPU core**,
-or **1 virtual core**, depending on whether the node is a physical host
+or **1 virtual core**, depending on whether the node is a physical host
 or a virtual machine running inside a physical machine.
 -->
 ## Kubernetes 中的资源单位 {#resource-units-in-kubernetes}
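The section touched by this hunk measures CPU in *cpu* units; elsewhere the same page also uses the `m` (millicpu) suffix, so `500m` means half a CPU unit. A small conversion sketch (the helper name is made up for illustration):

```python
def parse_cpu(quantity: str) -> float:
    """Convert a CPU quantity string to CPU units: '500m' -> 0.5, '2' -> 2.0."""
    if quantity.endswith("m"):              # millicpu: thousandths of a CPU unit
        return int(quantity[:-1]) / 1000
    return float(quantity)

print(parse_cpu("500m"))  # 0.5
print(parse_cpu("2"))     # 2.0
```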
@@ -316,7 +315,7 @@ a Pod on a node if the capacity check fails. This protects against a resource
 shortage on a node when resource usage later increases, for example, during a
 daily peak in request rate.
 -->
-## 带资源请求的 Pod 如何调度
+## 带资源请求的 Pod 如何调度 {#how-pods-with-resource-limits-are-run}

 当你创建一个 Pod 时,Kubernetes 调度程序将为 Pod 选择一个节点。
 每个节点对每种资源类型都有一个容量上限:可为 Pod 提供的 CPU 和内存量。
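Per the passage in this hunk, the scheduler places a Pod on a node only if every resource request fits within the node's remaining (unreserved) capacity; actual usage is not considered. A toy version of that capacity check, with made-up numbers:

```python
def fits(node_free: dict, pod_requests: dict) -> bool:
    """True if every requested resource fits in the node's unreserved capacity.

    The scheduler compares requests against capacity minus already-reserved
    requests, not against measured usage.
    """
    return all(pod_requests[r] <= node_free.get(r, 0) for r in pod_requests)

node_free = {"cpu_m": 1000, "memory_mi": 512}  # hypothetical remaining capacity
print(fits(node_free, {"cpu_m": 800, "memory_mi": 256}))   # True
print(fits(node_free, {"cpu_m": 1200, "memory_mi": 256}))  # False
```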
@@ -328,7 +327,7 @@ daily peak in request rate.
 <!--
 ## How Kubernetes applies resource requests and limits {#how-pods-with-resource-limits-are-run}

-When the kubelet starts a container of a Pod, the kubelet passes that container's
+When the kubelet starts a container as part of a Pod, the kubelet passes that container's
 requests and limits for memory and CPU to the container runtime.

 On Linux, the container runtime typically configures
@@ -337,7 +336,7 @@ limits you defined.
 -->
 ## Kubernetes 应用资源请求与约束的方式 {#how-pods-with-resource-limits-are-run}

-当 kubelet 启动 Pod 中的容器时,它会将容器的 CPU 和内存请求与约束信息传递给容器运行时。
+当 kubelet 将容器作为 Pod 的一部分启动时,它会将容器的 CPU 和内存请求与约束信息传递给容器运行时。

 在 Linux 系统上,容器运行时通常会配置内核
 {{< glossary_tooltip text="CGroups" term_id="cgroup" >}},负责应用并实施所定义的请求。
@@ -414,7 +413,7 @@ are available in your cluster, then Pod resource usage can be retrieved either
 from the [Metrics API](/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-api)
 directly or from your monitoring tools.
 -->
-## 监控计算和内存资源用量
+## 监控计算和内存资源用量 {#monitoring-compute-memory-resource-usage}

 kubelet 会将 Pod 的资源使用情况作为 Pod
 [`status`](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status)
@@ -433,7 +432,7 @@ locally-attached writeable devices or, sometimes, by RAM.

 Pods use ephemeral local storage for scratch space, caching, and for logs.
 The kubelet can provide scratch space to Pods using local ephemeral storage to
-mount [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
+mount [`emptyDir`](/docs/concepts/storage/volumes/#emptydir)
 {{< glossary_tooltip term_id="volume" text="volumes" >}} into containers.
 -->
 ## 本地临时存储 {#local-ephemeral-storage}
@@ -490,7 +489,7 @@ The kubelet also writes
 [node-level container logs](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)
 and treats these similarly to ephemeral local storage.
 -->
-### 本地临时性存储的配置
+### 本地临时性存储的配置 {#configurations-for-local-ephemeral-storage}

 Kubernetes 有两种方式支持节点上配置本地临时性存储:

@@ -606,12 +605,12 @@ container of a Pod can specify either or both of the following:
 * `spec.containers[].resources.limits.ephemeral-storage`
 * `spec.containers[].resources.requests.ephemeral-storage`

-Limits and requests for `ephemeral-storage` are measured in quantities.
+Limits and requests for `ephemeral-storage` are measured in byte quantities.
 You can express storage as a plain integer or as a fixed-point number using one of these suffixes:
-E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
-Mi, Ki. For example, the following represent roughly the same value:
+E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
+Mi, Ki. For example, the following quantities all represent roughly the same value:
 -->
-### 为本地临时性存储设置请求和约束值
+### 为本地临时性存储设置请求和约束值 {#setting-requests-and-limits-for-local-ephemeral-storage}

 你可以使用 `ephemeral-storage` 来管理本地临时性存储。
 Pod 中的每个容器可以设置以下属性:
@@ -620,7 +619,7 @@ Pod 中的每个容器可以设置以下属性:
 * `spec.containers[].resources.requests.ephemeral-storage`

 `ephemeral-storage` 的请求和约束值是按量纲计量的。你可以使用一般整数或者定点数字
-加上下面的后缀来表达存储量:E、P、T、G、M、K。
+加上下面的后缀来表达存储量:E、P、T、G、M、k。
 你也可以使用对应的 2 的幂级数来表达:Ei、Pi、Ti、Gi、Mi、Ki。
 例如,下面的表达式所表达的大致是同一个值:
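The suffix fix in this hunk (decimal `k` vs binary `Ki`) matters because decimal and power-of-two suffixes denote different byte counts. A sketch of the parsing rule, checked against the page's own example quantities (`128974848`, `129e6`, `129M`, `123Mi`); the helper is illustrative, not the real Kubernetes quantity parser:

```python
# Decimal suffixes scale by powers of 10, binary ("i") suffixes by powers of 2.
DECIMAL = {"k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12, "P": 10**15, "E": 10**18}
BINARY = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40, "Pi": 2**50, "Ei": 2**60}

def parse_quantity(q: str) -> int:
    """Convert a storage quantity string to bytes (binary suffixes tried first)."""
    for suffix, factor in {**BINARY, **DECIMAL}.items():
        if q.endswith(suffix):
            return int(float(q[:-len(suffix)]) * factor)
    return int(float(q))  # plain integer or scientific notation, e.g. 129e6

for q in ("128974848", "129e6", "129M", "123Mi"):
    print(q, "->", parse_quantity(q), "bytes")
```

Note that `123Mi` is exactly 128974848 bytes, while `129M` is 129000000 bytes, which is why the page says the four forms are only "roughly" the same value.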

@@ -641,8 +640,8 @@ or 400 megabytes (`400M`).

 <!--
 In the following example, the Pod has two containers. Each container has a request of
 2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral
-storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and a
-limit of 8GiB of local ephemeral storage.
+storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and
+a limit of 8GiB of local ephemeral storage.
 -->

 在下面的例子中,Pod 包含两个容器。每个容器请求 2 GiB 大小的本地临时性存储。
@@ -692,7 +691,7 @@ For more information, see
 The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node.
 -->

-### 带临时性存储的 Pods 的调度行为
+### 带临时性存储的 Pods 的调度行为 {#how-pods-with-ephemeral-storage-requests-are-scheduled}

 当你创建一个 Pod 时,Kubernetes 调度器会为 Pod 选择一个节点来运行之。
 每个节点都有一个本地临时性存储的上限,是其可提供给 Pods 使用的总量。
@@ -870,6 +869,7 @@ If you want to use project quotas, you should:
   has project quotas enabled. All XFS filesystems support project quotas.
   For ext4 filesystems, you need to enable the project quota tracking feature
   while the filesystem is not mounted.
+
   ```bash
   # For ext4, with /dev/block-device not mounted
   sudo tune2fs -O project -Q prjquota /dev/block-device
@@ -962,11 +962,11 @@ asynchronously by the kubelet.
 kubelet 会异步地对 `status.allocatable` 字段执行自动更新操作,使之包含新资源。

 <!--
-Because the scheduler uses the node `status.allocatable` value when
-evaluating Pod fitness, the shceduler only takes account of the new value after
-the asynchronous update. There may be a short delay between patching the
+Because the scheduler uses the node's `status.allocatable` value when
+evaluating Pod fitness, the scheduler only takes account of the new value after
+that asynchronous update. There may be a short delay between patching the
 node capacity with a new resource and the time when the first Pod that requests
-the resource to be scheduled on that node.
+the resource can be scheduled on that node.
 -->
 由于调度器在评估 Pod 是否适合在某节点上执行时会使用节点的 `status.allocatable` 值,
 调度器只会考虑异步更新之后的新值。
@@ -997,6 +997,7 @@ http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
 In the preceding request, `~1` is the encoding for the character `/`
 in the patch path. The operation path value in JSON-Patch is interpreted as a
 JSON-Pointer. For more details, see
+[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).
 {{< /note >}}
 -->
 {{< note >}}
{{< note >}}
@@ -1013,14 +1014,14 @@ Cluster-level extended resources are not tied to nodes. They are usually managed
 by scheduler extenders, which handle the resource consumption and resource quota.

 You can specify the extended resources that are handled by scheduler extenders
-in [scheduler policy configuration](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
+in [scheduler configuration](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
 -->
 #### 集群层面的扩展资源 {#cluster-level-extended-resources}

 集群层面的扩展资源并不绑定到具体节点。
 它们通常由调度器扩展程序(Scheduler Extenders)管理,这些程序处理资源消耗和资源配额。

-你可以在[调度器策略配置](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)
+你可以在[调度器配置](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)
 中指定由调度器扩展程序处理的扩展资源。

 <!--
@@ -1158,12 +1159,12 @@ to limit the number of PIDs that a given Pod can consume. See
 If the scheduler cannot find any node where a Pod can fit, the Pod remains
 unscheduled until a place can be found. An
 [Event](/docs/reference/kubernetes-api/cluster-resources/event-v1/) is produced
-each time the scheduler fails to find a place for the Pod, You can use `kubectl`
+each time the scheduler fails to find a place for the Pod. You can use `kubectl`
 to view the events for a Pod; for example:
 -->
-## 疑难解答
+## 疑难解答 {#troubleshooting}

-### 我的 Pod 处于悬决状态且事件信息显示 `FailedScheduling`
+### 我的 Pod 处于悬决状态且事件信息显示 `FailedScheduling` {#my-pods-are-pending-with-event-message-failedscheduling}

 如果调度器找不到该 Pod 可以匹配的任何节点,则该 Pod 将保持未被调度状态,
 直到找到一个可以被调度到的位置。每当调度器找不到 Pod 可以调度的地方时,
@@ -1240,22 +1241,22 @@ Allocated resources:
   (Total limits may be over 100 percent, i.e., overcommitted.)
   CPU Requests  CPU Limits  Memory Requests  Memory Limits
   ------------  ----------  ---------------  -------------
-  680m (34%)    400m (20%)  920Mi (12%)      1070Mi (14%)
+  680m (34%)    400m (20%)  920Mi (11%)      1070Mi (13%)
 ```

 <!--
-In the preceding output, you can see that if a Pod requests more than 1120m
-CPUs or 6.23Gi of memory, it will not fit on the node.
+In the preceding output, you can see that if a Pod requests more than 1.120 CPUs
+or more than 6.23Gi of memory, that Pod will not fit on the node.

 By looking at the "Pods" section, you can see which Pods are taking up space on
 the node.
 -->
-在上面的输出中,你可以看到如果 Pod 请求超过 1120m CPU 或者 6.23Gi 内存,节点将无法满足。
+在上面的输出中,你可以看到如果 Pod 请求超过 1.120 CPU 或者 6.23Gi 内存,节点将无法满足。

 通过查看 "Pods" 部分,你将看到哪些 Pod 占用了节点上的资源。

 <!--
-The amount of resources available to Pods is less than the node capacity, because
+The amount of resources available to Pods is less than the node capacity because
 system daemons use a portion of the available resources. Within the Kubernetes API,
 each Node has a `.status.allocatable` field
 (see [NodeStatus](/docs/reference/kubernetes-api/cluster-resources/node-v1/#NodeStatus)
@@ -1286,7 +1287,7 @@ prevent one team from using so much of any resource that this over-use affects o

 You should also consider what access you grant to that namespace:
 **full** write access to a namespace allows someone with that access to remove any
-resource, include a configured ResourceQuota.
+resource, including a configured ResourceQuota.
 -->
 你可以配置[资源配额](/zh-cn/docs/concepts/policy/resource-quotas/)功能特性以限制每个名字空间可以使用的资源总量。
 当某名字空间中存在 ResourceQuota 时,Kubernetes 会在该名字空间中的对象强制实施配额。
@@ -1305,7 +1306,7 @@ whether a Container is being killed because it is hitting a resource limit, call
 `kubectl describe pod` on the Pod of interest:
 -->

-### 我的容器被终止了
+### 我的容器被终止了 {#my-container-is-terminated}

 你的容器可能因为资源紧张而被终止。要查看容器是否因为遇到资源限制而被杀死,
 请针对相关的 Pod 执行 `kubectl describe pod`
@@ -1331,18 +1332,19 @@ Message:
 IP:             10.244.2.75
 Containers:
   simmemleak:
-    Image:  saadali/simmemleak
+    Image:  saadali/simmemleak:latest
     Limits:
-      cpu:          100m
-      memory:       50Mi
-    State:          Running
-      Started:      Tue, 07 Jul 2015 12:54:41 -0700
-    Last Termination State: Terminated
-      Exit Code:    1
-      Started:      Fri, 07 Jul 2015 12:54:30 -0700
-      Finished:     Fri, 07 Jul 2015 12:54:33 -0700
-    Ready:          False
-    Restart Count:  5
+      cpu:          100m
+      memory:       50Mi
+    State:          Running
+      Started:      Tue, 07 Jul 2019 12:54:41 -0700
+    Last State:     Terminated
+      Reason:       OOMKilled
+      Exit Code:    137
+      Started:      Fri, 07 Jul 2019 12:54:30 -0700
+      Finished:     Fri, 07 Jul 2019 12:54:33 -0700
+    Ready:          False
+    Restart Count:  5
 Conditions:
   Type      Status
   Ready     False
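The updated output in this hunk pairs `Reason: OOMKilled` with `Exit Code: 137`. By common Unix shell convention, a process killed by a signal reports an exit status of 128 plus the signal number, and 137 − 128 = 9 is SIGKILL, which the kernel's OOM killer delivers. A quick check of that arithmetic:

```python
import signal

exit_code = 137
assert exit_code > 128                 # exit status above 128: death by signal
sig = signal.Signals(exit_code - 128)  # 137 - 128 = 9
print(sig.name)  # SIGKILL
```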
@@ -1381,13 +1383,13 @@ memory limit (and possibly request) for that container.
 * Get hands-on experience [assigning CPU resources to containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
 * Read how the API reference defines a [container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
   and its [resource requirements](/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)
-* Read about [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
+* Read about [project quotas](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F) in XFS
 * Read more about the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
 -->
 * 获取[分配内存资源给容器和 Pod](/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/) 的实践经验
 * 获取[分配 CPU 资源给容器和 Pod](/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource/) 的实践经验
 * 阅读 API 参考中 [Container](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
   和其[资源请求](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)定义。
-* 阅读 XFS 中[配额](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html)的文档
+* 阅读 XFS 中[配额](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F)的文档
 * 进一步阅读 [kube-scheduler 配置参考 (v1beta3)](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)
