@@ -14,7 +14,7 @@ title: Resource Management for Pods and Containers
 content_type: concept
 weight: 40
 feature:
-  title: Automatic binpacking
+  title: Automatic bin packing
   description: >
     Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability.
     Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
@@ -155,7 +155,7 @@ Kubernetes API 服务器读取和修改的对象。
 For each container, you can specify resource limits and requests,
 including the following:
 -->
-## Pod 和容器的资源请求和约束
+## Pod 和容器的资源请求和约束 {#resource-requests-and-limits-of-pod-and-container}
 
 针对每个容器,你都可以指定其资源约束和请求,包括如下选项:
 
@@ -170,7 +170,6 @@ including the following:
 Although you can only specify requests and limits for individual containers,
 it is also useful to think about the overall resource requests and limits for
 a Pod.
-A
 For a particular resource, a *Pod resource request/limit* is the sum of the
 resource requests/limits of that type for each container in the Pod.
 -->
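The summation rule in the hunk above can be sketched in a few lines. This is an illustrative snippet, not Kubernetes code; the container values are made-up examples:

```python
# Sketch: a Pod's resource request/limit for a given resource type is the
# sum of that type's requests/limits across all of the Pod's containers.
# The containers below are hypothetical examples.
containers = [
    {"requests": {"cpu": 0.25, "memory": 64 * 2**20},
     "limits":   {"cpu": 0.5,  "memory": 128 * 2**20}},
    {"requests": {"cpu": 0.25, "memory": 64 * 2**20},
     "limits":   {"cpu": 0.5,  "memory": 128 * 2**20}},
]

def pod_total(kind: str, resource: str):
    # kind is "requests" or "limits"; a container that does not set the
    # field contributes 0 to the sum
    return sum(c.get(kind, {}).get(resource, 0) for c in containers)

print(pod_total("requests", "cpu"))   # 0.5
print(pod_total("limits", "memory"))  # 268435456 (256Mi)
```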
@@ -184,7 +183,7 @@ resource requests/limits of that type for each container in the Pod.
 
 Limits and requests for CPU resources are measured in *cpu* units.
 In Kubernetes, 1 CPU unit is equivalent to **1 physical CPU core**,
-or **1 virtual core**, depending on whether the node is a physical host
+or **1 virtual core**, depending on whether the node is a physical host
 or a virtual machine running inside a physical machine.
 -->
 ## Kubernetes 中的资源单位 {#resource-units-in-kubernetes}
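As a small sketch of the CPU units described above: fractional CPUs can be written either as decimals or with the `m` (millicpu) suffix, where `1000m` equals 1 CPU. This is a minimal illustrative parser, not the apimachinery `resource.Quantity` implementation:

```python
# Sketch: parse a Kubernetes CPU quantity into a number of CPU units.
# "500m" (500 millicpu) and "0.5" denote the same amount of CPU.
def parse_cpu(quantity: str) -> float:
    if quantity.endswith("m"):
        # millicpu: 1000m == 1 CPU unit
        return int(quantity[:-1]) / 1000
    return float(quantity)

print(parse_cpu("500m"))  # 0.5
print(parse_cpu("0.5"))   # 0.5  -- same amount of CPU
print(parse_cpu("2"))     # 2.0  -- two full cores
```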
@@ -316,7 +315,7 @@ a Pod on a node if the capacity check fails. This protects against a resource
 shortage on a node when resource usage later increases, for example, during a
 daily peak in request rate.
 -->
-## 带资源请求的 Pod 如何调度
+## 带资源请求的 Pod 如何调度 {#how-pods-with-resource-requests-are-scheduled}
 
 当你创建一个 Pod 时,Kubernetes 调度程序将为 Pod 选择一个节点。
 每个节点对每种资源类型都有一个容量上限:可为 Pod 提供的 CPU 和内存量。
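The capacity check mentioned in the hunk above can be sketched roughly as follows. This is an illustrative simplification (the real scheduler considers many more factors); all values below are hypothetical:

```python
# Sketch of the scheduler's capacity check: a Pod fits on a node only if,
# for every resource type, the sum of the requests of Pods already scheduled
# there plus the new Pod's request stays within the node's capacity.
def fits(node_capacity: dict, scheduled_requests: list, pod_request: dict) -> bool:
    for resource, capacity in node_capacity.items():
        used = sum(r.get(resource, 0) for r in scheduled_requests)
        if used + pod_request.get(resource, 0) > capacity:
            return False
    return True

node = {"cpu": 2.0, "memory": 4 * 2**30}          # hypothetical allocatable
running = [{"cpu": 1.5, "memory": 2 * 2**30}]     # requests already placed
print(fits(node, running, {"cpu": 0.4}))  # True  (1.5 + 0.4 <= 2.0)
print(fits(node, running, {"cpu": 0.6}))  # False (1.5 + 0.6 >  2.0)
```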
@@ -328,7 +327,7 @@ daily peak in request rate.
 <!--
 ## How Kubernetes applies resource requests and limits {#how-pods-with-resource-limits-are-run}
 
-When the kubelet starts a container of a Pod, the kubelet passes that container's
+When the kubelet starts a container as part of a Pod, the kubelet passes that container's
 requests and limits for memory and CPU to the container runtime.
 
 On Linux, the container runtime typically configures
@@ -337,7 +336,7 @@ limits you defined.
 -->
 ## Kubernetes 应用资源请求与约束的方式 {#how-pods-with-resource-limits-are-run}
 
-当 kubelet 启动 Pod 中的容器时,它会将容器的 CPU 和内存请求与约束信息传递给容器运行时。
+当 kubelet 将容器作为 Pod 的一部分启动时,它会将容器的 CPU 和内存请求与约束信息传递给容器运行时。
 
 在 Linux 系统上,容器运行时通常会配置内核
 {{< glossary_tooltip text="CGroups" term_id="cgroup" >}},负责应用并实施所定义的请求。
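As a sketch of how a CPU limit typically reaches the kernel via cgroups: with the CFS bandwidth controller, a limit maps to a quota over a scheduling period (commonly 100ms). The mapping below is illustrative; the actual values are written by the container runtime, not by user code:

```python
# Sketch: translate a CPU limit into a cgroup v2 `cpu.max` value
# ("<quota_us> <period_us>"), assuming the common 100ms period.
PERIOD_US = 100_000  # 100ms, the usual default period

def cpu_max(limit_cpus: float) -> str:
    # quota is the CPU time the cgroup may use per period
    quota_us = int(limit_cpus * PERIOD_US)
    return f"{quota_us} {PERIOD_US}"

print(cpu_max(0.5))  # 50000 100000   -- half a core
print(cpu_max(2))    # 200000 100000  -- two full cores
```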
@@ -414,7 +413,7 @@ are available in your cluster, then Pod resource usage can be retrieved either
 from the [Metrics API](/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-api)
 directly or from your monitoring tools.
 -->
-## 监控计算和内存资源用量
+## 监控计算和内存资源用量 {#monitoring-compute-memory-resource-usage}
 
 kubelet 会将 Pod 的资源使用情况作为 Pod
 [`status`](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status)
@@ -433,7 +432,7 @@ locally-attached writeable devices or, sometimes, by RAM.
 
 Pods use ephemeral local storage for scratch space, caching, and for logs.
 The kubelet can provide scratch space to Pods using local ephemeral storage to
-mount [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
+mount [`emptyDir`](/docs/concepts/storage/volumes/#emptydir)
 {{< glossary_tooltip term_id="volume" text="volumes" >}} into containers.
 -->
 ## 本地临时存储 {#local-ephemeral-storage}
@@ -490,7 +489,7 @@ The kubelet also writes
 [node-level container logs](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)
 and treats these similarly to ephemeral local storage.
 -->
-### 本地临时性存储的配置
+### 本地临时性存储的配置 {#configurations-for-local-ephemeral-storage}
 
 Kubernetes 有两种方式支持节点上配置本地临时性存储:
@@ -606,12 +605,12 @@ container of a Pod can specify either or both of the following:
 * `spec.containers[].resources.limits.ephemeral-storage`
 * `spec.containers[].resources.requests.ephemeral-storage`
 
-Limits and requests for `ephemeral-storage` are measured in quantities.
+Limits and requests for `ephemeral-storage` are measured in byte quantities.
 You can express storage as a plain integer or as a fixed-point number using one of these suffixes:
-E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
-Mi, Ki. For example, the following represent roughly the same value:
+E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
+Mi, Ki. For example, the following quantities all represent roughly the same value:
 -->
-### 为本地临时性存储设置请求和约束值
+### 为本地临时性存储设置请求和约束值 {#setting-requests-and-limits-for-local-ephemeral-storage}
 
 你可以使用 `ephemeral-storage` 来管理本地临时性存储。
 Pod 中的每个容器可以设置以下属性:
@@ -620,7 +619,7 @@ Pod 中的每个容器可以设置以下属性:
 * `spec.containers[].resources.requests.ephemeral-storage`
 
 `ephemeral-storage` 的请求和约束值是按量纲计量的。你可以使用一般整数或者定点数字
-加上下面的后缀来表达存储量:E、P、T、G、M、K。
+加上下面的后缀来表达存储量:E、P、T、G、M、k。
 你也可以使用对应的 2 的幂级数来表达:Ei、Pi、Ti、Gi、Mi、Ki。
 例如,下面的表达式所表达的大致是同一个值:
@@ -641,8 +640,8 @@ or 400 megabytes (`400M`).
 <!--
 In the following example, the Pod has two containers. Each container has a request of
 2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral
-storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and a
-limit of 8GiB of local ephemeral storage.
+storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and
+a limit of 8GiB of local ephemeral storage.
 -->
 
 在下面的例子中,Pod 包含两个容器。每个容器请求 2 GiB 大小的本地临时性存储。
@@ -692,7 +691,7 @@ For more information, see
 The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node.
 -->
 
-### 带临时性存储的 Pods 的调度行为
+### 带临时性存储的 Pods 的调度行为 {#how-pods-with-ephemeral-storage-requests-are-scheduled}
 
 当你创建一个 Pod 时,Kubernetes 调度器会为 Pod 选择一个节点来运行之。
 每个节点都有一个本地临时性存储的上限,是其可提供给 Pods 使用的总量。
@@ -870,6 +869,7 @@ If you want to use project quotas, you should:
   has project quotas enabled. All XFS filesystems support project quotas.
   For ext4 filesystems, you need to enable the project quota tracking feature
   while the filesystem is not mounted.
+
   ```bash
   # For ext4, with /dev/block-device not mounted
   sudo tune2fs -O project -Q prjquota /dev/block-device
@@ -962,11 +962,11 @@ asynchronously by the kubelet.
 kubelet 会异步地对 `status.allocatable` 字段执行自动更新操作,使之包含新资源。
 
 <!--
-Because the scheduler uses the node `status.allocatable` value when
-evaluating Pod fitness, the shceduler only takes account of the new value after
-the asynchronous update. There may be a short delay between patching the
+Because the scheduler uses the node's `status.allocatable` value when
+evaluating Pod fitness, the scheduler only takes account of the new value after
+that asynchronous update. There may be a short delay between patching the
 node capacity with a new resource and the time when the first Pod that requests
-the resource to be scheduled on that node.
+the resource can be scheduled on that node.
 -->
 由于调度器在评估 Pod 是否适合在某节点上执行时会使用节点的 `status.allocatable` 值,
 调度器只会考虑异步更新之后的新值。
@@ -997,6 +997,7 @@ http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
 In the preceding request, `~1` is the encoding for the character `/`
 in the patch path. The operation path value in JSON-Patch is interpreted as a
 JSON-Pointer. For more details, see
+[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).
 {{< /note >}}
 -->
 {{< note >}}
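The `~1` encoding mentioned in the note above follows JSON-Pointer token escaping (RFC 6901, section 3): `~` becomes `~0` and `/` becomes `~1`, applied in that order. A minimal sketch:

```python
# Sketch of JSON-Pointer reference-token escaping (RFC 6901 section 3),
# as used for extended resource names like "example.com/dongle" in a
# JSON-Patch path: escape "~" first so the "~1" produced for "/" is
# not re-escaped.
def escape_pointer_token(token: str) -> str:
    return token.replace("~", "~0").replace("/", "~1")

print(escape_pointer_token("example.com/dongle"))
# example.com~1dongle -> usable in a path like /status/capacity/example.com~1dongle
```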
@@ -1013,14 +1014,14 @@ Cluster-level extended resources are not tied to nodes. They are usually managed
 by scheduler extenders, which handle the resource consumption and resource quota.
 
 You can specify the extended resources that are handled by scheduler extenders
-in [scheduler policy configuration](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
+in [scheduler configuration](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
 -->
 #### 集群层面的扩展资源 {#cluster-level-extended-resources}
 
 集群层面的扩展资源并不绑定到具体节点。
 它们通常由调度器扩展程序(Scheduler Extenders)管理,这些程序处理资源消耗和资源配额。
 
-你可以在[调度器策略配置](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)
+你可以在[调度器配置](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)
 中指定由调度器扩展程序处理的扩展资源。
 
 <!--
@@ -1158,12 +1159,12 @@ to limit the number of PIDs that a given Pod can consume. See
 If the scheduler cannot find any node where a Pod can fit, the Pod remains
 unscheduled until a place can be found. An
 [Event](/docs/reference/kubernetes-api/cluster-resources/event-v1/) is produced
-each time the scheduler fails to find a place for the Pod, You can use `kubectl`
+each time the scheduler fails to find a place for the Pod. You can use `kubectl`
 to view the events for a Pod; for example:
 -->
-## 疑难解答
+## 疑难解答 {#troubleshooting}
 
-### 我的 Pod 处于悬决状态且事件信息显示 `FailedScheduling`
+### 我的 Pod 处于悬决状态且事件信息显示 `FailedScheduling` {#my-pods-are-pending-with-event-message-failedscheduling}
 
 如果调度器找不到该 Pod 可以匹配的任何节点,则该 Pod 将保持未被调度状态,
 直到找到一个可以被调度到的位置。每当调度器找不到 Pod 可以调度的地方时,
@@ -1240,22 +1241,22 @@ Allocated resources:
   (Total limits may be over 100 percent, i.e., overcommitted.)
   CPU Requests  CPU Limits  Memory Requests  Memory Limits
   ------------  ----------  ---------------  -------------
-  680m (34%)    400m (20%)  920Mi (12%)      1070Mi (14%)
+  680m (34%)    400m (20%)  920Mi (11%)      1070Mi (13%)
 ```
 
 <!--
-In the preceding output, you can see that if a Pod requests more than 1120m
-CPUs or 6.23Gi of memory, it will not fit on the node.
+In the preceding output, you can see that if a Pod requests more than 1.120 CPUs
+or more than 6.23Gi of memory, that Pod will not fit on the node.
 
 By looking at the "Pods" section, you can see which Pods are taking up space on
 the node.
 -->
-在上面的输出中,你可以看到如果 Pod 请求超过 1120m CPU 或者 6.23Gi 内存,节点将无法满足。
+在上面的输出中,你可以看到如果 Pod 请求超过 1.120 CPU 或者 6.23Gi 内存,节点将无法满足。
 
 通过查看 "Pods" 部分,你将看到哪些 Pod 占用了节点上的资源。
 
 <!--
-The amount of resources available to Pods is less than the node capacity, because
+The amount of resources available to Pods is less than the node capacity because
 system daemons use a portion of the available resources. Within the Kubernetes API,
 each Node has a `.status.allocatable` field
 (see [NodeStatus](/docs/reference/kubernetes-api/cluster-resources/node-v1/#NodeStatus)
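The subtraction behind figures like "1.120 CPUs / 6.23Gi" above is simply allocatable minus already-requested. The allocatable values in this sketch are assumptions chosen for illustration (they are not printed in the `kubectl describe` output itself):

```python
# Sketch: the largest Pod that still fits requests at most
# (allocatable - already requested) of each resource.
def free(allocatable: dict, requested: dict) -> dict:
    return {r: allocatable[r] - requested.get(r, 0) for r in allocatable}

# Assumed node allocatable values (hypothetical, for illustration only);
# the requested values come from the "Allocated resources" table above.
allocatable = {"cpu_m": 1800, "memory_mi": 7300}
requested = {"cpu_m": 680, "memory_mi": 920}
print(free(allocatable, requested))
# {'cpu_m': 1120, 'memory_mi': 6380}  -- 1120m = 1.120 CPUs, 6380Mi ~= 6.23Gi
```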
@@ -1286,7 +1287,7 @@ prevent one team from using so much of any resource that this over-use affects o
 
 You should also consider what access you grant to that namespace:
 **full** write access to a namespace allows someone with that access to remove any
-resource, include a configured ResourceQuota.
+resource, including a configured ResourceQuota.
 -->
 你可以配置[资源配额](/zh-cn/docs/concepts/policy/resource-quotas/)功能特性以限制每个名字空间可以使用的资源总量。
 当某名字空间中存在 ResourceQuota 时,Kubernetes 会在该名字空间中的对象强制实施配额。
@@ -1305,7 +1306,7 @@ whether a Container is being killed because it is hitting a resource limit, call
 `kubectl describe pod` on the Pod of interest:
 -->
 
-### 我的容器被终止了
+### 我的容器被终止了 {#my-container-is-terminated}
 
 你的容器可能因为资源紧张而被终止。要查看容器是否因为遇到资源限制而被杀死,
 请针对相关的 Pod 执行 `kubectl describe pod`:
@@ -1331,18 +1332,19 @@ Message:
 IP:             10.244.2.75
 Containers:
   simmemleak:
-    Image:  saadali/simmemleak
+    Image:  saadali/simmemleak:latest
     Limits:
-      cpu:          100m
-      memory:       50Mi
-    State:          Running
-      Started:      Tue, 07 Jul 2015 12:54:41 -0700
-    Last Termination State: Terminated
-      Exit Code:    1
-      Started:      Fri, 07 Jul 2015 12:54:30 -0700
-      Finished:     Fri, 07 Jul 2015 12:54:33 -0700
-    Ready:          False
-    Restart Count:  5
+      cpu:     100m
+      memory:  50Mi
+    State:     Running
+      Started: Tue, 07 Jul 2019 12:54:41 -0700
+    Last State: Terminated
+      Reason:   OOMKilled
+      Exit Code: 137
+      Started:  Fri, 07 Jul 2019 12:54:30 -0700
+      Finished: Fri, 07 Jul 2019 12:54:33 -0700
+    Ready:     False
+    Restart Count: 5
 Conditions:
   Type      Status
   Ready     False
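The exit code 137 in the output above follows the common shell convention of 128 plus the terminating signal number, so it indicates the container was killed with SIGKILL (signal 9), as the `OOMKilled` reason suggests. A quick sketch of that decoding:

```python
# Sketch: decode a container exit code using the "128 + signal number"
# convention. 137 = 128 + 9 (SIGKILL), the code typically seen when the
# kernel OOM killer terminates a process.
import signal

def exit_code_signal(code: int):
    # codes above 128 usually mean "terminated by signal (code - 128)";
    # lower codes are ordinary application exit statuses
    return signal.Signals(code - 128).name if code > 128 else None

print(exit_code_signal(137))  # SIGKILL
print(exit_code_signal(1))    # None -- a normal application error exit
```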
@@ -1381,13 +1383,13 @@ memory limit (and possibly request) for that container.
 * Get hands-on experience [assigning CPU resources to containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
 * Read how the API reference defines a [container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
   and its [resource requirements](/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)
-* Read about [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
+* Read about [project quotas](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F) in XFS
 * Read more about the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
 -->
 * 获取[分配内存资源给容器和 Pod](/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/) 的实践经验
 * 获取[分配 CPU 资源给容器和 Pod](/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource/) 的实践经验
 * 阅读 API 参考中 [Container](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
   和其[资源请求](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)定义。
-* 阅读 XFS 中[配额](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html)的文档
+* 阅读 XFS 中[配额](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F)的文档
 * 进一步阅读 [kube-scheduler 配置参考 (v1beta3)](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)