@@ -26,10 +26,10 @@ When you specify a {{< glossary_tooltip term_id="pod" >}}, you can optionally specify how
much of each resource a {{< glossary_tooltip text="container" term_id="container" >}} needs.
The most common resources to specify are CPU and memory (RAM); there are others.
- When you specify the resource _request_ for Containers in a Pod, the
+ When you specify the resource _request_ for containers in a Pod, the
{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} uses this
information to decide which node to place the Pod on. When you specify a resource _limit_
- for a Container, the kubelet enforces those limits so that the running container is not
+ for a container, the kubelet enforces those limits so that the running container is not
allowed to use more of that resource than the limit you set. The kubelet also reserves
at least the _request_ amount of that system resource specifically for that container
to use.
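As a concrete illustration of requests versus limits, a minimal sketch (the Pod name, image name, and values here are placeholders, not taken from the file being changed):

```yaml
# Minimal sketch: one container with resource requests and limits.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: registry.example/app:v1   # placeholder image
    resources:
      requests:
        cpu: "250m"     # used by kube-scheduler to pick a node
        memory: "64Mi"  # reserved for this container by the kubelet
      limits:
        cpu: "500m"     # CPU is throttled above this
        memory: "128Mi" # exceeding this risks an OOM kill
```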
@@ -273,6 +273,7 @@ MiB of memory, and a limit of 1 CPU and 256MiB of memory.
You can consider the Pod to have a request of 0.5 CPU and 128 MiB of memory, and a limit of 1 CPU and 256 MiB of memory.

```yaml
+ ---
apiVersion: v1
kind: Pod
metadata:
@@ -382,7 +383,7 @@ limits you defined.
rather than ephemeral storage usage.

<!--
- If a container exceeds its memory request, and the node that it runs on becomes short of
+ If a container exceeds its memory request and the node that it runs on becomes short of
memory overall, it is likely that the Pod the container belongs to will be
{{< glossary_tooltip text="evicted" term_id="eviction" >}}.
@@ -401,7 +402,7 @@ see the [Troubleshooting](#troubleshooting) section.
To determine whether a container cannot be scheduled or is being killed due to resource limits, see the [Troubleshooting](#troubleshooting) section.
<!--
- ## Monitoring compute & memory resource usage
+ ### Monitoring compute & memory resource usage
The kubelet reports the resource usage of a Pod as part of the Pod
[`status`](/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status).
@@ -411,7 +412,7 @@ are available in your cluster, then Pod resource usage can be retrieved either
from the [Metrics API](/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-api)
directly or from your monitoring tools.
-->
- ## Monitoring compute & memory resource usage {#monitoring-compute-memory-resource-usage}
+ ### Monitoring compute & memory resource usage {#monitoring-compute-memory-resource-usage}
The kubelet reports the resource usage of a Pod as part of the Pod
[`status`](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status)
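When the Metrics API is available, the usage reported for a Pod comes back as a `PodMetrics` object, shaped roughly like this (an illustrative sketch; the name, timestamp, and values are placeholders):

```yaml
# Illustrative PodMetrics object as served by metrics-server
# through the Metrics API; values here are placeholders.
apiVersion: metrics.k8s.io/v1beta1
kind: PodMetrics
metadata:
  name: resource-demo
  namespace: default
timestamp: "2024-05-01T12:00:00Z"
window: 30s
containers:
- name: app
  usage:
    cpu: 212m      # CPU consumed over the measurement window
    memory: 97Mi   # working-set memory
```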
@@ -431,12 +432,11 @@ locally-attached writeable devices or, sometimes, by RAM.
Pods use ephemeral local storage for scratch space, caching, and for logs.
The kubelet can provide scratch space to Pods using local ephemeral storage to
mount [`emptyDir`](/docs/concepts/storage/volumes/#emptydir)
- {{< glossary_tooltip term_id="volume" text="volumes" >}} into containers.
+ {{< glossary_tooltip term_id="volume" text="volumes" >}} into containers.
-->
## Local ephemeral storage {#local-ephemeral-storage}
<!-- feature gate LocalStorageCapacityIsolation -->
-
{{< feature-state for_k8s_version="v1.25" state="stable" >}}
Nodes often also have local ephemeral storage, backed by locally-attached writeable devices or, sometimes, by RAM
@@ -633,12 +633,14 @@ or 400 megabytes (`400M`).
In the following example, the Pod has two containers. Each container has a request of
2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral
storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and
- a limit of 8GiB of local ephemeral storage.
+ a limit of 8GiB of local ephemeral storage. 500Mi of that limit could be
+ consumed by the `emptyDir` volume.
-->
In the following example, the Pod has two containers. Each container requests 2 GiB of local ephemeral storage.
Each container has a limit of 4 GiB of local ephemeral storage.
Therefore, the Pod as a whole has a request of 4 GiB of local ephemeral storage, and a limit of 8 GiB of local ephemeral storage.
+ 500Mi of that limit can be consumed by the `emptyDir` volume.
```yaml
apiVersion: v1
@@ -669,7 +671,8 @@ spec:
mountPath: "/tmp"
volumes:
- name: ephemeral
- emptyDir: {}
+ emptyDir:
+   sizeLimit: 500Mi
```
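The diff shows only the volume fragment of this example; the per-container stanza that yields the 2GiB request and 4GiB limit described above is shaped roughly like this (a sketch, repeated for each of the Pod's two containers):

```yaml
# Sketch of one container's ephemeral-storage stanza from the example.
resources:
  requests:
    ephemeral-storage: "2Gi"
  limits:
    ephemeral-storage: "4Gi"
```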
<!--
@@ -1017,9 +1020,9 @@ cluster-level extended resource "example.com/foo" is handled by the scheduler
extender.
1019
1022
- The scheduler sends a Pod to the scheduler extender only if the Pod requests
- " example.com/foo" .
+ " example.com/foo" .
- The `ignoredByScheduler` field specifies that the scheduler does not check
- the "example.com/foo" resource in its `PodFitsResources` predicate.
+ the "example.com/foo" resource in its `PodFitsResources` predicate.
-->
**Example:**
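The example itself is cut off by the diff; the scheduler policy it refers to is shaped roughly as follows, rendered here as YAML to match the page's other examples (`<extender-endpoint>` is a placeholder for the extender's URL):

```yaml
# Sketch of a scheduler policy that registers an extender for the
# cluster-level extended resource "example.com/foo".
kind: Policy
apiVersion: v1
extenders:
- urlPrefix: "<extender-endpoint>"   # placeholder endpoint
  bindVerb: bind
  managedResources:
  - name: example.com/foo
    ignoredByScheduler: true         # scheduler skips this resource in PodFitsResources
```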
@@ -1235,7 +1238,7 @@ Allocated resources:
In the preceding output, you can see that if a Pod requests more than 1.120 CPUs
or more than 6.23Gi of memory, that Pod will not fit on the node.
- By looking at the "Pods" section, you can see which Pods are taking up space on
+ By looking at the “Pods” section, you can see which Pods are taking up space on
the node.
-->
In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, the node cannot satisfy it.
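Those ceilings derive from the node's allocatable resources, which appear in the Node object roughly as follows (an illustrative fragment with placeholder values):

```yaml
# Illustrative fragment of a Node object (kubectl get node <name> -o yaml);
# allocatable is capacity minus system and Kubernetes reservations.
status:
  capacity:
    cpu: "2"
    memory: 8010168Ki
    pods: "110"
  allocatable:
    cpu: 1930m
    memory: 7714744Ki
    pods: "110"
```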
@@ -1347,7 +1350,7 @@ Events:
<!--
In the preceding example, the `Restart Count: 5` indicates that the `simmemleak`
- Container in the Pod was terminated and restarted five times (so far).
+ container in the Pod was terminated and restarted five times (so far).
The `OOMKilled` reason shows that the container tried to use more memory than its limit.
-->
In the preceding example, `Restart Count: 5` indicates that the `simmemleak`
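That history is also visible in the Pod's status: an OOM-killed container reports its last termination roughly like this (an illustrative fragment; only the container name and restart count come from the example above):

```yaml
# Illustrative fragment of Pod status (kubectl get pod <pod-name> -o yaml)
# after repeated OOM kills; timestamps omitted for brevity.
status:
  containerStatuses:
  - name: simmemleak
    restartCount: 5
    lastState:
      terminated:
        reason: OOMKilled   # tried to use more memory than its limit
        exitCode: 137       # 128 + SIGKILL(9)
```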