Commit b9c88e7
Merge pull request #41147 from windsonsea/resizey
[zh] sync resize-container-resources.md
2 parents 6b3ef72 + f2a3536 commit b9c88e7

File tree: 2 files changed, +396 −0 lines changed

Lines changed: 380 additions & 0 deletions
@@ -0,0 +1,380 @@
---
title: Resize CPU and Memory Resources assigned to Containers
content_type: task
weight: 30
min-kubernetes-server-version: 1.27
---

<!-- overview -->

{{< feature-state state="alpha" for_k8s_version="v1.27" >}}
This page assumes that you are familiar with [Quality of Service](/docs/tasks/configure-pod-container/quality-service-pod/)
for Kubernetes Pods.

This page shows how to resize CPU and memory resources assigned to containers
of a running pod without restarting the pod or its containers. A Kubernetes node
allocates resources for a pod based on its `requests`, and restricts the pod's
resource usage based on the `limits` specified in the pod's containers.
For in-place resize of pod resources:

- Container's resource `requests` and `limits` are _mutable_ for CPU
  and memory resources.
- The `allocatedResources` field in `containerStatuses` of the Pod's status reflects
  the resources allocated to the pod's containers.
- The `resources` field in `containerStatuses` of the Pod's status reflects the
  actual resource `requests` and `limits` that are configured on the running
  containers, as reported by the container runtime.
- The `resize` field in the Pod's status shows the status of the last requested
  pending resize. It can have the following values:
  - `Proposed`: This value indicates an acknowledgement of the requested resize
    and that the request was validated and recorded.
  - `InProgress`: This value indicates that the node has accepted the resize
    request and is in the process of applying it to the pod's containers.
  - `Deferred`: This value means that the requested resize cannot be granted at
    this time, and the node will keep retrying. The resize may be granted when
    other pods leave and free up node resources.
  - `Infeasible`: This value is a signal that the node cannot accommodate the
    requested resize. This can happen if the requested resize exceeds the maximum
    resources the node can ever allocate for a pod.
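The `resize` status values above lend themselves to simple tooling. The following is an illustrative sketch (not part of Kubernetes or its client libraries) that maps the `resize` field of a Pod status, as parsed from `kubectl get pod <name> -o json`, to a human-readable description:

```python
# Illustrative helper: interpret the `resize` field of a Pod's status.
# Assumes the status dict was already fetched and JSON-decoded, e.g. from
# `kubectl get pod <name> -o json` (the ["status"] object).
RESIZE_STATES = {
    "Proposed": "resize request validated and recorded",
    "InProgress": "node is applying the resize to the pod's containers",
    "Deferred": "resize cannot be granted now; the node keeps retrying",
    "Infeasible": "node can never accommodate the requested resize",
}

def describe_resize(status: dict) -> str:
    state = status.get("resize")
    if state is None:
        return "no resize pending"
    return RESIZE_STATES.get(state, f"unknown resize state: {state}")
```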
## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

## Container Resize Policies

Resize policies allow for more fine-grained control over how a pod's containers
are resized for CPU and memory resources. For example, the container's
application may be able to handle CPU resources resized without being restarted,
but resizing memory may require that the application, and hence the container, be restarted.
To enable this, the Container specification allows users to specify a `resizePolicy`.
The following restart policies can be specified for resizing CPU and memory:

* `NotRequired`: Resize the container's resources while it is running.
* `RestartContainer`: Restart the container and apply new resources upon restart.

If `resizePolicy[*].restartPolicy` is not specified, it defaults to `NotRequired`.

{{< note >}}
If the Pod's `restartPolicy` is `Never`, the container's resize restart policy must be
set to `NotRequired` for all Containers in the Pod.
{{< /note >}}
The example below shows a Pod whose Container's CPU can be resized without restart, but
resizing memory requires the container to be restarted.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-5
  namespace: qos-example
spec:
  containers:
  - name: qos-demo-ctr-5
    image: nginx
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: RestartContainer
    resources:
      limits:
        memory: "200Mi"
        cpu: "700m"
      requests:
        memory: "200Mi"
        cpu: "700m"
```
{{< note >}}
In the above example, if the desired requests or limits for both CPU _and_ memory
have changed, the container will be restarted in order to resize its memory.
{{< /note >}}
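The restart-policy rules above can be expressed as a small validation helper. This is an illustrative sketch of the rules as described on this page, not the actual kubelet or API-server validation code:

```python
# Illustrative sketch: validate the resize restart policies of a pod spec
# against the rules above. Input dicts mirror the Pod manifest structure.
def effective_restart_policy(container: dict, resource: str) -> str:
    # Look up the container's policy for this resource; the documented
    # default when resizePolicy is not specified is NotRequired.
    for policy in container.get("resizePolicy", []):
        if policy["resourceName"] == resource:
            return policy["restartPolicy"]
    return "NotRequired"

def validate_resize_policies(pod_spec: dict) -> None:
    # A Pod with restartPolicy: Never must use NotRequired for all
    # containers and resources.
    if pod_spec.get("restartPolicy", "Always") == "Never":
        for container in pod_spec["containers"]:
            for resource in ("cpu", "memory"):
                if effective_restart_policy(container, resource) != "NotRequired":
                    raise ValueError(
                        f"container {container['name']}: {resource} resize "
                        "restart policy must be NotRequired"
                    )
```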
<!-- steps -->

## Create a pod with resource requests and limits

You can create a Guaranteed or Burstable [Quality of Service](/docs/tasks/configure-pod-container/quality-service-pod/)
class pod by specifying requests and/or limits for a pod's containers.

Consider the following manifest for a Pod that has one Container.
{{< codenew file="pods/qos/qos-pod-5.yaml" >}}

Create the pod in the `qos-example` namespace:

```shell
kubectl create namespace qos-example
kubectl create -f https://k8s.io/examples/pods/qos/qos-pod-5.yaml --namespace=qos-example
```
This pod is classified as Guaranteed QoS class, requesting 700m CPU and 200Mi
memory.

View detailed information about the pod:

```shell
kubectl get pod qos-demo-5 --output=yaml --namespace=qos-example
```

Also notice that the values of `resizePolicy[*].restartPolicy` defaulted to
`NotRequired`, indicating that CPU and memory can be resized while the container
is running.
```yaml
spec:
  containers:
  ...
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: NotRequired
    resources:
      limits:
        cpu: 700m
        memory: 200Mi
      requests:
        cpu: 700m
        memory: 200Mi
...
containerStatuses:
...
  name: qos-demo-ctr-5
  ready: true
  ...
  allocatedResources:
    cpu: 700m
    memory: 200Mi
  resources:
    limits:
      cpu: 700m
      memory: 200Mi
    requests:
      cpu: 700m
      memory: 200Mi
  restartCount: 0
  started: true
  ...
qosClass: Guaranteed
```
## Updating the pod's resources

Let's say the CPU requirements have increased, and 0.8 CPU is now desired. This
is typically determined, and may be programmatically applied, by an entity such as
[VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) (VPA).

{{< note >}}
While you can change a Pod's requests and limits to express new desired
resources, you cannot change the QoS class in which the Pod was created.
{{< /note >}}
Now, patch the Pod's Container with CPU requests and limits both set to `800m`:

```shell
kubectl -n qos-example patch pod qos-demo-5 --patch '{"spec":{"containers":[{"name":"qos-demo-ctr-5", "resources":{"requests":{"cpu":"800m"}, "limits":{"cpu":"800m"}}}]}}'
```

Query the Pod's detailed information after the Pod has been patched.

```shell
kubectl get pod qos-demo-5 --output=yaml --namespace=qos-example
```
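When the patch is generated by tooling (such as an autoscaling controller) rather than typed by hand, the JSON body can be built programmatically. A minimal sketch; the function name is hypothetical, and the pod and container names simply mirror the example above:

```python
import json

# Sketch: build the strategic-merge patch body used in the kubectl command
# above. Sets both requests and limits for CPU on one named container.
def cpu_resize_patch(container_name: str, cpu: str) -> str:
    patch = {
        "spec": {
            "containers": [{
                "name": container_name,
                "resources": {
                    "requests": {"cpu": cpu},
                    "limits": {"cpu": cpu},
                },
            }]
        }
    }
    return json.dumps(patch)
```

The resulting string could then be passed to `kubectl patch --patch`, for example via shell command substitution; that wiring is left as an exercise.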
The Pod's spec below reflects the updated CPU requests and limits.

```yaml
spec:
  containers:
  ...
    resources:
      limits:
        cpu: 800m
        memory: 200Mi
      requests:
        cpu: 800m
        memory: 200Mi
...
containerStatuses:
...
  allocatedResources:
    cpu: 800m
    memory: 200Mi
  resources:
    limits:
      cpu: 800m
      memory: 200Mi
    requests:
      cpu: 800m
      memory: 200Mi
  restartCount: 0
  started: true
```
Observe that the `allocatedResources` values have been updated to reflect the new
desired CPU requests. This indicates that the node was able to accommodate the
increased CPU resource needs.

In the Container's status, the updated CPU resource values show that the new CPU
resources have been applied. The Container's `restartCount` remains unchanged,
indicating that the container's CPU resources were resized without restarting the container.
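The before/after comparison described here can be automated. An illustrative helper (not a Kubernetes API) that, given a container status captured before and after the patch, checks that the resize happened in place:

```python
# Sketch: confirm an in-place resize from container statuses captured
# before and after the patch (e.g. parsed from `kubectl get pod -o json`).
# In place means allocated resources changed but the container did not restart.
def resized_in_place(before: dict, after: dict) -> bool:
    return (
        after["allocatedResources"] != before["allocatedResources"]
        and after["restartCount"] == before["restartCount"]
    )
```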
## Clean up

Delete your namespace:

```shell
kubectl delete namespace qos-example
```
## {{% heading "whatsnext" %}}

### For application developers

* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)
* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)

### For cluster administrators

* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
Lines changed: 16 additions & 0 deletions

@@ -0,0 +1,16 @@

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-5
  namespace: qos-example
spec:
  containers:
  - name: qos-demo-ctr-5
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "700m"
      requests:
        memory: "200Mi"
        cpu: "700m"
