Commit b3e71d6

Merge pull request #52352 from asa3311/sync-zh-194
[zh] sync resize-container-resources
2 parents a41c38b + b09d922

File tree

1 file changed: +55 -5 lines changed

content/zh-cn/docs/tasks/configure-pod-container/resize-container-resources.md

Lines changed: 55 additions & 5 deletions
@@ -121,6 +121,51 @@ Kubelet 会通过更新 Pod 的状态状况来反映调整请求的当前状态
 这一状态通常很短暂,但也可能因资源类型或运行时行为而延长。
 执行过程中的任何错误都会在 `message` 字段中报告,同时带有 `reason: Error`。
 
+<!--
+### How kubelet retries Deferred resizes
+
+If the requested resize is _Deferred_, the kubelet will periodically re-attempt the resize,
+for example when another pod is removed or scaled down. If there are multiple deferred
+resizes, they are retried according to the following priority:
+
+* Pods with a higher Priority (based on PriorityClass) will have their resize request retried first.
+* If two pods have the same Priority, resize of guaranteed pods will be retried before the resize of burstable pods.
+* If all else is the same, pods that have been in the Deferred state longer will be prioritized.
+
+A higher priority resize being marked as pending will not block the remaining pending resizes from being attempted;
+all remaining pending resizes will still be retried even if a higher-priority resize gets deferred again.
+-->
+### 如何重试 Deferred 调整大小
+
+如果请求的调整大小操作被标记为 **Deferred**,kubelet 会定期重新尝试执行该调整,例如当其他 Pod 被移除或缩容时。
+当存在多个延迟的调整操作时,kubelet 会按照以下优先级顺序进行重试:
+
+* 优先级(基于 PriorityClass)较高的 Pod,其调整请求会先被重试。
+* 如果两个 Pod 拥有相同的优先级,则会先重试 Guaranteed 类型的 Pod,再重试 Burstable 类型的 Pod。
+* 如果上述条件均相同,则优先处理在延迟状态下停留时间更长的 Pod。
+
+需要注意的是,即使高优先级的调整被再次标记为待处理,也不会阻塞其余待处理的调整操作;其余的待处理调整仍会被继续重试。
+
+<!--
+### Leveraging `observedGeneration` Fields
+
+{{< feature-state feature_gate_name="PodObservedGenerationTracking" >}}
+
+* The top-level `status.observedGeneration` field shows the `metadata.generation` corresponding to the latest pod specification that the kubelet has acknowledged. You can use this to determine the most recent resize request the kubelet has processed.
+* In the `PodResizeInProgress` condition, the `conditions[].observedGeneration` field indicates the `metadata.generation` of the podSpec when the current in-progress resize was initiated.
+* In the `PodResizePending` condition, the `conditions[].observedGeneration` field indicates the `metadata.generation` of the podSpec when the pending resize's allocation was last attempted.
+-->
+### 利用 `observedGeneration` 字段
+
+{{< feature-state feature_gate_name="PodObservedGenerationTracking" >}}
+
+* 顶层的 `status.observedGeneration` 字段显示了 kubelet 已确认的最新 Pod 规约所对应的 `metadata.generation`。
+  你可以使用该字段来判断 kubelet 已处理的最近一次调整请求。
+* 在 `PodResizeInProgress` 状态条件中,`conditions[].observedGeneration` 字段表示当前正在进行的调整操作开始时,
+  该 Pod 规约(podSpec)的 `metadata.generation`。
+* 在 `PodResizePending` 状态条件中,`conditions[].observedGeneration` 字段表示上一次尝试为待处理调整请求分配资源时,
+  Pod 规约的 `metadata.generation`。
+
 <!--
 ## Container resize policies
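The two subsections added in this hunk describe behavior that can be observed directly from the Pod's status. Below is a minimal inspection sketch (not part of this commit), assuming the `resize-demo` Pod in the `qos-example` namespace used elsewhere in this task; the fields may be absent on clusters where the relevant feature gates are disabled.

```shell
# Show the resize-related conditions (PodResizePending with reason Deferred or Infeasible,
# or PodResizeInProgress) together with the generation each condition observed.
kubectl -n qos-example get pod resize-demo \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.reason}{"\t"}{.observedGeneration}{"\n"}{end}'

# Compare the Pod's current generation with the latest generation the kubelet has acknowledged.
kubectl -n qos-example get pod resize-demo \
  -o jsonpath='{.metadata.generation}{"\t"}{.status.observedGeneration}{"\n"}'
```

A deferred resize that is later retried and admitted will typically show up as the `PodResizePending` condition being cleared and `status.observedGeneration` catching up to `metadata.generation`.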
@@ -201,11 +246,16 @@ For Kubernetes v{{< skew currentVersion >}}, resizing pod resources in-place has
 * **资源类型**:只能调整 CPU 和内存资源。
 
 <!--
-* **Memory Decrease:** Memory limits _cannot be decreased_ unless the `resizePolicy` for memory is `RestartContainer`.
-  Memory requests can generally be decreased.
+* **Memory Decrease:** If the memory resize restart policy is `NotRequired` (or unspecified), the kubelet will make a
+  best-effort attempt to prevent oom-kills when decreasing memory limits, but doesn't provide any guarantees.
+  Before decreasing container memory limits, if memory usage exceeds the requested limit, the resize will be skipped
+  and the status will remain in an "In Progress" state. This is considered best-effort because it is still subject
+  to a race condition where memory usage may spike right after the check is performed.
 -->
-* **内存减少**:除非内存的 `resizePolicy` 为 `RestartContainer`,否则内存限制**不能减少**。
-  内存请求通常可以减少。
+* **内存减少**:如果内存调整的重启策略为 `NotRequired`(或未指定),kubelet 会尽力在降低内存限制时避免 OOM(内存不足导致的进程被杀死),
+  但并不提供任何保证。在降低容器内存限制之前,如果内存使用量已超过请求的限制,则此次调整会被跳过,
+  状态将保持在 "In Progress"。之所以称为尽力而为,是因为该过程仍可能受到竞争条件影响:
+  在检查完成后,内存使用量可能会立即出现峰值。
 
 <!--
 * **QoS Class:** The Pod's original [Quality of Service (QoS) class](/docs/concepts/workloads/pods/pod-qos/)
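To make the best-effort memory-decrease behavior above concrete, here is a hedged sketch of lowering a memory limit in place and then checking whether the resize is still reported as in progress. The container name `pause` and the `300Mi` value are illustrative assumptions, not taken from this diff.

```shell
# Request a lower memory limit via the resize subresource (assumed container name: pause).
kubectl -n qos-example patch pod resize-demo --subresource resize --patch \
  '{"spec":{"containers":[{"name":"pause", "resources":{"requests":{"memory":"300Mi"}, "limits":{"memory":"300Mi"}}}]}}'

# If current memory usage is above the new limit, the kubelet skips the decrease for now
# and the PodResizeInProgress condition stays set; inspect it with:
kubectl -n qos-example get pod resize-demo \
  -o jsonpath='{.status.conditions[?(@.type=="PodResizeInProgress")]}{"\n"}'
```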
@@ -314,7 +364,7 @@ kubectl patch pod resize-demo --subresource resize --patch \
 # 替代方法:
 # kubectl -n qos-example edit pod resize-demo --subresource resize
-# kubectl -n qos-example apply -f <updated-manifest> --subresource resize
+# kubectl -n qos-example apply -f <updated-manifest> --subresource resize --server-side
 ```
 
 {{< note >}}
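The adjusted alternative above uses server-side apply against the resize subresource. A brief usage sketch follows; `pod-resize.yaml` is a placeholder name standing in for the `<updated-manifest>` shown in the diff, and should be a manifest containing the Pod with its updated `spec.containers[].resources`.

```shell
# Server-side apply the updated manifest to the Pod's resize subresource.
# pod-resize.yaml is a hypothetical file name; substitute your own updated Pod manifest.
kubectl -n qos-example apply -f pod-resize.yaml --subresource resize --server-side
```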
