content/zh-cn/docs/concepts/policy/limit-range.md (+17 −15 lines)
@@ -93,34 +93,38 @@ The name of a LimitRange must be a valid
 <!--
 A `LimitRange` does **not** check the consistency of the default values it applies. This means that a default value for the _limit_ that is set by `LimitRange` may be less than the _request_ value specified for the container in the spec that a client submits to the API server. If that happens, the final Pod will not be schedulable.

-For example, if "LimitRange` is defined as following:
+For example, you define a `LimitRange` with this manifest:
 -->
 ## LimitRange and admission checks for Pods {#limitrange-and-admission-checks-for-pod}

 A `LimitRange` does **not** check the consistency of the default values it applies.
 This means that a default **limit** value set by a `LimitRange` may be less than the **request** value specified for the container in the spec that a client submits to the API server.
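The `LimitRange` manifest that this hunk refers to is not included in the diff itself. As a hedged sketch of what such a problematic object looks like (the object name and the exact CPU values are illustrative assumptions, not taken from the actual example files):

```yaml
# Hypothetical LimitRange: the default CPU limit (500m) can end up below a
# container's explicitly requested CPU, producing an unschedulable Pod.
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-resource-constraint   # illustrative name, not from the diff
spec:
  limits:
  - type: Container
    default:
      cpu: 500m        # default limit applied when a container sets none
    defaultRequest:
      cpu: 500m        # default request applied when a container sets none
    max:
      cpu: "1"
    min:
      cpu: 100m
```

A Pod whose container requests `cpu: 700m` without declaring its own limit would receive the default limit of `500m` from this object, which triggers the `must be less than or equal to cpu limit` admission error quoted in this section.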
-This Pod will not be scheduled with the error `Pod "ConflictingCpuSettings" is invalid: spec.containers[0].resources.requests: Invalid value: "700m": must be less than or equal to cpu limit`
-
-If both, request and limit are set, the Pod will be scheduled successfully with the same `LimitRange` object:
+then that Pod will not be scheduled, failing with an error similar to:
 -->
-This Pod will not be scheduled, and an error is reported:
-`Pod "ConflictingCpuSettings" is invalid: spec.containers[0].resources.requests: Invalid value: "700m": must be less than or equal to cpu limit`
+then that Pod will not be scheduled, failing with an error similar to:
+
+```
+Pod "example-conflict-with-limitrange-cpu" is invalid: spec.containers[0].resources.requests: Invalid value: "700m": must be less than or equal to cpu limit
+```

-If both request and limit are set, the Pod will be scheduled successfully with the same `LimitRange` object:
+<!--
+If you set both `request` and `limit`, then that new Pod will be scheduled successfully even with the same `LimitRange` in place:
+-->
+If you set both `request` and `limit`, then that new Pod will be scheduled successfully even with the same `LimitRange` in place:
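A minimal sketch of such a non-conflicting Pod (the Pod name and the container image are assumptions for illustration, not quoted from the example files):

```yaml
# Hypothetical Pod: request and limit are both set explicitly, so the
# LimitRange defaults are not applied and admission succeeds.
apiVersion: v1
kind: Pod
metadata:
  name: example-no-conflict-with-limitrange-cpu  # illustrative name
spec:
  containers:
  - name: demo
    image: registry.k8s.io/pause:3.8   # assumed placeholder image
    resources:
      requests:
        cpu: 700m      # request equals limit, so no default is injected
      limits:
        cpu: 700m
```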
@@ -155,11 +159,6 @@ Neither contention nor changes to a LimitRange will affect already created resources.

 ## {{% heading "whatsnext" %}}

-<!--
-Refer to the [LimitRanger design document](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_limit_range.md) for more information.
@@ -169,6 +168,8 @@ For examples on using limits, see:

 - [how to configure default Memory Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/).
 - [how to configure minimum and maximum Storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage).
 - a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/).
+
+Refer to the [LimitRanger design document](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_limit_range.md) for context and historical information.
 -->
 For examples on using limits, see:
@@ -179,3 +180,4 @@ For examples on using limits, see: