
Commit 06325f9

Merge pull request #43521 from windsonsea/podnode
[zh] Sync concepts/scheduling-eviction/ files
2 parents: e27d886 + 2d41f04

4 files changed (+73 / -58 lines)


content/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node.md

Lines changed: 42 additions & 29 deletions
```diff
@@ -59,8 +59,10 @@ specific Pods:
 ## Node labels {#built-in-node-labels}
 
 Like many other Kubernetes objects, nodes have
-[labels](/docs/concepts/overview/working-with-objects/labels/). You can [attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node).
-Kubernetes also populates a [standard set of labels](/docs/reference/node/node-labels/) on all nodes in a cluster.
+[labels](/docs/concepts/overview/working-with-objects/labels/). You can
+[attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node).
+Kubernetes also populates a [standard set of labels](/docs/reference/node/node-labels/)
+on all nodes in a cluster.
 -->
 ## 节点标签 {#built-in-node-labels}
 
```
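For context on this hunk: a Pod opts into such node labels through `nodeSelector`. The sketch below is illustrative only and not part of this commit; the `disktype=ssd` label is a hypothetical manually-attached label, following the task page linked in the diff.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    # Hypothetical label, attached manually beforehand, e.g.:
    #   kubectl label nodes <your-node-name> disktype=ssd
    disktype: ssd
```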
```diff
@@ -539,7 +541,7 @@ specified.
 
 如果当前正被调度的 Pod 在具有自我亲和性的 Pod 序列中排在第一个,
 那么只要它满足其他所有的亲和性规则,它就可以被成功调度。
-这是通过以下方式确定的:确保集群中没有其他 Pod 与此 Pod 的命名空间和标签选择器匹配
+这是通过以下方式确定的:确保集群中没有其他 Pod 与此 Pod 的名字空间和标签选择算符匹配
 该 Pod 满足其自身定义的条件,并且选定的节点满足所指定的所有拓扑要求。
 这确保即使所有的 Pod 都配置了 Pod 间亲和性,也不会出现调度死锁的情况。
 
```
```diff
@@ -565,29 +567,40 @@ uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.
 `preferredDuringSchedulingIgnoredDuringExecution`
 
 <!--
-The affinity rule says that the scheduler can only schedule a Pod onto a node if
-the node is in the same zone as one or more existing Pods with the label
-`security=S1`. More precisely, the scheduler must place the Pod on a node that has the
-`topology.kubernetes.io/zone=V` label, as long as there is at least one node in
-that zone that currently has one or more Pods with the Pod label `security=S1`.
--->
-亲和性规则表示,仅当节点和至少一个已运行且有 `security=S1` 的标签的
-Pod 处于同一区域时,才可以将该 Pod 调度到节点上。
-更确切的说,调度器必须将 Pod 调度到具有 `topology.kubernetes.io/zone=V`
-标签的节点上,并且集群中至少有一个位于该可用区的节点上运行着带有
-`security=S1` 标签的 Pod。
-
-<!--
-The anti-affinity rule says that the scheduler should try to avoid scheduling
-the Pod onto a node that is in the same zone as one or more Pods with the label
-`security=S2`. More precisely, the scheduler should try to avoid placing the Pod on a node that has the
-`topology.kubernetes.io/zone=R` label if there are other nodes in the
-same zone currently running Pods with the `Security=S2` Pod label.
--->
-反亲和性规则表示,如果节点处于 Pod 所在的同一可用区且至少一个 Pod 具有
-`security=S2` 标签,则该 Pod 不应被调度到该节点上。
-更确切地说, 如果同一可用区中存在其他运行着带有 `security=S2` 标签的 Pod 节点,
-并且节点具有标签 `topology.kubernetes.io/zone=R`,Pod 不能被调度到该节点上。
+The affinity rule specifies that the scheduler is allowed to place the example Pod
+on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/topology-spread-constraints/)
+where other Pods have been labeled with `security=S1`.
+For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
+consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
+assign the Pod to any node within Zone V, as long as there is at least one Pod within
+Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1`
+labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.
+-->
+亲和性规则规定,只有节点属于特定的
+[区域](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/topology-spread-constraints/)
+且该区域中的其他 Pod 已打上 `security=S1` 标签时,调度器才可以将示例 Pod 调度到此节点上。
+例如,如果我们有一个具有指定区域(称之为 "Zone V")的集群,此区域由带有 `topology.kubernetes.io/zone=V`
+标签的节点组成,那么只要 Zone V 内已经至少有一个 Pod 打了 `security=S1` 标签,
+调度器就可以将此 Pod 调度到 Zone V 内的任何节点。相反,如果 Zone V 中没有带有 `security=S1` 标签的 Pod,
+则调度器不会将示例 Pod 调度给该区域中的任何节点。
+
+<!--
+The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
+on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/topology-spread-constraints/)
+where other Pods have been labeled with `security=S2`.
+For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
+consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
+assigning the Pod to any node within Zone R, as long as there is at least one Pod within
+Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact
+scheduling into Zone R if there are no Pods with `security=S2` labels.
+-->
+反亲和性规则规定,如果节点属于特定的
+[区域](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/topology-spread-constraints/)
+且该区域中的其他 Pod 已打上 `security=S2` 标签,则调度器应尝试避免将 Pod 调度到此节点上。
+例如,如果我们有一个具有指定区域(我们称之为 "Zone R")的集群,此区域由带有 `topology.kubernetes.io/zone=R`
+标签的节点组成,只要 Zone R 内已经至少有一个 Pod 打了 `security=S2` 标签,
+调度器应避免将 Pod 分配给 Zone R 内的任何节点。相反,如果 Zone R 中没有带有 `security=S2` 标签的 Pod,
+则反亲和性规则不会影响将 Pod 调度到 Zone R。
 
 <!--
 To get yourself more familiar with the examples of Pod affinity and anti-affinity,
```
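For reference, the `security=S1` / `security=S2` rules described in the rewritten passage correspond to a Pod manifest shaped roughly like the sketch below (illustrative, not part of this diff; it mirrors the existing pod-affinity example on the page being edited). The shared `topologyKey` is what makes both rules operate per zone, which is why the prose speaks of "Zone V" and "Zone R".

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      # "Hard" rule: only schedule into a zone that already runs a security=S1 Pod.
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      # "Soft" rule: prefer to avoid zones that already run a security=S2 Pod.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: registry.k8s.io/pause:3.8
```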
```diff
@@ -847,10 +860,10 @@ Some of the limitations of using `nodeName` to select nodes are:
 you need to bypass any configured schedulers. Bypassing the schedulers might lead to
 failed Pods if the assigned Nodes get oversubscribed. You can use [node affinity](#node-affinity) or a the [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
 -->
-`nodeName` 旨在供自定义调度程序或需要绕过任何已配置调度程序的高级场景使用
-如果已分配的 Node 负载过重,绕过调度程序可能会导致 Pod 失败。
+`nodeName` 旨在供自定义调度器或需要绕过任何已配置调度器的高级场景使用
+如果已分配的 Node 负载过重,绕过调度器可能会导致 Pod 失败。
 你可以使用[节点亲和性](#node-affinity)或 [`nodeselector` 字段](#nodeselector)将
-Pod 分配给特定 Node,而无需绕过调度程序
+Pod 分配给特定 Node,而无需绕过调度器
 {{</ note >}}
 
 <!--
```
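The `nodeName` field that this terminology fix concerns is set directly in the Pod spec. A minimal sketch, assuming a node named `kube-01` exists in the cluster (this mirrors the example already on the page):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  # Bypasses the scheduler entirely; the kubelet on kube-01 runs this Pod.
  nodeName: kube-01
```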

content/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation.md

Lines changed: 10 additions & 8 deletions
```diff
@@ -46,7 +46,8 @@ Kubernetes v{{< skew currentVersion >}} 包含用于动态资源分配的集群
 
 <!-- body -->
 
-## API {#api}
+## API
+
 <!--
 The `resource.k8s.io/v1alpha2` {{< glossary_tooltip text="API group"
 term_id="api-group" >}} provides four types:
```
```diff
@@ -101,9 +102,8 @@ typically using the type defined by a {{< glossary_tooltip
 term_id="CustomResourceDefinition" text="CRD" >}} that was created when
 installing a resource driver.
 -->
-ResourceClass 和 ResourceClaim 的参数存储在单独的对象中,
-通常使用安装资源驱动程序时创建的 {{< glossary_tooltip
-term_id="CustomResourceDefinition" text="CRD" >}} 所定义的类型。
+ResourceClass 和 ResourceClaim 的参数存储在单独的对象中,通常使用安装资源驱动程序时创建的
+{{< glossary_tooltip term_id="CustomResourceDefinition" text="CRD" >}} 所定义的类型。
 
 <!--
 The `core/v1` `PodSpec` defines ResourceClaims that are needed for a Pod in a
```
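The context line above notes that the `core/v1` `PodSpec` references ResourceClaims. As a rough sketch of that wiring (not part of this commit; field names follow the `v1alpha2`-era alpha API and may have changed since, and the `gpu` claim and `gpu-claim-template` ResourceClaimTemplate are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: ctr
    image: registry.k8s.io/pause:3.8
    resources:
      claims:
      # Refers to an entry in spec.resourceClaims below.
      - name: gpu
  resourceClaims:
  - name: gpu
    source:
      # Hypothetical template, typically shipped with a resource driver.
      resourceClaimTemplateName: gpu-claim-template
```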
```diff
@@ -274,7 +274,7 @@ or not reserved for the Pod, then the kubelet will fail to run the Pod and
 re-check periodically because those requirements might still get fulfilled
 later.
 -->
-## 预调度的 Pod
+## 预调度的 Pod {#pre-scheduled-pods}
 
 当你(或别的 API 客户端)创建设置了 `spec.nodeName` 的 Pod 时,调度器将被绕过。
 如果 Pod 所需的某个 ResourceClaim 尚不存在、未被分配或未为该 Pod 保留,那么 kubelet
```
```diff
@@ -335,8 +335,8 @@ kube-scheduler, kube-controller-manager and kubelet also need the feature gate.
 -->
 动态资源分配是一个 **alpha 特性**,只有在启用 `DynamicResourceAllocation`
 [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
-和 `resource.k8s.io/v1alpha1` {{< glossary_tooltip text="API 组"
-term_id="api-group" >}} 时才启用。
+和 `resource.k8s.io/v1alpha1`
+{{< glossary_tooltip text="API 组" term_id="api-group" >}} 时才启用。
 有关详细信息,参阅 `--feature-gates` 和 `--runtime-config`
 [kube-apiserver 参数](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)。
 kube-scheduler、kube-controller-manager 和 kubelet 也需要设置该特性门控。
```
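One way to wire up the feature gate and API group mentioned above is through the apiserver arguments in a kubeadm configuration. A sketch only, assuming a kubeadm-managed cluster and using the `v1alpha2` group named in the API section of this page:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    # Both the feature gate and the API group must be enabled;
    # kube-scheduler, kube-controller-manager and kubelet need the gate too.
    feature-gates: "DynamicResourceAllocation=true"
    runtime-config: "resource.k8s.io/v1alpha2=true"
```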
````diff
@@ -356,6 +356,7 @@ If your cluster supports dynamic resource allocation, the response is either a
 list of ResourceClass objects or:
 -->
 如果你的集群支持动态资源分配,则响应是 ResourceClass 对象列表或:
+
 ```
 No resources found
 ```
````
````diff
@@ -364,6 +365,7 @@ No resources found
 If not supported, this error is printed instead:
 -->
 如果不支持,则会输出如下错误:
+
 ```
 error: the server doesn't have a resource type "resourceclasses"
 ```
````
```diff
@@ -391,4 +393,4 @@ be installed. Please refer to the driver's documentation for details.
 [Dynamic Resource Allocation KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md).
 -->
 - 了解更多该设计的信息,
-参阅[动态资源分配 KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md)。
+  参阅[动态资源分配 KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md)。
```

content/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction.md

Lines changed: 20 additions & 20 deletions
```diff
@@ -19,7 +19,7 @@ kubelet can proactively fail one or more pods on the node to reclaim resources
 and prevent starvation.
 
 During a node-pressure eviction, the kubelet sets the [phase](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) for the
-selected pods to `Failed`. This terminates the Pods.
+selected pods to `Failed`, and terminates the Pod.
 
 Node-pressure eviction is not the same as
 [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/).
```
```diff
@@ -30,7 +30,7 @@ Node-pressure eviction is not the same as
 kubelet 可以主动地使节点上一个或者多个 Pod 失效,以回收资源防止饥饿。
 
 在节点压力驱逐期间,kubelet 将所选 Pod 的[阶段](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase)
-设置为 `Failed`。这将终止 Pod。
+设置为 `Failed` 并终止 Pod。
 
 节点压力驱逐不同于 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)
 
```
```diff
@@ -55,7 +55,7 @@ The kubelet attempts to [reclaim node-level resources](#reclaim-node-resources)
 before it terminates end-user pods. For example, it removes unused container
 images when disk resources are starved.
 -->
-## 自我修复行为
+## 自我修复行为 {#self-healing-behavior}
 
 kubelet 在终止最终用户 Pod 之前会尝试[回收节点级资源](#reclaim-node-resources)
 例如,它会在磁盘资源不足时删除未使用的容器镜像。
```
```diff
@@ -75,7 +75,7 @@ pods in place of the evicted pods.
 <!--
 ### Self healing for static pods
 -->
-### 静态 Pod 的自我修复
+### 静态 Pod 的自我修复 {#self-healing-for-static-pods}
 
 <!--
 If you are running a [static pod](/docs/concepts/workloads/pods/#static-pods)
```
If you are running a [static pod](/docs/concepts/workloads/pods/#static-pods)
@@ -237,10 +237,10 @@ Eviction thresholds have the form `[eviction-signal][operator][quantity]`, where
237237

238238
驱逐条件的形式为 `[eviction-signal][operator][quantity]`,其中:
239239

240-
* `eviction-signal` 是要使用的[驱逐信号](#eviction-signals)
241-
* `operator` 是你想要的[关系运算符](https://en.wikipedia.org/wiki/Relational_operator#Standard_relational_operators)
240+
- `eviction-signal` 是要使用的[驱逐信号](#eviction-signals)
241+
- `operator` 是你想要的[关系运算符](https://en.wikipedia.org/wiki/Relational_operator#Standard_relational_operators)
242242
比如 `<`(小于)。
243-
* `quantity` 是驱逐条件数量,例如 `1Gi`
243+
- `quantity` 是驱逐条件数量,例如 `1Gi`
244244
`quantity` 的值必须与 Kubernetes 使用的数量表示相匹配。
245245
你可以使用文字值或百分比(`%`)。
246246
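The threshold syntax covered by this hunk, together with the soft and hard flags in the hunks that follow, is usually set through the kubelet configuration file. A sketch combining the default hard thresholds with one soft threshold:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  imagefs.available: "15%"
  nodefs.inodesFree: "5%"
evictionSoft:
  # A soft signal must persist for its grace period before eviction triggers.
  memory.available: "1.5Gi"
evictionSoftGracePeriod:
  memory.available: "1m30s"
# Maximum allowed grace period (seconds) when terminating Pods on a soft threshold.
evictionMaxPodGracePeriod: 60
```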
```diff
@@ -295,11 +295,11 @@ You can use the following flags to configure soft eviction thresholds:
 -->
 你可以使用以下标志来配置软驱逐条件:
 
-* `eviction-soft`:一组驱逐条件,如 `memory.available<1.5Gi`,
+- `eviction-soft`:一组驱逐条件,如 `memory.available<1.5Gi`,
   如果驱逐条件持续时长超过指定的宽限期,可以触发 Pod 驱逐。
-* `eviction-soft-grace-period`:一组驱逐宽限期,
+- `eviction-soft-grace-period`:一组驱逐宽限期,
   如 `memory.available=1m30s`,定义软驱逐条件在触发 Pod 驱逐之前必须保持多长时间。
-* `eviction-max-pod-grace-period`:在满足软驱逐条件而终止 Pod 时使用的最大允许宽限期(以秒为单位)。
+- `eviction-max-pod-grace-period`:在满足软驱逐条件而终止 Pod 时使用的最大允许宽限期(以秒为单位)。
 
```

```diff
@@ -329,10 +329,10 @@ The kubelet has the following default hard eviction thresholds:
 -->
 kubelet 具有以下默认硬驱逐条件:
 
-* `memory.available<100Mi`
-* `nodefs.available<10%`
-* `imagefs.available<15%`
-* `nodefs.inodesFree<5%`(Linux 节点)
+- `memory.available<100Mi`
+- `nodefs.available<10%`
+- `imagefs.available<15%`
+- `nodefs.inodesFree<5%`(Linux 节点)
 
 <!--
 These default values of hard eviction thresholds will only be set if none
```

```diff
@@ -852,11 +852,11 @@ to estimate or measure an optimal memory limit value for that container.
 - Learn about [API-initiated Eviction](/docs/concepts/scheduling-eviction/api-eviction/)
 - Learn about [Pod Priority and Preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
 - Learn about [PodDisruptionBudgets](/docs/tasks/run-application/configure-pdb/)
-- Learn about [Q**uality of Servic**e](/docs/tasks/configure-pod-container/quality-service-pod/) (QoS)
+- Learn about [Quality of Service](/docs/tasks/configure-pod-container/quality-service-pod/) (QoS)
 - Check out the [Eviction API](/docs/reference/generated/kubernetes-api/{{<param "version">}}/#create-eviction-pod-v1-core)
 -->
-* 了解 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)
-* 了解 [Pod 优先级和抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)
-* 了解 [PodDisruptionBudgets](/zh-cn/docs/tasks/run-application/configure-pdb/)
-* 了解[服务质量](/zh-cn/docs/tasks/configure-pod-container/quality-service-pod/)(QoS)
-* 查看[驱逐 API](/docs/reference/generated/kubernetes-api/{{<param "version">}}/#create-eviction-pod-v1-core)
+- 了解 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)
+- 了解 [Pod 优先级和抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)
+- 了解 [PodDisruptionBudgets](/zh-cn/docs/tasks/run-application/configure-pdb/)
+- 了解[服务质量](/zh-cn/docs/tasks/configure-pod-container/quality-service-pod/)(QoS)
+- 查看[驱逐 API](/docs/reference/generated/kubernetes-api/{{<param "version">}}/#create-eviction-pod-v1-core)
```

content/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -11,7 +11,7 @@ weight: 40
 
 <!-- overview -->
 
-{{< feature-state for_k8s_version="v1.26" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.27" state="beta" >}}
 
 <!--
 Pods were considered ready for scheduling once created. Kubernetes scheduler
```
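The feature whose state is bumped to beta here is controlled by the Pod's `schedulingGates` field. A sketch (the gate name `example.com/foo` is hypothetical); the Pod is reported as `SchedulingGated` until every gate has been removed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  schedulingGates:
  # Scheduler ignores this Pod while any gate remains.
  - name: example.com/foo
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.6
```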
