
Commit 1563603

Merge pull request #27741 from tengqm/zh-sync-concepts-2
[zh] Resync concepts files (2)
2 parents c765642 + b33e1ae commit 1563603

6 files changed: +97 −65 lines changed

content/zh/docs/concepts/scheduling-eviction/assign-pod-node.md

Lines changed: 59 additions & 38 deletions
@@ -1,7 +1,7 @@
 ---
 title: 将 Pod 分配给节点
 content_type: concept
-weight: 50
+weight: 20
 ---
 
 <!--
@@ -11,24 +11,24 @@ reviewers:
 - bsalamat
 title: Assigning Pods to Nodes
 content_type: concept
-weight: 50
+weight: 20
 -->
 
 <!-- overview -->
 
 <!--
-You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} to only be able to run on particular
-{{< glossary_tooltip text="Node(s)" term_id="node" >}}, or to prefer to run on particular nodes.
+You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on particular set of
+{{< glossary_tooltip text="Node(s)" term_id="node" >}}.
 There are several ways to do this, and the recommended approaches all use
-[label selectors](/docs/concepts/overview/working-with-objects/labels/) to make the selection.
+[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
 Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
-(e.g. spread your pods across nodes, not place the pod on a node with insufficient free resources, etc.)
+(e.g. spread your pods across nodes so as not place the pod on a node with insufficient free resources, etc.)
 but there are some circumstances where you may want more control on a node where a pod lands, for example to ensure
 that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different
 services that communicate a lot into the same availability zone.
 -->
 你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}} 只能在特定的
-{{< glossary_tooltip text="节点" term_id="node" >}} 上运行,或者优先运行在特定的节点上
+{{< glossary_tooltip text="节点" term_id="node" >}} 上运行。
 有几种方法可以实现这点,推荐的方法都是用
 [标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/)来进行选择。
 通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 Pod 分散到节点上,
@@ -132,22 +132,12 @@ Pod 将会调度到将标签添加到的节点上。
 ## Interlude: built-in node labels {#built-in-node-labels}
 
 In addition to labels you [attach](#step-one-attach-label-to-the-node), nodes come pre-populated
-with a standard set of labels. These labels are
+with a standard set of labels. See [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/) for a list of these.
 -->
 ## 插曲:内置的节点标签 {#built-in-node-labels}
 
 除了你[添加](#attach-labels-to-node)的标签外,节点还预先填充了一组标准标签。
-这些标签有:
-
-* [`kubernetes.io/hostname`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-hostname)
-* [`failure-domain.beta.kubernetes.io/zone`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone)
-* [`failure-domain.beta.kubernetes.io/region`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesioregion)
-* [`topology.kubernetes.io/zone`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
-* [`topology.kubernetes.io/region`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
-* [`beta.kubernetes.io/instance-type`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-instance-type)
-* [`node.kubernetes.io/instance-type`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#nodekubernetesioinstance-type)
-* [`kubernetes.io/os`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-os)
-* [`kubernetes.io/arch`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-arch)
+参见[常用标签、注解和污点](/zh/docs/reference/labels-annotations-taints/)。
 
 {{< note >}}
 <!--
@@ -247,12 +237,12 @@ Pod 可以调度到哪些节点。
 <!--
 There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and
 `preferredDuringSchedulingIgnoredDuringExecution`. You can think of them as "hard" and "soft" respectively,
-in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (just like
+in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (similar to
 `nodeSelector` but using a more expressive syntax), while the latter specifies *preferences* that the scheduler
 will try to enforce but will not guarantee. The "IgnoredDuringExecution" part of the names means that, similar
 to how `nodeSelector` works, if labels on a node change at runtime such that the affinity rules on a pod are no longer
-met, the pod will still continue to run on the node. In the future we plan to offer
-`requiredDuringSchedulingRequiredDuringExecution` which will be just like `requiredDuringSchedulingIgnoredDuringExecution`
+met, the pod continues to run on the node. In the future we plan to offer
+`requiredDuringSchedulingRequiredDuringExecution` which will be identical to `requiredDuringSchedulingIgnoredDuringExecution`
 except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements.
 -->
 目前有两种类型的节点亲和性,分别为 `requiredDuringSchedulingIgnoredDuringExecution` 和
@@ -264,8 +254,8 @@ except that it will evict pods from nodes that cease to satisfy the pods' node a
 如果节点的标签在运行时发生变更,从而不再满足 Pod 上的亲和性规则,那么 Pod
 将仍然继续在该节点上运行。
 将来我们计划提供 `requiredDuringSchedulingRequiredDuringExecution`,
-它将类似于 `requiredDuringSchedulingIgnoredDuringExecution`,
-除了它会将 pod 从不再满足 pod 的节点亲和性要求的节点上驱逐。
+它将与 `requiredDuringSchedulingIgnoredDuringExecution` 完全相同
+只是它会将 Pod 从不再满足 Pod 的节点亲和性要求的节点上驱逐。
 
 <!--
 Thus an example of `requiredDuringSchedulingIgnoredDuringExecution` would be "only run the pod on nodes with Intel CPUs"
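
As a reading aid for the two affinity variants discussed in this hunk (an illustration, not part of the commit), a Pod spec combining a "hard" and a "soft" node affinity rule looks roughly like the sketch below; the `disktype` label key, the zone value, and the Pod/container names are assumed for illustration.

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity        # hypothetical name
spec:
  affinity:
    nodeAffinity:
      # "Hard" rule: the Pod is only scheduled onto nodes satisfying this term.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      # "Soft" rule: the scheduler prefers such nodes but does not guarantee them.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - zone-a
  containers:
  - name: app
    image: nginx

Per the "IgnoredDuringExecution" naming explained above, a node label change after scheduling does not evict an already-running Pod.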
@@ -538,22 +528,23 @@ Pod 亲和性与反亲和性的合法操作符有 `In`,`NotIn`,`Exists`,`D
 然而,出于性能和安全原因,topologyKey 受到一些限制:
 
 <!--
-1. For affinity and for `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity,
-empty `topologyKey` is not allowed.
-2. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or simply disable it.
-3. For `preferredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, empty `topologyKey` is interpreted as "all topologies" ("all topologies" here is now limited to the combination of `kubernetes.io/hostname`, `topology.kubernetes.io/zone` and `topology.kubernetes.io/region`).
+1. For pod affinity, empty `topologyKey` is not allowed in both
+`requiredDuringSchedulingIgnoredDuringExecution`
+and `preferredDuringSchedulingIgnoredDuringExecution`.
+2. For pod anti-affinity, empty `topologyKey` is also not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
+and `preferredDuringSchedulingIgnoredDuringExecution`.
+3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or disable it.
 4. Except for the above cases, the `topologyKey` can be any legal label-key.
 -->
-1. 对于亲和性与 `requiredDuringSchedulingIgnoredDuringExecution` 要求的
-Pod 反亲和性,`topologyKey` 不允许为空。
-2. 对于 `requiredDuringSchedulingIgnoredDuringExecution` 要求的 Pod 反亲和性,
-准入控制器 `LimitPodHardAntiAffinityTopology` 被引入来限制 `topologyKey`
-为 `kubernetes.io/hostname`。
-如果你想设置topologyKey为其他值来用于自定义拓扑结构,你必须修改准入控制器或者禁用它。
-3. 对于 `preferredDuringSchedulingIgnoredDuringExecution` 要求的 Pod 反亲和性,
-空的 `topologyKey` 被解释为“所有拓扑结构”(这里的“所有拓扑结构”限制为
-`kubernetes.io/hostname`,`topology.kubernetes.io/zone` 和
-`topology.kubernetes.io/region` 的组合)。
+1. 对于 Pod 亲和性而言,在 `requiredDuringSchedulingIgnoredDuringExecution`
+和 `preferredDuringSchedulingIgnoredDuringExecution` 中,`topologyKey` 不允许为空。
+2. 对于 Pod 反亲和性而言,`requiredDuringSchedulingIgnoredDuringExecution`
+和 `preferredDuringSchedulingIgnoredDuringExecution` 中,`topologyKey`
+都不可以为空。
+3. 对于 `requiredDuringSchedulingIgnoredDuringExecution` 要求的 Pod 反亲和性,
+准入控制器 `LimitPodHardAntiAffinityTopology` 被引入以确保 `topologyKey`
+只能是 `kubernetes.io/hostname`。如果你希望 `topologyKey` 也可用于其他定制
+拓扑逻辑,你可以更改准入控制器或者禁用之。
 4. 除上述情况外,`topologyKey` 可以是任何合法的标签键。
 
 <!--
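
For context (an illustration, not content from this diff), the `topologyKey` rules above apply to anti-affinity terms such as the following sketch; the `app=web` label and the names are assumed.

apiVersion: v1
kind: Pod
metadata:
  name: web-1                     # hypothetical name
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web
        # topologyKey must be non-empty here; with the LimitPodHardAntiAffinityTopology
        # admission controller enabled it is limited to kubernetes.io/hostname.
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx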
@@ -573,6 +564,36 @@ must be satisfied for the pod to be scheduled onto a node.
 所有与 `requiredDuringSchedulingIgnoredDuringExecution` 亲和性与反亲和性
 关联的 `matchExpressions` 必须满足,才能将 pod 调度到节点上。
 
+<!--
+#### Namespace selector
+-->
+#### 名字空间选择算符
+
+{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
+
+<!--
+Users can also select matching namespaces using `namespaceSelector`, which is a label query over the set of namespaces.
+The affinity term is applied to the union of the namespaces selected by `namespaceSelector` and the ones listed in the `namespaces` field.
+Note that an empty `namespaceSelector` ({}) matches all namespaces, while a null or empty `namespaces` list and
+null `namespaceSelector` means "this pod's namespace".
+-->
+用户也可以使用 `namespaceSelector` 选择匹配的名字空间,`namespaceSelector`
+是对名字空间集合进行标签查询的机制。
+亲和性条件会应用到 `namespaceSelector` 所选择的名字空间和 `namespaces` 字段中
+所列举的名字空间之上。
+注意,空的 `namespaceSelector`({})会匹配所有名字空间,而 null 或者空的
+`namespaces` 列表以及 null 值 `namespaceSelector` 意味着“当前 Pod 的名字空间”。
+
+<!--
+This feature is alpha and disabled by default. You can enable it by setting the
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+`PodAffinityNamespaceSelector` in both kube-apiserver and kube-scheduler.
+-->
+此功能特性是 Alpha 版本的,默认是被禁用的。你可以通过针对 kube-apiserver 和
+kube-scheduler 设置
+[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
+`PodAffinityNamespaceSelector` 来启用此特性。
+
 <!--
 #### More Practical Use-cases
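
The `namespaceSelector` field documented in the hunk above sits alongside `namespaces` inside a pod affinity term. A minimal sketch, assuming an illustrative `team=dev` namespace label (the field is alpha in v1.21 and requires the `PodAffinityNamespaceSelector` feature gate on both kube-apiserver and kube-scheduler):

apiVersion: v1
kind: Pod
metadata:
  name: with-namespace-selector   # hypothetical name
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web
        # Match peer Pods in any namespace labeled team=dev, in addition to
        # whatever is listed in the (optional) namespaces field.
        namespaceSelector:
          matchLabels:
            team: dev
        topologyKey: topology.kubernetes.io/zone
  containers:
  - name: app
    image: nginx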

content/zh/docs/concepts/scheduling-eviction/kube-scheduler.md

Lines changed: 4 additions & 2 deletions
@@ -1,13 +1,13 @@
 ---
 title: Kubernetes 调度器
 content_type: concept
-weight: 50
+weight: 10
 ---
 
 <!--
 title: Kubernetes Scheduler
 content_type: concept
-weight: 50
+weight: 10
 -->
 <!-- overview -->
 
@@ -173,13 +173,15 @@ of the scheduler:
 * Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
 * Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
 * Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler
+* Read the [kube-scheduler config (v1beta1)](/docs/reference/config-api/kube-scheduler-config.v1beta1/) reference
 * Learn about [configuring multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
 * Learn about [topology management policies](/docs/tasks/administer-cluster/topology-manager/)
 * Learn about [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
 -->
 * 阅读关于 [调度器性能调优](/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
 * 阅读关于 [Pod 拓扑分布约束](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
 * 阅读关于 kube-scheduler 的 [参考文档](/zh/docs/reference/command-line-tools-reference/kube-scheduler/)
+* 阅读 [kube-scheduler 配置参考 (v1beta1)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta1/)
 * 了解关于 [配置多个调度器](/zh/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) 的方式
 * 了解关于 [拓扑结构管理策略](/zh/docs/tasks/administer-cluster/topology-manager/)
 * 了解关于 [Pod 额外开销](/zh/docs/concepts/scheduling-eviction/pod-overhead/)
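
The newly linked kube-scheduler config (v1beta1) reference describes configuration files of roughly the following shape; the kubeconfig path and profile name below are common defaults, not something this commit prescribes.

apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  # Path used by typical kubeadm clusters; adjust to your environment.
  kubeconfig: /etc/kubernetes/scheduler.conf
profiles:
- schedulerName: default-scheduler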

content/zh/docs/concepts/scheduling-eviction/resource-bin-packing.md

Lines changed: 6 additions & 2 deletions
@@ -1,12 +1,16 @@
 ---
 title: 扩展资源的资源装箱
 content_type: concept
-weight: 50
+weight: 30
 ---
 <!--
+reviewers:
+- bsalamat
+- k82cn
+- ahg-g
 title: Resource Bin Packing for Extended Resources
 content_type: concept
-weight: 50
+weight: 30
 -->
 
 <!-- overview -->

content/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md

Lines changed: 15 additions & 9 deletions
@@ -78,12 +78,14 @@ had set a value of 100.
 kube-scheduler 的表现等价于设置值为 100。
 
 <!--
-To change the value, edit the kube-scheduler configuration file (this is likely
-to be `/etc/kubernetes/config/kube-scheduler.yaml`), then restart the scheduler.
--->
-要修改这个值,编辑 kube-scheduler 的配置文件
-(通常是 `/etc/kubernetes/config/kube-scheduler.yaml`),
-然后重启调度器。
+To change the value, edit the
+[kube-scheduler configuration file](/docs/reference/config-api/kube-scheduler-config.v1beta1/)
+and then restart the scheduler.
+In many cases, the configuration file can be found at `/etc/kubernetes/config/kube-scheduler.yaml`.
+-->
+要修改这个值,编辑 [kube-scheduler 的配置文件](/zh/docs/reference/config-api/kube-scheduler-config.v1beta1/)
+之后重启调度器。
+在很多场合下,配置文件位于 `/etc/kubernetes/config/kube-scheduler.yaml`
 
 <!--
 After you have made this change, you can run
@@ -193,8 +195,8 @@ minimum value of 50 nodes.
 另外,还有一个 50 个 Node 的最小值是硬编码在程序中。
 
 <!--
-{{< note >}} In clusters with less than 50 feasible nodes, the scheduler still
-checks all the nodes, simply because there are not enough feasible nodes to stop
+In clusters with less than 50 feasible nodes, the scheduler still
+checks all the nodes because there are not enough feasible nodes to stop
 the scheduler's search early.
 
 In a small cluster, if you set a low value for `percentageOfNodesToScore`, your
@@ -203,7 +205,6 @@ change will have no or little effect, for a similar reason.
 If your cluster has several hundred Nodes or fewer, leave this configuration option
 at its default value. Making changes is unlikely to improve the
 scheduler's performance significantly.
-{{< /note >}}
 -->
 {{< note >}}
 当集群中的可调度节点少于 50 个时,调度器仍然会去检查所有的 Node,
@@ -293,3 +294,8 @@ After going over all the Nodes, it goes back to Node 1.
 -->
 在评估完所有 Node 后,将会返回到 Node 1,从头开始。
 
+
+## {{% heading "whatsnext" %}}
+
+* 查阅 [kube-scheduler 配置参考 (v1beta1)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta1/)
+
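
To illustrate the edit described in the first hunk of this file, `percentageOfNodesToScore` is a top-level field of the kube-scheduler configuration file; a minimal sketch (the file location varies by cluster, often `/etc/kubernetes/config/kube-scheduler.yaml`):

apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
# Score roughly half of the feasible nodes before picking one; the scheduler
# never goes below its hard-coded minimum of 50 nodes.
percentageOfNodesToScore: 50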

content/zh/docs/concepts/scheduling-eviction/scheduling-framework.md

Lines changed: 12 additions & 13 deletions
@@ -1,6 +1,4 @@
 ---
-reviewers:
-- ahg-g
 title: 调度框架
 content_type: concept
 weight: 70
@@ -11,24 +9,25 @@ reviewers:
 - ahg-g
 title: Scheduling Framework
 content_type: concept
-weight: 60
+weight: 70
 -->
 
 <!-- overview -->
 
 {{< feature-state for_k8s_version="1.15" state="alpha" >}}
 
 <!--
-The scheduling framework is a plugable architecture for Kubernetes Scheduler
-that makes scheduler customizations easy. It adds a new set of "plugin" APIs to
-the existing scheduler. Plugins are compiled into the scheduler. The APIs
-allow most scheduling features to be implemented as plugins, while keeping the
+The scheduling framework is a plugable architecture for the Kubernetes Scheduler.
+It adds a new set of "plugin" APIs to the existing scheduler. Plugins are compiled
+into the scheduler. The APIs allow most scheduling features to be implemented as
+plugins, while keeping the
 scheduling "core" simple and maintainable. Refer to the [design proposal of the
 scheduling framework][kep] for more technical information on the design of the
 framework.
 -->
-调度框架是 Kubernetes Scheduler 的一种可插入架构,可以简化调度器的自定义。
-它向现有的调度器增加了一组新的“插件” API。插件被编译到调度器程序中。
+调度框架是 Kubernetes 调度器的一种可插入架构。
+调度框架向现有的调度器增加了一组新的“插件(Plugin)” API。
+插件被编译到调度器程序中。
 这些 API 允许大多数调度功能以插件的形式实现,同时使调度“核心”保持简单且可维护。
 请参考[调度框架的设计提案](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/624-scheduling-framework/README.md)
 获取框架设计的更多技术信息。
@@ -328,15 +327,15 @@ _Permit_ 插件在每个 Pod 调度周期的最后调用,用于防止或延迟
 
 <!--
 While any plugin can access the list of "waiting" Pods and approve them
-(see [`FrameworkHandle`](#frameworkhandle)), we expect only the permit
+(see [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)), we expect only the permit
 plugins to approve binding of reserved Pods that are in "waiting" state. Once a Pod
 is approved, it is sent to the [PreBind](#pre-bind) phase.
 -->
 {{< note >}}
 尽管任何插件可以访问 “等待中” 状态的 Pod 列表并批准它们
-(查看 [`FrameworkHandle`](#frameworkhandle))
-我们希望只有允许插件可以批准处于 “等待中” 状态的预留 Pod 的绑定。
-一旦 Pod 被批准了,它将发送到[预绑定](#pre-bind) 阶段。
+(参阅 [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)
+我们希望只有被允许的插件可以批准处于“等待中”状态的预留 Pod 的绑定。
+一旦 Pod 被批准了,它将进入到[预绑定](#pre-bind) 阶段。
 {{< /note >}}
 
 <!--
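
As a hedged illustration of the plugin API this page describes (the plugin names below existed in the v1beta1 config API era but are only examples, not something this diff adds), plugins are enabled or disabled per extension point and per profile:

apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    # Each extension point (queueSort, filter, score, permit, ...) takes
    # enabled/disabled lists of plugin names.
    score:
      disabled:
      - name: NodeResourcesLeastAllocated
      enabled:
      - name: NodeResourcesMostAllocated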

content/zh/docs/tasks/manage-gpus/scheduling-gpus.md

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ When the above conditions are true, Kubernetes will expose `amd.com/gpu` or
 `nvidia.com/gpu` as a schedulable resource.
 
 You can consume these GPUs from your containers by requesting
-`<vendor>.com/gpu` just like you request `cpu` or `memory`.
+`<vendor>.com/gpu` the same way you request `cpu` or `memory`.
 However, there are some limitations in how you specify the resource requirements
 when using GPUs:
 -->
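
As the reworded sentence says, an extended GPU resource is requested through the same `resources` block as `cpu` or `memory`; a minimal sketch (the Pod name and image are illustrative), keeping in mind that GPUs are only meant to be specified under `limits`:

apiVersion: v1
kind: Pod
metadata:
  name: cuda-pod                  # hypothetical name
spec:
  containers:
  - name: cuda-container
    image: nvidia/cuda:10.2-base  # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 1         # requested like cpu/memory, but via limits only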
