
Commit 7e343b3

committed
[zh] sync /controllers/daemonset.md
1 parent 6d47dee commit 7e343b3

File tree

1 file changed: +87 −45 lines

  • content/zh-cn/docs/concepts/workloads/controllers

content/zh-cn/docs/concepts/workloads/controllers/daemonset.md

Lines changed: 87 additions & 45 deletions
````diff
@@ -202,36 +202,42 @@ If you do not specify either, then the DaemonSet controller will create Pods on
 
 <!--
 ## How Daemon Pods are scheduled
-
-### Scheduled by default scheduler
 -->
 ## Daemon Pods 是如何被调度的 {#how-daemon-pods-are-scheduled}
 
-### 通过默认调度器调度 {#scheduled-by-default-scheduler}
-
-{{< feature-state for_k8s_version="1.17" state="stable" >}}
-
 <!--
-A DaemonSet ensures that all eligible nodes run a copy of a Pod. Normally, the
-node that a Pod runs on is selected by the Kubernetes scheduler. However,
-DaemonSet pods are created and scheduled by the DaemonSet controller instead.
-That introduces the following issues:
-
-* Inconsistent Pod behavior: Normal Pods waiting to be scheduled are created
-  and in `Pending` state, but DaemonSet pods are not created in `Pending`
-  state. This is confusing to the user.
-* [Pod preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
-  is handled by default scheduler. When preemption is enabled, the DaemonSet controller
-  will make scheduling decisions without considering pod priority and preemption.
+A DaemonSet ensures that all eligible nodes run a copy of a Pod. The DaemonSet
+controller creates a Pod for each eligible node and adds the
+`spec.affinity.nodeAffinity` field of the Pod to match the target host. After
+the Pod is created, the default scheduler typically takes over and then binds
+the Pod to the target host by setting the `.spec.nodeName` field. If the new
+Pod cannot fit on the node, the default scheduler may preempt (evict) some of
+the existing Pods based on the
+[priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)
+of the new Pod.
 -->
 DaemonSet 确保所有符合条件的节点都运行该 Pod 的一个副本。
-通常,运行 Pod 的节点由 Kubernetes 调度器选择。
-不过,DaemonSet Pods 由 DaemonSet 控制器创建和调度。这就带来了以下问题:
+DaemonSet 控制器为每个符合条件的节点创建一个 Pod,并添加 Pod 的 `spec.affinity.nodeAffinity`
+字段以匹配目标主机。Pod 被创建之后,默认的调度程序通常通过设置 `.spec.nodeName` 字段来接管 Pod 并将
+Pod 绑定到目标主机。如果新的 Pod 无法放在节点上,则默认的调度程序可能会根据新 Pod
+[优先级](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)抢占
+(驱逐)某些现存的 Pod。
+
+<!--
+The user can specify a different scheduler for the Pods of the DaemonSet, by
+setting the `.spec.template.spec.schedulerName` field of the DaemonSet.
+
+The original node affinity specified at the
+`.spec.template.spec.affinity.nodeAffinity` field (if specified) is taken into
+consideration by the DaemonSet controller when evaluating the eligible nodes,
+but is replaced on the created Pod with the node affinity that matches the name
+of the eligible node.
+-->
+用户通过设置 DaemonSet 的 `.spec.template.spec.schedulerName` 字段,可以为 DaemonSet
+的 Pod 指定不同的调度程序。
````
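The `.spec.template.spec.schedulerName` field mentioned in this hunk can be sketched as follows. This is a minimal illustration only: the DaemonSet name `example-ds`, the scheduler name `my-scheduler`, and the image are hypothetical placeholders, not values taken from this page.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-ds          # hypothetical name, for illustration only
spec:
  selector:
    matchLabels:
      name: example-ds
  template:
    metadata:
      labels:
        name: example-ds
    spec:
      # Ask a scheduler other than the default one to place these Pods.
      # "my-scheduler" is a placeholder; it must match a scheduler
      # actually running in the cluster.
      schedulerName: my-scheduler
      containers:
      - name: example
        image: registry.k8s.io/pause:3.9
```

If the named scheduler is not running, the Pods stay unscheduled, so this field is normally set only when a second scheduler has been deployed.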
````diff
 
-* Pod 行为的不一致性:正常 Pod 在被创建后等待调度时处于 `Pending` 状态,
-  DaemonSet Pods 创建后不会处于 `Pending` 状态下。这使用户感到困惑。
-* [Pod 抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)由默认调度器处理。
-  启用抢占后,DaemonSet 控制器将在不考虑 Pod 优先级和抢占的情况下制定调度决策。
+当评估符合条件的节点时,原本在 `.spec.template.spec.affinity.nodeAffinity` 字段上指定的节点亲和性将由
+DaemonSet 控制器进行考量,但在创建的 Pod 上会被替换为与符合条件的节点名称匹配的节点亲和性。
 
 <!--
 `ScheduleDaemonSetPods` allows you to schedule DaemonSets using the default
````
````diff
@@ -263,39 +269,75 @@ nodeAffinity:
 ```
 
 <!--
-In addition, `node.kubernetes.io/unschedulable:NoSchedule` toleration is added
-automatically to DaemonSet Pods. The default scheduler ignores
-`unschedulable` Nodes when scheduling DaemonSet Pods.
+### Taints and tolerations
+
+The DaemonSet controller automatically adds a set of {{< glossary_tooltip
+text="tolerations" term_id="toleration" >}} to DaemonSet Pods:
 -->
-此外,系统会自动添加 `node.kubernetes.io/unschedulable:NoSchedule` 容忍度到这些
-DaemonSet Pod。在调度 DaemonSet Pod 时,默认调度器会忽略 `unschedulable` 节点。
+### 污点和容忍度 {#taint-and-toleration}
+
+DaemonSet 控制器会自动将一组容忍度添加到 DaemonSet Pod:
 
 <!--
-### Taints and Tolerations
+Tolerations for DaemonSet pods
+-->
+{{< table caption="DaemonSet Pod 适用的容忍度" >}}
 
-Although Daemon Pods respect
-[taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/),
-the following tolerations are added to DaemonSet Pods automatically according to
-the related features.
+<!--
+| Toleration key | Effect | Details |
+| --- | --- | --- |
+| [`node.kubernetes.io/not-ready`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-not-ready) | `NoExecute` | DaemonSet Pods can be scheduled onto nodes that are not healthy or ready to accept Pods. Any DaemonSet Pods running on such nodes will not be evicted. |
+| [`node.kubernetes.io/unreachable`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-unreachable) | `NoExecute` | DaemonSet Pods can be scheduled onto nodes that are unreachable from the node controller. Any DaemonSet Pods running on such nodes will not be evicted. |
+| [`node.kubernetes.io/disk-pressure`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-disk-pressure) | `NoSchedule` | DaemonSet Pods can be scheduled onto nodes with disk pressure issues. |
+| [`node.kubernetes.io/memory-pressure`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-memory-pressure) | `NoSchedule` | DaemonSet Pods can be scheduled onto nodes with memory pressure issues. |
+| [`node.kubernetes.io/pid-pressure`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-pid-pressure) | `NoSchedule` | DaemonSet Pods can be scheduled onto nodes with process pressure issues. |
+| [`node.kubernetes.io/unschedulable`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-unschedulable) | `NoSchedule` | DaemonSet Pods can be scheduled onto nodes that are unschedulable. |
+| [`node.kubernetes.io/network-unavailable`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-network-unavailable) | `NoSchedule` | **Only added for DaemonSet Pods that request host networking**, i.e., Pods having `spec.hostNetwork: true`. Such DaemonSet Pods can be scheduled onto nodes with unavailable network. |
 -->
-### 污点和容忍度 {#taint-and-toleration}
+| 容忍度键名 | 效果 | 描述 |
+| --- | --- | --- |
+| [`node.kubernetes.io/not-ready`](/zh-cn/docs/reference/labels-annotations-taints/#node-kubernetes-io-not-ready) | `NoExecute` | DaemonSet Pod 可以被调度到不健康或还不准备接受 Pod 的节点上。在这些节点上运行的所有 DaemonSet Pod 将不会被驱逐。 |
+| [`node.kubernetes.io/unreachable`](/zh-cn/docs/reference/labels-annotations-taints/#node-kubernetes-io-unreachable) | `NoExecute` | DaemonSet Pod 可以被调度到从节点控制器不可达的节点上。在这些节点上运行的所有 DaemonSet Pod 将不会被驱逐。 |
+| [`node.kubernetes.io/disk-pressure`](/zh-cn/docs/reference/labels-annotations-taints/#node-kubernetes-io-disk-pressure) | `NoSchedule` | DaemonSet Pod 可以被调度到具有磁盘压力问题的节点上。 |
+| [`node.kubernetes.io/memory-pressure`](/zh-cn/docs/reference/labels-annotations-taints/#node-kubernetes-io-memory-pressure) | `NoSchedule` | DaemonSet Pod 可以被调度到具有内存压力问题的节点上。 |
+| [`node.kubernetes.io/pid-pressure`](/zh-cn/docs/reference/labels-annotations-taints/#node-kubernetes-io-pid-pressure) | `NoSchedule` | DaemonSet Pod 可以被调度到具有进程压力问题的节点上。 |
+| [`node.kubernetes.io/unschedulable`](/zh-cn/docs/reference/labels-annotations-taints/#node-kubernetes-io-unschedulable) | `NoSchedule` | DaemonSet Pod 可以被调度到不可调度的节点上。 |
+| [`node.kubernetes.io/network-unavailable`](/zh-cn/docs/reference/labels-annotations-taints/#node-kubernetes-io-network-unavailable) | `NoSchedule` | **仅针对请求主机联网的 DaemonSet Pod 添加此容忍度**,即 Pod 具有 `spec.hostNetwork: true`。这些 DaemonSet Pod 可以被调度到网络不可用的节点上。 |
+
+{{< /table >}}
````
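The tolerations listed in the table above show up on the created Pods roughly as below. This is an illustrative excerpt written from the table, not captured API output; the DaemonSet controller sets these fields on each Pod it creates.

```yaml
# Sketch of the tolerations the DaemonSet controller adds to a Pod spec.
# Field names follow the Pod API (spec.tolerations); the exact list on a
# given Pod depends on the DaemonSet (e.g. network-unavailable is only
# added for hostNetwork Pods).
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
- key: node.kubernetes.io/disk-pressure
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/memory-pressure
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/pid-pressure
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/unschedulable
  operator: Exists
  effect: NoSchedule
```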
````diff
 
-尽管 Daemon Pod 遵循[污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)规则,
-根据相关特性,控制器会自动将以下容忍度添加到 DaemonSet Pod:
+<!--
+You can add your own tolerations to the Pods of a DaemonSet as well, by
+defining these in the Pod template of the DaemonSet.
 
-| 容忍度键名 | 效果 | 版本 | 描述 |
-| --- | --- | --- | --- |
-| `node.kubernetes.io/not-ready` | NoExecute | 1.13+ | 当出现类似网络断开的情况导致节点问题时,DaemonSet Pod 不会被逐出。 |
-| `node.kubernetes.io/unreachable` | NoExecute | 1.13+ | 当出现类似于网络断开的情况导致节点问题时,DaemonSet Pod 不会被逐出。 |
-| `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | DaemonSet Pod 被默认调度器调度时能够容忍磁盘压力属性。 |
-| `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | DaemonSet Pod 被默认调度器调度时能够容忍内存压力属性。 |
-| `node.kubernetes.io/unschedulable` | NoSchedule | 1.12+ | DaemonSet Pod 能够容忍默认调度器所设置的 `unschedulable` 属性。 |
-| `node.kubernetes.io/network-unavailable` | NoSchedule | 1.12+ | DaemonSet 在使用宿主网络时,能够容忍默认调度器所设置的 `network-unavailable` 属性。 |
+Because the DaemonSet controller sets the
+`node.kubernetes.io/unschedulable:NoSchedule` toleration automatically,
+Kubernetes can run DaemonSet Pods on nodes that are marked as _unschedulable_.
+-->
+你也可以在 DaemonSet 的 Pod 模板中定义自己的容忍度并将其添加到 DaemonSet Pod。
````
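Custom tolerations go in the DaemonSet's Pod template, as the added text above states. A minimal sketch, in which the taint key `example.com/special` and the DaemonSet name are hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: tolerating-ds       # hypothetical name, for illustration only
spec:
  selector:
    matchLabels:
      name: tolerating-ds
  template:
    metadata:
      labels:
        name: tolerating-ds
    spec:
      tolerations:
      # User-defined toleration; "example.com/special" is a placeholder
      # taint key. The controller's automatic tolerations are added on
      # top of any listed here.
      - key: example.com/special
        operator: Exists
        effect: NoSchedule
      containers:
      - name: example
        image: registry.k8s.io/pause:3.9
```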
````diff
+
+因为 DaemonSet 控制器自动设置 `node.kubernetes.io/unschedulable:NoSchedule` 容忍度,
+所以 Kubernetes 可以在标记为**不可调度**的节点上运行 DaemonSet Pod。
+
+<!--
+If you use a DaemonSet to provide an important node-level function, such as
+[cluster networking](/docs/concepts/cluster-administration/networking/), it is
+helpful that Kubernetes places DaemonSet Pods on nodes before they are ready.
+For example, without that special toleration, you could end up in a deadlock
+situation where the node is not marked as ready because the network plugin is
+not running there, and at the same time the network plugin is not running on
+that node because the node is not yet ready.
+-->
+如果你使用 DaemonSet 提供重要的节点级别功能,
+例如[集群联网](/zh-cn/docs/concepts/cluster-administration/networking/),
+Kubernetes 在节点就绪之前将 DaemonSet Pod 放到节点上会很有帮助。
+例如,如果没有这种特殊的容忍度,因为网络插件未在节点上运行,所以你可能会在未标记为就绪的节点上陷入死锁状态,
+同时因为该节点还未就绪,所以网络插件不会在该节点上运行。
 
 <!--
 ## Communicating with Daemon Pods
 -->
-## 与 Daemon Pods 通信 {#communicating-with-daemon-pods}
+## 与 Daemon Pod 通信 {#communicating-with-daemon-pods}
 
 <!--
 Some possible patterns for communicating with Pods in a DaemonSet are:
````
