@@ -18,9 +18,9 @@ collected. Deleting a DaemonSet will clean up the Pods it created.
Some typical uses of a DaemonSet are:
- - running a cluster storage daemon, such as `glusterd`, `ceph`, on each node.
- - running a logs collection daemon on every node, such as `fluentd` or `filebeat`.
- - running a node monitoring daemon on every node, such as [Prometheus Node Exporter](https://github.com/prometheus/node_exporter), [Flowmill](https://github.com/Flowmill/flowmill-k8s/), [Sysdig Agent](https://docs.sysdig.com), `collectd`, [Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/), [AppDynamics Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes), [Datadog agent](https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/), [New Relic agent](https://docs.newrelic.com/docs/integrations/kubernetes-integration/installation/kubernetes-installation-configuration), Ganglia `gmond`, [Instana Agent](https://www.instana.com/supported-integrations/kubernetes-monitoring/) or [Elastic Metricbeat](https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-kubernetes.html).
+ - running a cluster storage daemon on every node
+ - running a logs collection daemon on every node
+ - running a node monitoring daemon on every node
In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon.
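For the simple case, a minimal DaemonSet manifest covering all nodes might look like the sketch below; the name, labels, and image are hypothetical placeholders, not part of this page:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logs-collector            # hypothetical name
spec:
  selector:
    matchLabels:
      name: logs-collector
  template:
    metadata:
      labels:
        name: logs-collector
    spec:
      containers:
      - name: logs-collector
        # hypothetical image for a per-node log collection daemon
        image: example.com/logs-collector:1.0
```

With no node selector or affinity set, the DaemonSet controller runs one copy of this Pod on every node.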
A more complex setup might use multiple DaemonSets for a single type of daemon, but with
@@ -95,15 +95,15 @@ another DaemonSet, or via another workload resource such as ReplicaSet. Otherwi
Kubernetes will not stop you from doing this. One case where you might want to do this is to
manually create a Pod with a different value on a node for testing.
- ### Running Pods on Only Some Nodes
+ ### Running Pods on select Nodes
If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
create Pods on nodes which match that [node
selector](/docs/concepts/scheduling-eviction/assign-pod-node/). Likewise, if you specify a `.spec.template.spec.affinity`,
then the DaemonSet controller will create Pods on nodes which match that [node affinity](/docs/concepts/scheduling-eviction/assign-pod-node/).
If you do not specify either, then the DaemonSet controller will create Pods on all nodes.
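As a sketch of the two options, a DaemonSet restricted to nodes carrying a hypothetical `ssd: "true"` label (the label key and value are illustrative, not from this page) could set either field in its Pod template:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        ssd: "true"
      # or, equivalently, expressed as node affinity:
      # affinity:
      #   nodeAffinity:
      #     requiredDuringSchedulingIgnoredDuringExecution:
      #       nodeSelectorTerms:
      #       - matchExpressions:
      #         - key: ssd
      #           operator: In
      #           values: ["true"]
```

Node affinity is more expressive (it supports operators such as `In` and `NotIn`), while `nodeSelector` is the simpler exact-match form.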
- ## How Daemon Pods are Scheduled
+ ## How Daemon Pods are scheduled
### Scheduled by default scheduler
@@ -144,25 +144,21 @@ In addition, `node.kubernetes.io/unschedulable:NoSchedule` toleration is added
automatically to DaemonSet Pods. The default scheduler ignores
`unschedulable` Nodes when scheduling DaemonSet Pods.
-
### Taints and Tolerations
Although Daemon Pods respect
[taints and tolerations](/docs/concepts/configuration/taint-and-toleration),
the following tolerations are added to DaemonSet Pods automatically according to
the related features.
- | Toleration Key | Effect | Version | Description |
- | ---------------------------------------- | ---------- | ------- | ------------------------------------------------------------ |
- | `node.kubernetes.io/not-ready` | NoExecute | 1.13+ | DaemonSet pods will not be evicted when there are node problems such as a network partition. |
- | `node.kubernetes.io/unreachable` | NoExecute | 1.13+ | DaemonSet pods will not be evicted when there are node problems such as a network partition. |
- | `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | |
- | `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | |
- | `node.kubernetes.io/unschedulable` | NoSchedule | 1.12+ | DaemonSet pods tolerate unschedulable attributes by default scheduler. |
- | `node.kubernetes.io/network-unavailable` | NoSchedule | 1.12+ | DaemonSet pods, who uses host network, tolerate network-unavailable attributes by default scheduler. |
-
-
-
+ | Toleration Key | Effect | Version | Description |
+ | ---------------------------------------- | ---------- | ------- | ----------- |
+ | `node.kubernetes.io/not-ready` | NoExecute | 1.13+ | DaemonSet pods will not be evicted when there are node problems such as a network partition. |
+ | `node.kubernetes.io/unreachable` | NoExecute | 1.13+ | DaemonSet pods will not be evicted when there are node problems such as a network partition. |
+ | `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | |
+ | `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | |
+ | `node.kubernetes.io/unschedulable` | NoSchedule | 1.12+ | DaemonSet pods tolerate unschedulable nodes when scheduled by the default scheduler. |
+ | `node.kubernetes.io/network-unavailable` | NoSchedule | 1.12+ | DaemonSet pods that use host network tolerate network-unavailable nodes when scheduled by the default scheduler. |
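Rendered into a Pod spec, the automatically added tolerations from the table above take the usual `tolerations` form; this excerpt shows two of the listed keys as a sketch:

```yaml
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
- key: node.kubernetes.io/unschedulable
  operator: Exists
  effect: NoSchedule
```

You do not write these yourself; the DaemonSet controller adds them to the Pods it creates.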
## Communicating with Daemon Pods
@@ -195,7 +191,7 @@ You can [perform a rolling update](/docs/tasks/manage-daemon/update-daemon-set/)
## Alternatives to DaemonSet
- ### Init Scripts
+ ### Init scripts
It is certainly possible to run daemon processes by directly starting them on a node (e.g. using
`init`, `upstartd`, or `systemd`). This is perfectly fine. However, there are several advantages to