
Commit de07e4a

Merge pull request #19746 from rajeshdeshpande02/patch-64
Adding example to DaemonSet Rolling Update task
2 parents: 3efc30e + f3d82cf (commit de07e4a)

3 files changed: +131 -35 lines

content/en/docs/tasks/manage-daemon/update-daemon-set.md

Lines changed: 41 additions & 35 deletions
@@ -43,21 +43,39 @@ To enable the rolling update feature of a DaemonSet, you must set its
You may want to set [`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable) (default
to 1) and [`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) (default to 0) as well.

+### Creating a DaemonSet with `RollingUpdate` update strategy

-### Step 1: Checking DaemonSet `RollingUpdate` update strategy
+This YAML file specifies a DaemonSet with an update strategy as 'RollingUpdate'

-First, check the update strategy of your DaemonSet, and make sure it's set to
+{{< codenew file="controllers/fluentd-daemonset.yaml" >}}
+
+After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:
+
+```shell
+kubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
+```
+
+Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to
+update the DaemonSet with `kubectl apply`.
+
+```shell
+kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
+```
+
+### Checking DaemonSet `RollingUpdate` update strategy
+
+Check the update strategy of your DaemonSet, and make sure it's set to
`RollingUpdate`:

```shell
-kubectl get ds/<daemonset-name> -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
+kubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' -n kube-system
```

If you haven't created the DaemonSet in the system, check your DaemonSet
manifest with the following command instead:

```shell
-kubectl apply -f ds.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
+kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
```

The output from both commands should be:
@@ -69,28 +87,13 @@ RollingUpdate
If the output isn't `RollingUpdate`, go back and modify the DaemonSet object or
manifest accordingly.

-### Step 2: Creating a DaemonSet with `RollingUpdate` update strategy

-If you have already created the DaemonSet, you may skip this step and jump to
-step 3.
-
-After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:
-
-```shell
-kubectl create -f ds.yaml
-```
-
-Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to
-update the DaemonSet with `kubectl apply`.
-
-```shell
-kubectl apply -f ds.yaml
-```
-
-### Step 3: Updating a DaemonSet template
+### Updating a DaemonSet template

Any updates to a `RollingUpdate` DaemonSet `.spec.template` will trigger a rolling
-update. This can be done with several different `kubectl` commands.
+update. Let's update the DaemonSet by applying a new YAML file. This can be done with several different `kubectl` commands.
+
+{{< codenew file="controllers/fluentd-daemonset-update.yaml" >}}

#### Declarative commands

@@ -99,21 +102,17 @@ If you update DaemonSets using
use `kubectl apply`:

```shell
-kubectl apply -f ds-v2.yaml
+kubectl apply -f https://k8s.io/examples/application/fluentd-daemonset-update.yaml
```

#### Imperative commands

If you update DaemonSets using
[imperative commands](/docs/tasks/manage-kubernetes-objects/imperative-command/),
-use `kubectl edit` or `kubectl patch`:
-
-```shell
-kubectl edit ds/<daemonset-name>
-```
+use `kubectl edit` :

```shell
-kubectl patch ds/<daemonset-name> -p=<strategic-merge-patch>
+kubectl edit ds/fluentd-elasticsearch -n kube-system
```

##### Updating only the container image
@@ -122,21 +121,21 @@ If you just need to update the container image in the DaemonSet template, i.e.
`.spec.template.spec.containers[*].image`, use `kubectl set image`:

```shell
-kubectl set image ds/<daemonset-name> <container-name>=<container-new-image>
+kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system
```

-### Step 4: Watching the rolling update status
+### Watching the rolling update status

Finally, watch the rollout status of the latest DaemonSet rolling update:

```shell
-kubectl rollout status ds/<daemonset-name>
+kubectl rollout status ds/fluentd-elasticsearch -n kube-system
```

When the rollout is complete, the output is similar to this:

```shell
-daemonset "<daemonset-name>" successfully rolled out
+daemonset "fluentd-elasticsearch" successfully rolled out
```

## Troubleshooting
@@ -156,7 +155,7 @@ When this happens, find the nodes that don't have the DaemonSet pods scheduled o
by comparing the output of `kubectl get nodes` and the output of:

```shell
-kubectl get pods -l <daemonset-selector-key>=<daemonset-selector-value> -o wide
+kubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system
```

Once you've found those nodes, delete some non-DaemonSet pods from the node to
@@ -183,6 +182,13 @@ If `.spec.minReadySeconds` is specified in the DaemonSet, clock skew between
master and nodes will make DaemonSet unable to detect the right rollout
progress.

+## Clean up
+
+Delete DaemonSet from a namespace :
+
+```shell
+kubectl delete ds fluentd-elasticsearch -n kube-system
+```

{{% /capture %}}
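
The troubleshooting hunk above has the reader compare the output of `kubectl get nodes` with the labelled Pod list by hand. A scripted version of that comparison is sketched below; it assumes a Bash shell, the `name=fluentd-elasticsearch` label used by the example manifests, and the default `kubectl get pods -o wide` column layout, in which NODE is the seventh column.

```shell
# List nodes that do not currently run a fluentd-elasticsearch Pod
comm -23 \
  <(kubectl get nodes -o name | sed 's|^node/||' | sort) \
  <(kubectl get pods -l name=fluentd-elasticsearch -n kube-system -o wide --no-headers \
      | awk '{print $7}' | sort -u)
```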

content/en/examples/controllers/fluentd-daemonset.yaml

Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: fluentd-elasticsearch
+  namespace: kube-system
+  labels:
+    k8s-app: fluentd-logging
+spec:
+  selector:
+    matchLabels:
+      name: fluentd-elasticsearch
+  updateStrategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 1
+  template:
+    metadata:
+      labels:
+        name: fluentd-elasticsearch
+    spec:
+      tolerations:
+      # this toleration is to have the daemonset runnable on master nodes
+      # remove it if your masters can't run pods
+      - key: node-role.kubernetes.io/master
+        effect: NoSchedule
+      containers:
+      - name: fluentd-elasticsearch
+        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
+        resources:
+          limits:
+            memory: 200Mi
+          requests:
+            cpu: 100m
+            memory: 200Mi
+        volumeMounts:
+        - name: varlog
+          mountPath: /var/log
+        - name: varlibdockercontainers
+          mountPath: /var/lib/docker/containers
+          readOnly: true
+      terminationGracePeriodSeconds: 30
+      volumes:
+      - name: varlog
+        hostPath:
+          path: /var/log
+      - name: varlibdockercontainers
+        hostPath:
+          path: /var/lib/docker/containers
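
This manifest pins `rollingUpdate.maxUnavailable: 1` and leaves `.spec.minReadySeconds` unset, the two optional fields called out at the top of the task page. A quick way to read both back from the live object, sketched under the assumption that the DaemonSet has already been created in `kube-system` (with a go-template, a field that is not set prints `<no value>`):

```shell
# Read the rolling update tuning fields from the live DaemonSet
kubectl get ds/fluentd-elasticsearch -n kube-system \
  -o go-template='maxUnavailable: {{.spec.updateStrategy.rollingUpdate.maxUnavailable}}{{"\n"}}minReadySeconds: {{.spec.minReadySeconds}}{{"\n"}}'
```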
content/en/examples/controllers/fluentd-daemonset-update.yaml

Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: fluentd-elasticsearch
+  namespace: kube-system
+  labels:
+    k8s-app: fluentd-logging
+spec:
+  selector:
+    matchLabels:
+      name: fluentd-elasticsearch
+  updateStrategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 1
+  template:
+    metadata:
+      labels:
+        name: fluentd-elasticsearch
+    spec:
+      tolerations:
+      # this toleration is to have the daemonset runnable on master nodes
+      # remove it if your masters can't run pods
+      - key: node-role.kubernetes.io/master
+        effect: NoSchedule
+      containers:
+      - name: fluentd-elasticsearch
+        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
+        volumeMounts:
+        - name: varlog
+          mountPath: /var/log
+        - name: varlibdockercontainers
+          mountPath: /var/lib/docker/containers
+          readOnly: true
+      terminationGracePeriodSeconds: 30
+      volumes:
+      - name: varlog
+        hostPath:
+          path: /var/log
+      - name: varlibdockercontainers
+        hostPath:
+          path: /var/lib/docker/containers
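
Before applying this updated manifest, the change it would make to the running DaemonSet can be previewed with `kubectl diff`. This is a sketch under two assumptions: the original DaemonSet from fluentd-daemonset.yaml is already running, and the file is published under the `https://k8s.io/examples/controllers/` path used by the `codenew` shortcode. `kubectl diff` exits with status 1 when it finds differences.

```shell
# Preview the server-side diff between the live DaemonSet and the updated manifest
kubectl diff -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml
```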
