
Commit dfb8d40

Author: Rajesh Deshpande (committed)

Adding example for DaemonSet Rolling Update task

Adding fluentd daemonset example. Creating fluentd daemonset for update. Adding proper description for YAML file.
1 parent 5321c65 commit dfb8d40

File tree: 3 files changed, +135 −35 lines

content/en/docs/tasks/manage-daemon/update-daemon-set.md

Lines changed: 45 additions & 35 deletions
@@ -43,21 +43,43 @@ To enable the rolling update feature of a DaemonSet, you must set its
You may want to set [`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable) (default
to 1) and [`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) (default to 0) as well.
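
If the DaemonSet already exists (it is created later in this task), these two fields can also be adjusted in place. A minimal sketch, assuming the `fluentd-elasticsearch` DaemonSet from the example below and purely illustrative values:

```shell
# Illustrative values only: tolerate 2 unavailable Pods and treat a new Pod as
# ready only after it has been Ready for 10 seconds.
kubectl patch ds/fluentd-elasticsearch -n kube-system \
  -p '{"spec":{"minReadySeconds":10,"updateStrategy":{"rollingUpdate":{"maxUnavailable":2}}}}'
```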

+### Creating a DaemonSet with `RollingUpdate` update strategy

-### Step 1: Checking DaemonSet `RollingUpdate` update strategy
+This YAML file specifies a DaemonSet with a `RollingUpdate` update strategy:

-First, check the update strategy of your DaemonSet, and make sure it's set to
+{{< codenew file="controllers/fluentd-daemonset.yaml" >}}
+
+After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:
+
+```shell
+kubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
+```
+
+Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to
+update the DaemonSet with `kubectl apply`.
+
+```shell
+kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
+```
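
Once the DaemonSet exists, you can confirm that one of its Pods is scheduled on each node. A quick check, assuming the `name=fluentd-elasticsearch` Pod label from the example manifest:

```shell
# Lists the fluentd Pods along with the node each one runs on.
kubectl get pods -l name=fluentd-elasticsearch -n kube-system -o wide
```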
+
+### Checking DaemonSet `RollingUpdate` update strategy
+
+Check the update strategy of your DaemonSet, and make sure it's set to
`RollingUpdate`:

```shell
-kubectl get ds/<daemonset-name> -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
+kubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' -n kube-system
```

If you haven't created the DaemonSet in the system, check your DaemonSet
manifest with the following command instead:

```shell
-kubectl apply -f ds.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
+kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
```

The output from both commands should be:
@@ -69,28 +91,13 @@ RollingUpdate
If the output isn't `RollingUpdate`, go back and modify the DaemonSet object or
manifest accordingly.

-### Step 2: Creating a DaemonSet with `RollingUpdate` update strategy

-If you have already created the DaemonSet, you may skip this step and jump to
-step 3.
-
-After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:
-
-```shell
-kubectl create -f ds.yaml
-```
-
-Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to
-update the DaemonSet with `kubectl apply`.
-
-```shell
-kubectl apply -f ds.yaml
-```
-
-### Step 3: Updating a DaemonSet template
+### Updating a DaemonSet template

Any updates to a `RollingUpdate` DaemonSet `.spec.template` will trigger a rolling
-update. This can be done with several different `kubectl` commands.
+update. Let's update the DaemonSet by applying a new YAML file; this can be done with several different `kubectl` commands.
+
+{{< codenew file="controllers/fluentd-daemonset-update.yaml" >}}
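
To preview what the new manifest would change before applying it, you can compare it against the live object. A sketch, assuming your `kubectl` version includes the `diff` subcommand:

```shell
# Prints the differences between the live DaemonSet and the updated manifest;
# exits with a non-zero code when differences are found.
kubectl diff -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml
```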

#### Declarative commands

@@ -99,21 +106,17 @@ If you update DaemonSets using
use `kubectl apply`:

```shell
-kubectl apply -f ds-v2.yaml
+kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml
```

#### Imperative commands

If you update DaemonSets using
[imperative commands](/docs/tasks/manage-kubernetes-objects/imperative-command/),
-use `kubectl edit` or `kubectl patch`:
-
-```shell
-kubectl edit ds/<daemonset-name>
-```
+use `kubectl edit`:

```shell
-kubectl patch ds/<daemonset-name> -p=<strategic-merge-patch>
+kubectl edit ds/fluentd-elasticsearch -n kube-system
```
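
After saving your changes in the editor, you can check that the edit produced a new DaemonSet revision. A quick sketch, assuming the DaemonSet from this task:

```shell
# Each change to the Pod template adds a revision; the newest entry is your edit.
kubectl rollout history ds/fluentd-elasticsearch -n kube-system
```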

##### Updating only the container image
@@ -122,21 +125,21 @@ If you just need to update the container image in the DaemonSet template, i.e.
`.spec.template.spec.containers[*].image`, use `kubectl set image`:

```shell
-kubectl set image ds/<daemonset-name> <container-name>=<container-new-image>
+kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system
```

-### Step 4: Watching the rolling update status
+### Watching the rolling update status

Finally, watch the rollout status of the latest DaemonSet rolling update:

```shell
-kubectl rollout status ds/<daemonset-name>
+kubectl rollout status ds/fluentd-elasticsearch -n kube-system
```

When the rollout is complete, the output is similar to this:

```shell
-daemonset "<daemonset-name>" successfully rolled out
+daemonset "fluentd-elasticsearch" successfully rolled out
```
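
You can also follow the individual Pods being replaced while the rollout is in progress. A small sketch, assuming the Pod label from the example manifest:

```shell
# --watch streams updates as old fluentd Pods terminate and new ones become Ready.
kubectl get pods -l name=fluentd-elasticsearch -n kube-system --watch
```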

## Troubleshooting
@@ -156,7 +159,7 @@ When this happens, find the nodes that don't have the DaemonSet pods scheduled o
by comparing the output of `kubectl get nodes` and the output of:

```shell
-kubectl get pods -l <daemonset-selector-key>=<daemonset-selector-value> -o wide
+kubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system
```
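
One way to automate that comparison is to diff the full node list against the nodes that actually run a DaemonSet Pod. A rough sketch, assuming a bash shell and the labels from the example manifest:

```shell
# Prints the names of nodes that have no fluentd-elasticsearch Pod scheduled on them.
comm -23 \
  <(kubectl get nodes -o name | sed 's|^node/||' | sort) \
  <(kubectl get pods -l name=fluentd-elasticsearch -n kube-system \
      -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort -u)
```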

Once you've found those nodes, delete some non-DaemonSet pods from the node to
@@ -183,6 +186,13 @@ If `.spec.minReadySeconds` is specified in the DaemonSet, clock skew between
master and nodes will make DaemonSet unable to detect the right rollout
progress.

+## Clean up
+
+Delete the DaemonSet from its namespace:
+
+```shell
+kubectl delete ds fluentd-elasticsearch -n kube-system
+```
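
To confirm the deletion, query the DaemonSet again; once it is gone, the command returns a NotFound error:

```shell
# Expected to fail with a "not found" error after the deletion completes.
kubectl get ds fluentd-elasticsearch -n kube-system
```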

{{% /capture %}}

content/en/examples/controllers/fluentd-daemonset.yaml (new file)

Lines changed: 48 additions & 0 deletions

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

content/en/examples/controllers/fluentd-daemonset-update.yaml (new file)

Lines changed: 42 additions & 0 deletions

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
