```shell
[upgrade/health] Checking API Server health: Healthy
[upgrade/health] Checking Node health: All Nodes are healthy
[upgrade/health] Checking Static Pod manifests exists on disk: All manifests exist on disk
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to:
[upgrade/versions] Cluster version: v1.7.1
[upgrade/versions] kubeadm version: v1.8.0
[upgrade/versions] Latest stable version: v1.8.0
[upgrade/versions] Latest version in the v1.7 series: v1.7.6

Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT            CURRENT       AVAILABLE
Kubelet              1 x v1.7.1    v1.7.6

Upgrade to the latest version in the v1.7 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.7.1    v1.7.6
Controller Manager   v1.7.1    v1.7.6
Scheduler            v1.7.1    v1.7.6
Kube Proxy           v1.7.1    v1.7.6
Kube DNS             1.14.4    1.14.4

You can now apply the upgrade by executing the following command:
```
The `kubeadm upgrade plan` command checks that your cluster is in an upgradeable state and fetches the versions available to upgrade to in a user-friendly way.
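The output above is the tail of a `kubeadm upgrade plan` run; in its simplest form the command is invoked with no arguments:

```shell
$ kubeadm upgrade plan
```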
4. Pick a version to upgrade to and run, for example, `kubeadm upgrade apply` as follows:

```shell
$ kubeadm upgrade apply v1.8.0
[preflight] Running pre-flight checks
[upgrade] Making sure the cluster is healthy:
[upgrade/health] Checking API Server health: Healthy
[upgrade/health] Checking Node health: All Nodes are healthy
[upgrade/health] Checking Static Pod manifests exists on disk: All manifests exist on disk
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to upgrade to version "v1.8.0"
[upgrade/versions] Cluster version: v1.7.1
[upgrade/versions] kubeadm version: v1.8.0
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.8.0"...
[upgrade/staticpods] Writing upgraded Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769/kube-scheduler.yaml"
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests155856668/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests155856668/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests155856668/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-scheduler
```
In short, `kubeadm upgrade apply` does the following:

- It upgrades the control plane components, or rolls them back if any of them fails to come up.
- It applies the new `kube-dns` and `kube-proxy` manifests and makes sure that all necessary RBAC rules are created.
5. Manually upgrade your Software Defined Network (SDN).

Your Container Network Interface (CNI) provider might have its own upgrade instructions to follow now.
Check the [addons](/docs/concepts/cluster-administration/addons/) page to find your CNI provider and see whether additional upgrade steps are required.
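For many manifest-based CNI providers, the upgrade boils down to re-applying the provider's manifest for a release that supports the new Kubernetes version. A minimal sketch; the manifest path below is a placeholder, not a real URL:

```shell
# Placeholder: use the manifest your CNI provider documents for the
# release that is compatible with the Kubernetes version you upgraded to.
$ kubectl apply -f <cni-provider-manifest-for-new-release>.yaml
```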
1. Mark the `$WORKER` node as unschedulable and evict its workloads:

```shell
$ kubectl cordon $WORKER
$ kubectl drain $WORKER
```
2. Upgrade the `kubelet` version on the `$WORKER` node by using a Linux distribution-specific package manager:

If the node is running a Debian-based distro such as Ubuntu, run:
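A minimal sketch, assuming the standard Kubernetes `apt` packages; the pinned revision `1.7.6-00` is illustrative, so substitute the version you are actually upgrading to:

```shell
# Assumed package name and version pin from the Kubernetes apt repository:
$ apt-get update
$ apt-get install -y kubelet=1.7.6-00
```

On RHEL-based distros the equivalent step uses `yum` (for example, `yum install -y kubelet`).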
Now the new version of the `kubelet` should be running on the `$WORKER` node. Verify this using the following command:
```shell
$ systemctl status kubelet
```
3. Bring the `$WORKER` node back online by marking it schedulable:

```shell
$ kubectl uncordon $WORKER
```
4. After upgrading `kubelet` on each worker node in your cluster, verify that all nodes are available again by executing the following (from anywhere, for example, from outside the cluster):
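One way to do this is with `kubectl get nodes`, which lists every node together with its status and kubelet version; all nodes should report `Ready`:

```shell
$ kubectl get nodes
```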