
Commit f4ce844

addresses kubeadm upgrade review comments
CC: @luxas
1 parent afc78e5 commit f4ce844

1 file changed: +103, -52 lines

docs/tasks/administer-cluster/kubeadm-upgrade-cmd.md renamed to docs/tasks/administer-cluster/kubeadm-upgrade-1-8.md

@@ -4,12 +4,12 @@ approvers:
 - luxas
 - roberthbailey
 - jbeda
-title: Upgrading kubeadm clusters
+title: Upgrading kubeadm clusters from 1.7 to 1.8
 ---

 {% capture overview %}

-This guide is for upgrading `kubeadm` clusters from version 1.7.x to 1.8.x.
+This guide is for upgrading `kubeadm` clusters from version 1.7.x to 1.8.x, as well as 1.7.x to 1.7.y and 1.8.x to 1.8.y where `y > x`.
 See also [upgrading kubeadm clusters from 1.6 to 1.7](/docs/tasks/administer-cluster/kubeadm-upgrade-1-7/) if you're on a 1.6 cluster currently.

 {% endcapture %}
@@ -41,95 +41,140 @@ $ export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
 $ curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm
 ```

-1. On the master node, run the following:
+2. If this is the first time you use `kubeadm upgrade`, in order to preserve the configuration for future upgrades, do:
+
+Note that for the commands below you will need to recall what CLI arguments you passed to `kubeadm init` the first time.
+
+If you used flags, do:
+
+```shell
+$ kubeadm config upload from-flags [flags]
+```
+
+Where `flags` can be empty.
+
+If you used a config file, do:
+
+```shell
+$ kubeadm config upload from-file --config [config]
+```
+
+Where the `config` file is mandatory.
+
+3. On the master node, run the following:

 ```shell
 $ kubeadm upgrade plan
+[preflight] Running pre-flight checks
 [upgrade] Making sure the cluster is healthy:
 [upgrade/health] Checking API Server health: Healthy
 [upgrade/health] Checking Node health: All Nodes are healthy
-[upgrade/health] Checking if control plane is Static Pod-hosted or Self-Hosted: Static Pod-hosted.
-[upgrade/health] NOTE: kubeadm will upgrade your Static Pod-hosted control plane to a Self-Hosted one when upgrading if --feature-gates=SelfHosting=true is set (which is the default)
-[upgrade/health] If you strictly want to continue using a Static Pod-hosted control plane, set --feature-gates=SelfHosting=true when running 'kubeadm upgrade apply'
-[upgrade/health] Checking Static Pod manifests exists on disk: All required Static Pod manifests exist on disk
-[upgrade] Making sure the configuration is correct:
-[upgrade/config] Reading configuration from the cluster (you can get this with 'kubectl -n kube-system get cm kubeadm-config -oyaml')
+[upgrade/health] Checking Static Pod manifests exists on disk: All manifests exist on disk
+[upgrade/config] Making sure the configuration is correct:
+[upgrade/config] Reading configuration from the cluster...
+[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
 [upgrade] Fetching available versions to upgrade to:
 [upgrade/versions] Cluster version: v1.7.1
 [upgrade/versions] kubeadm version: v1.8.0
-[upgrade/versions] Latest stable version: v1.7.3
-[upgrade/versions] Latest version in the v1.7 series: v1.7.3
+[upgrade/versions] Latest stable version: v1.8.0
+[upgrade/versions] Latest version in the v1.7 series: v1.7.6

-Components that must be upgraded manually after you've upgraded the control plane with `kubeadm upgrade apply`:
+Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply':
 COMPONENT   CURRENT      AVAILABLE
-Kubelet     1 x v1.7.0   v1.7.3
+Kubelet     1 x v1.7.1   v1.7.6

 Upgrade to the latest version in the v1.7 series:

 COMPONENT            CURRENT   AVAILABLE
-API Server           v1.7.1    v1.7.3
-Controller Manager   v1.7.1    v1.7.3
-Scheduler            v1.7.1    v1.7.3
-Kube Proxy           v1.7.1    v1.7.3
+API Server           v1.7.1    v1.7.6
+Controller Manager   v1.7.1    v1.7.6
+Scheduler            v1.7.1    v1.7.6
+Kube Proxy           v1.7.1    v1.7.6
+Kube DNS             1.14.4    1.14.4
+
+You can now apply the upgrade by executing the following command:
+
+kubeadm upgrade apply v1.7.6
+
+_____________________________________________________________________
+
+Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply':
+COMPONENT   CURRENT      AVAILABLE
+Kubelet     1 x v1.7.1   v1.8.0
+
+Upgrade to the latest experimental version:
+
+COMPONENT            CURRENT   AVAILABLE
+API Server           v1.7.1    v1.8.0
+Controller Manager   v1.7.1    v1.8.0
+Scheduler            v1.7.1    v1.8.0
+Kube Proxy           v1.7.1    v1.8.0
 Kube DNS             1.14.4    1.14.4

 You can now apply the upgrade by executing the following command:

-kubeadm upgrade apply --version v1.7.3
+kubeadm upgrade apply v1.8.0
+
+Note: Before you can perform this upgrade, you have to update kubeadm to v1.8.0
+
+_____________________________________________________________________
 ```

 The `kubeadm upgrade plan` checks that your cluster is in an upgradeable state and fetches the versions available to upgrade to in a user-friendly way.

-1. Pick a version to upgrade to and run, for example, `kubeadm upgrade apply` as follows:
+4. Pick a version to upgrade to and run, for example, `kubeadm upgrade apply` as follows:

 ```shell
-$ kubeadm upgrade apply --version v1.8.0
+$ kubeadm upgrade apply v1.8.0
+[preflight] Running pre-flight checks
 [upgrade] Making sure the cluster is healthy:
 [upgrade/health] Checking API Server health: Healthy
 [upgrade/health] Checking Node health: All Nodes are healthy
-[upgrade/health] Checking if control plane is Static Pod-hosted or Self-Hosted: Static Pod-hosted.
-[upgrade/health] NOTE: kubeadm will upgrade your Static Pod-hosted control plane to a Self-Hosted one when upgrading if --feature-gates=SelfHosting=true is set (which is the default)
-[upgrade/health] If you strictly want to continue using a Static Pod-hosted control plane, set --feature-gates=SelfHosting=true when running 'kubeadm upgrade apply'
-[upgrade/health] Checking Static Pod manifests exists on disk: All required Static Pod manifests exist on disk
-[upgrade] Making sure the configuration is correct:
-[upgrade/config] Reading configuration from the cluster (you can get this with 'kubectl -n kube-system get cm kubeadm-config -oyaml')
+[upgrade/health] Checking Static Pod manifests exists on disk: All manifests exist on disk
+[upgrade/config] Making sure the configuration is correct:
+[upgrade/config] Reading configuration from the cluster...
+[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
 [upgrade/version] You have chosen to upgrade to version "v1.8.0"
 [upgrade/versions] Cluster version: v1.7.1
 [upgrade/versions] kubeadm version: v1.8.0
-[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: Y
 [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
 [upgrade/prepull] Prepulling image for component kube-scheduler.
 [upgrade/prepull] Prepulling image for component kube-apiserver.
 [upgrade/prepull] Prepulling image for component kube-controller-manager.
-[upgrade/prepull] Prepulled image for component kube-scheduler.
+[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
+[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
+[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
+[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
 [upgrade/prepull] Prepulled image for component kube-apiserver.
 [upgrade/prepull] Prepulled image for component kube-controller-manager.
+[upgrade/prepull] Prepulled image for component kube-scheduler.
 [upgrade/prepull] Successfully prepulled the images for all the control plane components
 [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.8.0"...
-[upgrade/staticpods] Wrote upgraded Static Pod manifests to "/tmp/kubeadm-upgrade830923296"
-[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backuped old manifest to "/tmp/kubeadm-upgrade830923296/old-manifests/kube-apiserver.yaml"
+[upgrade/staticpods] Writing upgraded Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769"
+[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769/kube-apiserver.yaml"
+[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769/kube-controller-manager.yaml"
+[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769/kube-scheduler.yaml"
+[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests155856668/kube-apiserver.yaml"
 [upgrade/staticpods] Waiting for the kubelet to restart the component
 [apiclient] Found 1 Pods for label selector component=kube-apiserver
 [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
-[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backuped old manifest to "/tmp/kubeadm-upgrade830923296/old-manifests/kube-controller-manager.yaml"
+[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests155856668/kube-controller-manager.yaml"
 [upgrade/staticpods] Waiting for the kubelet to restart the component
 [apiclient] Found 1 Pods for label selector component=kube-controller-manager
 [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
-[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backuped old manifest to "/tmp/kubeadm-upgrade830923296/old-manifests/kube-scheduler.yaml"
+[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests155856668/kube-scheduler.yaml"
 [upgrade/staticpods] Waiting for the kubelet to restart the component
 [apiclient] Found 1 Pods for label selector component=kube-scheduler
-[apiclient] Found 0 Pods for label selector component=kube-scheduler
-[apiclient] Found 1 Pods for label selector component=kube-scheduler
 [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
-[apiclient] Found 0 Pods for label selector k8s-app=self-hosted-kube-apiserver
-[apiclient] Found 1 Pods for label selector k8s-app=self-hosted-kube-apiserver
-[apiclient] Found 0 Pods for label selector k8s-app=self-hosted-kube-controller-manager
-[apiclient] Found 1 Pods for label selector k8s-app=self-hosted-kube-controller-manager
-[apiclient] Found 0 Pods for label selector k8s-app=self-hosted-kube-scheduler
-[apiclient] Found 1 Pods for label selector k8s-app=self-hosted-kube-scheduler
-[apiconfig] Created RBAC rules
-[addons] Applied essential addon: kube-proxy
+[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
+[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
+[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
 [addons] Applied essential addon: kube-dns
+[addons] Applied essential addon: kube-proxy
+
+[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.8.0". Enjoy!
+
+[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn.
 ```
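A note on step 2, anchored to the `[upgrade/config]` FYI line in the output just above: `kubeadm config upload` stores the configuration in a ConfigMap that future `kubeadm upgrade` runs read back. A quick way to double-check that the upload worked, using the very command the log suggests:

```shell
# Inspect the configuration kubeadm stored for future upgrades
$ kubectl -n kube-system get cm kubeadm-config -oyaml
```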

 `kubeadm upgrade apply` does the following:
@@ -143,7 +188,7 @@ $ kubeadm upgrade apply --version v1.8.0
 - It upgrades the control plane components or rolls them back if any of them fails to come up.
 - It applies the new `kube-dns` and `kube-proxy` manifests and enforces that all necessary RBAC rules are created.

-1. Manually upgrade your Software Defined Network (SDN).
+5. Manually upgrade your Software Defined Network (SDN).

 Your Container Network Interface (CNI) provider might have its own upgrade instructions to follow now.
 Check the [addons](/docs/concepts/cluster-administration/addons/) page to
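The text above explains what `kubeadm upgrade apply` does but stops short of a verification step. A minimal post-upgrade sanity check might look like the following; both commands are stock `kubectl`, and `v1.8.0` is the target version from the example:

```shell
# The reported server version should match the version passed to 'kubeadm upgrade apply'
$ kubectl version --short
# The control plane Pods in kube-system should all be Running on the new images
$ kubectl get pods -n kube-system
```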
@@ -161,29 +206,35 @@ $ kubectl cordon $WORKER
 $ kubectl drain $WORKER
 ```

-1. Upgrade the `kubelet` version on the `$WORKER` node, either by using a Linux distribution-specific package manager such as `apt-get` or `yum` or manually as described in the following:
+2. Upgrade the `kubelet` version on the `$WORKER` node by using a Linux distribution-specific package manager:
+
+If the node is running a Debian-based distro such as Ubuntu, run:
+
+```shell
+$ apt-get update
+$ apt-get install -y kubelet
+```
+
+If the node is running CentOS or the like, run:

 ```shell
-$ sudo systemctl stop kubelet
-$ curl -s -L -o kubelet \
-https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubelet
-$ chmod +x kubectl && sudo mv kubelet /usr/local/bin/
-$ sudo systemctl start kubelet
+$ yum update
+$ yum install -y kubelet
 ```

-Now, the new version of the `kubelet` should be running on the `$WORKER` node. Verify this using the following command:
+Now the new version of the `kubelet` should be running on the `$WORKER` node. Verify this using the following command:

 ```shell
 $ systemctl status kubelet
 ```

-1. Bring the `$WORKER` node back online by marking it schedulable:
+3. Bring the `$WORKER` node back online by marking it schedulable:

 ```shell
 $ kubectl uncordon $WORKER
 ```

-1. After upgrading `kubelet` on each worker node in your cluster, verify that all nodes are available again by executing the following (from anywhere, for example, from outside the cluster):
+4. After upgrading `kubelet` on each worker node in your cluster, verify that all nodes are available again by executing the following (from anywhere, for example, from outside the cluster):

 ```shell
 $ kubectl get nodes
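Putting the worker steps together: a sketch of the full per-node cycle, assuming illustrative node names `worker-1` and `worker-2` and a Debian-based distro. `kubectl drain` may additionally need `--ignore-daemonsets` if DaemonSet Pods are scheduled on the node:

```shell
# Per-worker upgrade cycle from the steps above (node names are illustrative)
for WORKER in worker-1 worker-2; do
  kubectl cordon $WORKER
  kubectl drain $WORKER --ignore-daemonsets
  # On $WORKER itself: apt-get update && apt-get install -y kubelet
  kubectl uncordon $WORKER
done

# Finally, all nodes should report Ready with the upgraded kubelet version
kubectl get nodes
```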
