---
approvers:
- pipejakob
- luxas
- roberthbailey
- jbeda
title: Upgrading kubeadm clusters from 1.7 to 1.8
---

{% capture overview %}

This guide is for upgrading `kubeadm` clusters from version 1.7.x to 1.8.x, as well as from 1.7.x to 1.7.y and from 1.8.x to 1.8.y, where `y > x`.
If you are currently on a 1.6 cluster, see [upgrading kubeadm clusters from 1.6 to 1.7](/docs/tasks/administer-cluster/kubeadm-upgrade-1-7/) first.

{% endcapture %}

{% capture prerequisites %}

Before proceeding:

- You need a functional `kubeadm` Kubernetes cluster running version 1.7.0 or higher in order to use the process described here.
- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v180-beta1) carefully.
- As `kubeadm upgrade` does not upgrade etcd, make sure to back it up first. You can, for example, use `etcdctl backup` to take care of this, as shown in the sketch after this list.
- Note that `kubeadm upgrade` will not touch any of your workloads, only Kubernetes-internal components. As a best practice, you should back up what's important to you. For example, any app-level state, such as a database an app might depend on (like MySQL or MongoDB), must be backed up beforehand.
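
A minimal backup sketch, assuming the etcd v2 `etcdctl backup` command, the default data directory `/var/lib/etcd`, and a backup target of `/var/lib/etcd-backup` (both paths are illustrative; adjust them to your setup):

```shell
$ etcdctl backup --data-dir /var/lib/etcd --backup-dir /var/lib/etcd-backup
```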

Also, note that only one minor version upgrade is supported at a time. That is, you can upgrade from, say, 1.7 to 1.8, but not directly from 1.7 to 1.9.

{% endcapture %}

{% capture steps %}

## Upgrading your control plane

Carry out the following steps by executing these commands on your master node:

1. Install the most recent version of `kubeadm` using `curl` like so:

```shell
$ export VERSION=v1.8.0 # or any given released Kubernetes version
$ export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
$ curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm
$ chmod a+rx /usr/bin/kubeadm # make sure the downloaded binary is executable
```
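
Before continuing, you can double-check that the binary reports the expected version:

```shell
$ kubeadm version
```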

2. If this is the first time you use `kubeadm upgrade`, do the following in order to preserve the configuration for future upgrades:

Note that for the commands below you will need to recall the CLI arguments you passed to `kubeadm init` the first time.

If you used flags, run:

```shell
$ kubeadm config upload from-flags [flags]
```

Where `flags` can be empty.
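
For example, if you originally ran `kubeadm init` with a custom Pod network CIDR, you would pass the same flag again (this assumes `kubeadm config upload from-flags` accepts the same flags as `kubeadm init`; the CIDR value is illustrative):

```shell
$ kubeadm config upload from-flags --pod-network-cidr 10.244.0.0/16
```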

If you used a config file, run:

```shell
$ kubeadm config upload from-file --config [config]
```

Where the `config` file path is mandatory.
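
For example, with a hypothetical configuration file path:

```shell
$ kubeadm config upload from-file --config /etc/kubeadm/kubeadm-config.yaml
```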

3. On the master node, run the following:

```shell
$ kubeadm upgrade plan
[preflight] Running pre-flight checks
[upgrade] Making sure the cluster is healthy:
[upgrade/health] Checking API Server health: Healthy
[upgrade/health] Checking Node health: All Nodes are healthy
[upgrade/health] Checking Static Pod manifests exists on disk: All manifests exist on disk
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to:
[upgrade/versions] Cluster version: v1.7.1
[upgrade/versions] kubeadm version: v1.8.0
[upgrade/versions] Latest stable version: v1.8.0
[upgrade/versions] Latest version in the v1.7 series: v1.7.6

Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT            CURRENT      AVAILABLE
Kubelet              1 x v1.7.1   v1.7.6

Upgrade to the latest version in the v1.7 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.7.1    v1.7.6
Controller Manager   v1.7.1    v1.7.6
Scheduler            v1.7.1    v1.7.6
Kube Proxy           v1.7.1    v1.7.6
Kube DNS             1.14.4    1.14.4

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.7.6

_____________________________________________________________________

Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT            CURRENT      AVAILABLE
Kubelet              1 x v1.7.1   v1.8.0

Upgrade to the latest experimental version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.7.1    v1.8.0
Controller Manager   v1.7.1    v1.8.0
Scheduler            v1.7.1    v1.8.0
Kube Proxy           v1.7.1    v1.8.0
Kube DNS             1.14.4    1.14.4

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.8.0

Note: Before you can perform this upgrade, you have to update kubeadm to v1.8.0

_____________________________________________________________________
```

`kubeadm upgrade plan` checks that your cluster is in an upgradeable state and fetches the versions available to upgrade to in a user-friendly way.

4. Pick a version to upgrade to and run `kubeadm upgrade apply`. For example:

```shell
$ kubeadm upgrade apply v1.8.0
[preflight] Running pre-flight checks
[upgrade] Making sure the cluster is healthy:
[upgrade/health] Checking API Server health: Healthy
[upgrade/health] Checking Node health: All Nodes are healthy
[upgrade/health] Checking Static Pod manifests exists on disk: All manifests exist on disk
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to upgrade to version "v1.8.0"
[upgrade/versions] Cluster version: v1.7.1
[upgrade/versions] kubeadm version: v1.8.0
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.8.0"...
[upgrade/staticpods] Writing upgraded Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests432902769/kube-scheduler.yaml"
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests155856668/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests155856668/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests155856668/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.8.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn.
```

`kubeadm upgrade apply` does the following:

- It checks that your cluster is in an upgradeable state, that is:
  - The API Server is reachable,
  - All nodes are in the `Ready` state, and
  - The control plane is healthy.
- It enforces the version skew policies.
- It makes sure the control plane images are available locally or can be pulled to the machine.
- It upgrades the control plane components, and rolls back if any of them fails to come up.
- It applies the new `kube-dns` and `kube-proxy` manifests and makes sure that all necessary RBAC rules are created.
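
As a quick sanity check (not part of the official procedure), you can verify that the API server now reports the new version:

```shell
$ kubectl version --short
```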

5. Manually upgrade your Software Defined Network (SDN).

   Your Container Network Interface (CNI) provider might have its own upgrade instructions to follow now.
   Check the [addons](/docs/concepts/cluster-administration/addons/) page to
   find your CNI provider and see if there are additional upgrade steps
   necessary.
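
   As an illustration only (defer to your provider's own documentation): at the time, Weave Net could be upgraded by re-applying its manifest, along these lines:

   ```shell
   $ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
   ```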

## Upgrading your worker nodes

For each worker node (referred to as `$WORKER` below) in your cluster, upgrade `kubelet` by executing the following commands:

1. Prepare the node for maintenance, marking it unschedulable and evicting the workload:

```shell
$ kubectl cordon $WORKER
$ kubectl drain $WORKER
```
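
If the node runs Pods managed by a DaemonSet, `kubectl drain` will refuse to evict them by default. In that case you can tell it to skip them (a common variation, not part of the original instructions):

```shell
$ kubectl drain $WORKER --ignore-daemonsets
```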

2. Upgrade the `kubelet` version on the `$WORKER` node by using a Linux distribution-specific package manager:

If the node is running a Debian-based distribution such as Ubuntu, run:

```shell
$ apt-get update
$ apt-get install -y kubelet
```
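
If you want to upgrade to one specific release rather than the latest package available, you can pin the version; the exact package version string below is illustrative and depends on the repository you use:

```shell
$ apt-get install -y kubelet=1.8.0-00
```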

If the node is running CentOS or similar, run:

```shell
$ yum update
$ yum install -y kubelet
```
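
Likewise, on yum-based systems the version can be pinned; again, the package version string is illustrative:

```shell
$ yum install -y kubelet-1.8.0
```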

Now the new version of the `kubelet` should be running on the `$WORKER` node. Verify this using the following command:

```shell
$ systemctl status kubelet
```
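
You can also confirm the upgraded binary version directly:

```shell
$ kubelet --version
```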

3. Bring the `$WORKER` node back online by marking it schedulable:

```shell
$ kubectl uncordon $WORKER
```

4. After upgrading `kubelet` on each worker node in your cluster, verify that all nodes are available again by executing the following (from anywhere, for example, from outside the cluster):

```shell
$ kubectl get nodes
```

If the `STATUS` column in the output shows `Ready` for all of your worker nodes, you are done.

## Recovering from a bad state

If `kubeadm upgrade` fails and does not roll back, for example because of an unexpected shutdown during execution,
you can run `kubeadm upgrade` again. It is idempotent and should eventually make sure the actual state matches the state you declare.

You can use `kubeadm upgrade apply` with the `--force` flag to change a running cluster from `x.x.x` to `x.x.x` (that is, to the version it is already on), which can be used to recover from a bad state.
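
For example, to re-apply v1.8.0 on a cluster that already (partially) runs v1.8.0:

```shell
$ kubeadm upgrade apply v1.8.0 --force
```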

{% endcapture %}

{% include templates/task.md %}