@@ -325,57 +325,3 @@ stale data. Note that in practice, the restore takes a bit of time. During the
restoration, critical components will lose leader lock and restart themselves.
{{< /note >}}

- ## Upgrading and rolling back etcd clusters
-
- As of Kubernetes v1.13.0, etcd2 is no longer supported as a storage backend for
- new or existing Kubernetes clusters. The timeline for Kubernetes support for
- etcd2 and etcd3 is as follows:
-
- - Kubernetes v1.0: etcd2 only
- - Kubernetes v1.5.1: etcd3 support added, new clusters still default to etcd2
- - Kubernetes v1.6.0: new clusters created with `kube-up.sh` default to etcd3,
-   and `kube-apiserver` defaults to etcd3
- - Kubernetes v1.9.0: deprecation of etcd2 storage backend announced
- - Kubernetes v1.13.0: etcd2 storage backend removed, `kube-apiserver` will
-   refuse to start with `--storage-backend=etcd2`, with the
-   message `etcd2 is no longer a supported storage backend`
-
- Before upgrading a v1.12.x kube-apiserver using `--storage-backend=etcd2` to
- v1.13.x, etcd v2 data must be migrated to the v3 storage backend and
- kube-apiserver invocations must be changed to use `--storage-backend=etcd3`.
-
- The process for migrating from etcd2 to etcd3 is highly dependent on how the
- etcd cluster was deployed and configured, as well as how the Kubernetes
- cluster was deployed and configured. We recommend that you consult your cluster
- provider's documentation to see if there is a predefined solution.
-
- If your cluster was created via `kube-up.sh` and is still using etcd2 as its
- storage backend, please consult the [Kubernetes v1.12 etcd cluster upgrade
- docs](https://v1-12.docs.kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#upgrading-and-rolling-back-etcd-clusters).
-
- ## Known issue: etcd client balancer with secure endpoints
-
- The etcd v3 client released in etcd v3.3.13 or earlier has a [critical
- bug](https://github.com/kubernetes/kubernetes/issues/72102) which affects the
- kube-apiserver and HA deployments. The etcd client balancer failover does not
- work properly against secure endpoints. As a result, etcd servers may fail or
- disconnect briefly from the kube-apiserver. This affects kube-apiserver HA
- deployments.
-
- The fix was made in etcd v3.4 (and backported to v3.3.14 or later): the new
- client now creates its own credential bundle to correctly set the authority
- target in the dial function.
-
- Because the fix requires a gRPC dependency upgrade (to v1.23.0), downstream
- Kubernetes [did not backport etcd
- upgrades](https://github.com/kubernetes/kubernetes/issues/72102#issuecomment-526645978).
- This means the [etcd fix in
- kube-apiserver](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab)
- is only available from Kubernetes 1.16.
-
- To urgently fix this bug for Kubernetes 1.15 or earlier, build a custom
- kube-apiserver. You can make local changes to
- [`vendor/google.golang.org/grpc/credentials/credentials.go`](https://github.com/kubernetes/kubernetes/blob/7b85be021cd2943167cd3d6b7020f44735d9d90b/vendor/google.golang.org/grpc/credentials/credentials.go#L135)
- with
- [etcd@db61ee106](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab).
-
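For reference, the etcd2 to etcd3 migration step described in the removed text could look roughly like the sketch below. This is a minimal, illustrative sequence under stated assumptions, not a supported procedure: it assumes a single etcd member whose data lives under `/var/lib/etcd` (a placeholder path), that etcd and the kube-apiserver are stopped while the offline `etcdctl migrate` subcommand shipped with etcd v3 releases of that era is run, and that the kube-apiserver flags are edited wherever your deployment defines them (for example, a static Pod manifest).

```shell
# Stop the kube-apiserver and etcd before touching the data directory.
# /var/lib/etcd is an assumed path; substitute your member's actual data dir.
ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd

# Start etcd again (serving the v3 API), then change the kube-apiserver
# invocation from --storage-backend=etcd2 to --storage-backend=etcd3 in the
# manifest or unit file that launches it, and restart the kube-apiserver.
```

For clusters created with `kube-up.sh`, or for HA and TLS-enabled etcd clusters, follow your provider's documentation or the linked Kubernetes v1.12 upgrade docs instead, as the removed text recommends.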