Commit c617542

committed: document one should restart all system components after restoring etcd

1 parent 6691d8e

File tree

1 file changed: +13 −0 lines changed


content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md

Lines changed: 13 additions & 0 deletions
@@ -200,6 +200,19 @@ If the access URLs of the restored cluster is changed from the previous cluster,
 If the majority of etcd members have permanently failed, the etcd cluster is considered failed. In this scenario, Kubernetes cannot make any changes to its current state. Although the scheduled pods might continue to run, no new pods can be scheduled. In such cases, recover the etcd cluster and potentially reconfigure the Kubernetes API server to fix the issue.

+{{< note >}}
+If any API servers are running in your cluster, you should not attempt to restore instances of etcd.
+Instead, follow these steps to restore etcd:
+
+- stop *all* kube-apiserver instances
+- restore state in all etcd instances
+- restart all kube-apiserver instances
+
+We also recommend restarting any components (for example, kube-scheduler, kube-controller-manager, kubelet)
+to ensure that they don't rely on stale data. Note that in practice the restore takes a bit of time.
+During the restoration, critical components will lose their leader lock and restart themselves.
+{{< /note >}}
+
 ## Upgrading and rolling back etcd clusters

 As of Kubernetes v1.13.0, etcd2 is no longer supported as a storage backend for
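The restore sequence documented in this commit can be sketched as a shell procedure. This is a minimal illustration only: the snapshot path, etcd data directory, and systemd unit names (`kube-apiserver`, `etcd`, etc.) are assumptions that vary between installations (static-pod setups have no such units), so adapt each step to your deployment.

```shell
# Hypothetical restore sketch for a systemd-managed control plane.
# Run the kube-apiserver steps on EVERY control-plane node.

# 1. Stop *all* kube-apiserver instances.
sudo systemctl stop kube-apiserver

# 2. Restore state in all etcd instances from a snapshot.
sudo systemctl stop etcd
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-snapshot.db \
  --data-dir /var/lib/etcd-restored   # assumed paths
# Reconfigure etcd to use the restored data directory, then start it.
sudo systemctl start etcd

# 3. Restart all kube-apiserver instances.
sudo systemctl start kube-apiserver

# As recommended above, also restart dependent components so they
# do not act on stale data.
sudo systemctl restart kube-scheduler kube-controller-manager kubelet
```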
