Commit 0acf0e9

kubeadm: update HA etcd guide for clarity and fix an issue
There were a couple of reported problems with this guide:

- The introductory paragraph talks about single control plane nodes and does
  not mention the different options for HA etcd. Clarify the language to reduce
  the confusion and cross-link to the ha-topology page.
- The hostname / IP detection in kubeadm can end up with values not suitable
  for the certificates that kubeadm generates for all etcd instances. Ensure
  that the hostnames / IPs are pinned by the user in the example script.

Side cleanup related to the dockershim removal:

- Use containerd in the setup example and don't mention docker as a requirement.
1 parent aae389a commit 0acf0e9

File tree

1 file changed (+36, -16 lines)


content/en/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md

Lines changed: 36 additions & 16 deletions
@@ -16,19 +16,20 @@ or upgrades for such nodes. The long term plan is to empower the tool
 aspects.
 {{< /note >}}
 
-Kubeadm defaults to running a single member etcd cluster in a static pod managed
-by the kubelet on the control plane node. This is not a high availability setup
-as the etcd cluster contains only one member and cannot sustain any members
-becoming unavailable. This task walks through the process of creating a high
-availability etcd cluster of three members that can be used as an external etcd
-when using kubeadm to set up a kubernetes cluster.
+By default, kubeadm runs a local etcd instance on each control plane node.
+It is also possible to treat the etcd cluster as external and provision
+etcd instances on separate hosts. The differences between the two approaches are covered in the
+[Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology) page.
+This task walks through the process of creating a high availability external
+etcd cluster of three members that can be used by kubeadm during cluster creation.
 
 ## {{% heading "prerequisites" %}}
 
-* Three hosts that can talk to each other over ports 2379 and 2380. This
+* Three hosts that can talk to each other over TCP ports 2379 and 2380. This
   document assumes these default ports. However, they are configurable through
   the kubeadm config file.
-* Each host must [have docker, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
+* Each host must have systemd and a bash compatible shell installed.
+* Each host must [have a container runtime, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
 * Each host should have access to the Kubernetes container image registry (`k8s.gcr.io`) or list/pull the required etcd image using
   `kubeadm config images list/pull`. This guide will setup etcd instances as
   [static pods](/docs/tasks/configure-pod-container/static-pod/) managed by a kubelet.
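
The prerequisite above calls for TCP ports 2379 (client traffic) and 2380 (peer traffic). A minimal sketch for verifying that the hosts can reach each other on those ports once the etcd instances are up, assuming a bash shell with `/dev/tcp` support and the example addresses used in the script further down:

```sh
# Run from one etcd host after the other members are up; the addresses are the
# example values used later in this guide (adjust to your environment).
for host in 10.0.0.7 10.0.0.8; do
  for port in 2379 2380; do    # 2379: client traffic, 2380: peer traffic
    if timeout 3 bash -c "</dev/tcp/${host}/${port}" 2>/dev/null; then
      echo "${host}:${port} reachable"
    else
      echo "${host}:${port} NOT reachable"
    fi
  done
done
```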
@@ -48,6 +49,11 @@ the certificates described below; no other cryptographic tooling is required for
 this example.
 {{< /note >}}
 
+{{< note >}}
+The examples below use IPv4 addresses but you can also configure kubeadm, the kubelet and etcd
+to use IPv6 addresses. Dual-stack is supported by some Kubernetes options, but not by etcd. For more details
+on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/setup/production-environment/tools/kubeadm/dual-stack-support/).
+{{< /note >}}
 
 1. Configure the kubelet to be a service manager for etcd.
 
@@ -59,8 +65,9 @@ this example.
 cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
 [Service]
 ExecStart=
-# Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
-ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd
+# Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
+# Replace the value of "--container-runtime-endpoint" for a different container runtime if needed.
+ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
 Restart=always
 EOF
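
A systemd drop-in like the one above only takes effect after systemd reloads its unit files; a minimal sketch of applying it:

```sh
# Reload unit files so the new drop-in override takes effect, then restart
# the kubelet so it runs with the updated ExecStart.
systemctl daemon-reload
systemctl restart kubelet

# Verify the kubelet picked up the override (status should show the new flags).
systemctl status kubelet
```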
@@ -80,21 +87,34 @@ this example.
 member running on it using the following script.
 
 ```sh
-# Update HOST0, HOST1, and HOST2 with the IPs or resolvable names of your hosts
+# Update HOST0, HOST1 and HOST2 with the IPs of your hosts
 export HOST0=10.0.0.6
 export HOST1=10.0.0.7
 export HOST2=10.0.0.8
 
+# Update NAME0, NAME1 and NAME2 with the hostnames of your hosts
+export NAME0="infra0"
+export NAME1="infra1"
+export NAME2="infra2"
+
 # Create temp directories to store files that will end up on other hosts.
 mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
 
-ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
-NAMES=("infra0" "infra1" "infra2")
+HOSTS=(${HOST0} ${HOST1} ${HOST2})
+NAMES=(${NAME0} ${NAME1} ${NAME2})
 
-for i in "${!ETCDHOSTS[@]}"; do
-HOST=${ETCDHOSTS[$i]}
+for i in "${!HOSTS[@]}"; do
+HOST=${HOSTS[$i]}
 NAME=${NAMES[$i]}
 cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
+---
+apiVersion: "kubeadm.k8s.io/v1beta3"
+kind: InitConfiguration
+nodeRegistration:
+    name: ${NAME}
+localAPIEndpoint:
+    advertiseAddress: ${HOST}
+---
 apiVersion: "kubeadm.k8s.io/v1beta3"
 kind: ClusterConfiguration
 etcd:
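
The added `InitConfiguration` block is what pins the member name and advertise address instead of relying on kubeadm's automatic detection. Once the loop has finished (it continues in the next hunk), one way to spot-check the result, reusing the script's own variables, is:

```sh
# Print the pinned identity in each generated config; the names and addresses
# should match NAME0..2 / HOST0..2 exactly, because these values end up in the
# SANs of the certificates kubeadm generates for every etcd member.
for host in ${HOST0} ${HOST1} ${HOST2}; do
  echo "--- /tmp/${host}/kubeadmcfg.yaml ---"
  grep -E 'name:|advertiseAddress:' /tmp/${host}/kubeadmcfg.yaml
done
```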
@@ -104,7 +124,7 @@ this example.
         peerCertSANs:
         - "${HOST}"
         extraArgs:
-            initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380,${NAMES[2]}=https://${ETCDHOSTS[2]}:2380
+            initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380
             initial-cluster-state: new
             name: ${NAME}
             listen-peer-urls: https://${HOST}:2380
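
Every member's `initial-cluster` flag must expand to the same three-entry list; a quick consistency check, assuming the generation script above has been run with the example values:

```sh
# With the example values, each config should carry the identical member list:
#   infra0=https://10.0.0.6:2380,infra1=https://10.0.0.7:2380,infra2=https://10.0.0.8:2380
grep -h 'initial-cluster:' /tmp/${HOST0}/kubeadmcfg.yaml \
  /tmp/${HOST1}/kubeadmcfg.yaml /tmp/${HOST2}/kubeadmcfg.yaml | sort -u
# A single unique line of output means the three configs agree.
```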
