-See [Creating Highly Available Clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/) for steps on creating a high availability kubeadm cluster by adding more control plane
-nodes.
+See [Creating Highly Available Clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/)
+for steps on creating a high availability kubeadm cluster by adding more control plane nodes.
 
 ### Adding worker nodes {#join-nodes}
 
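For orientation, the join step documented under this heading generally has the following shape; the endpoint, token, and hash below are illustrative placeholders rather than values taken from this change:

```shell
# Run on each worker node (placeholder values shown for illustration only)
sudo kubeadm join CONTROL_PLANE_ENDPOINT:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```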
@@ -439,7 +433,7 @@ privileges by using `kubectl create (cluster)rolebinding`.
 
 ### (Optional) Proxying API Server to localhost
 
-If you want to connect to the API Server from outside the cluster you can use
+If you want to connect to the API Server from outside the cluster, you can use
 `kubectl proxy`:
 
 ```bash
@@ -474,7 +468,8 @@ Before removing the node, reset the state installed by `kubeadm`:
 kubeadm reset
 ```
 
-The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:
+The reset process does not reset or clean up iptables rules or IPVS tables.
+If you wish to reset iptables, you must do so manually:
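As a hedged illustration of the manual cleanup this paragraph refers to (a sketch, not part of the diff; review against your own CNI and rule set before running):

```shell
# Flush iptables rules and delete non-default chains
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# If kube-proxy ran in IPVS mode, clear the IPVS tables as well
ipvsadm -C
```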
@@ -503,7 +499,6 @@ See the [`kubeadm reset`](/docs/reference/setup-tools/kubeadm/kubeadm-reset/)
 reference documentation for more information about this subcommand and its
 options.
 
-
 ## Version skew policy {#version-skew-policy}
 
 While kubeadm allows version skew against some components that it manages, it is recommended that you
@@ -519,6 +514,7 @@ field when using `--config`. This option will control the versions
 of kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy.
 
 Example:
+
 * kubeadm is at {{< skew currentVersion >}}
 * `kubernetesVersion` must be at {{< skew currentVersion >}} or {{< skew currentVersionAddMinor -1 >}}
 
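For readers following the skew rules, a minimal sketch of how `kubernetesVersion` is pinned in a kubeadm configuration passed via `--config` (the version string is an illustrative placeholder):

```yaml
# Sketch only: pin the control plane component versions through the kubeadm config
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: "v1.33.0"   # placeholder; must satisfy the skew rules above
```

Such a file would typically be applied with `kubeadm init --config kubeadm-config.yaml`.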
@@ -528,8 +524,10 @@ Similarly to the Kubernetes version, kubeadm can be used with a kubelet version
 the same version as kubeadm or three versions older.
 
 Example:
+
 * kubeadm is at {{< skew currentVersion >}}
-* kubelet on the host must be at {{< skew currentVersion >}}, {{< skew currentVersionAddMinor -1 >}}, {{< skew currentVersionAddMinor -2 >}} or {{< skew currentVersionAddMinor -3 >}}
+* kubelet on the host must be at {{< skew currentVersion >}}, {{< skew currentVersionAddMinor -1 >}},
+  {{< skew currentVersionAddMinor -2 >}} or {{< skew currentVersionAddMinor -3 >}}
@@ -54,24 +53,26 @@ on control plane nodes when using `kubeadm init` and `kubeadm join --control-pla
 
 ## External etcd topology
 
-An HA cluster with external etcd is a [topology](https://en.wikipedia.org/wiki/Network_topology) where the distributed data storage cluster provided by etcd is external to the cluster formed by the nodes that run control plane components.
+An HA cluster with external etcd is a [topology](https://en.wikipedia.org/wiki/Network_topology)
+where the distributed data storage cluster provided by etcd is external to the cluster formed by
+the nodes that run control plane components.
 
-Like the stacked etcd topology, each control plane node in an external etcd topology runs an instance of the `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager`. And the `kube-apiserver` is exposed to worker nodes using a load balancer. However, etcd members run on separate hosts, and each etcd host communicates with the `kube-apiserver` of each control plane node.
+Like the stacked etcd topology, each control plane node in an external etcd topology runs
+an instance of the `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager`.
+And the `kube-apiserver` is exposed to worker nodes using a load balancer. However,
+etcd members run on separate hosts, and each etcd host communicates with the
+`kube-apiserver` of each control plane node.
 
 This topology decouples the control plane and etcd member. It therefore provides an HA setup where
 losing a control plane instance or an etcd member has less impact and does not affect
 the cluster redundancy as much as the stacked HA topology.
 
 However, this topology requires twice the number of hosts as the stacked HA topology.
-A minimum of three hosts for control plane nodes and three hosts for etcd nodes are required for an HA cluster with this topology.
+A minimum of three hosts for control plane nodes and three hosts for etcd nodes are
+required for an HA cluster with this topology.
content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md (13 additions, 9 deletions)
@@ -17,7 +17,8 @@ cluster using kubeadm:
 control plane nodes and etcd members are separated.
 
 Before proceeding, you should carefully consider which approach best meets the needs of your applications
-and environment. [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) outlines the advantages and disadvantages of each.
+and environment. [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/)
+outlines the advantages and disadvantages of each.
 
 If you encounter issues with setting up the HA cluster, please report these
 in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new).
@@ -85,6 +86,7 @@ You need:
 <!-- end of shared prerequisites -->
 
 And you also need:
+
 - Three or more additional machines, that will become etcd cluster members.
   Having an odd number of members in the etcd cluster is a requirement for achieving
-Each host should have access read and fetch images from the Kubernetes container image registry, `registry.k8s.io`.
-If you want to deploy a highly-available cluster where the hosts do not have access to pull images, this is possible. You must ensure by some other means that the correct container images are already available on the relevant hosts.
+Each host should have access to read and fetch images from the Kubernetes container image registry,
+`registry.k8s.io`. If you want to deploy a highly-available cluster where the hosts do not have
+access to pull images, this is possible. You must ensure by some other means that the correct
+container images are already available on the relevant hosts.
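A hedged sketch of one way to meet this requirement, using kubeadm's own image subcommands (how you then move the images onto hosts without registry access is up to you):

```shell
# List the images this kubeadm version needs (run where registry access exists)
kubeadm config images list
# Pre-pull them where access exists, then transfer them to the isolated hosts
# by your own mechanism, for example a private registry or exported image archives
kubeadm config images pull
```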
 
 ### Command line interface {#kubectl}
 
@@ -288,7 +292,6 @@ in the kubeadm config file.
 
 1. Create a file called `kubeadm-config.yaml` with the following contents:
 
-
 ```yaml
 ---
 apiVersion: kubeadm.k8s.io/v1beta4
@@ -366,13 +369,13 @@ SSH is required if you want to control all nodes from a single machine.
 
 1. Enable ssh-agent on your main device that has access to all other nodes in
 the system:
 
-```
+```shell
 eval $(ssh-agent)
 ```
 
 1. Add your SSH identity to the session:
 
-```
+```shell
 ssh-add ~/.ssh/path_to_private_key
 ```
 
@@ -382,14 +385,14 @@ SSH is required if you want to control all nodes from a single machine.
 have logged into via SSH to access the SSH agent on your PC. Consider alternative
 methods if you do not fully trust the security of your user session on the node.
 
-```
+```shell
 ssh -A 10.0.0.7
 ```
 
 - When using sudo on any node, make sure to preserve the environment so SSH
 forwarding works:
 
-```
+```shell
 sudo -E -s
 ```
 
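A quick, hedged way to confirm that agent forwarding is actually in effect after `ssh -A` and `sudo -E -s` (assumes the agent socket really was forwarded into the session):

```shell
# On the node you just logged into: list identities visible through the forwarded agent
ssh-add -L
```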
@@ -399,6 +402,7 @@ SSH is required if you want to control all nodes from a single machine.
 
 In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the
 other control plane nodes.
+
 ```sh
 USER=ubuntu # customizable
 CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
@@ -413,7 +417,7 @@ SSH is required if you want to control all nodes from a single machine.
 # Skip the next line if you are using external etcd
@@ -197,7 +198,8 @@ These instructions are for Kubernetes v{{< skew currentVersion >}}.
 ```
 
 {{< note >}}
-In releases older than Debian 12 and Ubuntu 22.04, directory `/etc/apt/keyrings` does not exist by default, and it should be created before the curl command.
+In releases older than Debian 12 and Ubuntu 22.04, directory `/etc/apt/keyrings` does not
+exist by default, and it should be created before the curl command.
 {{< /note >}}
 
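On those older releases, creating the directory first is a one-liner; a minimal sketch (mode 755 is a common choice, not something mandated by this change):

```shell
# Create the keyrings directory before running the curl command in the step above
sudo mkdir -p -m 755 /etc/apt/keyrings
```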
 3. Add the appropriate Kubernetes `apt` repository. Please note that this repository have packages
@@ -240,11 +242,11 @@ In releases older than Debian 12 and Ubuntu 22.04, directory `/etc/apt/keyrings`
 
 {{< caution >}}
 - Setting SELinux in permissive mode by running `setenforce 0` and `sed ...`
-effectively disables it. This is required to allow containers to access the host
-filesystem; for example, some cluster network plugins require that. You have to
-do this until SELinux support is improved in the kubelet.
+  effectively disables it. This is required to allow containers to access the host
+  filesystem; for example, some cluster network plugins require that. You have to
+  do this until SELinux support is improved in the kubelet.
 - You can leave SELinux enabled if you know how to configure it but it may require
-settings that are not supported by kubeadm.
+  settings that are not supported by kubeadm.
 {{< /caution >}}
 
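For reference, the `setenforce 0` and `sed ...` invocation this caution refers to typically looks like the following sketch (the full commands live elsewhere on the page, not in this hunk):

```shell
# Put SELinux into permissive mode now, and keep it permissive across reboots
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```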
 2. Add the Kubernetes `yum` repository. The `exclude` parameter in the