Commit 5cc2a00

Merge pull request #48375 from windsonsea/creadm
Tweak and clean up four kubeadm files
2 parents 47f1f78 + 67c5917 commit 5cc2a00

File tree

4 files changed (+65, -60 lines)


content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md

Lines changed: 32 additions & 34 deletions
@@ -27,21 +27,16 @@ of cloud servers, a Raspberry Pi, and more. Whether you're deploying into the
 cloud or on-premises, you can integrate `kubeadm` into provisioning systems such
 as Ansible or Terraform.
 
-
-
 ## {{% heading "prerequisites" %}}
 
-
 To follow this guide, you need:
 
 - One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
-- 2 GiB or more of RAM per machine--any less leaves little room for your
-  apps.
+- 2 GiB or more of RAM per machine--any less leaves little room for your apps.
 - At least 2 CPUs on the machine that you use as a control-plane node.
 - Full network connectivity among all machines in the cluster. You can use either a
   public or a private network.
 
-
 You also need to use a version of `kubeadm` that can deploy the version
 of Kubernetes that you want to use in your new cluster.
 
@@ -58,8 +53,6 @@ slightly as the tool evolves, but the overall implementation should be pretty st
 Any commands under `kubeadm alpha` are, by definition, supported on an alpha level.
 {{< /note >}}
 
-
-
 <!-- steps -->
 
 ## Objectives
@@ -74,12 +67,14 @@ Any commands under `kubeadm alpha` are, by definition, supported on an alpha lev
 
 #### Component installation
 
-Install a {{< glossary_tooltip term_id="container-runtime" text="container runtime" >}} and kubeadm on all the hosts.
-For detailed instructions and other prerequisites, see [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
+Install a {{< glossary_tooltip term_id="container-runtime" text="container runtime" >}}
+and kubeadm on all the hosts. For detailed instructions and other prerequisites, see
+[Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
 
 {{< note >}}
 If you have already installed kubeadm, see the first two steps of the
-[Upgrading Linux nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes) document for instructions on how to upgrade kubeadm.
+[Upgrading Linux nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes)
+document for instructions on how to upgrade kubeadm.
 
 When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for
 kubeadm to tell it what to do. This crashloop is expected and normal.
@@ -166,17 +161,17 @@ The control-plane node is the machine where the control plane components run, in
 communicates with).
 
 1. (Recommended) If you have plans to upgrade this single control-plane `kubeadm` cluster
-to [high availability](/docs/setup/production-environment/tools/kubeadm/high-availability/)
-you should specify the `--control-plane-endpoint` to set the shared endpoint for all control-plane nodes.
-Such an endpoint can be either a DNS name or an IP address of a load-balancer.
+   to [high availability](/docs/setup/production-environment/tools/kubeadm/high-availability/)
+   you should specify the `--control-plane-endpoint` to set the shared endpoint for all control-plane nodes.
+   Such an endpoint can be either a DNS name or an IP address of a load-balancer.
 1. Choose a Pod network add-on, and verify whether it requires any arguments to
-be passed to `kubeadm init`. Depending on which
-third-party provider you choose, you might need to set the `--pod-network-cidr` to
-a provider-specific value. See [Installing a Pod network add-on](#pod-network).
+   be passed to `kubeadm init`. Depending on which
+   third-party provider you choose, you might need to set the `--pod-network-cidr` to
+   a provider-specific value. See [Installing a Pod network add-on](#pod-network).
 1. (Optional) `kubeadm` tries to detect the container runtime by using a list of well
-known endpoints. To use different container runtime or if there are more than one installed
-on the provisioned node, specify the `--cri-socket` argument to `kubeadm`. See
-[Installing a runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
+   known endpoints. To use different container runtime or if there are more than one installed
+   on the provisioned node, specify the `--cri-socket` argument to `kubeadm`. See
+   [Installing a runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
 
 To initialize the control-plane node run:
 
@@ -186,7 +181,7 @@ kubeadm init <args>
 
 ### Considerations about apiserver-advertise-address and ControlPlaneEndpoint
 
-While `--apiserver-advertise-address` can be used to set the advertise address for this particular
+While `--apiserver-advertise-address` can be used to set the advertised address for this particular
 control-plane node's API server, `--control-plane-endpoint` can be used to set the shared endpoint
 for all control-plane nodes.
 
@@ -201,7 +196,7 @@ Here is an example mapping:
 
 Where `192.168.0.102` is the IP address of this node and `cluster-endpoint` is a custom DNS name that maps to this IP.
 This will allow you to pass `--control-plane-endpoint=cluster-endpoint` to `kubeadm init` and pass the same DNS name to
-`kubeadm join`. Later you can modify `cluster-endpoint` to point to the address of your load-balancer in an
+`kubeadm join`. Later you can modify `cluster-endpoint` to point to the address of your load-balancer in a
 high availability scenario.
 
 Turning a single control plane cluster created without `--control-plane-endpoint` into a highly available cluster
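The mapping this hunk describes can be illustrated with a small sketch (an editor-added example, not part of the commit; `192.168.0.102` and `cluster-endpoint` come from the surrounding text, and writing to a temp file instead of `/etc/hosts` is an assumption so the snippet is safe to run anywhere):

```shell
# Sketch only: register a stable name for the control-plane endpoint so it
# can later be re-pointed at a load balancer. Writes to a temp file here;
# on a real node you would edit /etc/hosts (as root) instead.
HOSTS_FILE="$(mktemp)"            # assumption: demo target, not /etc/hosts
ENDPOINT_IP="192.168.0.102"       # this node's IP, from the example above
ENDPOINT_NAME="cluster-endpoint"  # custom DNS name, from the example above

# Append the mapping only if it is not already present (idempotent).
grep -q "$ENDPOINT_NAME" "$HOSTS_FILE" || \
  echo "$ENDPOINT_IP $ENDPOINT_NAME" >> "$HOSTS_FILE"

cat "$HOSTS_FILE"
```

On the first control-plane node you would then run `kubeadm init --control-plane-endpoint=cluster-endpoint` and pass the same name to `kubeadm join`, as the text describes.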
@@ -334,7 +329,6 @@ support [Network Policy](/docs/concepts/services-networking/network-policies/).
 See a list of add-ons that implement the
 [Kubernetes networking model](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-network-model).
 
-
 Please refer to the [Installing Addons](/docs/concepts/cluster-administration/addons/#networking-and-network-policy)
 page for a non-exhaustive list of networking addons supported by Kubernetes.
 You can install a Pod network add-on with the following command on the
@@ -399,8 +393,8 @@ kubectl label nodes --all node.kubernetes.io/exclude-from-external-load-balancer
 
 ### Adding more control plane nodes
 
-See [Creating Highly Available Clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/) for steps on creating a high availability kubeadm cluster by adding more control plane
-nodes.
+See [Creating Highly Available Clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/)
+for steps on creating a high availability kubeadm cluster by adding more control plane nodes.
 
 ### Adding worker nodes {#join-nodes}
 
@@ -439,7 +433,7 @@ privileges by using `kubectl create (cluster)rolebinding`.
 
 ### (Optional) Proxying API Server to localhost
 
-If you want to connect to the API Server from outside the cluster you can use
+If you want to connect to the API Server from outside the cluster, you can use
 `kubectl proxy`:
 
 ```bash
@@ -474,7 +468,8 @@ Before removing the node, reset the state installed by `kubeadm`:
 kubeadm reset
 ```
 
-The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:
+The reset process does not reset or clean up iptables rules or IPVS tables.
+If you wish to reset iptables, you must do so manually:
 
 ```bash
 iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
@@ -487,6 +482,7 @@ ipvsadm -C
 ```
 
 Now remove the node:
+
 ```bash
 kubectl delete node <node name>
 ```
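The teardown order shown across these hunks can be wrapped into one sketch (an editor-added, hypothetical helper, not something this commit defines; the `kubectl drain` flags are the commonly used ones, and `worker-1` is an invented node name):

```shell
# Hypothetical helper: the documented removal sequence as a single function.
# With DRY_RUN=echo the commands are only printed, so this is safe to run
# without a cluster; unset DRY_RUN to execute for real.
remove_node() {
  local node="$1" run="${DRY_RUN:-}"
  $run kubectl drain "$node" --delete-emptydir-data --force --ignore-daemonsets
  $run kubeadm reset    # run on the node being removed, not from your workstation
  $run kubectl delete node "$node"
}

DRY_RUN=echo remove_node worker-1
```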
@@ -503,7 +499,6 @@ See the [`kubeadm reset`](/docs/reference/setup-tools/kubeadm/kubeadm-reset/)
 reference documentation for more information about this subcommand and its
 options.
 
-
 ## Version skew policy {#version-skew-policy}
 
 While kubeadm allows version skew against some components that it manages, it is recommended that you
@@ -519,6 +514,7 @@ field when using `--config`. This option will control the versions
 of kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy.
 
 Example:
+
 * kubeadm is at {{< skew currentVersion >}}
 * `kubernetesVersion` must be at {{< skew currentVersion >}} or {{< skew currentVersionAddMinor -1 >}}
 
@@ -528,8 +524,10 @@ Similarly to the Kubernetes version, kubeadm can be used with a kubelet version
 the same version as kubeadm or three versions older.
 
 Example:
+
 * kubeadm is at {{< skew currentVersion >}}
-* kubelet on the host must be at {{< skew currentVersion >}}, {{< skew currentVersionAddMinor -1 >}}, {{< skew currentVersionAddMinor -2 >}} or {{< skew currentVersionAddMinor -3 >}}
+* kubelet on the host must be at {{< skew currentVersion >}}, {{< skew currentVersionAddMinor -1 >}},
+  {{< skew currentVersionAddMinor -2 >}} or {{< skew currentVersionAddMinor -3 >}}
 
 ### kubeadm's skew against kubeadm
 
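The kubelet window described in these hunks can be expressed as a small check (an editor-added sketch; `31` and `28` are illustrative minor versions, not releases named by the text, and real tooling should compare full versions rather than bare minors):

```shell
# Sketch: a kubelet is acceptable when its MINOR version is the same as
# kubeadm's, or at most three minors older (never newer).
skew_ok() {
  local kubeadm_minor="$1" kubelet_minor="$2"
  local diff=$((kubeadm_minor - kubelet_minor))
  [ "$diff" -ge 0 ] && [ "$diff" -le 3 ]
}

skew_ok 31 28 && echo "kubelet 1.28 with kubeadm 1.31: supported"
skew_ok 31 27 || echo "kubelet 1.27 with kubeadm 1.31: too old"
```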
@@ -542,6 +540,7 @@ the same node with `kubeadm upgrade`. Similar rules apply to the rest of the kub
 with the exception of `kubeadm upgrade`.
 
 Example for `kubeadm join`:
+
 * kubeadm version {{< skew currentVersion >}} was used to create a cluster with `kubeadm init`
 * Joining nodes must use a kubeadm binary that is at version {{< skew currentVersion >}}
 
@@ -550,9 +549,10 @@ version or one MINOR version newer than the version of kubeadm used for managing
 node.
 
 Example for `kubeadm upgrade`:
+
 * kubeadm version {{< skew currentVersionAddMinor -1 >}} was used to create or upgrade the node
 * The version of kubeadm used for upgrading the node must be at {{< skew currentVersionAddMinor -1 >}}
-or {{< skew currentVersion >}}
+  or {{< skew currentVersion >}}
 
 To learn more about the version skew between the different Kubernetes component see
 the [Version Skew Policy](/releases/version-skew-policy/).
@@ -577,8 +577,7 @@ Workarounds:
 ### Platform compatibility {#multi-platform}
 
 kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x
-following the [multi-platform
-proposal](https://git.k8s.io/design-proposals-archive/multi-platform.md).
+following the [multi-platform proposal](https://git.k8s.io/design-proposals-archive/multi-platform.md).
 
 Multiplatform container images for the control plane and addons are also supported since v1.12.
 
@@ -591,10 +590,9 @@ supports your chosen platform.
 If you are running into difficulties with kubeadm, please consult our
 [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
 
-
 <!-- discussion -->
 
-## What's next {#whats-next}
+## {{% heading "whatsnext" %}}
 
 * Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
 * <a id="lifecycle" />See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)

content/en/docs/setup/production-environment/tools/kubeadm/ha-topology.md

Lines changed: 12 additions & 11 deletions
@@ -18,12 +18,11 @@ You can set up an HA cluster:
 You should carefully consider the advantages and disadvantages of each topology before setting up an HA cluster.
 
 {{< note >}}
-kubeadm bootstraps the etcd cluster statically. Read the etcd [Clustering Guide](https://github.com/etcd-io/etcd/blob/release-3.4/Documentation/op-guide/clustering.md#static)
+kubeadm bootstraps the etcd cluster statically. Read the etcd
+[Clustering Guide](https://github.com/etcd-io/etcd/blob/release-3.4/Documentation/op-guide/clustering.md#static)
 for more details.
 {{< /note >}}
 
-
-
 <!-- body -->
 
 ## Stacked etcd topology
@@ -54,24 +53,26 @@ on control plane nodes when using `kubeadm init` and `kubeadm join --control-pla
 
 ## External etcd topology
 
-An HA cluster with external etcd is a [topology](https://en.wikipedia.org/wiki/Network_topology) where the distributed data storage cluster provided by etcd is external to the cluster formed by the nodes that run control plane components.
+An HA cluster with external etcd is a [topology](https://en.wikipedia.org/wiki/Network_topology)
+where the distributed data storage cluster provided by etcd is external to the cluster formed by
+the nodes that run control plane components.
 
-Like the stacked etcd topology, each control plane node in an external etcd topology runs an instance of the `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager`. And the `kube-apiserver` is exposed to worker nodes using a load balancer. However, etcd members run on separate hosts, and each etcd host communicates with the `kube-apiserver` of each control plane node.
+Like the stacked etcd topology, each control plane node in an external etcd topology runs
+an instance of the `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager`.
+And the `kube-apiserver` is exposed to worker nodes using a load balancer. However,
+etcd members run on separate hosts, and each etcd host communicates with the
+`kube-apiserver` of each control plane node.
 
 This topology decouples the control plane and etcd member. It therefore provides an HA setup where
 losing a control plane instance or an etcd member has less impact and does not affect
 the cluster redundancy as much as the stacked HA topology.
 
 However, this topology requires twice the number of hosts as the stacked HA topology.
-A minimum of three hosts for control plane nodes and three hosts for etcd nodes are required for an HA cluster with this topology.
+A minimum of three hosts for control plane nodes and three hosts for etcd nodes are
+required for an HA cluster with this topology.
 
 ![External etcd topology](/images/kubeadm/kubeadm-ha-topology-external-etcd.svg)
 
-
-
 ## {{% heading "whatsnext" %}}
 
-
 - [Set up a highly available cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/)
-
-

content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md

Lines changed: 13 additions & 9 deletions
@@ -17,7 +17,8 @@ cluster using kubeadm:
 control plane nodes and etcd members are separated.
 
 Before proceeding, you should carefully consider which approach best meets the needs of your applications
-and environment. [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) outlines the advantages and disadvantages of each.
+and environment. [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/)
+outlines the advantages and disadvantages of each.
 
 If you encounter issues with setting up the HA cluster, please report these
 in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new).
@@ -85,6 +86,7 @@ You need:
 <!-- end of shared prerequisites -->
 
 And you also need:
+
 - Three or more additional machines, that will become etcd cluster members.
   Having an odd number of members in the etcd cluster is a requirement for achieving
   optimal voting quorum.
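The quorum arithmetic behind that odd-member requirement can be sketched as follows (an editor-added illustration; the formulas are the standard majority-quorum ones, not something this commit defines):

```shell
# Sketch: majority quorum is floor(n/2)+1. An even member count raises the
# quorum without raising fault tolerance, which is why odd sizes are preferred.
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerated() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 3 4 5; do
  echo "$n members: quorum $(quorum $n), tolerates $(tolerated $n) failure(s)"
done
```

Running this prints that 3 members tolerate 1 failure, 4 members still tolerate only 1, and 5 members tolerate 2.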
@@ -97,8 +99,10 @@ _See [External etcd topology](/docs/setup/production-environment/tools/kubeadm/h
 
 ### Container images
 
-Each host should have access read and fetch images from the Kubernetes container image registry, `registry.k8s.io`.
-If you want to deploy a highly-available cluster where the hosts do not have access to pull images, this is possible. You must ensure by some other means that the correct container images are already available on the relevant hosts.
+Each host should have access read and fetch images from the Kubernetes container image registry,
+`registry.k8s.io`. If you want to deploy a highly-available cluster where the hosts do not have
+access to pull images, this is possible. You must ensure by some other means that the correct
+container images are already available on the relevant hosts.
 
 ### Command line interface {#kubectl}
 
@@ -288,7 +292,6 @@ in the kubeadm config file.
 
 1. Create a file called `kubeadm-config.yaml` with the following contents:
 
-
    ```yaml
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
@@ -366,13 +369,13 @@ SSH is required if you want to control all nodes from a single machine.
 1. Enable ssh-agent on your main device that has access to all other nodes in
    the system:
 
-   ```
+   ```shell
    eval $(ssh-agent)
    ```
 
 1. Add your SSH identity to the session:
 
-   ```
+   ```shell
    ssh-add ~/.ssh/path_to_private_key
    ```
 
@@ -382,14 +385,14 @@ SSH is required if you want to control all nodes from a single machine.
   have logged into via SSH to access the SSH agent on your PC. Consider alternative
   methods if you do not fully trust the security of your user session on the node.
 
-  ```
+  ```shell
   ssh -A 10.0.0.7
   ```
 
 - When using sudo on any node, make sure to preserve the environment so SSH
   forwarding works:
 
-  ```
+  ```shell
   sudo -E -s
   ```
 
@@ -399,6 +402,7 @@ SSH is required if you want to control all nodes from a single machine.
 
   In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the
   other control plane nodes.
+
   ```sh
   USER=ubuntu # customizable
   CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
@@ -413,7 +417,7 @@ SSH is required if you want to control all nodes from a single machine.
   # Skip the next line if you are using external etcd
   scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
   done
-```
+  ```
 
 {{< caution >}}
 Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates

content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md

Lines changed: 8 additions & 6 deletions
@@ -31,7 +31,8 @@ see the [Creating a cluster with kubeadm](/docs/setup/production-environment/too
 The `kubeadm` installation is done via binaries that use dynamic linking and assumes that your target system provides `glibc`.
 This is a reasonable assumption on many Linux distributions (including Debian, Ubuntu, Fedora, CentOS, etc.)
 but it is not always the case with custom and lightweight distributions which don't include `glibc` by default, such as Alpine Linux.
-The expectation is that the distribution either includes `glibc` or a [compatibility layer](https://wiki.alpinelinux.org/wiki/Running_glibc_programs)
+The expectation is that the distribution either includes `glibc` or a
+[compatibility layer](https://wiki.alpinelinux.org/wiki/Running_glibc_programs)
 that provides the expected symbols.
 {{< /note >}}
 
@@ -197,7 +198,8 @@ These instructions are for Kubernetes v{{< skew currentVersion >}}.
    ```
 
 {{< note >}}
-In releases older than Debian 12 and Ubuntu 22.04, directory `/etc/apt/keyrings` does not exist by default, and it should be created before the curl command.
+In releases older than Debian 12 and Ubuntu 22.04, directory `/etc/apt/keyrings` does not
+exist by default, and it should be created before the curl command.
 {{< /note >}}
 
 3. Add the appropriate Kubernetes `apt` repository. Please note that this repository have packages
@@ -240,11 +242,11 @@ In releases older than Debian 12 and Ubuntu 22.04, directory `/etc/apt/keyrings`
 
 {{< caution >}}
 - Setting SELinux in permissive mode by running `setenforce 0` and `sed ...`
-effectively disables it. This is required to allow containers to access the host
-filesystem; for example, some cluster network plugins require that. You have to
-do this until SELinux support is improved in the kubelet.
+  effectively disables it. This is required to allow containers to access the host
+  filesystem; for example, some cluster network plugins require that. You have to
+  do this until SELinux support is improved in the kubelet.
 - You can leave SELinux enabled if you know how to configure it but it may require
-settings that are not supported by kubeadm.
+  settings that are not supported by kubeadm.
 {{< /caution >}}
 
 2. Add the Kubernetes `yum` repository. The `exclude` parameter in the
