Commit e408bd7

[zh] sync /kubeadm/setup-ha-etcd-with-kubeadm.md

1 parent 6486762

File tree: 2 files changed, +73 −34 lines


content/zh-cn/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md

Lines changed: 61 additions & 22 deletions
````diff
@@ -3,7 +3,6 @@ title: 使用 kubeadm 创建一个高可用 etcd 集群
 content_type: task
 weight: 70
 ---
-
 <!--
 reviewers:
 - sig-cluster-lifecycle
````
````diff
@@ -32,15 +31,13 @@ It is also possible to treat the etcd cluster as external and provision
 etcd instances on separate hosts. The differences between the two approaches are covered in the
 [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology) page.
 -->
-
 默认情况下,kubeadm 在每个控制平面节点上运行一个本地 etcd 实例。也可以使用外部的 etcd 集群,并在不同的主机上提供 etcd 实例。
-这两种方法的区别在 [高可用拓扑的选项](/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology) 页面中阐述。
+这两种方法的区别在[高可用拓扑的选项](/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology)页面中阐述。
 
 <!--
 This task walks through the process of creating a high availability external
 etcd cluster of three members that can be used by kubeadm during cluster creation.
 -->
-
 这个任务将指导你创建一个由三个成员组成的高可用外部 etcd 集群,该集群在创建过程中可被 kubeadm 使用。
 
 ## {{% heading "prerequisites" %}}
````
````diff
@@ -50,7 +47,8 @@ etcd cluster of three members that can be used by kubeadm during cluster creatio
 document assumes these default ports. However, they are configurable through
 the kubeadm config file.
 -->
-- 三个可以通过 2379 和 2380 端口相互通信的主机。本文档使用这些作为默认端口。不过,它们可以通过 kubeadm 的配置文件进行自定义。
+- 三个可以通过 2379 和 2380 端口相互通信的主机。本文档使用这些作为默认端口。
+  不过,它们可以通过 kubeadm 的配置文件进行自定义。
 <!--
 - Each host must have systemd and a bash compatible shell installed.
 - Each host must [have a container runtime, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
````
````diff
@@ -63,20 +61,20 @@ etcd cluster of three members that can be used by kubeadm during cluster creatio
 [static pods](/docs/tasks/configure-pod-container/static-pod/) managed by a kubelet.
 -->
 - 每个主机都应该能够访问 Kubernetes 容器镜像仓库 (registry.k8s.io),
-或者使用 `kubeadm config images list/pull` 列出/拉取所需的 etcd 镜像。
-本指南将把 etcd 实例设置为由 kubelet 管理的[静态 Pod](/zh-cn/docs/tasks/configure-pod-container/static-pod/)。
+  或者使用 `kubeadm config images list/pull` 列出/拉取所需的 etcd 镜像。
+  本指南将把 etcd 实例设置为由 kubelet 管理的[静态 Pod](/zh-cn/docs/tasks/configure-pod-container/static-pod/)。
 <!--
 - Some infrastructure to copy files between hosts. For example `ssh` and `scp`
   can satisfy this requirement.
 -->
-- 一些可以用来在主机间复制文件的基础设施。例如 `ssh` 和 `scp` 就可以满足需求。
+- 一些可以用来在主机间复制文件的基础设施。例如 `ssh` 和 `scp` 就可以满足此需求。
 
 <!-- steps -->
 
 <!--
 ## Setting up the cluster
 -->
-## 建立集群
+## 建立集群 {#setting-up-cluster}
 
 <!--
 The general approach is to generate all certs on one node and only distribute
````
````diff
@@ -99,25 +97,37 @@ The examples below use IPv4 addresses but you can also configure kubeadm, the ku
 to use IPv6 addresses. Dual-stack is supported by some Kubernetes options, but not by etcd. For more details
 on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/setup/production-environment/tools/kubeadm/dual-stack-support/).
 -->
-下面的例子使用 IPv4 地址,但是你也可以使用 IPv6 地址配置 kubeadm、kubelet 和 etcd。一些 Kubernetes 选项支持双协议栈,但是 etcd 不支持。
-关于 Kubernetes 双协议栈支持的更多细节,请参见 [kubeadm 的双栈支持](/zh-cn/docs/setup/production-environment/tools/kubeadm/dual-stack-support/)。
+下面的例子使用 IPv4 地址,但是你也可以使用 IPv6 地址配置 kubeadm、kubelet 和 etcd。
+一些 Kubernetes 选项支持双协议栈,但是 etcd 不支持。关于 Kubernetes 双协议栈支持的更多细节,
+请参见 [kubeadm 的双栈支持](/zh-cn/docs/setup/production-environment/tools/kubeadm/dual-stack-support/)。
 {{< /note >}}
 
 <!--
 1. Configure the kubelet to be a service manager for etcd.
-
-   {{< note >}}You must do this on every host where etcd should be running.{{< /note >}}
-   Since etcd was created first, you must override the service priority by creating a new unit file
-   that has higher precedence than the kubeadm-provided kubelet unit file.
 -->
 1. 将 kubelet 配置为 etcd 的服务管理器。
 
    {{< note >}}
+   <!--
+   You must do this on every host where etcd should be running.
+   -->
   你必须在要运行 etcd 的所有主机上执行此操作。
   {{< /note >}}
+
+   <!--
+   Since etcd was created first, you must override the service priority by creating a new unit file
+   that has higher precedence than the kubeadm-provided kubelet unit file.
+   -->
   由于 etcd 是首先创建的,因此你必须通过创建具有更高优先级的新文件来覆盖
   kubeadm 提供的 kubelet 单元文件。
 
+   <!--
+   ```sh
+   cat << EOF > /etc/systemd/system/kubelet.service.d/kubelet.conf
+   # Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
+   # Replace the value of "containerRuntimeEndpoint" for a different container runtime if needed.
+   ```
+   -->
   ```sh
   cat << EOF > /etc/systemd/system/kubelet.service.d/kubelet.conf
   # 将下面的 "systemd" 替换为你的容器运行时所使用的 cgroup 驱动。
````
````diff
@@ -126,12 +136,19 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
    #
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
+   authentication:
+     anonymous:
+       enabled: false
+     webhook:
+       enabled: false
+   authorization:
+     mode: AlwaysAllow
    cgroupDriver: systemd
    address: 127.0.0.1
    containerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    EOF
-
+
    cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
    [Service]
    ExecStart=
````
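The drop-in file in this hunk relies on a systemd convention worth spelling out: an empty `ExecStart=` line first clears the command inherited from the kubeadm-provided kubelet unit, and the next `ExecStart=` installs the replacement. A minimal runnable sketch of that pattern, written to a temp directory here instead of `/etc/systemd/system/` (the kubelet command line mirrors the `--config` path shown in the hunk above):

```shell
#!/usr/bin/env bash
# Sketch of the systemd drop-in override pattern used above: the empty
# ExecStart= resets the inherited value before the replacement is set.
# DROPIN_DIR stands in for /etc/systemd/system/kubelet.service.d/.
set -euo pipefail

DROPIN_DIR=$(mktemp -d)

cat << EOF > "${DROPIN_DIR}/20-etcd-service-manager.conf"
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --config=/etc/systemd/system/kubelet.service.d/kubelet.conf
Restart=always
EOF

# Two ExecStart lines: the first (empty) resets, the second replaces.
grep -c '^ExecStart' "${DROPIN_DIR}/20-etcd-service-manager.conf"
```

After installing the real file, `systemctl daemon-reload` is what makes systemd pick up a new drop-in.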
````diff
@@ -162,8 +179,23 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
 
    使用以下脚本为每个将要运行 etcd 成员的主机生成一个 kubeadm 配置文件。
 
+   <!--
    ```sh
-   # 使用你的主机 IP 替换 HOST0、HOST1 和 HOST2 的 IP 地址
+   # Update HOST0, HOST1 and HOST2 with the IPs of your hosts
+   export HOST0=10.0.0.6
+   export HOST1=10.0.0.7
+   export HOST2=10.0.0.8
+
+   # Update NAME0, NAME1 and NAME2 with the hostnames of your hosts
+   export NAME0="infra0"
+   export NAME1="infra1"
+   export NAME2="infra2"
+
+   # Create temp directories to store files that will end up on other hosts
+   mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
+   -->
+   ```sh
+   # 使用你的主机 IP 更新 HOST0、HOST1 和 HOST2 的 IP 地址
    export HOST0=10.0.0.6
    export HOST1=10.0.0.7
    export HOST2=10.0.0.8
````
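The script this hunk begins is truncated in the diff; its remainder loops over the hosts and writes one `kubeadmcfg.yaml` per etcd member. A runnable sketch of that loop, reusing the host/name variables and temp-directory layout from the visible lines — the YAML follows the kubeadm v1beta3 API, and the exact set of etcd `extraArgs` should be treated as illustrative rather than authoritative:

```shell
#!/usr/bin/env bash
# Sketch: one kubeadm config per etcd member. Host IPs and names are
# the placeholder values from the script above; replace with your own.
set -euo pipefail

export HOST0=10.0.0.6 HOST1=10.0.0.7 HOST2=10.0.0.8
export NAME0="infra0" NAME1="infra1" NAME2="infra2"

HOSTS=("${HOST0}" "${HOST1}" "${HOST2}")
NAMES=("${NAME0}" "${NAME1}" "${NAME2}")

mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

for i in "${!HOSTS[@]}"; do
  HOST=${HOSTS[$i]}
  NAME=${NAMES[$i]}
  # Every peer is listed under initial-cluster so the members can find
  # each other on first start.
  cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: InitConfiguration
nodeRegistration:
  name: ${NAME}
localAPIEndpoint:
  advertiseAddress: ${HOST}
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
    - "${HOST}"
    peerCertSANs:
    - "${HOST}"
    extraArgs:
      initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380
      initial-cluster-state: new
      name: ${NAME}
      listen-peer-urls: https://${HOST}:2380
      listen-client-urls: https://${HOST}:2379
      advertise-client-urls: https://${HOST}:2379
      initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
```

Each generated file then drives the `kubeadm init phase certs ...` and `kubeadm init phase etcd local` steps for its host.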
````diff
@@ -219,7 +251,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
    `/etc/kubernetes/pki/etcd/ca.key`. After those files have been copied,
    proceed to the next step, "Create certificates for each member".
 -->
-3. 生成证书颁发机构
+3. 生成证书颁发机构
 
    如果你已经拥有 CA,那么唯一的操作是复制 CA 的 `crt` 和 `key` 文件到
    `etc/kubernetes/pki/etcd/ca.crt` 和 `/etc/kubernetes/pki/etcd/ca.key`。
````
````diff
@@ -245,8 +277,15 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
 <!--
 1. Create certificates for each member.
 -->
-4. 为每个成员创建证书
+4. 为每个成员创建证书
 
+   <!--
+   ```sh
+   # cleanup non-reusable certificates
+   # No need to move the certs because they are for HOST0
+   # clean up certs that should not be copied off this host
+   ```
+   -->
   ```shell
   kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
   kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
````
````diff
@@ -279,7 +318,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
    The certificates have been generated and now they must be moved to their
    respective hosts.
 -->
-5. 复制证书和 kubeadm 配置
+5. 复制证书和 kubeadm 配置
 
    证书已生成,现在必须将它们移动到对应的主机。
````
````diff
@@ -371,7 +410,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
    manifests. On each host run the `kubeadm` command to generate a static manifest
    for etcd.
 -->
-7. 创建静态 Pod 清单
+7. 创建静态 Pod 清单
 
    既然证书和配置已经就绪,是时候去创建清单了。
    在每台主机上运行 `kubeadm` 命令来生成 etcd 使用的静态清单。
````
````diff
@@ -385,7 +424,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
 <!--
 1. Optional: Check the cluster health.
 -->
-8. 可选:检查集群运行状况
+8. 可选:检查集群运行状况
 
   <!--
   If `etcdctl` isn't available, you can run this tool inside a container image.
````
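The optional health check this hunk introduces boils down to one `etcdctl` call per endpoint. A sketch that assembles the command and prints it (executing it needs a live cluster, so this sketch stops at printing; `HOST0` is a placeholder and the cert paths follow the `/etc/kubernetes/pki/etcd/` layout used earlier on this page):

```shell
#!/usr/bin/env bash
# Build the etcdctl health-check invocation for the first member.
# Run the printed command on a machine that has the client certs and
# network access to the member.
set -euo pipefail

HOST0=10.0.0.6   # placeholder: first etcd member

health_cmd="ETCDCTL_API=3 etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${HOST0}:2379 endpoint health"

echo "${health_cmd}"
```

A healthy member answers with a line like `https://10.0.0.6:2379 is healthy` when the command is run for real.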

content/zh-cn/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md

Lines changed: 12 additions & 12 deletions
````diff
@@ -3,7 +3,6 @@ title: 对 kubeadm 进行故障排查
 content_type: concept
 weight: 20
 ---
-
 <!--
 title: Troubleshooting kubeadm
 content_type: concept
````
````diff
@@ -376,10 +375,10 @@ in kube-apiserver logs. To fix the issue you must follow these steps:
 -->
 5. 手动编辑 `kubelet.conf` 指向轮换的 kubelet 客户端证书,方法是将 `client-certificate-data` 和 `client-key-data` 替换为:
 
-```yaml
-client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
-client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
-```
+   ```yaml
+   client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
+   client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
+   ```
 
 <!--
 1. Restart the kubelet.
````
````diff
@@ -457,8 +456,8 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6
 
 Then restart `kubelet`:
 -->
-解决方法是通知 `kubelet` 使用哪个 `--node-ip`。当使用 DigitalOcean 时,可以是公网IP(分配给 `eth0` 的),
-或者是私网IP(分配给 `eth1` 的)。私网 IP 是可选的。
+解决方法是通知 `kubelet` 使用哪个 `--node-ip`。当使用 DigitalOcean 时,可以是(分配给 `eth0` 的)公网 IP,
+或者是(分配给 `eth1` 的)私网 IP。私网 IP 是可选的。
 [kubadm `NodeRegistrationOptions` 结构](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/#kubeadm-k8s-io-v1beta3-NodeRegistrationOptions)
 的 `KubeletExtraArgs` 部分被用来处理这种情况。
 
````
````diff
@@ -487,7 +486,7 @@ where the `coredns` pods are not starting. To solve that you can try one of the
 
 - 升级到 [Docker 的较新版本](/zh-cn/docs/setup/production-environment/container-runtimes/#docker)。
 
-- [禁用 SELinux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux).
+- [禁用 SELinux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux)。
 
 - 修改 `coredns` 部署以设置 `allowPrivilegeEscalation` 为 `true`:
````
````diff
@@ -548,7 +547,7 @@ To work around the issue, choose one of these options:
 <!--
 - Install one of the more recent recommended versions, such as 18.06:
 -->
-- 安装较新的推荐版本之一,例如 18.06:
+- 安装较新的推荐版本之一,例如 18.06:
 
   ```shell
   sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
````
````diff
@@ -645,7 +644,7 @@ for the feature to work.
 
 在类似 Fedora CoreOS 或者 Flatcar Container Linux 这类 Linux 发行版本中,
 目录 `/usr` 是以只读文件系统的形式挂载的。
-在支持 [FlexVolume](https://github.com/kubernetes/community/blob/ab55d85/contributors/devel/sig-storage/flexvolume.md)时,
+在支持 [FlexVolume](https://github.com/kubernetes/community/blob/ab55d85/contributors/devel/sig-storage/flexvolume.md) 时,
 类似 kubelet 和 kube-controller-manager 这类 Kubernetes 组件使用默认路径
 `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`,
 而 FlexVolume 的目录 **必须是可写入的**,该功能特性才能正常工作。
````
````diff
@@ -658,7 +657,8 @@ To workaround this issue you can configure the flex-volume directory using the k
 On the primary control-plane Node (created using `kubeadm init`) pass the following
 file using `--config`:
 -->
-为了解决这个问题,你可以使用 kubeadm 的[配置文件](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/) 来配置 FlexVolume 的目录。
+为了解决这个问题,你可以使用 kubeadm 的[配置文件](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/)来配置
+FlexVolume 的目录。
 
 在(使用 `kubeadm init` 创建的)主控制节点上,使用 `--config`
 参数传入如下文件:
````
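The hunk is truncated before the config file it announces. A sketch of what such a file can look like, assuming the v1beta3 API linked above and a hypothetical writable directory under `/opt` (the exact path is illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    flex-volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
```

Both components need the override: the kubelet flag moves where plugins are discovered, and the controller-manager flag must point at the same directory.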
````diff
@@ -694,7 +694,7 @@ nodeRegistration:
 Alternatively, you can modify `/etc/fstab` to make the `/usr` mount writeable, but please
 be advised that this is modifying a design principle of the Linux distribution.
 -->
-或者,你要可以更改 `/etc/fstab` 使得 `/usr` 目录能够以可写入的方式挂载,
+或者,你可以更改 `/etc/fstab` 使得 `/usr` 目录能够以可写入的方式挂载,
 不过请注意这样做本质上是在更改 Linux 发行版的某种设计原则。
 
 <!--
````
