
Commit 923b19e

Merge pull request #27960 from zhiguo-lu/zh-trans-task-configure-cgroup-driver
[zh] translate Tasks/Configuring a cgroup driver
2 parents cb45398 + 1c846aa commit 923b19e

1 file changed: 240 additions, 0 deletions
@@ -0,0 +1,240 @@
---
title: Configuring a cgroup driver
content_type: task
weight: 10
---
<!--
---
title: Configuring a cgroup driver
content_type: task
weight: 10
---
-->

<!-- overview -->

<!--
This page explains how to configure the kubelet cgroup driver to match the container
runtime cgroup driver for kubeadm clusters.
-->
This page explains how to configure the kubelet cgroup driver to match the
container runtime cgroup driver for kubeadm clusters.

## {{% heading "prerequisites" %}}

<!--
You should be familiar with the Kubernetes
[container runtime requirements](/docs/setup/production-environment/container-runtimes).
-->
You should be familiar with the Kubernetes
[container runtime requirements](/zh/docs/setup/production-environment/container-runtimes).

<!-- steps -->

<!--
## Configuring the container runtime cgroup driver
-->
## Configuring the container runtime cgroup driver {#configuring-the-container-runtime-cgroup-driver}

<!--
The [Container runtimes](/docs/setup/production-environment/container-runtimes) page
explains that the `systemd` driver is recommended for kubeadm based setups instead
of the `cgroupfs` driver, because kubeadm manages the kubelet as a systemd service.
-->
The [Container runtimes](/zh/docs/setup/production-environment/container-runtimes) page
explains that the `systemd` driver is recommended for kubeadm based setups instead
of the `cgroupfs` driver, because kubeadm manages the kubelet as a systemd service.

<!--
The page also provides details on how to setup a number of different container runtimes with the
`systemd` driver by default.
-->
The page also provides details on how to set up a number of different container
runtimes with the `systemd` driver by default.
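
For example, a minimal sketch of switching containerd to the `systemd` cgroup
driver, assuming a stock `/etc/containerd/config.toml` that already contains a
`SystemdCgroup = false` line under the runc runtime options; verify your file
before running this, and edit it by hand if the line is absent:

```shell
# Sketch: enable the systemd cgroup driver for containerd's runc runtime.
# Assumes "SystemdCgroup = false" already appears under
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options].
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```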

<!--
## Configuring the kubelet cgroup driver
-->
## Configuring the kubelet cgroup driver

<!--
kubeadm allows you to pass a `KubeletConfiguration` structure during `kubeadm init`.
This `KubeletConfiguration` can include the `cgroupDriver` field which controls the cgroup
driver of the kubelet.
-->
kubeadm allows you to pass a `KubeletConfiguration` structure during `kubeadm init`.
This `KubeletConfiguration` can include the `cgroupDriver` field, which controls
the cgroup driver of the kubelet.

<!--
If the user is not setting the `cgroupDriver` field under `KubeletConfiguration`,
`kubeadm init` will default it to `systemd`.
-->

{{< feature-state for_k8s_version="v1.21" state="stable" >}}

{{< note >}}
If the user does not set the `cgroupDriver` field under `KubeletConfiguration`,
`kubeadm init` defaults it to `systemd`.
{{< /note >}}

<!--
A minimal example of configuring the field explicitly:
-->
A minimal example of configuring the field explicitly:

```yaml
# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
```

<!--
Such a configuration file can then be passed to the kubeadm command:
-->
Such a configuration file can then be passed to the kubeadm command:

```shell
kubeadm init --config kubeadm-config.yaml
```

<!--
Kubeadm uses the same `KubeletConfiguration` for all nodes in the cluster.
The `KubeletConfiguration` is stored in a [ConfigMap](/docs/concepts/configuration/configmap)
object under the `kube-system` namespace.

Executing the sub commands `init`, `join` and `upgrade` would result in kubeadm
writing the `KubeletConfiguration` as a file under `/var/lib/kubelet/config.yaml`
and passing it to the local node kubelet.
-->
{{< note >}}
Kubeadm uses the same `KubeletConfiguration` for all nodes in the cluster.
The `KubeletConfiguration` is stored in a [ConfigMap](/zh/docs/concepts/configuration/configmap)
object under the `kube-system` namespace.

Executing the subcommands `init`, `join`, and `upgrade` results in kubeadm
writing the `KubeletConfiguration` as a file under `/var/lib/kubelet/config.yaml`
and passing it to the kubelet on the local node.
{{< /note >}}
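
To see where this configuration lives, one way (a sketch; the ConfigMap name
embeds the cluster's minor version, so adjust to yours):

```shell
# List the kubelet ConfigMap(s) kubeadm maintains in kube-system:
kubectl get configmap -n kube-system | grep kubelet-config
# On a node, check the driver in the file kubeadm wrote locally:
grep cgroupDriver /var/lib/kubelet/config.yaml
```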

<!--
## Using the `cgroupfs` driver
-->
## Using the `cgroupfs` driver

<!--
As this guide explains using the `cgroupfs` driver with kubeadm is not recommended.

To continue using `cgroupfs` and to prevent `kubeadm upgrade` from modifying the
`KubeletConfiguration` cgroup driver on existing setups, you must be explicit
about its value. This applies to a case where you do not wish future versions
of kubeadm to apply the `systemd` driver by default.
-->
As this guide explains, using the `cgroupfs` driver with kubeadm is not recommended.

To continue using `cgroupfs`, and to prevent `kubeadm upgrade` from modifying the
`KubeletConfiguration` cgroup driver on existing setups, you must be explicit
about its value. This applies to the case where you do not want future versions
of kubeadm to apply the `systemd` driver by default.

<!--
See the below section on "Modify the kubelet ConfigMap" for details on
how to be explicit about the value.

If you wish to configure a container runtime to use the `cgroupfs` driver,
you must refer to the documentation of the container runtime of your choice.
-->
See the section "Modify the kubelet ConfigMap" below for details on
how to be explicit about the value.

If you wish to configure a container runtime to use the `cgroupfs` driver,
you must refer to the documentation of the container runtime of your choice.
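
On the kubelet side, being explicit amounts to keeping a line like the
following in the `KubeletConfiguration`; a minimal sketch showing only the
relevant field:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
```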

<!--
## Migrating to the `systemd` driver
-->
## Migrating to the `systemd` driver

<!--
To change the cgroup driver of an existing kubeadm cluster to `systemd` in-place,
a similar procedure to a kubelet upgrade is required. This must include both
steps outlined below.
-->
To change the cgroup driver of an existing kubeadm cluster to `systemd` in place,
a procedure similar to a kubelet upgrade is required. This must include both
steps outlined below.

<!--
Alternatively, it is possible to replace the old nodes in the cluster with new ones
that use the `systemd` driver. This requires executing only the first step below
before joining the new nodes and ensuring the workloads can safely move to the new
nodes before deleting the old nodes.
-->
{{< note >}}
Alternatively, it is possible to replace the old nodes in the cluster with new
ones that use the `systemd` driver. This requires executing only the first step
below before joining the new nodes, and ensuring the workloads can safely move
to the new nodes before deleting the old nodes.
{{< /note >}}

<!--
### Modify the kubelet ConfigMap
-->
### Modify the kubelet ConfigMap

<!--
- Find the kubelet ConfigMap name using `kubectl get cm -n kube-system | grep kubelet-config`.
- Call `kubectl edit cm kubelet-config-x.yy -n kube-system` (replace `x.yy` with
  the Kubernetes version).
- Either modify the existing `cgroupDriver` value or add a new field that looks like this:
-->
- Find the kubelet ConfigMap name using `kubectl get cm -n kube-system | grep kubelet-config`.
- Call `kubectl edit cm kubelet-config-x.yy -n kube-system` (replace `x.yy` with
  the Kubernetes version).
- Either modify the existing `cgroupDriver` value or add a new field that looks like this:

```yaml
cgroupDriver: systemd
```

<!--
This field must be present under the `kubelet:` section of the ConfigMap.
-->
This field must be present under the `kubelet:` section of the ConfigMap.
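
For orientation, a hypothetical excerpt of what the edited ConfigMap might look
like; the name and version are placeholders, and only `cgroupDriver` changes:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubelet-config-1.21   # placeholder: use your cluster's version
  namespace: kube-system
data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    # ...the rest of the KubeletConfiguration stays unchanged
```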

<!--
### Update the cgroup driver on all nodes
-->
### Update the cgroup driver on all nodes

<!--
For each node in the cluster:

- [Drain the node](/docs/tasks/administer-cluster/safely-drain-node) using `kubectl drain <node-name> --ignore-daemonsets`
- Stop the kubelet using `systemctl stop kubelet`
- Stop the container runtime
- Modify the container runtime cgroup driver to `systemd`
- Set `cgroupDriver: systemd` in `/var/lib/kubelet/config.yaml`
- Start the container runtime
- Start the kubelet using `systemctl start kubelet`
- [Uncordon the node](/docs/tasks/administer-cluster/safely-drain-node) using `kubectl uncordon <node-name>`
-->
For each node in the cluster (a consolidated command sketch follows the list):

- [Drain the node](/zh/docs/tasks/administer-cluster/safely-drain-node) using `kubectl drain <node-name> --ignore-daemonsets`
- Stop the kubelet using `systemctl stop kubelet`
- Stop the container runtime
- Modify the container runtime cgroup driver to `systemd`
- Set `cgroupDriver: systemd` in `/var/lib/kubelet/config.yaml`
- Start the container runtime
- Start the kubelet using `systemctl start kubelet`
- [Uncordon the node](/zh/docs/tasks/administer-cluster/safely-drain-node) using `kubectl uncordon <node-name>`
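
A minimal sketch of these steps, assuming containerd as the container runtime;
swap in your runtime's service name and its own driver-configuration step:

```shell
# Run the kubectl commands from a machine with cluster-admin access;
# run the systemctl commands on the node itself.
kubectl drain <node-name> --ignore-daemonsets
systemctl stop kubelet
systemctl stop containerd            # assumption: containerd is the runtime
# 1) switch the runtime's cgroup driver to systemd (runtime-specific step)
# 2) set "cgroupDriver: systemd" in /var/lib/kubelet/config.yaml
systemctl start containerd
systemctl start kubelet
kubectl uncordon <node-name>
```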

<!--
Execute these steps on nodes one at a time to ensure workloads
have sufficient time to schedule on different nodes.

Once the process is complete ensure that all nodes and workloads are healthy.
-->
Execute these steps on nodes one at a time to ensure workloads
have sufficient time to schedule on different nodes.

Once the process is complete, ensure that all nodes and workloads are healthy.
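
One simple way to verify, as a sketch; adjust names to your cluster:

```shell
# All nodes should report Ready:
kubectl get nodes
# Workloads should be running; spot-check the system pods:
kubectl get pods -n kube-system
# On each node, confirm the kubelet's driver:
grep cgroupDriver /var/lib/kubelet/config.yaml
```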
