Commit e2c2bb3
[zh] Sync1.25 /production-environment/container-runtimes.md
1 parent: 162c7fb
1 file changed: +112 −108 lines

content/zh-cn/docs/setup/production-environment/container-runtimes.md

Lines changed: 112 additions & 108 deletions
@@ -140,45 +140,118 @@ sudo sysctl --system
 On Linux, {{< glossary_tooltip text="control groups" term_id="cgroup" >}}
 are used to constrain resources that are allocated to processes.
 -->
-## Cgroup driver programs {#cgroup-drivers}
+## cgroup drivers {#cgroup-drivers}
 
-On Linux, {{<glossary_tooltip text="control groups (CGroup)" term_id="cgroup" >}}
-are used to constrain resources that are allocated to processes.
+On Linux, {{<glossary_tooltip text="control groups (CGroup)" term_id="cgroup" >}} are used to constrain resources that are allocated to processes.
 
 <!--
+Both {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and the
+underlying container runtime need to interface with control groups to enforce
+[resource management for pods and containers](/docs/concepts/configuration/manage-resources-containers/) and set
+resources such as cpu/memory requests and limits. To interface with control
+groups, the kubelet and the container runtime need to use a *cgroup driver*.
+It's critical that the kubelet and the container runtime use the same cgroup
+driver and are configured the same.
+-->
+Both {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and the underlying container runtime need to interface with control groups to enforce
+[resource management for pods and containers](/zh-cn/docs/concepts/configuration/manage-resources-containers/)
+and to set requests and limits for resources such as CPU and memory. To interface with control groups, the kubelet and the container runtime need to use a **cgroup driver**.
+It is critical that the kubelet and the container runtime use the same cgroup driver and are configured the same.
+
+<!--
+There are two cgroup drivers available:
+
+* [`cgroupfs`](#cgroupfs-cgroup-driver)
+* [`systemd`](#systemd-cgroup-driver)
+-->
+There are two cgroup drivers available:
+
+* [`cgroupfs`](#cgroupfs-cgroup-driver)
+* [`systemd`](#systemd-cgroup-driver)
+
+<!--
+### cgroupfs driver {#cgroupfs-cgroup-driver}
+
+The `cgroupfs` driver is the default cgroup driver in the kubelet. When the `cgroupfs`
+driver is used, the kubelet and the container runtime directly interface with
+the cgroup filesystem to configure cgroups.
+
+The `cgroupfs` driver is **not** recommended when
+[systemd](https://www.freedesktop.org/wiki/Software/systemd/) is the
+init system because systemd expects a single cgroup manager on
+the system. Additionally, if you use [cgroup v2](/docs/concepts/architecture/cgroups),
+use the `systemd` cgroup driver instead of `cgroupfs`.
+-->
+### cgroupfs driver {#cgroupfs-cgroup-driver}
+
+The `cgroupfs` driver is the default cgroup driver in the kubelet. When the `cgroupfs` driver is used,
+the kubelet and the container runtime directly interface with the cgroup filesystem to configure cgroups.
+
+The `cgroupfs` driver is **not** recommended when [systemd](https://www.freedesktop.org/wiki/Software/systemd/)
+is the init system, because systemd expects a single cgroup manager on the system.
+In addition, if you use [cgroup v2](/zh-cn/docs/concepts/architecture/cgroups),
+use the `systemd` cgroup driver instead of `cgroupfs`.
+
+<!--
+### systemd cgroup driver {#systemd-cgroup-driver}
+
 When [systemd](https://www.freedesktop.org/wiki/Software/systemd/) is chosen as the init
 system for a Linux distribution, the init process generates and consumes a root control group
 (`cgroup`) and acts as a cgroup manager.
-Systemd has a tight integration with cgroups and allocates a cgroup per systemd unit. It's possible
-to configure your container runtime and the kubelet to use `cgroupfs`. Using `cgroupfs` alongside
-systemd means that there will be two different cgroup managers.
+
+systemd has a tight integration with cgroups and allocates a cgroup per systemd
+unit. As a result, if you use `systemd` as the init system with the `cgroupfs`
+driver, the system gets two different cgroup managers.
 -->
+### systemd cgroup driver {#systemd-cgroup-driver}
+
 When a Linux distribution uses [systemd](https://www.freedesktop.org/wiki/Software/systemd/)
 as its init system, the init process generates and consumes a root control group (`cgroup`) and acts as a cgroup manager.
-Systemd is tightly integrated with cgroups and allocates a cgroup to each systemd unit.
-You can also configure the container runtime and the kubelet to use `cgroupfs`.
-Using `cgroupfs` alongside systemd means that there will be two different cgroup managers.
 
+systemd is tightly integrated with cgroups and allocates a cgroup to each systemd unit.
+As a result, if you use `systemd` as the init system while using the `cgroupfs` driver, the system ends up with two different cgroup managers.
+
+<!--
+Two cgroup managers result in two views of the available and in-use resources in
+the system. In some cases, nodes that are configured to use `cgroupfs` for the
+kubelet and container runtime, but use `systemd` for the rest of the processes, become
+unstable under resource pressure.
+
+The approach to mitigate this instability is to use `systemd` as the cgroup driver for
+the kubelet and the container runtime when systemd is the selected init system.
+-->
+Two cgroup managers result in two views of the available and in-use resources in the system.
+In some cases, nodes that are configured to use `cgroupfs` for the kubelet and the container runtime
+but `systemd` for the rest of the processes become unstable under resource pressure.
+
+When systemd is the selected init system, the way to mitigate this instability is to use
+`systemd` as the cgroup driver for the kubelet and the container runtime.
 <!--
-A single cgroup manager simplifies the view of what resources are being allocated
-and will by default have a more consistent view of the available and in-use resources.
-When there are two cgroup managers on a system, you end up with two views of those resources.
-In the field, people have reported cases where nodes that are configured to use `cgroupfs`
-for the kubelet and Docker, but `systemd` for the rest of the processes, become unstable under
-resource pressure.
--->
-A single cgroup manager simplifies the view of what resources are being allocated,
-and by default has a more consistent view of the available and in-use resources.
-When two managers coexist on a system, you end up with two views of those resources.
-In the field, people have reported cases where nodes configured to have the kubelet and Docker use
-`cgroupfs`, while the rest of the processes on the node use systemd, became unstable under resource pressure.
+To set `systemd` as the cgroup driver, edit the
+[`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/)
+option of `cgroupDriver` and set it to `systemd`. For example:
+-->
+To set `systemd` as the cgroup driver, edit the `cgroupDriver` option of
+[`KubeletConfiguration`](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/)
+and set it to `systemd`. For example:
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+...
+cgroupDriver: systemd
+```
 
 <!--
-Changing the settings such that your container runtime and kubelet use `systemd` as the cgroup driver
-stabilized the system. To configure this for Docker, set `native.cgroupdriver=systemd`.
+If you configure `systemd` as the cgroup driver for the kubelet, you must also
+configure `systemd` as the cgroup driver for the container runtime. Refer to
+the documentation for your container runtime for instructions. For example:
 -->
-Changing the settings so that the container runtime and the kubelet use `systemd` as the cgroup driver made the system more stable.
-For Docker, set the `native.cgroupdriver=systemd` option.
+If you configure `systemd` as the cgroup driver for the kubelet, you must also configure `systemd` as the cgroup driver for the container runtime.
+Refer to the documentation for your container runtime for instructions. For example:
+
+* [containerd](#containerd-systemd)
+* [CRI-O](#cri-o)
 
 {{< caution >}}
 <!--
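As a quick cross-check of the requirement stated in this hunk — that the kubelet and the container runtime agree on one cgroup driver — you can inspect a node directly. A minimal sketch, assuming a kubeadm-style node where the kubelet configuration lives at `/var/lib/kubelet/config.yaml`; the path may differ on your distribution:

```shell
# Print the cgroup driver the kubelet is configured with
# (assumes the kubeadm default config path).
grep cgroupDriver /var/lib/kubelet/config.yaml

# Identify the cgroup version on the node:
# "cgroup2fs" indicates cgroup v2, "tmpfs" indicates cgroup v1.
stat -fc %T /sys/fs/cgroup/
```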
@@ -199,80 +272,6 @@ cgroup driver, errors occur when the PodSandbox is re-created for existing Pods.
 or reinstall them using automation.
 {{< /caution >}}
 
-<!--
-### Cgroup version 2 {#cgroup-v2}
-
-Cgroup v2 is the next version of the cgroup Linux API. Differently than cgroup v1, there is a single
-hierarchy instead of a different one for each controller.
--->
-### Cgroup v2 {#cgroup-v2}
-
-Cgroup v2 is the next version of the cgroup Linux API. Unlike cgroup v1,
-cgroup v2 has a single hierarchy instead of a different hierarchy for each controller.
-
-<!--
-The new version offers several improvements over cgroup v1, some of these improvements are:
-
-- cleaner and easier to use API
-- safe sub-tree delegation to containers
-- newer features like Pressure Stall Information
--->
-The new version offers several improvements over cgroup v1; some of these improvements are:
-
-- a cleaner and easier-to-use API
-- safe delegation of sub-trees to containers
-- newer features such as Pressure Stall Information
-
-<!--
-Even if the kernel supports a hybrid configuration where some controllers are managed by cgroup v1
-and some others by cgroup v2, Kubernetes supports only the same cgroup version to manage all the
-controllers.
-
-If systemd doesn't use cgroup v2 by default, you can configure the system to use it by adding
-`systemd.unified_cgroup_hierarchy=1` to the kernel command line.
--->
-Even though the kernel supports a hybrid configuration in which some controllers are managed by cgroup v1
-and others by cgroup v2, Kubernetes supports only the same cgroup version to manage all controllers.
-
-If systemd does not use cgroup v2 by default, you can configure the system to use it by adding
-`systemd.unified_cgroup_hierarchy=1` to the kernel command line.
-
-<!--
-```shell
-# This example is for a Linux OS that uses the DNF package manager
-# Your system might use a different method for setting the command line
-# that the Linux kernel uses.
-sudo dnf install -y grubby && \
-  sudo grubby \
-  --update-kernel=ALL \
-  --args="systemd.unified_cgroup_hierarchy=1"
-```
--->
-
-```shell
-# This example is for a Linux OS that uses the DNF package manager.
-# Your system might use a different method for setting the command line
-# that the Linux kernel uses.
-sudo dnf install -y grubby && \
-  sudo grubby \
-  --update-kernel=ALL \
-  --args="systemd.unified_cgroup_hierarchy=1"
-```
-
-<!--
-If you change the command line for the kernel, you must reboot the node before your
-change takes effect.
-
-There should not be any noticeable difference in the user experience when switching to cgroup v2, unless
-users are accessing the cgroup file system directly, either on the node or from within the containers.
-
-In order to use it, cgroup v2 must be supported by the CRI runtime as well.
--->
-If you change the kernel command line, you must reboot the node for the change to take effect.
-
-There should not be any noticeable difference in the user experience when switching to cgroup v2,
-unless users access the cgroup file system directly, either on the node or from within the containers.
-To use cgroup v2, the CRI runtime must support it as well.
-
 <!--
 ### Migrating to the `systemd` driver in kubeadm managed clusters
 
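The deleted passage above notes that a kernel command-line change only takes effect after a reboot. A minimal way to confirm the switch afterwards — a sketch using standard Linux tooling, not part of the original page:

```shell
# Confirm the flag is present on the running kernel's command line.
grep -o 'systemd.unified_cgroup_hierarchy=1' /proc/cmdline

# On cgroup v2, the unified hierarchy is mounted with filesystem type cgroup2.
mount -t cgroup2
```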
@@ -281,8 +280,8 @@ follow [configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/conf
 -->
 ### Migrating kubeadm-managed clusters to the `systemd` driver
 
-If you wish to migrate an existing kubeadm-managed cluster to the `systemd` cgroup driver program,
-follow [Configuring a cgroup driver program](/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
+If you wish to migrate an existing kubeadm-managed cluster to the `systemd` cgroup driver,
+follow [Configuring a cgroup driver](/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
 
 <!--
 ## CRI version support {#cri-versions}
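The linked task page covers the actual migration steps. As a rough, illustrative sketch of where the setting lives in a kubeadm cluster — the ConfigMap name varies by Kubernetes version, so treat the name below as an assumption:

```shell
# Illustrative only: in kubeadm clusters the kubelet configuration,
# including cgroupDriver, is stored in a ConfigMap in kube-system.
# Recent versions name it "kubelet-config"; older ones use
# "kubelet-config-<major>.<minor>".
kubectl get configmap -n kube-system kubelet-config -o yaml | grep cgroupDriver
```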
@@ -348,9 +347,9 @@ On Windows the default CRI endpoint is `npipe://./pipe/containerd-containerd`.
 
 To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, set
 -->
-#### Configuring the `systemd` cgroup driver program {#containerd-systemd}
+#### Configuring the `systemd` cgroup driver {#containerd-systemd}
 
-To use the `systemd` cgroup driver together with `runc`, set the following in `/etc/containerd/config.toml`:
+To use the `systemd` cgroup driver together with `runc`, set the following in `/etc/containerd/config.toml`:
 
 ```
 [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
@@ -359,6 +358,11 @@ To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`,
   SystemdCgroup = true
 ```
 
+<!--
+The `systemd` cgroup driver is recommended if you use [cgroup v2](/docs/concepts/architecture/cgroups).
+-->
+The `systemd` cgroup driver is recommended if you use [cgroup v2](/zh-cn/docs/concepts/architecture/cgroups).
+
 {{< note >}}
 <!--
 If you installed containerd from a package (for example, RPM or `.deb`), you may find
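After changing `SystemdCgroup` in `/etc/containerd/config.toml`, containerd has to be restarted for the setting to apply. A sketch, assuming containerd runs as a systemd service:

```shell
# Restart containerd so the updated config.toml is picked up,
# then check that the service came back healthy.
sudo systemctl restart containerd
sudo systemctl --no-pager status containerd
```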
@@ -405,7 +409,7 @@ sandbox image by setting the following config:
 
 ```toml
 [plugins."io.containerd.grpc.v1.cri"]
-  sandbox_image = "k8s.gcr.io/pause:3.2"
+  sandbox_image = "registry.k8s.io/pause:3.2"
 ```
 
 <!--
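To verify which sandbox image the running containerd actually uses after overriding `sandbox_image`, one option is to query the CRI status. A sketch, assuming `crictl` is installed and pointed at containerd's socket:

```shell
# Restart containerd, then dump the CRI configuration and look for
# the sandbox image field.
sudo systemctl restart containerd
sudo crictl info | grep -i sandboxImage
```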
@@ -432,10 +436,10 @@ for you. To switch to the `cgroupfs` cgroup driver, either edit
 `/etc/crio/crio.conf` or place a drop-in configuration in
 `/etc/crio/crio.conf.d/02-cgroup-manager.conf`, for example:
 -->
-#### cgroup driver program {#cgroup-driver}
+#### cgroup driver {#cgroup-driver}
 
-CRI-O uses the systemd cgroup driver program by default, which is likely to work fine for you.
-To switch to the `cgroupfs` cgroup driver program, either edit `/etc/crio/crio.conf` or place a drop-in configuration in
+CRI-O uses the systemd cgroup driver by default, which is likely to work fine for you.
+To switch to the `cgroupfs` cgroup driver, either edit `/etc/crio/crio.conf` or place a drop-in configuration in
 `/etc/crio/crio.conf.d/02-cgroup-manager.conf`, for example:
 
 ```toml
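The body of the `toml` example falls outside this hunk. For reference, a sketch of such a drop-in written via a heredoc; the `cgroup_manager` and `conmon_cgroup` option names follow CRI-O's `crio.conf` format, so verify them against your CRI-O version's documentation before relying on this:

```shell
# Sketch: switch CRI-O to the cgroupfs driver via a drop-in file,
# then restart CRI-O so the change takes effect.
cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-cgroup-manager.conf
[crio.runtime]
conmon_cgroup = "pod"
cgroup_manager = "cgroupfs"
EOF
sudo systemctl restart crio
```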
@@ -522,7 +526,7 @@ You can use Mirantis Container Runtime with Kubernetes using the open source
 -->
 ### Mirantis Container Runtime {#mcr}
 
-[Mirantis Container Runtime](https://docs.mirantis.com/mcr/20.10/overview.html) (MCR)
+[Mirantis Container Runtime](https://docs.mirantis.com/mcr/20.10/overview.html) (MCR)
 is a commercially available container runtime formerly known as Docker Enterprise Edition.
 You can use Mirantis Container Runtime with Kubernetes using the open source [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd)
 component, included in MCR.
