Commit f7cb61d

Merge pull request #33603 from TinySong/task-2
[zh] Sync task-2
2 parents f230e10 + c788d89

4 files changed, +59 -120 lines changed


content/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md

Lines changed: 41 additions & 41 deletions
@@ -14,19 +14,11 @@ weight: 70
 <!-- overview -->
 
 <!--
-With Kubernetes 1.20 dockershim was deprecated. From the
-[Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/)
-you might already know that most apps do not have a direct dependency on runtime hosting
-containers. However, there are still a lot of telemetry and security agents
-that has a dependency on docker to collect containers metadata, logs and
-metrics. This document aggregates information on how to detect tese
-dependencies and links on how to migrate these agents to use generic tools or
-alternative runtimes.
--->
-在 Kubernetes 1.20 版本中,dockershim 被弃用。
-在博文[弃用 Dockershim 常见问题](/zh/blog/2020/12/02/dockershim-faq/)中,
-你大概已经了解到,大多数应用并没有直接通过运行时来托管容器。
-但是,仍然有大量的遥测和安全代理依赖 docker 来收集容器元数据、日志和指标。
+Kubernetes' support for direct integration with Docker Engine is deprecated and will be
+removed. Most apps do not have a direct dependency on the runtime hosting their containers.
+However, there are still a lot of telemetry and monitoring agents that have a dependency
+on Docker to collect container metadata, logs, and metrics. This document aggregates
+information on how to detect these dependencies, and links on how to migrate these agents
+to use generic tools or alternative runtimes.
+-->
+Kubernetes 对与 Docker Engine 直接集成的支持已被弃用并将被删除。
+大多数应用程序不直接依赖于托管容器的运行时。但是,仍然有大量的遥测和监控代理依赖
+docker 来收集容器元数据、日志和指标。
 本文汇总了一些信息和链接:信息用于阐述如何探查这些依赖,链接用于解释如何迁移这些代理去使用通用的工具或其他容器运行时。
 
 <!--
@@ -35,51 +27,59 @@ alternative runtimes.
 ## 遥测和安全代理 {#telemetry-and-security-agents}
 
 <!--
-There are a few ways agents may run on Kubernetes cluster. Agents may run on
-nodes directly or as DaemonSets.
+Within a Kubernetes cluster there are a few different ways to run telemetry or security agents.
+Some agents have a direct dependency on Docker Engine when they run as DaemonSets or
+directly on nodes.
 -->
-为了让代理运行在 Kubernetes 集群中,我们有几种办法。
-代理既可以直接在节点上运行,也可以作为守护进程运行。
+Kubernetes 集群中,有几种不同的方式来运行遥测或安全代理。
+一些代理在以 DaemonSet 的形式运行或直接在节点上运行时,直接依赖于 Docker Engine。
 
 <!--
-### Why do telemetry agents rely on Docker?
+### Why do some telemetry agents communicate with Docker Engine?
 -->
-### 为什么遥测代理依赖于 Docker? {#why-do-telemetry-agents-relyon-docker}
+### 为什么有些遥测代理会与 Docker Engine 通信?
 
 <!--
-Historically, Kubernetes was built on top of Docker. Kubernetes is managing
-networking and scheduling, Docker was placing and operating containers on a
-node. So you can get scheduling-related metadata like a pod name from Kubernetes
-and containers state information from Docker. Over time more runtimes were
-created to manage containers. Also there are projects and Kubernetes features
-that generalize container status information extraction across many runtimes.
+Historically, Kubernetes was written to work specifically with Docker Engine.
+Kubernetes took care of networking and scheduling, relying on Docker Engine for launching
+and running containers (within Pods) on a node. Some information that is relevant to telemetry,
+such as a pod name, is only available from Kubernetes components. Other data, such as container
+metrics, is not the responsibility of the container runtime. Early telemetry agents needed to query the
+container runtime **and** Kubernetes to report an accurate picture. Over time, Kubernetes gained
+the ability to support multiple runtimes, and now supports any runtime that is compatible with
+the container runtime interface.
+
 -->
-因为历史原因,Kubernetes 建立在 Docker 之上。
-Kubernetes 管理网络和调度,Docker 则在具体的节点上定位并操作容器。
-所以,你可以从 Kubernetes 取得调度相关的元数据,比如 Pod 名称;从 Docker 取得容器状态信息。
-后来,人们开发了更多的运行时来管理容器。
-同时一些项目和 Kubernetes 特性也不断涌现,支持跨多个运行时收集容器状态信息。
+从历史上看,Kubernetes 是专门为与 Docker Engine 一起工作而编写的。
+Kubernetes 负责网络和调度,依靠 Docker Engine
+在节点上启动并运行容器(在 Pod 内)。一些与遥测相关的信息,例如 Pod 名称,
+只能从 Kubernetes 组件中获得。其他数据,例如容器指标,不是容器运行时的责任。
+早期遥测代理需要查询容器运行时**和** Kubernetes 以报告准确的信息。
+随着时间的推移,Kubernetes 获得了支持多种运行时的能力,现在支持任何兼容容器运行时接口的运行时。
 
 <!--
-Some agents are tied specifically to the Docker tool. The agents may run
-commands like [`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/)
+Some telemetry agents rely specifically on Docker Engine tooling. For example, an agent
+might run a command such as
+[`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/)
 or [`docker top`](https://docs.docker.com/engine/reference/commandline/top/) to list
-containers and processes or [docker logs](https://docs.docker.com/engine/reference/commandline/logs/)
-to subscribe on docker logs. With the deprecating of Docker as a container runtime,
+containers and processes, or [`docker logs`](https://docs.docker.com/engine/reference/commandline/logs/)
+to receive streamed logs. If nodes in your existing cluster use
+Docker Engine, and you switch to a different container runtime,
 these commands will not work any longer.
 -->
-一些代理和 Docker 工具紧密绑定。此类代理可以这样运行命令,比如用
+一些代理和 Docker 工具紧密绑定。比如代理会用到
 [`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/) 和
 [`docker top`](https://docs.docker.com/engine/reference/commandline/top/)
 这类命令来列出容器和进程,用
-[docker logs](https://docs.docker.com/engine/reference/commandline/logs/)
+[`docker logs`](https://docs.docker.com/engine/reference/commandline/logs/)
 订阅 Docker 的日志。
-但随着 Docker 作为容器运行时被弃用,这些命令将不再工作。
+如果现有集群中的节点使用 Docker Engine,在你切换到其它容器运行时的时候,
+这些命令将不再起作用。
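Agents moving off the Docker CLI typically switch to `crictl`, which talks to any CRI-compatible runtime (containerd, CRI-O) over the CRI socket. As a minimal sketch of the command mapping — the helper function below and its name are hypothetical, purely for illustration:

```shell
# Hypothetical helper: print the crictl counterpart of a few Docker CLI
# commands that telemetry agents commonly shell out to.
cri_equivalent() {
  case "$1" in
    ps)   echo "crictl ps" ;;    # list running containers
    top)  echo "crictl stats" ;; # closest analogue for per-container resource usage
    logs) echo "crictl logs" ;;  # fetch a container's logs
    *)    echo "no direct crictl equivalent for 'docker $1'" >&2; return 1 ;;
  esac
}

cri_equivalent ps    # prints: crictl ps
cri_equivalent logs  # prints: crictl logs
```

Note that `crictl` reads from the runtime's CRI endpoint directly on the node, so an agent using it still needs node-level access; the more general migration is to consume container metadata from the kubelet or the Kubernetes API instead.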
 
 <!--
-### Identify DaemonSets that depend on Docker {#identify-docker-dependency }
+### Identify DaemonSets that depend on Docker Engine {#identify-docker-dependency}
 -->
-### 识别依赖于 Docker 的 DaemonSet {#identify-docker-dependency}
+### 识别依赖于 Docker Engine 的 DaemonSet {#identify-docker-dependency}
 
 <!--
 If a pod wants to make calls to the `dockerd` running on the node, the pod must either:
@@ -104,10 +104,10 @@ For example: on COS images, Docker exposes its Unix domain socket at
 <!--
 Here's a sample shell script to find Pods that have a mount directly mapping the
 Docker socket. This script outputs the namespace and name of the pod. You can
-remove the grep `/var/run/docker.sock` to review other mounts.
+remove the `grep '/var/run/docker.sock'` to review other mounts.
 -->
 下面是一个 shell 示例脚本,用于查找包含直接映射 Docker 套接字的挂载点的 Pod。
 你也可以删掉 `grep '/var/run/docker.sock'` 这一代码片段以查看其它挂载信息。
 
 ```bash
 kubectl get pods --all-namespaces \
 ```
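The diff view truncates the sample script after its first line. As a sketch of the full pipeline (the jsonpath template below illustrates the approach and is not necessarily the exact upstream script), and since the `kubectl` step needs a live cluster, the filter stage is demonstrated here on simulated output:

```shell
# The full detection pipeline looks roughly like:
#   kubectl get pods --all-namespaces \
#     -o=jsonpath='{range .items[*]}{"\n"}{.metadata.namespace}{":\t"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.hostPath.path}{", "}{end}{end}' \
#     | sort | grep '/var/run/docker.sock'
#
# Simulated pre-grep output (namespace, pod name, hostPath volumes);
# the pod names here are made up for the demonstration.
simulated=$(printf 'kube-system:\tlog-agent-x7kq:\t/var/run/docker.sock, /var/log,\ndefault:\tweb-frontend-1:\t/data,')

# Only pods whose hostPath volumes include the Docker socket survive the filter.
printf '%s\n' "$simulated" | grep '/var/run/docker.sock'
```

The jsonpath template emits one line per pod, so a plain `grep` on the socket path is enough to isolate the offending DaemonSet pods.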

content/zh/docs/tasks/configure-pod-container/configure-gmsa.md

Lines changed: 15 additions & 77 deletions
@@ -22,13 +22,12 @@ This page shows how to configure [Group Managed Service Accounts](https://docs.m
 服务器将管理操作委派给其他管理员等能力。
 
 <!--
-In Kubernetes, GMSA credential specs are configured at a Kubernetes cluster-wide scope as Custom Resources. Windows Pods, as well as individual containers within a Pod, can be configured to use a GMSA for domain based functions (e.g. Kerberos authentication) when interacting with other Windows services. As of v1.16, the Docker runtime supports GMSA for Windows workloads.
+In Kubernetes, GMSA credential specs are configured at a Kubernetes cluster-wide scope as Custom Resources. Windows Pods, as well as individual containers within a Pod, can be configured to use a GMSA for domain based functions (e.g. Kerberos authentication) when interacting with other Windows services.
 -->
 在 Kubernetes 环境中,GMSA 凭据规约配置为 Kubernetes 集群范围的自定义资源
 (Custom Resources)形式。Windows Pod 以及各 Pod 中的每个容器可以配置为
 使用 GMSA 来完成基于域(Domain)的操作(例如,Kerberos 身份认证),以便
-与其他 Windows 服务相交互。自 Kubernetes 1.16 版本起,Docker 运行时为
-Windows 负载支持 GMSA。
+与其他 Windows 服务相交互。
 
 ## {{% heading "prerequisites" %}}
 
@@ -190,7 +189,7 @@ credspec:
 下面的 YAML 配置描述的是一个名为 `gmsa-WebApp1` 的 GMSA 凭据规约:
 
 ```yaml
-apiVersion: windows.k8s.io/v1alpha1
+apiVersion: windows.k8s.io/v1
 kind: GMSACredentialSpec
 metadata:
   name: gmsa-WebApp1 # 这是随意起的一个名字,将用作引用
 ```
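The hunk shows only the head of the credential spec. For orientation, the remaining fields of a `GMSACredentialSpec` follow this general shape; every concrete value below (account names, domain names, GUID, SID) is an illustrative placeholder, not content taken from this commit:

```yaml
apiVersion: windows.k8s.io/v1
kind: GMSACredentialSpec
metadata:
  name: gmsa-WebApp1
credspec:
  ActiveDirectoryConfig:
    GroupManagedServiceAccounts:
    - Name: WebApp1          # GMSA account name (placeholder)
      Scope: CONTOSO         # NetBIOS domain name (placeholder)
    - Name: WebApp1
      Scope: contoso.com     # DNS domain name (placeholder)
  CmsPlugins:
  - ActiveDirectory
  DomainJoinConfig:
    DnsName: contoso.com                             # placeholder
    DnsTreeName: contoso.com                         # placeholder
    Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a       # placeholder GUID
    MachineAccountName: WebApp1                      # placeholder
    NetBiosName: CONTOSO                             # placeholder
    Sid: S-1-5-21-2126449477-2524075714-3094792973   # placeholder SID
```

In practice these values are not written by hand: they are generated from Active Directory with the CredentialSpec PowerShell module and then converted into the YAML custom resource.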
@@ -381,85 +380,24 @@ As Pod specs with GMSA fields populated (as described above) are applied in a cl
 1. 容器运行时为每个 Windows 容器配置所指定的 GMSA 凭据规约,这样容器就可以以
    活动目录中该 GMSA 所代表的身份来执行操作,使用该身份来访问域中的服务。
 
+## 使用主机名或 FQDN 对网络共享进行身份验证
 <!--
-## Containerd
-
-On Windows Server 2019, in order to use GMSA with containerd, you must be running OS Build 17763.1817 (or later) which can be installed using the patch [KB5000822](https://support.microsoft.com/en-us/topic/march-9-2021-kb5000822-os-build-17763-1817-2eb6197f-e3b1-4f42-ab51-84345e063564).
-
-There is also a known issue with containerd that occurs when trying to connect to SMB shares from Pods. Once you have configured GMSA, the pod will be unable to connect to the share using the hostname or FQDN, but connecting to the share using an IP address works as expected.
+If you are experiencing issues connecting to SMB shares from Pods using hostname or FQDN, but are able to access the shares via their IPv4 address, then make sure the following registry key is set on the Windows nodes.
 -->
-## Containerd
-在 Windows Server 2019 上对 containerd 使用 GMSA,需要使用 Build 17763.1817(或更新的版本),
-你可以安装补丁 [KB5000822](https://support.microsoft.com/en-us/topic/march-9-2021-kb5000822-os-build-17763-1817-2eb6197f-e3b1-4f42-ab51-84345e063564)。
+如果你在使用主机名或 FQDN 从 Pod 连接到 SMB 共享时遇到问题,但能够通过其 IPv4 地址访问共享,
+请确保在 Windows 节点上设置了以下注册表项。
 
-containerd 场景从 Pod 连接 SMB 共享的时候有一个已知问题:
-配置了 GMSA 以后,无法通过主机名或者 FQDN 访问 SMB 共享,但是通过 IP 地址访问没有问题。
-
-```PowerShell
-ping adserver.ad.local
+```cmd
+reg add "HKLM\SYSTEM\CurrentControlSet\Services\hns\State" /v EnableCompartmentNamespace /t REG_DWORD /d 1
 ```
 
 <!--
-and correctly resolves the hostname to an IPv4 address. The output is similar to:
+Running Pods will then need to be recreated to pick up the behavior changes.
+More information on how this registry key is used can be found [here](
+https://github.com/microsoft/hcsshim/blob/885f896c5a8548ca36c88c4b87fd2208c8d16543/internal/uvm/create.go#L74-L83)
 -->
-
-主机名可以被解析为 IPv4 地址,输出类似如下所示:
-
-```
-Pinging adserver.ad.local [192.168.111.18] with 32 bytes of data:
-Reply from 192.168.111.18: bytes=32 time=6ms TTL=124
-Reply from 192.168.111.18: bytes=32 time=5ms TTL=124
-Reply from 192.168.111.18: bytes=32 time=5ms TTL=124
-Reply from 192.168.111.18: bytes=32 time=5ms TTL=124
-```
-
-<!--
-However, when attempting to browse the directory using the hostname
--->
-但是,当尝试使用主机名浏览目录时:
-
-```PowerShell
-cd \\adserver.ad.local\test
-```
-
-<!--
-you see an error that implies the target share doesn't exist:
--->
-你会看到一个错误,提示目标共享不存在:
-
-```
-cd : Cannot find path '\\adserver.ad.local\test' because it does not exist.
-At line:1 char:1
-+ cd \\adserver.ad.local\test
-+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-+ CategoryInfo : ObjectNotFound: (\\adserver.ad.local\test:String) [Set-Location], ItemNotFoundException
-+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand
-```
-
-<!--
-but you notice that the error disappears if you browse to the share using its IPv4 address instead; for example:
--->
-但是你会注意到,如果你改为使用其 IPv4 地址浏览共享,错误就会消失;例如:
-
-```PowerShell
-cd \\192.168.111.18\test
-```
-
-<!--
-After you change into a directory within the share, you see a prompt similar to:
--->
-切换到共享中的目录后,你会看到类似于以下内容的提示:
-
-```
-Microsoft.PowerShell.Core\FileSystem::\\192.168.111.18\test>
-```
-
-<!--
-To correct the behaviour you must run the following on the node `reg add "HKLM\SYSTEM\CurrentControlSet\Services\hns\State" /v EnableCompartmentNamespace /t REG_DWORD /d 1` to add the required registry key. This node change will only take effect in newly created pods, meaning you must now recreate any running pods which require access to SMB shares.
--->
-要解决问题,你需要在节点上运行以下命令以添加所需的注册表项
-`reg add "HKLM\SYSTEM\CurrentControlSet\Services\hns\State" /v EnableCompartmentNamespace /t REG_DWORD /d 1`
-此更改只会在新创建的 Pod 中生效,这意味着你必须重新创建任何需要访问 SMB 共享的正在运行的 Pod。
+然后需要重新创建正在运行的 Pod 以使行为更改生效。
+有关如何使用此注册表项的更多信息,请参见[此处](https://github.com/microsoft/hcsshim/blob/885f896c5a8548ca36c88c4b87fd2208c8d16543/internal/uvm/create.go#L74-L83)。
 <!--
 ## Troubleshooting
 
@@ -483,7 +421,7 @@ kubectl exec -it iis-auth-7776966999-n5nzr powershell.exe
 `nltest.exe /parentdomain` results in the following error:
 -->
 `nltest.exe /parentdomain` 导致以下错误:
-```
+```output
 Getting parent domain failed: Status = 1722 0x6ba RPC_S_SERVER_UNAVAILABLE
 ```
 
content/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md

Lines changed: 2 additions & 1 deletion

@@ -339,7 +339,8 @@ kubectl describe pod goproxy
 -->
 ## 定义 gRPC 活跃探测器
 
-{{< feature-state for_k8s_version="v1.23" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.24" state="beta" >}}
+
 
 <!--
 If your application implements [gRPC Health Checking Protocol](https://github.com/grpc/grpc/blob/master/doc/health-checking.md),
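For context on the feature gate bumped above: a gRPC liveness probe is declared directly on the container spec, with the kubelet issuing `grpc.health.v1.Health/Check` calls itself. A minimal sketch — the port and timing values are illustrative, assuming the application serves the standard gRPC Health Checking Protocol:

```yaml
# Illustrative container fragment: the kubelet probes the gRPC health
# service on the given port instead of running an exec or HTTP probe.
livenessProbe:
  grpc:
    port: 2379          # illustrative: port where the health service listens
  initialDelaySeconds: 10
  periodSeconds: 10
```

An optional `service` field under `grpc` can name a specific service in the health-checking response if the server multiplexes several.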

content/zh/docs/tasks/configure-pod-container/configure-pod-initialization.md

Lines changed: 1 addition & 1 deletion

@@ -135,7 +135,7 @@ The output shows that nginx is serving the web page that was written by the init
   [communicating between Containers running in the same Pod](/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/).
 * Learn more about [Init Containers](/docs/concepts/workloads/pods/init-containers/).
 * Learn more about [Volumes](/docs/concepts/storage/volumes/).
-* Learn more about [Debugging Init Containers](/docs/tasks/debug-application-cluster/debug-init-containers/)
+* Learn more about [Debugging Init Containers](/docs/tasks/debug/debug-application/debug-init-containers/)
 -->
 
 * 进一步了解[同一 Pod 中的容器间的通信](/zh/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/)
