
Commit 4dfabc0

Merge pull request #27919 from howieyuen/debug
[zh] resync task files [2]
2 parents 31ad97d + 2c95932

7 files changed: +48 −58 lines

content/zh/docs/tasks/debug-application-cluster/audit.md

Lines changed: 27 additions & 11 deletions

@@ -82,30 +82,42 @@ Each request can be recorded with an associated _stage_. The defined stages are:
 - `ResponseComplete` - once the response body has been completed and no more bytes will be sent.
 - `Panic` - generated when a panic occurs.
 
+<!--
+The configuration of an
+[Audit Event configuration](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event)
+is different from the
+[Event](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#event-v1-core)
+API object.
+-->
+{{< note >}}
+The configuration of an [Audit Event configuration](/zh/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event)
+is different from the [Event](/zh/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#event-v1-core)
+API object.
+{{< /note >}}
+
 <!--
 The audit logging feature increases the memory consumption of the API server
 because some context required for auditing is stored for each request.
 Additionally, memory consumption depends on the audit logging configuration.
 -->
-{{< note >}}
 The audit logging feature increases the memory consumption of the API server, because some context required for auditing is stored for each request.
 Additionally, memory consumption depends on the audit logging configuration.
-{{< /note >}}
 
 <!--
 ## Audit Policy
 
 Audit policy defines rules about what events should be recorded and what data
 they should include. The audit policy object structure is defined in the
-[`audit.k8s.io` API group][auditing-api]. When an event is processed, it's
+[`audit.k8s.io` API group](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy).
+When an event is processed, it's
 compared against the list of rules in order. The first matching rule sets the
 _audit level_ of the event. The defined audit levels are:
 -->
 ## Audit Policy {#audit-policy}
 
 Audit policy defines rules about what events should be recorded and what data they should include.
 The audit policy object structure is defined in the
-[`audit.k8s.io` API group](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/staging/src/k8s.io/apiserver/pkg/apis/audit/v1/types.go)
+[`audit.k8s.io` API group](/zh/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy).
 When an event is processed, it's compared against the list of rules in order. The first matching rule sets the
 _audit level_ of the event. The defined audit levels are:

@@ -158,12 +170,18 @@ rules:
 If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the
 [configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)
 script, which generates the audit policy file. You can see most of the audit policy file by looking directly at the script.
+
+You can also refer to the [`Policy` configuration reference](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy)
+for details about the fields defined.
 -->
 If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS
 as a starting point. You can check the
 [configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)
 script, which generates the audit policy file. You can see most of the audit policy file by looking directly at the script.
 
+You can also refer to the [`Policy` configuration reference](/zh/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy)
+for details about the fields defined.
+
 <!--
 ## Audit backends
 

@@ -173,10 +191,8 @@ Out of the box, the kube-apiserver provides two backends:
 - Log backend, which writes events into the filesystem
 - Webhook backend, which sends events to an external HTTP API
 
-In both cases, audit events structure is defined by the API in the
-`audit.k8s.io` API group. For Kubernetes {{< param "fullversion" >}}, that
-API is at version
-[`v1`](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/staging/src/k8s.io/apiserver/pkg/apis/audit/v1/types.go).
+In all cases, audit events follow a structure defined by the Kubernetes API in the
+[`audit.k8s.io` API group](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event).
 -->
 
 ## Audit backends {#audit-backends}

@@ -186,9 +202,9 @@ API is at version
 - Log backend, which writes events into the filesystem
 - Webhook backend, which sends events to an external HTTP API
 
-In both cases, the audit event structure is defined by the API in the `audit.k8s.io` API group.
-For Kubernetes {{< param "fullversion" >}}, that API is at version
-[`v1`](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/staging/src/k8s.io/apiserver/pkg/apis/audit/v1/types.go).
+In all cases, audit events follow the structure defined by the Kubernetes API
+in the [`audit.k8s.io` API group](/zh/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event).
 
 <!--
 In case of patches, request body is a JSON array with patch operations, not a JSON object
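
The `Policy` structure that these links now point to is small enough to sketch. A minimal example, assuming a kube-apiserver started with `--audit-policy-file` pointing at the file below; the path and the rule set are illustrative, not part of this commit:

```shell
# Write a minimal audit policy to a hypothetical path; adjust to wherever
# your control plane reads static configuration from.
cat <<'EOF' > /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1   # the audit.k8s.io API group referenced above
kind: Policy
# Skip the RequestReceived stage so each request is audited only on completion.
omitStages:
  - "RequestReceived"
rules:
  # Record pod operations with full request and response bodies.
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods"]
  # Record everything else at the Metadata level (no request/response bodies).
  - level: Metadata
EOF
```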

content/zh/docs/tasks/debug-application-cluster/debug-application-introspection.md

Lines changed: 1 addition & 2 deletions

@@ -257,8 +257,7 @@ The message tells us that there were not enough resources for the Pod on any of
 The message tells us that there were not enough resources for the Pod on any of the nodes.
 
 <!--
-To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas.
-(Or you could just leave the one Pod pending, which is harmless.)
+To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas. (Or you could leave the one Pod pending, which is harmless.)
 -->
 To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas.
 (Or you could leave the one Pod pending, which is harmless.)
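
As an illustration of the `kubectl scale` fix this hunk describes — a sketch assuming the Deployment from that page is named `nginx-deployment` and labeled `app=nginx` (both names hypothetical):

```shell
# Drop to 4 replicas so every Pod fits on the available nodes.
kubectl scale deployment/nginx-deployment --replicas=4

# Confirm that no Pod is left Pending.
kubectl get pods -l app=nginx
```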

content/zh/docs/tasks/debug-application-cluster/debug-application.md

Lines changed: 2 additions & 26 deletions

@@ -265,42 +265,18 @@ kubectl get pods --selector=name=nginx,type=frontend
 ```
 
 <!--
-If the list of pods matches expectations, but your endpoints are still empty, it's possible that you don't
-have the right ports exposed. If your service has a `containerPort` specified, but the Pods that are
-selected don't have that port listed, then they won't be added to the endpoints list.
-
 Verify that the pod's `containerPort` matches up with the Service's `targetPort`
 -->
-If the list of pods matches expectations, but your endpoints are still empty, it's possible that you don't
-have the right ports exposed. If your service has a `containerPort` specified, but the selected Pods
-don't have that port listed, then they won't be added to the endpoints list.
-
 Verify that the pod's `containerPort` matches up with the Service's `targetPort`.
 
 <!--
 #### Network traffic is not forwarded
 
-If you can connect to the service, but the connection is immediately dropped, and there are endpoints
-in the endpoints list, it's likely that the proxy can't contact your pods.
-
-There are three things to
-check:
-
-* Are your pods working correctly? Look for restart count, and [debug pods](#debugging-pods).
-* Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP.
-* Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the `containerPort` field needs to be 8080.
+Please see [debugging service](/docs/tasks/debug-application-cluster/debug-service/) for more information.
 -->
 #### Network traffic is not forwarded
 
-If you can connect to the service, but the connection is immediately dropped, and there are
-entries in the Endpoints list, it's likely that the proxy can't contact your Pods.
-
-There are three things to check:
-
-* Are your Pods working correctly? Look at the restart count, and see [debugging pods](#debugging-pods).
-* Can you connect to your Pods directly? Get the IP address of the Pod, and try to connect directly to that IP.
-* Is your application serving on the port you configured? Kubernetes doesn't do port remapping, so if your application serves on port 8080, the `containerPort` field needs to be 8080.
+Please see [debugging service](/zh/docs/tasks/debug-application-cluster/debug-service/) for more information.
 
 ## {{% heading "whatsnext" %}}
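
The `containerPort`/`targetPort` check that this section now centers on can be done from the command line. A sketch, assuming a Service named `hostnames` selecting Pods labeled `app=hostnames` (names illustrative):

```shell
# The port the Service forwards traffic to.
kubectl get service hostnames -o jsonpath='{.spec.ports[0].targetPort}'

# The port each selected Pod actually exposes; the two values should match.
kubectl get pods -l app=hostnames \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].ports[0].containerPort}{"\n"}{end}'
```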

content/zh/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md

Lines changed: 1 addition & 1 deletion

@@ -87,7 +87,7 @@ case you can try several things:
 will never be scheduled.
 
 You can check node capacities with the `kubectl get nodes -o <format>`
-command. Here are some example command lines that extract just the necessary
+command. Here are some example command lines that extract the necessary
 information:
 -->
 #### Insufficient resources
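
A sketch of the kind of extraction command the corrected sentence refers to; the jsonpath expression is illustrative, not taken from the page:

```shell
# Print each node's name together with its allocatable CPU and memory.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.cpu}{"\t"}{.status.allocatable.memory}{"\n"}{end}'
```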

content/zh/docs/tasks/debug-application-cluster/debug-running-pod.md

Lines changed: 2 additions & 2 deletions

@@ -143,7 +143,7 @@ kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
 ```
 
 This section uses the `pause` container image in examples because it does not
-contain userland debugging utilities, but this method works with all container
+contain debugging utilities, but this method works with all container
 images.
 -->
 ## Example debugging using ephemeral containers {#ephemeral-container-example}

@@ -162,7 +162,7 @@ kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
 ```
 
 {{< note >}}
-The examples in this section use the `pause` container image because it does not contain any userland debugging utilities, but this method works with all container images.
+The examples in this section use the `pause` container image because it does not contain debugging utilities, but this method works with all container images.
 {{< /note >}}
 
 <!--
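
For context, the page pairs the `kubectl run` command in the hunk header with an ephemeral-container attach step; a sketch of that step, assuming the `ephemeral-demo` Pod created above:

```shell
# Attach an interactive ephemeral container with debugging tools to the
# running Pod; the pause image itself ships none.
kubectl debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
```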

content/zh/docs/tasks/debug-application-cluster/debug-service.md

Lines changed: 11 additions & 12 deletions

@@ -29,15 +29,15 @@ Deployment (or other workload controller) runs Pods and creates a Service
 ## Running commands in a Pod
 
 For many steps here you will want to see what a Pod running in the cluster
-sees. The simplest way to do this is to run an interactive alpine Pod:
+sees. The simplest way to do this is to run an interactive busybox Pod:
 -->
 ## Running commands in a Pod
 
 For many steps here you will want to see what a Pod running in the cluster sees.
-The simplest way to do this is to run an interactive alpine Pod:
+The simplest way to do this is to run an interactive busybox Pod:
 
 ```none
-$ kubectl run -it --rm --restart=Never alpine --image=alpine sh
+kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
 ```
 
 <!--

@@ -161,13 +161,13 @@ kubectl get pods -l app=hostnames \
 ```
 
 <!--
-The example container used for this walk-through simply serves its own hostname
+The example container used for this walk-through serves its own hostname
 via HTTP on port 9376, but if you are debugging your own app, you'll want to
 use whatever port number your Pods are listening on.
 
 From within a pod:
 -->
-The example container used for this walk-through simply serves its own hostname via HTTP on port 9376,
+The example container used for this walk-through serves its own hostname via HTTP on port 9376,
 but if you are debugging your own app, you'll want to use whatever port number your Pods are listening on.
 
 From within a pod:

@@ -260,9 +260,9 @@ service/hostnames exposed
 ```
 
 <!--
-And read it back, just to be sure:
+And read it back:
 -->
-Re-run the query command, just to be sure:
+Re-run the query command:
 
 ```shell
 kubectl get svc hostnames

@@ -608,14 +608,13 @@ Earlier you saw that the Pods were running. You can re-check that:
 kubectl get pods -l app=hostnames
 ```
 ```none
-NAME                        READY     STATUS    RESTARTS   AGE
+NAME                        READY   STATUS    RESTARTS   AGE
 hostnames-632524106-bbpiw   1/1       Running   0          1h
 hostnames-632524106-ly40y   1/1       Running   0          1h
 hostnames-632524106-tlaok   1/1       Running   0          1h
 ```
 <!--
-The `-l app=hostnames` argument is a label selector - just like our Service
-has.
+The `-l app=hostnames` argument is a label selector configured on the Service.
 
 The "AGE" column says that these Pods are about an hour old, which implies that
 they are running fine and not crashing.

@@ -627,7 +626,7 @@ If the restart count is high, read more about how to [debug pods](/docs/tasks/de
 Inside the Kubernetes system is a control loop which evaluates the selector of
 every Service and saves the results into a corresponding Endpoints object.
 -->
-The `-l app=hostnames` argument is a label selector - just like the one defined on our Service.
+The `-l app=hostnames` argument is a label selector configured on the Service.
 
 The "AGE" column says that these Pods are about an hour old, which implies that they are running fine and not crashing.
 

@@ -899,7 +898,7 @@ iptables-save | grep hostnames
 ```
 
 <!--
-There should be 2 rules for each port of your Service (just one in this
+There should be 2 rules for each port of your Service (only one in this
 example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST".
 
 Almost nobody should be using the "userspace" mode any more, so you won't spend
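
Once the interactive busybox Pod from the first hunk is running, the in-cluster checks this page walks through look roughly like the following; the `hostnames` Service comes from the hunks above, and the Service port 80 is an assumption based on the `service/hostnames exposed` output:

```shell
# From inside the busybox Pod: resolve the Service name through cluster DNS...
nslookup hostnames

# ...then fetch through the Service port to confirm traffic is forwarded.
wget -qO- http://hostnames:80
```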

content/zh/docs/tasks/debug-application-cluster/logging-stackdriver.md

Lines changed: 4 additions & 4 deletions

@@ -493,13 +493,13 @@ a running cluster in the [Deploying section](#deploying).
 ### Changing `DaemonSet` parameters {#changing-daemonset-parameters}
 
 <!--
-When you have the Stackdriver Logging `DaemonSet` in your cluster, you can just modify the
-`template` field in its spec, daemonset controller will update the pods for you. For example,
-let's assume you've just installed the Stackdriver Logging as described above. Now you want to
+When you have the Stackdriver Logging `DaemonSet` in your cluster, you can modify the
+`template` field in its spec. The DaemonSet controller manages the pods for you.
+For example, assume you've installed the Stackdriver Logging as described above. Now you want to
 change the memory limit to give fluentd more memory to safely process more logs.
 -->
 When you have the Stackdriver Logging `DaemonSet` in your cluster, you can modify the
-`template` field in its spec, and the daemonset controller will update the Pods for you.
+`template` field in its spec. The DaemonSet controller will manage the Pods for you.
 For example, assume you've installed the Stackdriver Logging as described above.
 Now you want to change the memory limit to give fluentd more memory to safely process more logs.
