
Commit 82da8f9

Merge pull request #29451 from steven-my/29329-translation-for-run-app
[zh] translation for the run-app section
2 parents 5c316c2 + 742e7d7 commit 82da8f9

5 files changed: +33 −33 lines changed

content/zh/docs/tasks/debug-application-cluster/audit.md

Lines changed: 2 additions & 2 deletions
@@ -168,15 +168,15 @@ rules:
 
 <!--
 If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the
-[configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)
+[configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh)
 script, which generates the audit policy file. You can see most of the audit policy file by looking directly at the script.
 
 You can also refer to the [`Policy` configuration reference](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy)
 for details about the fields defined.
 -->
 如果你在打磨自己的审计配置文件,你可以使用为 Google Container-Optimized OS
 设计的审计配置作为出发点。你可以参考
-[configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)
+[configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh)
 脚本,该脚本能够生成审计策略文件。你可以直接在脚本中看到审计策略的绝大部份内容。
 
 你也可以参考 [`Policy` 配置参考](/zh/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy)
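For context, an audit policy file like the one generated by configure-helper.sh is handed to the API server through command-line flags; a minimal sketch, with illustrative paths rather than the values the script actually uses:

```shell
# Enable audit logging on kube-apiserver with a policy file.
# The file and log paths below are placeholders for this sketch.
kube-apiserver \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit/audit.log \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=10
```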

content/zh/docs/tasks/debug-application-cluster/debug-cluster.md

Lines changed: 1 addition & 1 deletion
@@ -202,7 +202,7 @@ This is an incomplete list of things that could go wrong, and how to adjust your
 - Action: Use IaaS providers reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd
 - Mitigates: Apiserver backing storage lost
 
-- Action: Use [high-availability](/docs/admin/high-availability) configuration
+- Action: Use [high-availability](/docs/setup/production-environment/tools/kubeadm/high-availability/) configuration
 - Mitigates: Control plane node shutdown or control plane components (scheduler, API server, controller-manager) crashing
 - Will tolerate one or more simultaneous node or component failures
 - Mitigates: API server backing storage (i.e., etcd's data directory) lost
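The high-availability page now linked here is kubeadm-based; its stacked control-plane setup boils down to roughly the following sketch, where the load-balancer address is a placeholder:

```shell
# On the first control plane node: put the API server behind a shared
# endpoint and upload certificates so other control plane nodes can join.
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:6443" --upload-certs

# Additional control plane nodes join with the join command and certificate
# key printed by `kubeadm init`, passing --control-plane.
```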

content/zh/docs/tasks/debug-application-cluster/debug-running-pod.md

Lines changed: 8 additions & 10 deletions
@@ -110,30 +110,27 @@ kubectl exec -it cassandra -- sh
 <!--
 ## Debugging with an ephemeral debug container {#ephemeral-container}
 
-{{< feature-state state="alpha" for_k8s_version="v1.18" >}}
+{{< feature-state state="alpha" for_k8s_version="v1.22" >}}
 
 {{< glossary_tooltip text="Ephemeral containers" term_id="ephemeral-container" >}}
 are useful for interactive troubleshooting when `kubectl exec` is insufficient
 because a container has crashed or a container image doesn't include debugging
 utilities, such as with [distroless images](
-https://github.com/GoogleContainerTools/distroless). `kubectl` has an alpha
-command that can create ephemeral containers for debugging beginning with version
-`v1.18`.
+https://github.com/GoogleContainerTools/distroless).
 -->
 ## 使用临时调试容器来进行调试 {#ephemeral-container}
 
-{{< feature-state state="alpha" for_k8s_version="v1.18" >}}
+{{< feature-state state="alpha" for_k8s_version="v1.22" >}}
 
 当由于容器崩溃或容器镜像不包含调试程序(例如[无发行版镜像](https://github.com/GoogleContainerTools/distroless)等)
 而导致 `kubectl exec` 无法运行时,{{< glossary_tooltip text="临时容器" term_id="ephemeral-container" >}}对于排除交互式故障很有用。
-从 'v1.18' 版本开始,'kubectl' 有一个可以创建用于调试的临时容器的 alpha 命令。
 
 <!--
 ### Example debugging using ephemeral containers {#ephemeral-container-example}
 
 The examples in this section require the `EphemeralContainers` [feature gate](
 /docs/reference/command-line-tools-reference/feature-gates/) enabled in your
-cluster and `kubectl` version v1.18 or later.
+cluster and `kubectl` version v1.22 or later.
 
 You can use the `kubectl debug` command to add ephemeral containers to a
 running Pod. First, create a pod for the example:
@@ -151,7 +148,7 @@ images.
 {{< note >}}
 本示例需要你的集群已经开启 `EphemeralContainers`
 [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
-`kubectl` 版本为 v1.18 或者更高。
+`kubectl` 版本为 v1.22 或者更高。
 {{< /note >}}
 
 你可以使用 `kubectl debug` 命令来给正在运行中的 Pod 增加一个临时容器。
@@ -224,7 +221,7 @@ creates.
 The `--target` parameter must be supported by the {{< glossary_tooltip
 text="Container Runtime" term_id="container-runtime" >}}. When not supported,
 the Ephemeral Container may not be started, or it may be started with an
-isolated process namespace.
+isolated process namespace so that `ps` does not reveal processes in other containers.
 
 You can view the state of the newly created ephemeral container using `kubectl describe`:
 -->
@@ -234,7 +231,8 @@ You can view the state of the newly created ephemeral container using `kubectl d
 
 {{< note >}}
 {{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}必须支持`--target`参数。
-如果不支持,则临时容器可能不会启动,或者可能使用隔离的进程命名空间启动。
+如果不支持,则临时容器可能不会启动,或者可能使用隔离的进程命名空间启动,
+以便 `ps` 不显示其他容器内的进程。
 {{< /note >}}
 
 你可以使用 `kubectl describe` 查看新创建的临时容器的状态:
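As a quick reference for the `kubectl debug` flow this page walks through, a minimal sketch; the pod name `ephemeral-demo` and the `busybox` debug image are illustrative choices:

```shell
# Start a pod whose image has no shell, so `kubectl exec` cannot help.
kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never

# Attach an ephemeral debug container targeting the running container
# (requires the EphemeralContainers feature gate and kubectl v1.22+).
kubectl debug -it ephemeral-demo --image=busybox --target=ephemeral-demo

# Inspect the state of the ephemeral container afterwards.
kubectl describe pod ephemeral-demo
```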

content/zh/docs/tasks/run-application/delete-stateful-set.md

Lines changed: 5 additions & 5 deletions
@@ -66,21 +66,21 @@ kubectl delete service <服务名称>
 ```
 
 <!--
-When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. All Pods that are part of this workload are also deleted. If you want to delete only the StatefulSet and not the Pods, use `--cascade=false`.
+When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. All Pods that are part of this workload are also deleted. If you want to delete only the StatefulSet and not the Pods, use `--cascade=orphan`.
 For example:
 --->
 当通过 `kubectl` 删除 StatefulSet 时,StatefulSet 会被缩容为 0。
 属于该 StatefulSet 的所有 Pod 也被删除。
-如果你只想删除 StatefulSet 而不删除 Pod,使用 `--cascade=false`
+如果你只想删除 StatefulSet 而不删除 Pod,使用 `--cascade=orphan`
 
 ```shell
-kubectl delete -f <file.yaml> --cascade=false
+kubectl delete -f <file.yaml> --cascade=orphan
 ```
 
 <!--
-By passing `--cascade=false` to `kubectl delete`, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted. If the pods have a label `app=myapp`, you can then delete them as follows:
+By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted. If the pods have a label `app=myapp`, you can then delete them as follows:
 --->
-通过将 `--cascade=false` 传递给 `kubectl delete`,在删除 StatefulSet 对象之后,
+通过将 `--cascade=orphan` 传递给 `kubectl delete`,在删除 StatefulSet 对象之后,
 StatefulSet 管理的 Pod 会被保留下来。如果 Pod 具有标签 `app=myapp`,则可以按照
 如下方式删除它们:
 
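Putting the `--cascade=orphan` steps from this page together, a minimal sketch; the manifest placeholder and the `app=myapp` label follow the page's own example:

```shell
# Delete the StatefulSet object only; its Pods are orphaned and keep running.
kubectl delete -f <file.yaml> --cascade=orphan

# Later, remove the orphaned Pods by label.
kubectl delete pods -l app=myapp
```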

content/zh/docs/tasks/run-application/horizontal-pod-autoscale.md

Lines changed: 17 additions & 15 deletions
@@ -367,27 +367,29 @@ The detailed documentation of `kubectl autoscale` can be found [here](/docs/refe
 <!--
 ## Autoscaling during rolling update
 
-Currently in Kubernetes, it is possible to perform a rolling update by using the deployment object,
-which manages the underlying replica sets for you.
-Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object,
-it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replica sets.
+Kubernetes lets you perform a rolling update on a Deployment. In that
+case, the Deployment manages the underlying ReplicaSets for you.
+When you configure autoscaling for a Deployment, you bind a
+HorizontalPodAutoscaler to a single Deployment. The HorizontalPodAutoscaler
+manages the `replicas` field of the Deployment. The deployment controller is responsible
+for setting the `replicas` of the underlying ReplicaSets so that they add up to a suitable
+number during the rollout and also afterwards.
 -->
 ## 滚动升级时扩缩 {#autoscaling-during-rolling-update}
 
-目前在 Kubernetes 中,可以针对 ReplicationController 或 Deployment 执行
-滚动更新,它们会为你管理底层副本数。
-Pod 水平扩缩只支持后一种:HPA 会被绑定到 Deployment 对象,
-HPA 设置副本数量时,Deployment 会设置底层副本数。
+Kubernetes 允许你在 Deployment 上执行滚动更新。在这种情况下,Deployment 为你管理下层的 ReplicaSet。
+当你为一个 Deployment 配置自动扩缩时,你要为每个 Deployment 绑定一个 HorizontalPodAutoscaler。
+HorizontalPodAutoscaler 管理 Deployment 的 `replicas` 字段。
+Deployment Controller 负责设置下层 ReplicaSet 的 `replicas` 字段,
+以便确保在上线及后续过程副本个数合适。
 
 <!--
-Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers,
-i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update.
-The reason this doesn't work is that when rolling update creates a new replication controller,
-the Horizontal Pod Autoscaler will not be bound to the new replication controller.
+If you perform a rolling update of a StatefulSet that has an autoscaled number of
+replicas, the StatefulSet directly manages its set of Pods (there is no intermediate resource
+similar to ReplicaSet).
 -->
-通过直接操控副本控制器执行滚动升级时,HPA 不能工作,
-也就是说你不能将 HPA 绑定到某个 RC 再执行滚动升级。
-HPA 不能工作的原因是它无法绑定到滚动更新时所新创建的副本控制器。
+如果你对一个副本个数被自动扩缩的 StatefulSet 执行滚动更新, 该 StatefulSet
+会直接管理它的 Pod 集合 (不存在类似 ReplicaSet 这样的中间资源)。
 
 <!--
 ## Support for cooldown/delay
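For reference, binding a HorizontalPodAutoscaler to a Deployment as described in this section can be as simple as the following sketch; the deployment name placeholder and the CPU/replica thresholds are illustrative:

```shell
# Create an HPA that manages the Deployment's `replicas` field,
# targeting 50% average CPU utilization across 1 to 10 replicas.
kubectl autoscale deployment <deployment-name> --cpu-percent=50 --min=1 --max=10

# Check the autoscaler's current status.
kubectl get hpa
```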
