
Commit e5d5f82

[zh]sync admin task files
1 parent 42a93ae commit e5d5f82

4 files changed, 76 additions and 25 deletions


content/zh/docs/tasks/administer-cluster/access-cluster-api.md

Lines changed: 4 additions & 4 deletions
@@ -55,11 +55,11 @@ kubectl config view
 ```

 <!--
-Many of the [examples](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/) provide an introduction to using
+Many of the [examples](https://github.com/kubernetes/examples/tree/master/) provide an introduction to using
 kubectl. Complete documentation is found in the [kubectl manual](/docs/reference/kubectl/overview/).
 -->

-许多[样例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/)
+许多[样例](https://github.com/kubernetes/examples/tree/master/)
 提供了使用 kubectl 的介绍。完整文档请见 [kubectl 手册](/zh/docs/reference/kubectl/overview/)

 <!--
@@ -300,10 +300,10 @@ func main() {
 ```

 <!--
-If the application is deployed as a Pod in the cluster, please refer to the [next section](#accessing-the-api-from-a-pod).
+If the application is deployed as a Pod in the cluster, see [Accessing the API from within a Pod](/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod).
 -->
 如果该应用程序部署为集群中的一个
-Pod,请参阅[下一节](#accessing-the-api-from-within-accessing-the-api-from-within-a-pod)
+Pod,请参阅[从 Pod 内访问 API](/zh/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod)

 <!-- #### Python client -->
 #### Python 客户端 {#python-client}

content/zh/docs/tasks/administer-cluster/controller-manager-leader-migration.md

Lines changed: 33 additions & 18 deletions
@@ -17,7 +17,7 @@ content_type: task

 <!-- overview -->

-{{< feature-state state="alpha" for_k8s_version="v1.21" >}}
+{{< feature-state state="beta" for_k8s_version="v1.22" >}}

 {{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="云管理控制器是">}}

@@ -43,17 +43,14 @@ For a single-node control plane, or if unavailability of controller managers can
 对于单节点控制平面,或者在升级过程中可以容忍控制器管理器不可用的情况,则不需要领导者迁移,并且可以忽略本指南。

 <!--
-Leader Migration is an alpha feature that is disabled by default and it requires `--enable-leader-migration` to be set on controller managers.
-It can be enabled by setting the feature gate `ControllerManagerLeaderMigration` plus `--enable-leader-migration` on `kube-controller-manager` or `cloud-controller-manager`.
+Leader Migration can be enabled by setting `--enable-leader-migration` on `kube-controller-manager` or `cloud-controller-manager`.
 Leader Migration only applies during the upgrade and can be safely disabled or left enabled after the upgrade is complete.

 This guide walks you through the manual process of upgrading the control plane from `kube-controller-manager` with
 built-in cloud provider to running both `kube-controller-manager` and `cloud-controller-manager`.
 If you use a tool to administer the cluster, please refer to the documentation of the tool and the cloud provider for more details.
 -->
-领导者迁移是一项 Alpha 阶段功能,默认情况下处于禁用状态,它需要设置控制器管理器的 `--enable-leader-migration` 参数。
-可以通过在 `kube-controller-manager` 或 `cloud-controller-manager` 上设置特性门控
-`ControllerManagerLeaderMigration` 和 `--enable-leader-migration` 来启用。
+领导者迁移可以通过在 `kube-controller-manager` 或 `cloud-controller-manager` 上设置 `--enable-leader-migration` 来启用。
 领导者迁移仅在升级期间适用,并且可以安全地禁用,也可以在升级完成后保持启用状态。

 本指南将引导你手动将控制平面从内置的云驱动的 `kube-controller-manager` 升级为
@@ -64,14 +61,14 @@ If you use a tool to administrator the cluster, please refer to the documentatio

 <!--
 It is assumed that the control plane is running Kubernetes version N and to be upgraded to version N + 1.
-Although it is possible to migrate within the same version, ideally the migration should be performed as part of a upgrade so that changes of configuration can be aligned to releases.
+Although it is possible to migrate within the same version, ideally the migration should be performed as part of an upgrade so that changes of configuration can be aligned to each release.
 The exact versions of N and N + 1 depend on each cloud provider. For example, if a cloud provider builds a `cloud-controller-manager` to work with Kubernetes 1.22, then N can be 1.21 and N + 1 can be 1.22.

 The control plane nodes should run `kube-controller-manager` with Leader Election enabled through `--leader-elect=true`.
 As of version N, an in-tree cloud provider must be set with `--cloud-provider` flag and `cloud-controller-manager` should not yet be deployed.
 -->
 假定控制平面正在运行 Kubernetes N 版本,并且要升级到 N+1 版本。
-尽管可以在同一版本中进行迁移,但理想情况下,迁移应作为升级的一部分执行,以便可以更改配置与发布保持一致
+尽管可以在同一版本中进行迁移,但理想情况下,迁移应作为升级的一部分执行,以便配置的更改可以与每个发布版本保持一致。
 N 和 N+1 的确切版本取决于各个云驱动。例如,如果云驱动构建了一个可与 Kubernetes 1.22 配合使用的 `cloud-controller-manager`,
 则 N 可以为 1.21,N+1 可以为 1.22。

@@ -80,19 +77,21 @@ N 和 N+1的确切版本取决于各个云驱动。例如,如果云驱动构

 <!--
 The out-of-tree cloud provider must have built a `cloud-controller-manager` with Leader Migration implementation.
-If the cloud provider imports `k8s.io/cloud-provider` and `k8s.io/controller-manager` of version v0.21.0 or later, Leader Migration will be avaliable.
+If the cloud provider imports `k8s.io/cloud-provider` and `k8s.io/controller-manager` of version v0.21.0 or later, Leader Migration will be available.
+However, for versions before v0.22.0, Leader Migration is alpha and requires the feature gate `ControllerManagerLeaderMigration` to be enabled.

 This guide assumes that kubelet of each control plane node starts `kube-controller-manager`
 and `cloud-controller-manager` as static pods defined by their manifests.
 If the components run in a different setting, please adjust the steps accordingly.

-For authorization, this guide assumes that the cluser uses RBAC.
+For authorization, this guide assumes that the cluster uses RBAC.
 If another authorization mode grants permissions to `kube-controller-manager` and `cloud-controller-manager` components,
 please grant the needed access in a way that matches the mode.
 -->
 树外云驱动必须已经构建了一个实现领导者迁移的 `cloud-controller-manager`。
 如果云驱动导入了 v0.21.0 或更高版本的 `k8s.io/cloud-provider` 和 `k8s.io/controller-manager`,
 则可以进行领导者迁移。
+但是,对 v0.22.0 以下的版本,领导者迁移是一项 Alpha 阶段功能,它需要启用特性门控 `ControllerManagerLeaderMigration`。

 本指南假定每个控制平面节点的 kubelet 以静态 pod 的形式启动 `kube-controller-manager`
 和 `cloud-controller-manager`,静态 pod 的定义在清单文件中。
@@ -137,19 +136,21 @@ Do the same to the `system::leader-locking-cloud-controller-manager` role.
 <!--
 ### Initial Leader Migration configuration

-Leader Migration requires a configuration file representing the state of controller-to-manager assignment.
-At this moment, with in-tree cloud provider, `kube-controller-manager` runs `route`, `service`, and `cloud-node-lifecycle`.
-The following example configuration shows the assignment.
+Leader Migration optionally takes a configuration file representing the state of controller-to-manager assignment. At this moment, with in-tree cloud provider, `kube-controller-manager` runs `route`, `service`, and `cloud-node-lifecycle`. The following example configuration shows the assignment.
+
+Leader Migration can be enabled without a configuration. Please see [Default Configuration](#default-configuration) for details.
 -->
 ### 初始领导者迁移配置

-领导者迁移需要一个表示控制器到管理器分配状态的配置文件
+领导者迁移可以选择使用一个表示控制器到管理器分配状态的配置文件。
 目前,对于树内云驱动,`kube-controller-manager` 运行 `route`、`service` 和 `cloud-node-lifecycle`。
 以下示例配置显示了分配。

+领导者迁移可以不指定配置来启用。请参阅[默认配置](#default-configuration)以获取更多详细信息。
+
 ```yaml
 kind: LeaderMigrationConfiguration
-apiVersion: controllermanager.config.k8s.io/v1alpha1
+apiVersion: controllermanager.config.k8s.io/v1beta1
 leaderName: cloud-provider-extraction-migration
 resourceLock: leases
 controllerLeaders:
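The hunk above cuts the example off at `controllerLeaders:`. Based on the surrounding text, which states that with an in-tree cloud provider `kube-controller-manager` runs `route`, `service`, and `cloud-node-lifecycle`, the complete initial configuration plausibly continues as in the sketch below; treat the exact entries as an assumption rather than the verbatim file content.

```yaml
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1beta1
leaderName: cloud-provider-extraction-migration
resourceLock: leases
controllerLeaders:
  # all three cloud-related controllers still owned by kube-controller-manager
  - name: route
    component: kube-controller-manager
  - name: service
    component: kube-controller-manager
  - name: cloud-node-lifecycle
    component: kube-controller-manager
```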
@@ -166,7 +167,6 @@ On each control plane node, save the content to `/etc/leadermigration.conf`,
 and update the manifest of `kube-controller-manager` so that the file is mounted inside the container at the same location.
 Also, update the same manifest to add the following arguments:

-- `--feature-gates=ControllerManagerLeaderMigration=true` to enable Leader Migration which is an alpha feature
 - `--enable-leader-migration` to enable Leader Migration on the controller manager
 - `--leader-migration-config=/etc/leadermigration.conf` to set configuration file

@@ -176,7 +176,6 @@ Restart `kube-controller-manager` on each node. At this moment, `kube-controller
 并更新 `kube-controller-manager` 清单,以便将文件安装在容器内的同一位置。
 另外,更新相同的清单,添加以下参数:

-- `--feature-gates=ControllerManagerLeaderMigration=true` 启用领导者迁移(这是 Alpha 版功能)
 - `--enable-leader-migration` 在控制器管理器上启用领导者迁移
 - `--leader-migration-config=/etc/leadermigration.conf` 设置配置文件

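For orientation, here is a minimal sketch of how the mount and the two remaining flags might appear in a `kube-controller-manager` static Pod manifest. The file path under `/etc/kubernetes/manifests`, the image placeholder, and the other flags are illustrative assumptions and not content of this commit.

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (illustrative excerpt)
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: "<kube-controller-manager image for version N>"   # illustrative placeholder
    command:
    - kube-controller-manager
    - --leader-elect=true
    - --cloud-provider=<in-tree-provider>                  # still set at version N
    - --enable-leader-migration                            # enable Leader Migration
    - --leader-migration-config=/etc/leadermigration.conf  # the configuration file
    volumeMounts:
    - name: leadermigration
      mountPath: /etc/leadermigration.conf
      readOnly: true
  volumes:
  - name: leadermigration
    hostPath:
      path: /etc/leadermigration.conf
      type: File
```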

@@ -196,7 +195,7 @@ Please note `component` field of each `controllerLeaders` changing from `kube-co

 ```yaml
 kind: LeaderMigrationConfiguration
-apiVersion: controllermanager.config.k8s.io/v1alpha1
+apiVersion: controllermanager.config.k8s.io/v1beta1
 leaderName: cloud-provider-extraction-migration
 resourceLock: leases
 controllerLeaders:
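As with the initial configuration, this hunk stops at `controllerLeaders:`. The hunk header notes that the `component` field of each entry changes from `kube-controller-manager` to `cloud-controller-manager`, so the remainder presumably reads roughly as follows (a sketch, not the verbatim file):

```yaml
controllerLeaders:
  - name: route
    component: cloud-controller-manager
  - name: service
    component: cloud-controller-manager
  - name: cloud-node-lifecycle
    component: cloud-controller-manager
```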
@@ -286,6 +285,22 @@ To re-enable Leader Migration, recreate the configuration file and add its mount
 最后删除 `/etc/leadermigration.conf`。
 要重新启用领导者迁移,请重新创建配置文件,并将其挂载和启用领导者迁移的标志添加回到 `cloud-controller-manager`。

+<!--
+### Default Configuration
+
+Starting with Kubernetes 1.22, Leader Migration provides a default configuration suitable for the default controller-to-manager assignment.
+The default configuration can be enabled by setting `--enable-leader-migration` but without `--leader-migration-config=`.
+
+For `kube-controller-manager` and `cloud-controller-manager`, if there are no flags that enable any in-tree cloud provider or change ownership of controllers, the default configuration can be used to avoid manual creation of the configuration file.
+-->
+### 默认配置 {#default-configuration}
+
+从 Kubernetes 1.22 开始,领导者迁移提供了一个默认配置,它适用于默认的控制器到管理器分配。
+可以通过设置 `--enable-leader-migration`,但不设置 `--leader-migration-config=` 来启用默认配置。
+
+对于 `kube-controller-manager` 和 `cloud-controller-manager`,如果没有用参数来启用树内云驱动或者改变控制器属主,
+则可以使用默认配置来避免手动创建配置文件。
+
 ## {{% heading "whatsnext" %}}
 <!--
 - Read the [Controller Manager Leader Migration](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2436-controller-manager-leader-migration) enhancement proposal
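Concretely, the default configuration added in this hunk amounts to passing the migration flag without a configuration file. A hedged sketch of the relevant `cloud-controller-manager` arguments is shown below; all other flags are omitted and the provider value is a placeholder.

```yaml
# Illustrative excerpt of a cloud-controller-manager manifest using the
# default Leader Migration configuration (note: no --leader-migration-config=).
command:
- cloud-controller-manager
- --cloud-provider=<your-provider>   # placeholder
- --leader-elect=true
- --enable-leader-migration          # default controller-to-manager assignment is used
```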

content/zh/docs/tasks/administer-cluster/cpu-management-policies.md

Lines changed: 37 additions & 1 deletion
@@ -88,19 +88,28 @@ CPU 管理器定期通过 CRI 写入资源更新,以保证内存中 CPU 分配
 同步频率通过新增的 Kubelet 配置参数 `--cpu-manager-reconcile-period` 来设置。
 如果不指定,默认与 `--node-status-update-frequency` 的周期相同。

+<!--
+The behavior of the static policy can be fine-tuned using the `--cpu-manager-policy-options` flag.
+The flag takes a comma-separated list of `key=value` policy options.
+-->
+Static 策略的行为可以使用 `--cpu-manager-policy-options` 参数来微调。
+该参数采用一个逗号分隔的 `key=value` 策略选项列表。
+
 <!--
 ### None policy

 The `none` policy explicitly enables the existing default CPU
 affinity scheme, providing no affinity beyond what the OS scheduler does
 automatically. Limits on CPU usage for
-[Guaranteed pods](/docs/tasks/configure-pod-container/quality-service-pod/)
+[Guaranteed pods](/docs/tasks/configure-pod-container/quality-service-pod/) and
+[Burstable pods](/docs/tasks/configure-pod-container/quality-service-pod/)
 are enforced using CFS quota.
 -->
 ### none 策略

 `none` 策略显式地启用现有的默认 CPU 亲和方案,不提供操作系统调度器默认行为之外的亲和性策略。
 通过 CFS 配额来实现 [Guaranteed pods](/zh/docs/tasks/configure-pod-container/quality-service-pod/)
+和 [Burstable pods](/zh/docs/tasks/configure-pod-container/quality-service-pod/)
 的 CPU 使用限制。

 <!--
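The policy options described above are passed to the kubelet together with the static policy. Below is a hedged sketch of a matching kubelet configuration: `cpuManagerPolicy`, `cpuManagerReconcilePeriod`, and `reservedSystemCPUs` are standard KubeletConfiguration fields, the values are illustrative, and the policy options themselves are given via the command-line flag named in the text.

```yaml
# Illustrative KubeletConfiguration enabling the static CPU manager policy.
# Policy options are passed on the kubelet command line, for example:
#   --cpu-manager-policy-options=full-pcpus-only=true
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
cpuManagerReconcilePeriod: 10s   # illustrative value
reservedSystemCPUs: "0,1"        # illustrative; the static policy requires some CPU reservation
```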
@@ -310,3 +319,30 @@ equal to one. The `nginx` container is granted 2 exclusive CPUs.
 同时,容器对 CPU 资源的限制值是一个大于或等于 1 的整数值。
 所以,该 `nginx` 容器被赋予 2 个独占 CPU。

+<!--
+#### Static policy options
+
+If the `full-pcpus-only` policy option is specified, the static policy will always allocate full physical cores.
+You can enable this option by adding `full-pcpus-only=true` to the CPUManager policy options.
+-->
+#### Static 策略选项
+
+如果使用 `full-pcpus-only` 策略选项,static 策略总是会分配完整的物理核心。
+你可以通过在 CPUManager 策略选项里加上 `full-pcpus-only=true` 来启用该选项。
+<!--
+By default, without this option, the static policy allocates CPUs using a topology-aware best-fit allocation.
+On SMT enabled systems, the policy can allocate individual virtual cores, which correspond to hardware threads.
+This can lead to different containers sharing the same physical cores; this behaviour in turn contributes
+to the [noisy neighbours problem](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors).
+-->
+默认情况下,如果不使用该选项,static 策略会使用拓扑感知最适合的分配方法来分配 CPU。
+在启用了 SMT 的系统上,此策略所分配的是与硬件线程对应的、独立的虚拟核。
+这会导致不同的容器共享相同的物理核心,该行为进而会导致
+[吵闹的邻居问题](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors)。
+<!--
+With the option enabled, the pod will be admitted by the kubelet only if the CPU request of all its containers
+can be fulfilled by allocating full physical cores.
+If the pod does not pass the admission, it will be put in Failed state with the message `SMTAlignmentError`.
+-->
+启用该选项之后,只有当一个 Pod 里所有容器的 CPU 请求都能够分配到完整的物理核心时,kubelet 才会接受该 Pod。
+如果 Pod 没有被准入,它会被置于 Failed 状态,错误消息是 `SMTAlignmentError`。
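To make the admission behaviour above concrete, here is a hedged example of a Guaranteed pod with an integer CPU request: with `full-pcpus-only=true`, the kubelet admits it only if the two requested CPUs can be satisfied with whole physical cores, and otherwise fails it with `SMTAlignmentError`. The pod name and image are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: full-pcpus-demo        # illustrative name
spec:
  containers:
  - name: app
    image: nginx               # illustrative image
    resources:
      requests:
        cpu: "2"               # integer CPU count -> exclusive CPUs under the static policy
        memory: "200Mi"
      limits:
        cpu: "2"
        memory: "200Mi"
```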

content/zh/docs/tasks/administer-cluster/limit-storage-consumption.md

Lines changed: 2 additions & 2 deletions
@@ -10,9 +10,9 @@ content_type: task
 <!-- overview -->

 <!--
-This example demonstrates an easy way to limit the amount of storage consumed in a namespace.
+This example demonstrates how to limit the amount of storage consumed in a namespace.
 -->
-此示例演示了一种限制名字空间中存储使用量的简便方法
+此示例演示了如何限制一个名字空间中的存储使用量。

 <!--
 The following resources are used in the demonstration: [ResourceQuota](/docs/concepts/policy/resource-quotas/),
