
Commit 01ebc1f

Merge pull request #29742 from chenxuc/admin5
[zh] sync admin cluster docs
2 parents b50819e + 649cc1d

6 files changed (+61, -39 lines changed)

content/zh/docs/tasks/administer-cluster/access-cluster-services.md

Lines changed: 3 additions & 3 deletions
@@ -46,7 +46,7 @@ You have several options for connecting to nodes, pods and services from outside
 - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
 the cluster. See the [services](/docs/concepts/services-networking/service/) and
 [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation.
-- Depending on your cluster environment, this may just expose the service to your corporate network,
+- Depending on your cluster environment, this may only expose the service to your corporate network,
 or it may expose it to the internet. Think about whether the service being exposed is secure.
 Does it do its own authentication?
 - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
@@ -148,15 +148,15 @@ See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/ac
 <!--
 #### Manually constructing apiserver proxy URLs

-As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
+As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
 `http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`[https:]service_name[:port_name]`*`/proxy`

 If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
 -->
 #### 手动构建 API 服务器代理 URLs {#manually-constructing-apiserver-proxy-urls}

 如前所述,你可以使用 `kubectl cluster-info` 命令取得服务的代理 URL。
-为了创建包含服务末端、后缀和参数的代理 URLs,你可以简单地在服务的代理 URL 中添加:
+为了创建包含服务末端、后缀和参数的代理 URLs,你可以在服务的代理 URL 中添加:
 `http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy`

 如果还没有为你的端口指定名称,你可以不用在 URL 中指定 *port_name*
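
As a concrete illustration of the URL pattern above (the control-plane address, namespace, service name, and port name are made up, not taken from this commit): a service named `my-service` in the `default` namespace with a port named `http` would be reachable at `http://203.0.113.10/api/v1/namespaces/default/services/my-service:http/proxy`.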

content/zh/docs/tasks/administer-cluster/certificates.md

Lines changed: 7 additions & 0 deletions
@@ -176,6 +176,13 @@ manually through `easyrsa`, `openssl` or `cfssl`.
 -CAcreateserial -out server.crt -days 10000 \
 -extensions v3_ext -extfile csr.conf

+<!--
+1. View the certificate signing request:
+-->
+1. 查看证书签名请求:
+
+openssl req -noout -text -in ./server.csr
+
 <!--
 1. View the certificate:
 -->

content/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md

Lines changed: 13 additions & 11 deletions
@@ -6,20 +6,25 @@ content_type: concept
 <!-- overview -->

 <!--
-In addition to Kubernetes core components like api-server, scheduler, controller-manager running on a master machine
-there are a number of add-ons which, for various reasons, must run on a regular cluster node (rather than the Kubernetes master).
+Kubernetes core components such as the API server, scheduler, and controller-manager run on a control plane node.
+However, add-ons must run on a regular cluster node.
 Some of these add-ons are critical to a fully functional cluster, such as metrics-server, DNS, and UI.
 A cluster may stop working properly if a critical add-on is evicted (either manually or as a side effect of another operation like upgrade)
 and becomes pending (for example when the cluster is highly utilized and either there are other pending pods that schedule into the space
 vacated by the evicted critical add-on pod or the amount of resources available on the node changed for some other reason).
 -->
-除了在主机上运行的 Kubernetes 核心组件(如 api-server 、scheduler 、controller-manager)之外,还有许多插件,由于各种原因,
-必须在常规集群节点(而不是 Kubernetes 主节点)上运行
+Kubernetes 核心组件(如 API 服务器、调度器、控制器管理器)在控制平面节点上运行。
+但是插件必须在常规集群节点上运行
 其中一些插件对于功能完备的群集至关重要,例如 Heapster、DNS 和 UI。
 如果关键插件被逐出(手动或作为升级等其他操作的副作用)或者变成挂起状态,群集可能会停止正常工作。
 关键插件进入挂起状态的例子有:集群利用率过高;被逐出的关键插件 Pod 释放了空间,但该空间被之前悬决的 Pod 占用;由于其它原因导致节点上可用资源的总量发生变化。

-
+<!--
+Note that marking a pod as critical is not meant to prevent evictions entirely; it only prevents the pod from becoming permanently unavailable.
+A static pod marked as critical, can't be evicted. However, a non-static pods marked as critical are always rescheduled.
+-->
+注意,把某个 Pod 标记为关键 Pod 并不意味着完全避免该 Pod 被逐出;它只能防止该 Pod 变成永久不可用。
+被标记为关键性的静态 Pod 不会被逐出。但是,被标记为关键性的非静态 Pod 总是会被重新调度。

 <!-- body -->

@@ -29,12 +34,9 @@ vacated by the evicted critical add-on pod or the amount of resources available
 ### 标记关键 Pod

 <!--
-To be considered critical, the pod has to run in the `kube-system` namespace (configurable via flag) and
-* Have the priorityClassName set as "system-cluster-critical" or "system-node-critical", the latter being the highest for entire cluster. Alternatively, you could add an annotation `scheduler.alpha.kubernetes.io/critical-pod` as key and empty string as value to your pod, but this annotation is deprecated as of version 1.13 and will be removed in 1.14.
+To mark a Pod as critical, set priorityClassName for that Pod to `system-cluster-critical` or `system-node-critical`. `system-node-critical` is the highest available priority, even higher than `system-cluster-critical`
 -->
-要将 pod 标记为关键性(critical),pod 必须在 kube-system 命名空间中运行(可通过参数配置)。
-同时,需要将 `priorityClassName` 设置为 `system-cluster-critical``system-node-critical` ,后者是整个群集的最高级别。
-或者,也可以为 Pod 添加名为 `scheduler.alpha.kubernetes.io/critical-pod`、值为空字符串的注解。
-不过,这一注解从 1.13 版本开始不再推荐使用,并将在 1.14 中删除。
+要将 Pod 标记为关键性(critical),设置 Pod 的 priorityClassName 为 `system-cluster-critical` 或者 `system-node-critical`
+`system-node-critical` 是最高级别的可用性优先级,甚至比 `system-cluster-critical` 更高。

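
To make the new wording above concrete, here is a minimal sketch of a Pod manifest marked as critical via `priorityClassName` (the Pod name and image are hypothetical, not part of this commit):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-addon-example            # hypothetical add-on Pod
  namespace: kube-system
spec:
  priorityClassName: system-cluster-critical   # or system-node-critical for the highest priority
  containers:
  - name: addon
    image: registry.k8s.io/pause:3.9           # placeholder image for illustration
```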

content/zh/docs/tasks/administer-cluster/securing-a-cluster.md

Lines changed: 18 additions & 17 deletions
@@ -29,7 +29,7 @@ and provides recommendations on overall security.
 <!--
 ## Controlling access to the Kubernetes API

-As Kubernetes is entirely API driven, controlling and limiting who can access the cluster and what actions
+As Kubernetes is entirely API-driven, controlling and limiting who can access the cluster and what actions
 they are allowed to perform is the first line of defense.
 -->
 ## 控制对 Kubernetes API 的访问
@@ -53,7 +53,7 @@ Kubernetes 期望集群中所有的 API 通信在默认情况下都使用 TLS
 ### API Authentication

 Choose an authentication mechanism for the API servers to use that matches the common access patterns
-when you install a cluster. For instance, small single user clusters may wish to use a simple certificate
+when you install a cluster. For instance, small single-user clusters may wish to use a simple certificate
 or static Bearer token approach. Larger clusters may wish to integrate an existing OIDC or LDAP server that
 allow users to be subdivided into groups.

@@ -80,7 +80,7 @@ Consult the [authentication reference document](/docs/reference/access-authn-aut
 Once authenticated, every API call is also expected to pass an authorization check. Kubernetes ships
 an integrated [Role-Based Access Control (RBAC)](/docs/reference/access-authn-authz/rbac/) component that matches an incoming user or group to a
 set of permissions bundled into roles. These permissions combine verbs (get, create, delete) with
-resources (pods, services, nodes) and can be namespace or cluster scoped. A set of out of the box
+resources (pods, services, nodes) and can be namespace-scoped or cluster-scoped. A set of out-of-the-box
 roles are provided that offer reasonable default separation of responsibility depending on what
 actions a client might want to perform. It is recommended that you use the [Node](/docs/reference/access-authn-authz/node/) and [RBAC](/docs/reference/access-authn-authz/rbac/) authorizers together, in combination with the
 [NodeRestriction](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission plugin.
@@ -110,8 +110,8 @@ With authorization, it is important to understand how updates on one object may
 other places. For instance, a user may not be able to create pods directly, but allowing them to
 create a deployment, which creates pods on their behalf, will let them create those pods
 indirectly. Likewise, deleting a node from the API will result in the pods scheduled to that node
-being terminated and recreated on other nodes. The out of the box roles represent a balance
-between flexibility and the common use cases, but more limited roles should be carefully reviewed
+being terminated and recreated on other nodes. The out-of-the-box roles represent a balance
+between flexibility and common use cases, but more limited roles should be carefully reviewed
 to prevent accidental escalation. You can make roles specific to your use case if the out-of-box ones don't meet your needs.

 Consult the [authorization reference section](/docs/reference/access-authn-authz/authorization/) for more information.
@@ -183,7 +183,7 @@ reserved resources like memory, or to provide default limits when none are speci
 ### Controlling what privileges containers run with

 A pod definition contains a [security context](/docs/tasks/configure-pod-container/security-context/)
-that allows it to request access to running as a specific Linux user on a node (like root),
+that allows it to request access to run as a specific Linux user on a node (like root),
 access to run privileged or access the host network, and other controls that would otherwise
 allow it to run unfettered on a hosting node. [Pod security policies](/docs/concepts/policy/pod-security-policy/)
 can limit which users or service accounts can provide dangerous security context settings. For example, pod security policies can limit volume mounts, especially `hostPath`, which are aspects of a pod that should be controlled.
@@ -227,11 +227,11 @@ now respect network policy.

 <!--
 Quota and limit ranges can also be used to control whether users may request node ports or
-load balanced services, which on many clusters can control whether those users applications
+load-balanced services, which on many clusters can control whether those users applications
 are visible outside of the cluster.

-Additional protections may be available that control network rules on a per plugin or per
-environment basis, such as per-node firewalls, physically separating cluster nodes to
+Additional protections may be available that control network rules on a per-plugin or
+per-environment basis, such as per-node firewalls, physically separating cluster nodes to
 prevent cross talk, or advanced networking policy.
 -->
 对于可以控制用户的应用程序是否在集群之外可见的许多集群,配额和限制范围也可用于
@@ -248,7 +248,7 @@ By default these APIs are accessible by pods running on an instance and can cont
 credentials for that node, or provisioning data such as kubelet credentials. These credentials
 can be used to escalate within the cluster or to other cloud services under the same account.

-When running Kubernetes on a cloud platform limit permissions given to instance credentials, use
+When running Kubernetes on a cloud platform, limit permissions given to instance credentials, use
 [network policies](/docs/tasks/administer-cluster/declare-network-policy/) to restrict pod access
 to the metadata API, and avoid using provisioning data to deliver secrets.
 -->
@@ -268,7 +268,7 @@ to the metadata API, and avoid using provisioning data to deliver secrets.

 By default, there are no restrictions on which nodes may run a pod. Kubernetes offers a
 [rich set of policies for controlling placement of pods onto nodes](/docs/concepts/configuration/assign-pod-node/)
-and the [taint based pod placement and eviction](/docs/concepts/configuration/taint-and-toleration/)
+and the [taint-based pod placement and eviction](/docs/concepts/configuration/taint-and-toleration/)
 that are available to end users. For many clusters use of these policies to separate workloads
 can be a convention that authors adopt or enforce via tooling.

@@ -360,7 +360,7 @@ Kubernetes 的 alpha 和 beta 特性还在努力开发中,可能存在导致
 The shorter the lifetime of a secret or credential the harder it is for an attacker to make
 use of that credential. Set short lifetimes on certificates and automate their rotation. Use
 an authentication provider that can control how long issued tokens are available and use short
-lifetimes where possible. If you use service account tokens in external integrations, plan to
+lifetimes where possible. If you use service-account tokens in external integrations, plan to
 rotate those tokens frequently. For example, once the bootstrap phase is complete, a bootstrap token used for setting up nodes should be revoked or its authorization removed.
 -->
 ### 频繁回收基础设施证书
@@ -406,9 +406,10 @@ and may grant an attacker significant visibility into the state of your cluster.
 your backups using a well reviewed backup and encryption solution, and consider using full disk
 encryption where possible.

-Kubernetes 1.7 contains [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/), an alpha feature that will encrypt `Secret` resources in etcd, preventing
+Kubernetes supports [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/), a feature
+introduced in 1.7, and beta since 1.13. This will encrypt `Secret` resources in etcd, preventing
 parties that gain access to your etcd backups from viewing the content of those secrets. While
-this feature is currently experimental, it may offer an additional level of defense when backups
+this feature is currently beta, it offers an additional level of defense when backups
 are not encrypted or an attacker gains read access to etcd.
 -->
 ### 对 Secret 进行静态加密
@@ -417,9 +418,9 @@ are not encrypted or an attacker gains read access to etcd.
 并且可以授予攻击者对集群状态的可见性。
 始终使用经过良好审查的备份和加密解决方案来加密备份,并考虑在可能的情况下使用全磁盘加密。

-Kubernetes 1.7 包含了[静态数据加密](/zh/docs/tasks/administer-cluster/encrypt-data/)
-它是一个 alpha 特性,会加密 etcd 里面的 `Secret` 资源,以防止某一方通过查看
-etcd 的备份文件查看到这些 Secret 的内容。虽然目前这还只是实验性的功能
+Kubernetes 支持 [静态数据加密](/zh/docs/tasks/administer-cluster/encrypt-data/)
+该功能在 1.7 版本引入,并在 1.13 版本成为 Beta。它会加密 etcd 里面的 `Secret` 资源,以防止某一方通过查看
+etcd 的备份文件查看到这些 Secret 的内容。虽然目前这还只是 Beta 阶段的功能
 但是在备份没有加密或者攻击者获取到 etcd 的读访问权限的时候,它能提供额外的防御层级。

 <!--
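
To make the encryption-at-rest wording above concrete, a minimal `EncryptionConfiguration` might look like the sketch below (the key name and secret are placeholders; generate your own 32-byte key and point the API server at the file with `--encryption-provider-config`):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1                         # placeholder key name
              secret: <BASE64-ENCODED-32-BYTE-KEY>
      - identity: {}                             # fallback so existing plaintext data stays readable
```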

content/zh/docs/tasks/administer-cluster/sysctl-cluster.md

Lines changed: 10 additions & 2 deletions
@@ -12,7 +12,7 @@ content_type: task

 <!-- overview -->

-
+{{< feature-state for_k8s_version="v1.21" state="stable" >}}
 <!--
 This document describes how to configure and use kernel parameters within a
 Kubernetes cluster using the {{< glossary_tooltip term_id="sysctl" >}}
@@ -24,7 +24,13 @@ interface.
 ## {{% heading "prerequisites" %}}


-{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+{{< include "task-tutorial-prereqs.md" >}}
+
+<!--
+For some steps, you also need to be able to reconfigure the command line
+options for the kubelets running on your cluster.
+-->
+对一些步骤,你需要能够重新配置在你的集群里运行的 kubelet 命令行的选项。

 <!-- steps -->

@@ -272,6 +278,8 @@ to schedule those pods onto the right nodes.

 ## PodSecurityPolicy

+{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
+
 <!--
 You can further control which sysctls can be set in pods by specifying lists of
 sysctls or sysctl patterns in the `forbiddenSysctls` and/or
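
Since the hunk above ends mid-sentence, here is a sketch of how those fields are typically used in a PodSecurityPolicy (the policy name and sysctl patterns are illustrative only, not part of this commit):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: sysctl-example                # illustrative name
spec:
  # reject pods that try to set any kernel.shm* sysctl
  forbiddenSysctls:
    - "kernel.shm*"
  # allow this otherwise-unsafe sysctl to be requested by pods
  allowedUnsafeSysctls:
    - "net.core.somaxconn"
  # remaining required PodSecurityPolicy fields, kept permissive for brevity
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```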

content/zh/docs/tasks/administer-cluster/topology-manager.md

Lines changed: 10 additions & 6 deletions
@@ -132,6 +132,15 @@ To align CPU resources with other requested resources in a Pod Spec, the CPU Man
 参看[控制 CPU 管理策略](/zh/docs/tasks/administer-cluster/cpu-management-policies/).
 {{< /note >}}

+<!--
+To align memory (and hugepages) resources with other requested resources in a Pod Spec, the Memory Manager should be enabled and proper Memory Manager policy should be configured on a Node. Examine [Memory Manager](/docs/tasks/administer-cluster/memory-manager/) documentation.
+-->
+{{< note >}}
+为了将 Pod 规约中的 memory(和 hugepages)资源与所请求的其他资源对齐,需要启用内存管理器,
+并且在节点配置适当的内存管理器策略。查看[内存管理器](/zh/docs/tasks/administer-cluster/memory-manager/)
+文档。
+{{< /note >}}
+
 <!--
 ### Topology Manager Scopes

@@ -487,15 +496,10 @@ Using this information the Topology Manager calculates the optimal hint for the
 1. The maximum number of NUMA nodes that Topology Manager allows is 8. With more than 8 NUMA nodes there will be a state explosion when trying to enumerate the possible NUMA affinities and generating their hints.

 2. The scheduler is not topology-aware, so it is possible to be scheduled on a node and then fail on the node due to the Topology Manager.
-
-3. The Device Manager and the CPU Manager are the only components to adopt the Topology Manager's HintProvider interface. This means that NUMA alignment can only be achieved for resources managed by the CPU Manager and the Device Manager. Memory or Hugepages are not considered by the Topology Manager for NUMA alignment.
 -->
 ### 已知的局限性

 1. 拓扑管理器所能处理的最大 NUMA 节点个数是 8。若 NUMA 节点数超过 8,
 枚举可能的 NUMA 亲和性并为之生成提示时会发生状态爆炸。
-2. 调度器不支持拓扑功能,因此可能会由于拓扑管理器的原因而在节点上进行调度,然后在该节点上调度失败。
-3. 设备管理器和 CPU 管理器时能够采纳拓扑管理器 HintProvider 接口的唯一两个组件。
-这意味着 NUMA 对齐只能针对 CPU 管理器和设备管理器所管理的资源实现。
-内存和大页面在拓扑管理器决定 NUMA 对齐时都还不会被考虑在内。
+2. 调度器不是拓扑感知的,所以有可能一个 Pod 被调度到一个节点之后,会因为拓扑管理器的缘故在该节点上启动失败。

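
As a rough sketch of how the memory-alignment note above maps onto node configuration (the values are illustrative; see the linked Memory Manager documentation for authoritative guidance), the relevant kubelet configuration fields might look like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
topologyManagerPolicy: single-numa-node   # require NUMA alignment on this node
topologyManagerScope: pod
cpuManagerPolicy: static                  # CPU Manager acts as a hint provider
memoryManagerPolicy: Static               # Memory Manager acts as a hint provider
reservedMemory:                           # illustrative reservation for NUMA node 0
  - numaNode: 0
    limits:
      memory: 1Gi
```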
