Commit ec3d343

Merge pull request #23724 from tengqm/zh-links-setup-2
[zh] Fix links in setup section (2)
2 parents 57b6588 + 73415d9

5 files changed: +280 / -284 lines


content/zh/docs/setup/best-practices/certificates.md

Lines changed: 24 additions & 19 deletions
@@ -6,24 +6,25 @@ content_type: concept
 weight: 40
 ---
 <!--
----
 title: PKI certificates and requirements
 reviewers:
 - sig-cluster-lifecycle
 content_type: concept
 weight: 40
----
 -->

 <!-- overview -->

 <!--
 Kubernetes requires PKI certificates for authentication over TLS.
 If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), the certificates that your cluster requires are automatically generated.
-You can also generate your own certificates -- for example, to keep your private keys more secure by not storing them on the API server.
+You can also generate your own certificates - for example, to keep your private keys more secure by not storing them on the API server.
 This page explains the certificates that your cluster requires.
 -->
-Kubernetes requires PKI certificates for TLS-based authentication. If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), the certificates your cluster needs are generated automatically. You can also generate your own certificates. For example, not storing private keys on the API server keeps them more secure. This page explains the certificates your cluster requires.
+Kubernetes requires PKI certificates for TLS-based authentication. If you install Kubernetes with
+[kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/),
+the certificates your cluster needs are generated automatically. You can also generate your own certificates.
+For example, not storing private keys on the API server keeps them more secure. This page explains the certificates your cluster requires.

@@ -57,11 +58,13 @@ Kubernetes requires PKI for the following operations:
 * Client certificate/kubeconfig for the scheduler to talk to the API server
 * Client and server certificates for the [front-proxy](/zh/docs/tasks/extend-kubernetes/configure-aggregation-layer/)

-{{< note >}}
 <!--
 `front-proxy` certificates are required only if you run kube-proxy to support [an extension API server](/docs/tasks/access-kubernetes-api/setup-extension-api-server/).
 -->
-Only when you run kube-proxy to support [an extension API server](/docs/tasks/access-kubernetes-api/setup-extension-api-server/) do you need the `front-proxy` certificates
+{{< note >}}
+Only when you run kube-proxy to support
+[an extension API server](/zh/docs/tasks/extend-kubernetes/setup-extension-api-server/)
+do you need the `front-proxy` certificates
 {{< /note >}}

 <!--
@@ -146,9 +149,12 @@ Required certificates:

 where `kind` maps to one or more of the [x509 key usage][usage] types:
 -->
-[1]: Any other IP or DNS name used to reach the cluster (such as the fixed IP or DNS name that [kubeadm][kubeadm] uses for load balancing, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`, `kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`
+[1]: Any other IP or DNS name used to reach the cluster
+(such as the fixed IP or DNS name that [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) uses for load balancing,
+`kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,
+`kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`).

-Here, `kind` corresponds to one or more of the [x509 key usage][usage] types
+Here, `kind` corresponds to one or more of the [x509 key usage][https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage] types

 <!--
 | kind | Key usage |
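A quick way to relate note [1] above to an actual cluster is to read the Subject Alternative Name extension of the API server's serving certificate and compare it against those DNS names and IPs. The sketch below is not part of this commit; it assumes Python 3 with the `cryptography` package installed, and the `apiserver.crt` path is a made-up example:

```python
# Illustrative sketch only (not from the PR above): list the SANs of a
# PEM-encoded serving certificate. "apiserver.crt" is a hypothetical path.
from cryptography import x509

with open("apiserver.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# Raises ExtensionNotFound if the certificate carries no SAN extension.
san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
print("DNS names:   ", san.get_values_for_type(x509.DNSName))
print("IP addresses:", [str(ip) for ip in san.get_values_for_type(x509.IPAddress)])
```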
@@ -227,20 +233,21 @@ You must manually configure these administrator account and service accounts:
 -->
 ## Configure certificates for user accounts

-You (您) must manually configure the following administrator account and service accounts
+You (你) must manually configure the following administrator account and service accounts

-| Filename                | Credential name            | Default CN                     | O (in Subject) |
-|-------------------------|----------------------------|--------------------------------|----------------|
-| admin.conf              | default-admin              | kubernetes-admin               | system:masters |
-| kubelet.conf            | default-auth               | system:node:`<nodeName>` (see note) | system:nodes |
-| controller-manager.conf | default-controller-manager | system:kube-controller-manager |                |
-| scheduler.conf          | default-scheduler          | system:kube-scheduler          |                |
+| Filename                | Credential name            | Default CN                     | O (in Subject)      |
+|-------------------------|----------------------------|--------------------------------|---------------------|
+| admin.conf              | default-admin              | kubernetes-admin               | system:masters      |
+| kubelet.conf            | default-auth               | system:node:`<nodeName>` (see note) | system:nodes   |
+| controller-manager.conf | default-controller-manager | system:kube-controller-manager |                     |
+| scheduler.conf          | default-scheduler          | system:kube-scheduler          |                     |

-{{< note >}}
 <!--
 The value of `<nodeName>` for `kubelet.conf` **must** match precisely the value of the node name provided by the kubelet as it registers with the apiserver. For further details, read the [Node Authorization](/docs/reference/access-authn-authz/node/).
 -->
-The value of `<nodeName>` in `kubelet.conf` **must** exactly match the node name that the kubelet provides when registering with the apiserver. For further details, read [Node Authorization](/docs/reference/access-authn-authz/node/)
+{{< note >}}
+The value of `<nodeName>` in `kubelet.conf` **must** exactly match the node name that the kubelet provides when registering with the apiserver.
+For further details, read [Node Authorization](/zh/docs/reference/access-authn-authz/node/)
 {{< /note >}}

 <!--
@@ -278,5 +285,3 @@ These files are used as follows:
 | controller-manager.conf | kube-controller-manager | Must be added to the `manifests/kube-controller-manager.yaml` manifest |
 | scheduler.conf          | kube-scheduler          | Must be added to the `manifests/kube-scheduler.yaml` manifest |

-[usage]: https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage
-[kubeadm]: /docs/reference/setup-tools/kubeadm/kubeadm/
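To connect the CN/O table above to a real kubeadm-generated kubeconfig, one can decode the embedded client certificate and print its subject. This sketch is illustrative and not part of this commit; it assumes Python 3 with `PyYAML` and `cryptography` installed, an `admin.conf` in the current directory, and that the kubeconfig embeds its credential as `client-certificate-data`, as kubeadm does by default:

```python
# Illustrative sketch only: print the CN and O of the client certificate
# embedded in a kubeconfig. For admin.conf the table above expects
# CN=kubernetes-admin and O=system:masters.
import base64

import yaml
from cryptography import x509
from cryptography.x509.oid import NameOID

with open("admin.conf") as f:  # hypothetical path to a kubeadm kubeconfig
    kubeconfig = yaml.safe_load(f)

cert_pem = base64.b64decode(kubeconfig["users"][0]["user"]["client-certificate-data"])
cert = x509.load_pem_x509_certificate(cert_pem)

cn = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
orgs = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)]
print(f"CN={cn}  O={orgs}")
```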

content/zh/docs/setup/best-practices/cluster-large.md

Lines changed: 16 additions & 11 deletions
@@ -42,7 +42,9 @@ A cluster is a set of nodes (physical or virtual machines) running Kubernetes ag
 <!--
 Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh)).
 -->
-Normally, the number of nodes in a cluster is controlled by the `NUM_NODES` parameter in the platform-specific configuration file `config-default.sh` (see, for example, [GCE's `config-default.sh`](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh)).
+Normally, the number of nodes in a cluster is controlled by the platform-specific configuration file `config-default.sh`
+(see, for example, [GCE's `config-default.sh`](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh))
+and its `NUM_NODES` parameter.

 <!--
 Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up.
@@ -175,7 +177,9 @@ On AWS, master node sizes are currently set at cluster startup time and do not c
 <!--
 To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
 -->
-To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{<param "githubbranch" >}}/cluster/addons) from consuming all the resources available on a node, Kubernetes limits the CPU and memory resources that addon containers may consume (see PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
+To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{<param "githubbranch" >}}/cluster/addons)
+from consuming all the resources available on a node, Kubernetes limits the CPU and memory resources that addon containers may consume
+(see PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).

 For example:

@@ -211,33 +215,34 @@ To avoid running into cluster addon resource issues, when creating a cluster wit
 * [FluentD with GCP Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)
 -->
 * Depending on cluster size, if the following addons are used, raise their memory and CPU limits (each of these addons has a single replica that serves the whole cluster, so memory and CPU usage tends to grow in proportion to the cluster's size/load):
-* [InfluxDB and Grafana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
-* [kubedns, dnsmasq and sidecar](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/kube-dns.yaml.in)
-* [Kibana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml)
+* [InfluxDB and Grafana](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
+* [kubedns, dnsmasq and sidecar](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/kube-dns.yaml.in)
+* [Kibana](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml)
 * Depending on cluster size, if the following addons are used, adjust their replica count (each of these addons has multiple replicas; adding replicas helps handle the increased load, but since the load per replica also grows slightly, also consider raising their CPU/memory limits):
-* [elasticsearch](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml)
+* [elasticsearch](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml)
 * Depending on cluster size, if the following addons are used, cap their memory and CPU limits (these addons have one replica on every node, but CPU/memory usage also grows slightly with cluster load/size):
-* [FluentD and ElasticSearch Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml)
-* [FluentD and GCP Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)
+* [FluentD and ElasticSearch Plugin](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml)
+* [FluentD and GCP Plugin](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)

 <!--
 Heapster's resource limits are set dynamically based on the initial size of your cluster (see [#16185](http://issue.k8s.io/16185)
 and [#22940](http://issue.k8s.io/22940)). If you find that Heapster is running
 out of resources, you should adjust the formulas that compute heapster memory request (see those PRs for details).
 -->
-Heapster's resource limits are tied to the initial size of your cluster (see [#16185](http://issue.k8s.io/16185)
+Heapster's resource limits are tied to the initial size of your cluster (see [#16185](https://issue.k8s.io/16185)
 and [#22940](http://issue.k8s.io/22940)). If you find that Heapster is running short of resources, you should adjust the formula used to compute its memory request (see the relevant PRs for details).

 <!--
 For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](/docs/concepts/configuration/manage-compute-resources-container/#troubleshooting).
 -->
-For directions on how to detect whether addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](/docs/concepts/configuration/manage-compute-resources-container/#troubleshooting) section.
+For directions on how to detect whether addon containers are hitting resource limits, see the
+[Troubleshooting section of Compute Resources](/zh/docs/concepts/configuration/manage-resources-containers/#troubleshooting) section.

 <!--
 In the [future](http://issue.k8s.io/13048), we anticipate to set all cluster addon resource limits based on cluster size, and to dynamically adjust them if you grow or shrink your cluster.
 We welcome PRs that implement those features.
 -->
-In the [future](http://issue.k8s.io/13048), we expect to set all cluster addon resource limits based on cluster size, and to adjust them dynamically as the cluster grows or shrinks.
+In the [future](https://issue.k8s.io/13048), we expect to set all cluster addon resource limits based on cluster size, and to adjust them dynamically as the cluster grows or shrinks.
 We welcome you to implement these features.

 <!--
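The Heapster note in the hunk above mentions formulas that derive an addon's memory request from the initial cluster size. As a rough, self-contained illustration of that idea (the constants below are placeholders, not values used by Kubernetes or Heapster), such a formula is typically a base amount plus a per-node increment:

```python
# Illustrative sketch only: a linear, cluster-size-dependent memory request.
# base_mib and per_node_mib are made-up placeholders, not Kubernetes defaults.
def addon_memory_request_mib(num_nodes: int, base_mib: int = 200, per_node_mib: int = 4) -> int:
    """Return a memory request (MiB) that grows linearly with the node count."""
    return base_mib + per_node_mib * num_nodes

if __name__ == "__main__":
    for nodes in (10, 100, 1000):
        print(f"{nodes:>4} nodes -> {addon_memory_request_mib(nodes)} MiB")
```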