
Commit 06273bf

Merge pull request #34237 from howieyuen/zh-34221-concepts-5
[zh] Resync concepts files after zh language renaming (concepts-5)
2 parents b11335c + 2b92a82 commit 06273bf

File tree: 7 files changed, +444 −202 lines

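The bulk of the changes below mechanically rewrite documentation links from the old `/zh/` prefix to the new `/zh-cn/` prefix (the remaining edits, such as dropping Romana from the CNI list and fixing the `inital` typo, are manual). A rename of this kind is typically scripted; the sketch below shows one plausible way to do it — the actual command the PR author used is not recorded in the commit, and GNU `sed`/`xargs` are assumed:

```shell
# Rewrite Markdown links "](/zh/docs/..." to "](/zh-cn/docs/..." in the
# localized sources. Anchoring the pattern on "](" leaves English-source
# links like "](/docs/..." inside <!-- --> comment blocks untouched.
# GNU sed (-i) and GNU xargs (-r, skip if no input) assumed.
grep -rl '](/zh/docs/' content/zh-cn --include='*.md' \
  | xargs -r sed -i 's#](/zh/docs/#](/zh-cn/docs/#g'
```

Because the substitution is anchored and applied uniformly, the resulting diff stays mechanical and easy to review, which matches the shape of the hunks in this commit.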

content/zh-cn/docs/concepts/cluster-administration/addons.md

Lines changed: 6 additions & 7 deletions

@@ -30,7 +30,7 @@ Add-ons 扩展了 Kubernetes 的功能。
 * [Calico](https://docs.projectcalico.org/latest/getting-started/kubernetes/) is a secure L3 networking and network policy provider.
 * [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
 * [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported.
-* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
+* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave.
 * [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
 * [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
 * [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
@@ -40,7 +40,7 @@ Add-ons 扩展了 Kubernetes 的功能。
 * [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking
 * [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
 * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
-* **Romana** is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
+* [Romana](https://github.com/romana) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) API.
 * [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
 -->
 ## 网络和网络策略
@@ -54,7 +54,7 @@ Add-ons 扩展了 Kubernetes 的功能。
 * [Cilium](https://github.com/cilium/cilium) 是一个 L3 网络和网络策略插件,能够透明的实施 HTTP/API/L7 策略。
   同时支持路由(routing)和覆盖/封装(overlay/encapsulation)模式。
 * [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) 使 Kubernetes 无缝连接到一种 CNI 插件,
-  例如:Flannel、Calico、Canal、Romana 或者 Weave。
+  例如:Flannel、Calico、Canal 或者 Weave。
 * [Contiv](https://contivpp.io/) 为各种用例和丰富的策略框架提供可配置的网络
   (使用 BGP 的本机 L3、使用 vxlan 的覆盖、标准 L2 和 Cisco-SDN/ACI)。
   Contiv 项目完全[开源](https://github.com/contiv)
@@ -84,9 +84,8 @@ Add-ons 扩展了 Kubernetes 的功能。
   CaaS / PaaS 平台(例如关键容器服务(PKS)和 OpenShift)之间的集成。
 * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst)
   是一个 SDN 平台,可在 Kubernetes Pods 和非 Kubernetes 环境之间提供基于策略的联网,并具有可视化和安全监控。
-* Romana 是一个 pod 网络的第三层解决方案,并支持
-  [NetworkPolicy API](/zh/docs/concepts/services-networking/network-policies/)
-  Kubeadm add-on 安装细节可以在[这里](https://github.com/romana/romana/tree/master/containerize)找到。
+* [Romana](https://github.com/romana) 是一个 Pod 网络的第三层解决方案,并支持
+  [NetworkPolicy](/zh-cn/docs/concepts/services-networking/network-policies/) API。
 * [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/)
   提供在网络分组两端参与工作的网络和网络策略,并且不需要额外的数据库。

@@ -129,7 +128,7 @@ Add-ons 扩展了 Kubernetes 的功能。
   运行虚拟机的 add-ons。通常运行在裸机集群上。
 * [节点问题检测器](https://github.com/kubernetes/node-problem-detector) 在 Linux 节点上运行,
   并将系统问题报告为[事件](/docs/reference/kubernetes-api/cluster-resources/event-v1/)
-  [节点状况](/zh/docs/concepts/architecture/nodes/#condition)
+  [节点状况](/zh-cn/docs/concepts/architecture/nodes/#condition)

 <!--
 ## Legacy Add-ons

content/zh-cn/docs/concepts/cluster-administration/flow-control.md

Lines changed: 1 addition & 1 deletion

@@ -584,7 +584,7 @@ opinions of the proper content of these objects.
 就可能出现抖动。

 <!--
-Each `kube-apiserver` makes an inital maintenance pass over the
+Each `kube-apiserver` makes an initial maintenance pass over the
 mandatory and suggested configuration objects, and after that does
 periodic maintenance (once per minute) of those objects.
content/zh-cn/docs/concepts/extend-kubernetes/operator.md

Lines changed: 2 additions & 0 deletions

@@ -212,6 +212,7 @@ Operator.
 {{% thirdparty-content %}}

 * [Charmed Operator Framework](https://juju.is/)
+* [Java Operator SDK](https://github.com/java-operator-sdk/java-operator-sdk)
 * [Kopf](https://github.com/nolar/kopf) (Kubernetes Operator Pythonic Framework)
 * [kubebuilder](https://book.kubebuilder.io/)
 * [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (dotnet operator SDK)
@@ -226,6 +227,7 @@ you implement yourself
 {{% thirdparty-content %}}

 * [Charmed Operator Framework](https://juju.is/)
+* [Java Operator SDK](https://github.com/java-operator-sdk/java-operator-sdk)
 * [Kopf](https://github.com/nolar/kopf) (Kubernetes Operator Pythonic Framework)
 * [kubebuilder](https://book.kubebuilder.io/)
 * [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (dotnet operator SDK)

content/zh-cn/docs/concepts/policy/resource-quotas.md

Lines changed: 15 additions & 17 deletions

@@ -41,8 +41,7 @@ Resource quotas work like this:
 资源配额的工作方式如下:

 <!--
-- Different teams work in different namespaces. Currently this is voluntary, but
-  support for making this mandatory via ACLs is planned.
+- Different teams work in different namespaces. This can be enforced with [RBAC](/docs/reference/access-authn-authz/rbac/).
 - The administrator creates one ResourceQuota for each namespace.
 - Users create resources (pods, services, etc.) in the namespace, and the quota system
   tracks usage to ensure it does not exceed hard resource limits defined in a ResourceQuota.
@@ -53,8 +52,7 @@ Resource quotas work like this:
   the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements.
   See the [walkthrough](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) for an example of how to avoid this problem.
 -->
-- 不同的团队可以在不同的命名空间下工作,目前这是非约束性的,在未来的版本中可能会通过
-  ACL (Access Control List 访问控制列表) 来实现强制性约束。
+- 不同的团队可以在不同的命名空间下工作。这可以通过 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) 强制执行。
 - 集群管理员可以为每个命名空间创建一个或多个 ResourceQuota 对象。
 - 当用户在命名空间下创建资源(如 Pod、Service 等)时,Kubernetes 的配额系统会
   跟踪集群的资源使用情况,以确保使用的资源用量不超过 ResourceQuota 中定义的硬性资源限额。
@@ -65,14 +63,14 @@ Resource quotas work like this:
   提示: 可使用 `LimitRanger` 准入控制器来为没有设置计算资源需求的 Pod 设置默认值。

   若想避免这类问题,请参考
-  [演练](/zh/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)示例。
+  [演练](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)示例。

 <!--
 The name of a ResourceQuota object must be a valid
 [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
 -->
 ResourceQuota 对象的名称必须是合法的
-[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
+[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)

 <!--
 Examples of policies that could be created using namespaces and quotas are:
@@ -130,7 +128,7 @@ that can be requested in a given namespace.
 ## 计算资源配额

 用户可以对给定命名空间下的可被请求的
-[计算资源](/zh/docs/concepts/configuration/manage-resources-containers/)
+[计算资源](/zh-cn/docs/concepts/configuration/manage-resources-containers/)
 总量进行限制。

 <!--
@@ -168,7 +166,7 @@ In addition to the resources mentioned above, in release 1.10, quota support for
 ### 扩展资源的资源配额

 除上述资源外,在 Kubernetes 1.10 版本中,还添加了对
-[扩展资源](/zh/docs/concepts/configuration/manage-resources-containers/#extended-resources)
+[扩展资源](/zh-cn/docs/concepts/configuration/manage-resources-containers/#extended-resources)
 的支持。

 <!--
@@ -202,7 +200,7 @@ In addition, you can limit consumption of storage resources based on associated
 -->
 ## 存储资源配额

-用户可以对给定命名空间下的[存储资源](/zh/docs/concepts/storage/persistent-volumes/)
+用户可以对给定命名空间下的[存储资源](/zh-cn/docs/concepts/storage/persistent-volumes/)
 总量进行限制。

 此外,还可以根据相关的存储类(Storage Class)来限制存储资源的消耗。
@@ -218,9 +216,9 @@ In addition, you can limit consumption of storage resources based on associated
 | 资源名称 | 描述 |
 | --------------------- | ----------------------------------------------------------- |
 | `requests.storage` | 所有 PVC,存储资源的需求总量不能超过该值。 |
-| `persistentvolumeclaims` | 在该命名空间中所允许的 [PVC](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 总量。 |
+| `persistentvolumeclaims` | 在该命名空间中所允许的 [PVC](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 总量。 |
 | `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | 在所有与 `<storage-class-name>` 相关的持久卷申领中,存储请求的总和不能超过该值。 |
-| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | 在与 storage-class-name 相关的所有持久卷申领中,命名空间中可以存在的[持久卷申领](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)总数。 |
+| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | 在与 storage-class-name 相关的所有持久卷申领中,命名空间中可以存在的[持久卷申领](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)总数。 |

 <!--
 For example, if an operator wants to quota storage with `gold` storage class separate from `bronze` storage class, the operator can
@@ -258,7 +256,7 @@ Refer to [Logging Architecture](/docs/concepts/cluster-administration/logging/)
 -->
 如果所使用的是 CRI 容器运行时,容器日志会被计入临时存储配额。
 这可能会导致存储配额耗尽的 Pods 被意外地驱逐出节点。
-参考[日志架构](/zh/docs/concepts/cluster-administration/logging/)
+参考[日志架构](/zh-cn/docs/concepts/cluster-administration/logging/)
 了解详细信息。
 {{< /note >}}
@@ -343,7 +341,7 @@ The following types are supported:
 | 资源名称 | 描述 |
 | ------------------------------- | ------------------------------------------------- |
 | `configmaps` | 在该命名空间中允许存在的 ConfigMap 总数上限。 |
-| `persistentvolumeclaims` | 在该命名空间中允许存在的 [PVC](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 的总数上限。 |
+| `persistentvolumeclaims` | 在该命名空间中允许存在的 [PVC](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 的总数上限。 |
 | `pods` | 在该命名空间中允许存在的非终止状态的 Pod 总数上限。Pod 终止状态等价于 Pod 的 `.status.phase in (Failed, Succeeded)` 为真。 |
 | `replicationcontrollers` | 在该命名空间中允许存在的 ReplicationController 总数上限。 |
 | `resourcequotas` | 在该命名空间中允许存在的 ResourceQuota 总数上限。 |
@@ -396,8 +394,8 @@ Resources specified on the quota outside of the allowed set results in a validat
 | `NotTerminating` | 匹配所有 `spec.activeDeadlineSeconds` 是 nil 的 Pod。 |
 | `BestEffort` | 匹配所有 Qos 是 BestEffort 的 Pod。 |
 | `NotBestEffort` | 匹配所有 Qos 不是 BestEffort 的 Pod。 |
-| `PriorityClass` | 匹配所有引用了所指定的[优先级类](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption)的 Pods。 |
-| `CrossNamespacePodAffinity` | 匹配那些设置了跨名字空间 [(反)亲和性条件](/zh/docs/concepts/scheduling-eviction/assign-pod-node)的 Pod。 |
+| `PriorityClass` | 匹配所有引用了所指定的[优先级类](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption)的 Pods。 |
+| `CrossNamespacePodAffinity` | 匹配那些设置了跨名字空间 [(反)亲和性条件](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node)的 Pod。 |

 <!--
 The `BestEffort` scope restricts a quota to tracking the following resource:
@@ -485,7 +483,7 @@ Pods can be created at a specific [priority](/docs/concepts/scheduling-eviction/
 You can control a pod's consumption of system resources based on a pod's priority, by using the `scopeSelector`
 field in the quota spec.
 -->
-Pod 可以创建为特定的[优先级](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)。
+Pod 可以创建为特定的[优先级](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)。
 通过使用配额规约中的 `scopeSelector` 字段,用户可以根据 Pod 的优先级控制其系统资源消耗。

 <!--
@@ -1065,7 +1063,7 @@ and it is to be created in a namespace other than `kube-system`.
 - See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)
 -->
 - 查看[资源配额设计文档](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md)
-- 查看[如何使用资源配额的详细示例](/zh/docs/tasks/administer-cluster/quota-api-object/)
+- 查看[如何使用资源配额的详细示例](/zh-cn/docs/tasks/administer-cluster/quota-api-object/)
 - 阅读[优先级类配额支持的设计文档](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md)
   了解更多信息。
 - 参阅 [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)
