content/zh-cn/docs/concepts/workloads/controllers/replicationcontroller.md
24 additions, 25 deletions
@@ -26,10 +26,10 @@ weight: 90
<!-- overview -->
+ {{< note >}}
<!--
A [`Deployment`](/docs/concepts/workloads/controllers/deployment/) that configures a [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is now the recommended way to set up replication.
For general information about working with configuration files, see [object management](/docs/concepts/overview/working-with-objects/object-management/).
A ReplicationController also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
The `.spec.template` is the only required field of the `.spec`.
- The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an `apiVersion` or `kind`.
+ The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`.
In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate
@@ -221,7 +225,7 @@ labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers
Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is allowed, which is the default if not specified.
For local container restarts, ReplicationControllers delegate to an agent on the node,
- for example the [Kubelet](/docs/admin/kubelet/) or Docker.
+ for example the [Kubelet](/docs/reference/command-line-tools-reference/kubelet/) or Docker.
-->
Besides the fields required for a Pod, a pod template in a ReplicationController must specify appropriate labels and an appropriate restart policy.
For labels, make sure not to overlap with other controllers. See [Pod selector](#pod-selector).
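To make the template requirements above concrete, here is a minimal sketch of a ReplicationController manifest; the `nginx` name, image, and `app` label are illustrative assumptions, not taken from this diff:

```yaml
# Minimal ReplicationController sketch (name, image, and labels are illustrative).
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx            # must match the template labels below
  template:
    metadata:
      labels:
        app: nginx        # make sure these do not overlap with other controllers
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
      restartPolicy: Always   # the only allowed value, and the default if omitted
```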
@@ -368,7 +372,7 @@ Pods may be removed from a ReplicationController's target set by changing their labels
<!--
## Common usage patterns
-->
- ## Common usage patterns
+ ## Common usage patterns {#common-usage-patterns}
<!--
### Rescheduling
@@ -393,7 +397,7 @@ The ReplicationController enables scaling the number of replicas up or down, either manually or by an auto-scaling control agent
The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.
- As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
+ As explained in [#1353](https://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
-->
### Rolling updates {#rolling-updates}
@@ -407,17 +411,11 @@ The ReplicationController is designed to facilitate rolling updates to a service by replacing Pods one by one
Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.
The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.
-
- Rolling update is implemented in the client tool
- [`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update). Visit [`kubectl rolling-update` task](/docs/tasks/run-application/rolling-update-replication-controller/) for more concrete examples.
-->
Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of Pods were productively serving at any given time.

The two ReplicationControllers would need to create Pods with at least one differentiating label, such as the image tag of the Pod's primary container, since it is typically image updates that motivate rolling updates.
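As a sketch of that one-by-one replacement, the new controller might look like the following; the names, the `tag` label, and the image version are hypothetical:

```yaml
# Illustrative replacement controller for a rolling update (hypothetical
# names and versions). It starts at 1 replica; you then scale it up (+1)
# while scaling the old controller down (-1), and delete the old one at 0.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-v2
spec:
  replicas: 1
  selector:
    app: nginx
    tag: "1.27"           # differentiating label so the two controllers' selectors do not overlap
  template:
    metadata:
      labels:
        app: nginx
        tag: "1.27"
    spec:
      containers:
      - name: nginx
        image: nginx:1.27   # the image update that motivates the rolling update
```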
@@ -462,7 +460,7 @@ A ReplicationController will never terminate on its own, but it isn't expected to be as long-lived as services
Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (for example, cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself.
-->
- ## Writing programs for replication
+ ## Writing programs for replication {#writing-programs-for-replication}
Pods created by a ReplicationController are intended to be fungible and semantically identical,
though their configurations may become heterogeneous over time.
@@ -477,9 +475,9 @@ Pods created by a ReplicationController are intended to be fungible and semantically identical
<!--
## Responsibilities of the ReplicationController
- The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
+ The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The ReplicationController simply ensures that the desired number of Pods matches its label selector and that they are operational.
Currently, only terminated Pods are excluded from its count.
@@ -488,7 +486,7 @@ The ReplicationController simply ensures that the desired number of Pods matches its label selector
We plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
<!--
- The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).
+ The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)).
We even plan to factor out the mechanism for bulk Pod creation (see [#170](https://issue.k8s.io/170)).
<!--
- The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](https://netflixtechblog.com/asgard-web-based-cloud-management-and-deployment-2c9fc4e4d3a1) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
+ The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](https://netflixtechblog.com/asgard-web-based-cloud-management-and-deployment-2c9fc4e4d3a1) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
-->
The ReplicationController is intended to be a composable building-block primitive.
We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future.
@@ -516,21 +514,22 @@ Replication controller is a top-level resource in the Kubernetes REST API. More details about the
API object can be found at:
[ReplicationController API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#replicationcontroller-v1-core).
-->
- ## API object
+ ## API object {#api-object}
Replication controller is a top-level resource in the Kubernetes REST API.
More details about the API object can be found at:
[ReplicationController API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#replicationcontroller-v1-core).
<!--
## Alternatives to ReplicationController
+
### ReplicaSet
[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement).
- It’s mainly used by [Deployment](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate Pod creation, deletion and updates.
- Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all.
+ It's mainly used by [Deployment](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates.
+ Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don't require updates at all.
- Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicationController for Pods that are expected to terminate on their own
+ Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicationController for pods that are expected to terminate on their own
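For completeness, a minimal sketch of such a Job; the `one-off-task` name, image, and command are hypothetical:

```yaml
# Hypothetical Job for a Pod that runs to completion; name, image, and
# command are illustrative.
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  template:
    spec:
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo finished"]
      restartPolicy: Never   # unlike a ReplicationController, a Job allows Never or OnFailure
  backoffLimit: 4
```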