@@ -225,15 +224,16 @@ labels and an appropriate restart policy. For labels, make sure not to overlap w
Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is allowed, which is the default if not specified.

For local container restarts, ReplicationControllers delegate to an agent on the node,
-for example the [Kubelet](/docs/reference/command-line-tools-reference/kubelet/) or Docker.
+for example the [Kubelet](/docs/reference/command-line-tools-reference/kubelet/).
-->
In addition to the fields required for a Pod, a Pod template in a ReplicationController must specify appropriate labels and an appropriate restart policy.
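As an illustration of these constraints, a minimal manifest might look like the sketch below (the name, labels, and image are placeholders, not taken from this page):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend              # placeholder name
spec:
  replicas: 3
  selector:
    app: frontend             # must match the template labels below
  template:
    metadata:
      labels:
        app: frontend         # make sure these do not overlap with other controllers
    spec:
      restartPolicy: Always   # the only allowed value; also the default
      containers:
      - name: frontend
        image: nginx          # placeholder image
```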
@@ -363,7 +364,7 @@ To update pods to a new spec in a controlled way, use a [rolling update](#rollin
Pods may be removed from a ReplicationController's target set by changing their labels. This technique may be used to remove pods from service for debugging and data recovery. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
-->
-### Isolating Pods from a ReplicationController
+### Isolating Pods from a ReplicationController {#isolating-pods-from-a-replicationcontroller}

By changing the labels of a Pod, you can remove it from a ReplicationController's target set.
This technique may be used to remove Pods from service for debugging, data recovery, and so on. Pods removed in this way
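As a sketch of the technique (the pod name and labels here are hypothetical): overwriting a label that the controller's selector matches detaches the pod, and the controller starts a replacement.

```shell
# Hypothetical setup: the ReplicationController's selector is app=frontend.
# Overwriting that label removes this pod from the controller's target set;
# the controller then creates a new pod to restore the replica count,
# while the relabeled pod remains available for debugging or data recovery.
kubectl label pod frontend-abc12 app=frontend-debug --overwrite
```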
@@ -390,7 +391,7 @@ The ReplicationController enables scaling the number of replicas up or down, eit
@@ -423,7 +424,7 @@ In addition to running multiple releases of an application while a rolling updat
For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a ReplicationController with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another ReplicationController with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can manage the ReplicationControllers separately to test things out, monitor the results, and so on.
-->
-### Multiple release tracks
+### Multiple release tracks {#multiple-release-tracks}

In addition to running multiple releases of an application while a rolling update is in progress, it is common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks are differentiated by labels.
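The canary setup described above can be sketched with two manifests like the following (the names and images are placeholders):

```yaml
# Stable track: the bulk of the replicas
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-stable      # placeholder name
spec:
  replicas: 9
  selector:
    tier: frontend
    environment: prod
    track: stable
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: stable
    spec:
      containers:
      - name: frontend
        image: example/frontend:stable   # placeholder image
---
# Canary track: a single replica of the new version
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary      # placeholder name
spec:
  replicas: 1
  selector:
    tier: frontend
    environment: prod
    track: canary
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: canary
    spec:
      containers:
      - name: frontend
        image: example/frontend:canary   # placeholder image
```

A service selecting only on `tier=frontend, environment=prod` (and not on `track`) covers pods from both controllers, so traffic is split 9:1 while each controller can be scaled or deleted independently.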
@@ -445,7 +446,7 @@ goes to the old version, and some goes to the new version.
A ReplicationController will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services.
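One way to picture this decoupling is a Service that selects pods purely by label (the name and ports here are illustrative): any ReplicationController whose pods carry these labels can come and go behind it without the Service or its clients noticing.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend        # placeholder name
spec:
  selector:
    tier: frontend      # selects pods by label only; the Service holds no
    environment: prod   # reference to any particular ReplicationController
  ports:
  - port: 80
    targetPort: 8080    # illustrative port numbers
```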
@@ -555,15 +555,15 @@ because they are declarative, server-side, and have additional features.
<!--
### Bare Pods

-Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node (for example, Kubelet or Docker).
+Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node, such as the kubelet.
-->
### Bare Pods

-Unlike in the case where a user directly created Pods, a ReplicationController replaces Pods that are
-deleted or terminated for any reason, for example in the case of node failure or disruptive node maintenance, such as a kernel upgrade.
+Unlike in the case where a user directly created Pods, a ReplicationController replaces Pods that are deleted or terminated for any reason,