Changed file: `content/zh-cn/blog/_posts/2020-09-30-writing-crl-scheduler/index.md` (4 additions, 4 deletions)
@@ -45,9 +45,9 @@ Most stateless systems, web servers for example, are created without the need to
One of Kubernetes' responsibilities is to place "resources" (e.g., a disk or container) into the cluster and satisfy the constraints they request. For example: "I must be in availability zone _A_" (see [Running in multiple zones](/docs/setup/best-practices/multiple-zones/#nodes-are-labeled)), or "I can't be placed onto the same node as this other Pod" (see [Affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)).
As an addition to those constraints, Kubernetes offers [Statefulsets](/docs/concepts/workloads/controllers/statefulset/) that provide identity to Pods as well as persistent storage that "follows" these identified pods. Identity in a StatefulSet is handled by an increasing integer at the end of a pod's name. It's important to note that this integer must always be contiguous: in a StatefulSet, if pods 1 and 3 exist then pod 2 must also exist.
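A minimal StatefulSet showing this identity scheme might look like the following sketch (the name, image, and replica count are illustrative, not from the original post); Kubernetes creates the Pods sequentially as `crdb-0`, `crdb-1`, and `crdb-2`:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: crdb                  # illustrative name
spec:
  serviceName: crdb           # headless Service that gives Pods stable network identity
  replicas: 3
  selector:
    matchLabels:
      app: crdb
  template:
    metadata:
      labels:
        app: crdb
    spec:
      containers:
      - name: db
        image: cockroachdb/cockroach   # illustrative image
```

Scaling down removes the highest ordinals first, which keeps the integers contiguous.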
@@ -91,7 +91,7 @@ When adding additional resources to the cluster we also distribute them across z
Note that anti-affinities are satisfied no matter the order in which pods are assigned to Kubernetes nodes. In the example, pods 0, 1 and 2 were assigned to zones A, B, and C respectively, but pods 3 and 4 were assigned in a different order, to zones B and A respectively. The anti-affinity is still satisfied because the pods are still placed in different zones.
-->
Note that anti-affinity is satisfied no matter the order in which Pods are assigned to Kubernetes nodes.
- In this example, Pods 0, 1, and 2 were assigned to zones A, B, and C respectively, but Pods 3 and 4 were assigned, in a different order, to zones B and A.
+ In this example, Pods 0, 1, and 2 were assigned to zones A, B, and C respectively, but Pods 3 and 4 were assigned, in a different order, to zones B and A.
The anti-affinity is still satisfied because the Pods are still placed in different zones.
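The zone spreading described here can be sketched with pod anti-affinity keyed on the zone topology label. A preferred (soft) rule is assumed below, since a hard rule could not place five Pods across three zones; the `app` label is illustrative:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: crdb        # illustrative label selecting sibling Pods
        topologyKey: topology.kubernetes.io/zone
```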
<!--
@@ -144,7 +144,7 @@ Our combined knowledge of the following is what lead to this misconception.
* The behavior of a StatefulSet with _n_ replicas: when Pods are being deployed, they are created sequentially, in order, from `{0..n-1}`. See [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees) for more details.
A zone represents a logical failure domain. Kubernetes clusters commonly span multiple zones for increased availability. While the exact definition of a zone is left to infrastructure implementations,
- common properties of a zone include very low network latency within a zone, no-cost network traffic within a zone, and failure independence from other zones.
+ A zone represents a logical failure domain. Kubernetes clusters commonly span multiple zones for increased availability. While the exact definition of a zone is left to infrastructure implementations,
+ common properties of a zone include very low network latency within a zone, no-cost network traffic within a zone, and failure independence from other zones.
For example, Nodes within a zone might share a network switch, but Nodes in different zones should not.
A region represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions. While the exact definition of a zone or region is left to infrastructure implementations,
@@ -544,9 +544,9 @@ Kubernetes makes a few assumptions about the structure of zones and regions:
-->
Kubernetes makes a few assumptions about the structure of zones and regions:
- 1. Zones and regions are hierarchical: zones are strict subsets of regions, and no zone can be in two regions;
+ 1. Zones and regions are hierarchical: zones are strict subsets of regions, and no zone can be in two regions;
- 2. Zone names are unique across regions; for example, the region "africa-east-1" might be comprised of zones "africa-east-1a" and "africa-east-1b".
+ 2. Zone names are unique across regions; for example, the region "africa-east-1" might be comprised of zones "africa-east-1a" and "africa-east-1b".
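These assumptions surface as the well-known topology labels; a Node in the example region "africa-east-1" might carry labels like the following (the Node name is illustrative):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-a               # illustrative name
  labels:
    topology.kubernetes.io/region: africa-east-1
    topology.kubernetes.io/zone: africa-east-1a
```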
<!--
It should be safe to assume that topology labels do not change. Even though labels are, strictly speaking, mutable, consumers of them can assume that a given node is not going to be moved between zones without being destroyed and recreated.
@@ -581,7 +581,7 @@ If `PersistentVolumeLabel` does not support automatic labeling of your Persisten
adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
-->
You should consider adding the labels manually (or adding support for `PersistentVolumeLabel`).
- With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes from a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
+ With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes from a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
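Manually labeling a volume might look like this sketch (the volume name, size, and CSI driver are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data-0                                  # illustrative name
  labels:
    topology.kubernetes.io/zone: africa-east-1a    # the zone the volume lives in
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  csi:
    driver: example.csi.vendor.io                  # illustrative driver
    volumeHandle: vol-0123                         # illustrative handle
```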
@@ -501,7 +501,7 @@ This would equate to manually enabling `MyPlugin` for all of its extension
points, like so:
-->
- This would equate to manually enabling `MyPlugin` for all of its extension points, like so:
+ This would equate to manually enabling `MyPlugin` for all of its extension points, like so:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
```
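Spelled out for each extension point, that configuration might continue along these lines; `MyPlugin` is the hypothetical plugin from the surrounding text, and the extension points shown are a representative subset, not an exhaustive list:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    preFilter:
      enabled:
      - name: MyPlugin
    filter:
      enabled:
      - name: MyPlugin
    preScore:
      enabled:
      - name: MyPlugin
    score:
      enabled:
      - name: MyPlugin
    reserve:
      enabled:
      - name: MyPlugin
    preBind:
      enabled:
      - name: MyPlugin
```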
@@ -732,7 +732,7 @@ as well as its seamless integration with the existing methods for configuring ex
## Scheduler configuration migrations
-->
- ## Scheduler configuration migrations
+ ## Scheduler configuration migrations {#scheduler-configuration-migrations}
{{< tabs name="tab_with_md" >}}
{{% tab name="v1beta1 → v1beta2" %}}
<!--
@@ -774,7 +774,7 @@ as well as its seamless integration with the existing methods for configuring ex
* The scheduler plugin `ServiceAffinity` is deprecated; instead, use the [`InterPodAffinity`](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) plugin (enabled by default) to achieve similar behavior.
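A rough inter-Pod affinity replacement for the node-level co-location that `ServiceAffinity` provided might look like this sketch (the `app` label is illustrative):

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-service    # illustrative label for Pods of the same service
      topologyKey: kubernetes.io/hostname
```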