@@ -26,7 +26,8 @@ or configure topology spread constraints for individual workloads.
例如区域(Region)、可用区(Zone)、节点和其他用户自定义拓扑域。
这样做有助于实现高可用并提升资源利用率。

- 你可以将[集群级约束](#cluster-level-default-constraints)设为默认值,或为个别工作负载配置拓扑分布约束。
+ 你可以将[集群级约束](#cluster-level-default-constraints)设为默认值,
+ 或为个别工作负载配置拓扑分布约束。

<!-- body -->
@@ -62,19 +63,20 @@ are split across three different datacenters (or infrastructure zones). Now you
have less concern about a single node failure, but you notice that latency is
higher than you'd like, and you are paying for network costs associated with
sending network traffic between the different zones.
-
- You decide that under normal operation you'd prefer to have a similar number of replicas
- [scheduled](/docs/concepts/scheduling-eviction/) into each infrastructure zone,
- and you'd like the cluster to self-heal in the case that there is a problem.
-
- Pod topology spread constraints offer you a declarative way to configure that.
-->
随着你的工作负载扩容,运行的 Pod 变多,将需要考虑另一个重要问题。
假设你有 3 个节点,每个节点运行 5 个 Pod。这些节点有足够的容量能够运行许多副本;
但与这个工作负载互动的客户端分散在三个不同的数据中心(或基础设施可用区)。
现在你可能不太关注单节点故障问题,但你会注意到延迟高于自己的预期,
在不同的可用区之间发送网络流量会产生一些网络成本。

+ <!--
+ You decide that under normal operation you'd prefer to have a similar number of replicas
+ [scheduled](/docs/concepts/scheduling-eviction/) into each infrastructure zone,
+ and you'd like the cluster to self-heal in the case that there is a problem.
+
+ Pod topology spread constraints offer you a declarative way to configure that.
+ -->
你决定在正常运营时倾向于将类似数量的副本[调度](/zh-cn/docs/concepts/scheduling-eviction/)
到每个基础设施可用区,且你想要该集群在遇到问题时能够自愈。
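As an aside for reviewers: the behaviour this scenario describes (roughly equal replica counts per infrastructure zone, with rebalancing left to the scheduler) is what a zone-keyed spread constraint expresses declaratively. A minimal sketch of just that stanza, not taken from this page; the `app: foo` label is a placeholder:

```yaml
topologySpreadConstraints:
  - maxSkew: 1                                 # zones may differ by at most one matching Pod
    topologyKey: topology.kubernetes.io/zone   # one domain per infrastructure zone
    whenUnsatisfiable: ScheduleAnyway          # prefer balance, but do not block scheduling
    labelSelector:
      matchLabels:
        app: foo                               # hypothetical workload label
```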
@@ -221,7 +223,13 @@ your cluster. Those fields are:
will try to put a balanced number of pods into each domain.
Also, we define an eligible domain as a domain whose nodes meet the requirements of
nodeAffinityPolicy and nodeTaintsPolicy.
+ -->
+ - **topologyKey** 是[节点标签](#node-labels)的键。如果节点使用此键标记并且具有相同的标签值,
+ 则将这些节点视为处于同一拓扑域中。我们将拓扑域中(即键值对)的每个实例称为一个域。
+ 调度器将尝试在每个拓扑域中放置数量均衡的 Pod。
+ 另外,我们将符合条件的域定义为其节点满足 nodeAffinityPolicy 和 nodeTaintsPolicy 要求的域。

+ <!--
- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
- `DoNotSchedule` (default) tells the scheduler not to schedule it.
- `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
@@ -232,11 +240,6 @@ your cluster. Those fields are:
See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
for more details.
-->
- - **topologyKey** 是[节点标签](#node-labels)的键。如果节点使用此键标记并且具有相同的标签值,
- 则将这些节点视为处于同一拓扑域中。我们将拓扑域中(即键值对)的每个实例称为一个域。
- 调度器将尝试在每个拓扑域中放置数量均衡的 Pod。
- 另外,我们将符合条件的域定义为其节点满足 nodeAffinityPolicy 和 nodeTaintsPolicy 要求的域。
-
- **whenUnsatisfiable** 指示如果 Pod 不满足分布约束时如何处理:
- `DoNotSchedule`(默认)告诉调度器不要调度。
- `ScheduleAnyway` 告诉调度器仍然继续调度,只是根据如何能将偏差最小化来对节点进行排序。
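These two hunks relocate the Chinese text for `topologyKey`, `whenUnsatisfiable`, and `labelSelector` so that each translation again follows its English source comment. As a quick reference while reviewing, a hedged sketch of how the three fields fit together in a Pod spec; the `app: foo` label and the pause image are placeholders, not taken from this page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: foo
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname   # nodes sharing this label value form one topology domain
      whenUnsatisfiable: DoNotSchedule      # leave the Pod Pending rather than violate the constraint
      labelSelector:                        # which Pods are counted when computing the skew
        matchLabels:
          app: foo
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9      # placeholder image for illustration
```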
@@ -434,12 +437,6 @@ Usually, if you are using a workload controller such as a Deployment, the pod te
takes care of this for you. If you mix different spread constraints then Kubernetes
follows the API definition of the field; however, the behavior is more likely to become
confusing and troubleshooting is less straightforward.
-
- You need a mechanism to ensure that all the nodes in a topology domain (such as a
- cloud provider region) are labelled consistently.
- To avoid you needing to manually label nodes, most clusters automatically
- populate well-known labels such as `kubernetes.io/hostname`. Check whether
- your cluster supports this.
-->
## 一致性 {#Consistency}
@@ -449,6 +446,13 @@ your cluster supports this.
如果你混合不同的分布约束,则 Kubernetes 会遵循该字段的 API 定义;
但是,该行为可能更令人困惑,并且故障排除也没那么简单。

+ <!--
+ You need a mechanism to ensure that all the nodes in a topology domain (such as a
+ cloud provider region) are labelled consistently.
+ To avoid you needing to manually label nodes, most clusters automatically
+ populate well-known labels such as `kubernetes.io/hostname`. Check whether
+ your cluster supports this.
+ -->
你需要一种机制来确保拓扑域(例如云提供商区域)中的所有节点具有一致的标签。
为了避免你需要手动为节点打标签,大多数集群会自动填充知名的标签,
例如 `kubernetes.io/hostname`。检查你的集群是否支持此功能。
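The paragraphs moved in this hunk assume that every node in a topology domain carries the same label value. As an illustration only (the node name and zone/region values are made up), the well-known labels on a Node object typically look like this:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-a-1                            # hypothetical node name
  labels:
    kubernetes.io/hostname: node-a-1        # usually populated automatically by the kubelet
    topology.kubernetes.io/zone: zone-a     # must match for every node meant to be in zone-a
    topology.kubernetes.io/region: region-1
```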
@@ -822,7 +826,7 @@ An example configuration might look like follows:
配置的示例可能看起来像下面这个样子:

```yaml
- apiVersion: kubescheduler.config.k8s.io/v1beta3
+ apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration

profiles:
@@ -894,7 +898,7 @@ empty `defaultConstraints` in the `PodTopologySpread` plugin configuration:
并将 `PodTopologySpread` 插件配置中的 `defaultConstraints` 参数置空来禁用默认 Pod 分布约束:

```yaml
- apiVersion: kubescheduler.config.k8s.io/v1beta3
+ apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration

profiles: