
Commit 189d15f

Merge pull request #35328 from windsonsea/podnode
[zh-cn] resync /concepts/scheduling-eviction/assign-pod-node.md
2 parents 54d2e71 + 9ec8b99

File tree

1 file changed: +67 −30 lines changed


content/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node.md

Lines changed: 67 additions & 30 deletions
@@ -17,40 +17,45 @@ weight: 20
 <!-- overview -->
 
 <!--
-You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on particular set of
-{{< glossary_tooltip text="node(s)" term_id="node" >}}.
+You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it is
+_restricted_ to run on particular {{< glossary_tooltip text="node(s)" term_id="node" >}},
+or to _prefer_ to run on particular nodes.
 There are several ways to do this and the recommended approaches all use
 [label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
-Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
+Often, you do not need to set any such constraints; the
+{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} will automatically do a reasonable placement
 (for example, spreading your Pods across nodes so as not place Pods on a node with insufficient free resources).
 However, there are some circumstances where you may want to control which node
-the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different
-services that communicate a lot into the same availability zone.
+the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it,
+or to co-locate Pods from two different services that communicate a lot into the same availability zone.
 -->
 你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}}
-只能在特定的{{< glossary_tooltip text="节点" term_id="node" >}}上运行。
+以便 **限制** 其只能在特定的{{< glossary_tooltip text="节点" term_id="node" >}}上运行,
+或优先在特定的节点上运行。
 有几种方法可以实现这点,推荐的方法都是用
 [标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/)来进行选择。
 通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 Pod 分散到节点上,
 而不是将 Pod 放置在可用资源不足的节点上等等)。但在某些情况下,你可能需要进一步控制
-Pod 被部署到的节点。例如,确保 Pod 最终落在连接了 SSD 的机器上,
+Pod 被部署到哪个节点。例如,确保 Pod 最终落在连接了 SSD 的机器上,
 或者将来自两个不同的服务且有大量通信的 Pods 被放置在同一个可用区。
 
 <!-- body -->
 
 <!--
 You can use any of the following methods to choose where Kubernetes schedules
-specific Pods:
+specific Pods:
 
-* [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels)
-* [Affinity and anti-affinity](#affinity-and-anti-affinity)
-* [nodeName](#nodename) field
+* [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels)
+* [Affinity and anti-affinity](#affinity-and-anti-affinity)
+* [nodeName](#nodename) field
+* [Pod topology spread constraints](#pod-topology-spread-constraints)
 -->
 你可以使用下列方法中的任何一种来选择 Kubernetes 对特定 Pod 的调度:
 
 * 与[节点标签](#built-in-node-labels)匹配的 [nodeSelector](#nodeselector)
 * [亲和性与反亲和性](#affinity-and-anti-affinity)
 * [nodeName](#nodename) 字段
+* [Pod 拓扑分布约束](#pod-topology-spread-constraints)
 
 <!--
 ## Node labels {#built-in-node-labels}
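Of the methods listed in this hunk, `nodeSelector` is the simplest. A minimal hedged sketch, assuming a node has been given a `disktype=ssd` label (a hypothetical label for illustration, not part of this commit):

```yaml
# Hedged sketch: constrain a Pod to nodes carrying the label disktype=ssd.
# "disktype: ssd" is an assumed example label, not taken from this diff.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  # The Pod is only schedulable onto nodes whose labels include every
  # key/value pair listed here.
  nodeSelector:
    disktype: ssd
```

You would first label a node, for example with `kubectl label nodes <your-node-name> disktype=ssd`, before this Pod can schedule.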
@@ -392,8 +397,8 @@ NodeAffinity specified in the PodSpec.
 That is, in order to match the Pod, nodes need to satisfy `addedAffinity` and
 the Pod's `.spec.NodeAffinity`.
 -->
-这里的 `addedAffinity` 除遵从 Pod 规约中设置的节点亲和性之外,
-适用于将 `.spec.schedulerName` 设置为 `foo-scheduler`。
+这里的 `addedAffinity` 除遵从 Pod 规约中设置的节点亲和性之外,
+还适用于将 `.spec.schedulerName` 设置为 `foo-scheduler`。
 换言之,为了匹配 Pod,节点需要满足 `addedAffinity` 和 Pod 的 `.spec.NodeAffinity`。

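The `addedAffinity` discussed in this hunk is set per scheduling profile in the scheduler configuration. A hedged sketch, assuming the `foo-scheduler` profile name from the surrounding text and the `kubescheduler.config.k8s.io/v1beta3` API version (the version varies by Kubernetes release):

```yaml
# Hedged sketch of a scheduler profile with addedAffinity.
# Pods that set .spec.schedulerName: foo-scheduler must satisfy this
# addedAffinity in addition to their own .spec.affinity.nodeAffinity.
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: foo-scheduler
  pluginConfig:
  - name: NodeAffinity
    args:
      addedAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
              - linux
```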
399404
<!--
@@ -627,33 +632,35 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defined.
 -->
 用户也可以使用 `namespaceSelector` 选择匹配的名字空间,`namespaceSelector`
 是对名字空间集合进行标签查询的机制。
-亲和性条件会应用到 `namespaceSelector` 所选择的名字空间和 `namespaces` 字段中
-所列举的名字空间之上。
+亲和性条件会应用到 `namespaceSelector` 所选择的名字空间和 `namespaces` 字段中所列举的名字空间之上。
 注意,空的 `namespaceSelector`(`{}`)会匹配所有名字空间,而 null 或者空的
 `namespaces` 列表以及 null 值 `namespaceSelector` 意味着“当前 Pod 的名字空间”。
 
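The `namespaceSelector` semantics described in this hunk can be sketched as follows; the `app=store` label reuses the example elsewhere on this page, while the `team: backend` namespace label is an assumption for illustration:

```yaml
# Hedged sketch: require co-location (same zone) with "app=store" Pods
# living in any namespace labeled team=backend.
# "team: backend" is an assumed namespace label, not from this commit.
apiVersion: v1
kind: Pod
metadata:
  name: with-namespace-selector
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - store
        # An empty selector ({}) would match all namespaces; leaving both
        # namespaceSelector and namespaces null means "this Pod's namespace".
        namespaceSelector:
          matchLabels:
            team: backend
        topologyKey: topology.kubernetes.io/zone
  containers:
  - name: app
    image: nginx
```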
-
 <!--
 #### More practical use-cases
 
 Inter-pod affinity and anti-affinity can be even more useful when they are used with higher
 level collections such as ReplicaSets, StatefulSets, Deployments, etc. These
 rules allow you to configure that a set of workloads should
-be co-located in the same defined topology, eg., the same node.
+be co-located in the same defined topology; for example, preferring to place two related
+Pods onto the same node.
 -->
 #### 更实际的用例
 
 Pod 间亲和性与反亲和性在与更高级别的集合(例如 ReplicaSet、StatefulSet、
 Deployment 等)一起使用时,它们可能更加有用。
-这些规则使得你可以配置一组工作负载,使其位于相同定义拓扑(例如,节点)中。
+这些规则使得你可以配置一组工作负载,使其位于所定义的同一拓扑中;
+例如优先将两个相关的 Pod 置于相同的节点上。
 
 <!--
-Take, for example, a three-node cluster running a web application with an
-in-memory cache like redis. You could use inter-pod affinity and anti-affinity
-to co-locate the web servers with the cache as much as possible.
+For example: imagine a three-node cluster. You use the cluster to run a web application
+and also an in-memory cache (such as Redis). For this example, also assume that latency between
+the web application and the memory cache should be as low as is practical. You could use inter-pod
+affinity and anti-affinity to co-locate the web servers with the cache as much as possible.
 -->
-以一个三节点的集群为例,该集群运行一个带有 Redis 这种内存缓存的 Web 应用程序。
-你可以使用节点间的亲和性和反亲和性来尽可能地将 Web 服务器与缓存并置。
+以一个三节点的集群为例。你使用该集群运行一个带有内存缓存(例如 Redis)的 Web 应用程序。
+在此例中,还假设 Web 应用程序和内存缓存之间的延迟应尽可能低。
+你可以使用 Pod 间的亲和性和反亲和性来尽可能地将该 Web 服务器与缓存并置。
 
 <!--
 In the following example Deployment for the redis cache, the replicas get the label `app=store`. The
@@ -696,14 +703,14 @@ spec:
 ```
 
 <!--
-The following Deployment for the web servers creates replicas with the label `app=web-store`. The
-Pod affinity rule tells the scheduler to place each replica on a node that has a
-Pod with the label `app=store`. The Pod anti-affinity rule tells the scheduler
-to avoid placing multiple `app=web-store` servers on a single node.
+The following example Deployment for the web servers creates replicas with the label `app=web-store`.
+The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod
+with the label `app=store`. The Pod anti-affinity rule tells the scheduler never to place
+multiple `app=web-store` servers on a single node.
 -->
-下面的 Deployment 用来提供 Web 服务器服务,会创建带有标签 `app=web-store` 的副本。
-Pod 亲和性规则告诉调度器将副本放到运行有标签包含 `app=store` Pod 的节点上。
-Pod 反亲和性规则告诉调度器不要在同一节点上放置多个 `app=web-store` 的服务器
+下例的 Deployment 为 Web 服务器创建带有标签 `app=web-store` 的副本。
+Pod 亲和性规则告诉调度器将每个副本放到存在标签为 `app=store` 的 Pod 的节点上。
+Pod 反亲和性规则告诉调度器决不要在单个节点上放置多个 `app=web-store` 服务器。
 
 ```yaml
 apiVersion: apps/v1
@@ -756,11 +763,20 @@ where each web server is co-located with a cache, on three separate nodes.
 | *webserver-1* | *webserver-2* | *webserver-3* |
 | *cache-1* | *cache-2* | *cache-3* |
 
+<!--
+The overall effect is that each cache instance is likely to be accessed by a single client, that
+is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
+-->
+总体效果是每个缓存实例都非常可能被在同一个节点上运行的某个客户端访问。
+这种方法旨在最大限度地减少偏差(负载不平衡)和延迟。
+
 <!--
+You might have other reasons to use Pod anti-affinity.
 See the [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)
 for an example of a StatefulSet configured with anti-affinity for high
 availability, using the same technique as this example.
 -->
+你可能还有使用 Pod 反亲和性的一些其他原因。
 参阅 [ZooKeeper 教程](/zh-cn/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)
 了解一个 StatefulSet 的示例,该 StatefulSet 配置了反亲和性以实现高可用,
 所使用的是与此例相同的技术。
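The high-availability anti-affinity technique this hunk points to can be sketched as a Pod-template fragment of a StatefulSet; the `app: zk` label is an assumed example in the spirit of the ZooKeeper tutorial, not taken from this commit:

```yaml
# Hedged fragment of a StatefulSet's Pod template ("app: zk" is assumed).
# The rule forces replicas onto distinct nodes for availability.
spec:
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - zk
            # hostname topology key: at most one "app: zk" Pod per node
            topologyKey: kubernetes.io/hostname
```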
@@ -820,6 +836,27 @@ The above Pod will only run on the node `kube-01`.
 -->
 上面的 Pod 只能运行在节点 `kube-01` 之上。
 
+<!--
+## Pod topology spread constraints
+
+You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}}
+are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other
+topology domains that you define. You might do this to improve performance, expected availability, or
+overall utilization.
+
+Read [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
+to learn more about how these work.
+-->
+## Pod 拓扑分布约束 {#pod-topology-spread-constraints}
+
+你可以使用 **拓扑分布约束(Topology Spread Constraints)** 来控制
+{{< glossary_tooltip text="Pod" term_id="Pod" >}} 在集群内故障域之间的分布,
+故障域的示例有区域(Region)、可用区(Zone)、节点和其他用户自定义的拓扑域。
+这样做有助于提升性能、实现高可用或提升资源利用率。
+
+阅读 [Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/)
+以进一步了解这些约束的工作方式。
+
 ## {{% heading "whatsnext" %}}
 
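The topology spread constraints introduced in this hunk take the following shape. A minimal hedged sketch; the `app: web` label and the zone topology key are assumed examples, not from this commit:

```yaml
# Hedged sketch: spread "app: web" Pods evenly across zones,
# tolerating at most a difference of 1 Pod between any two zones.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: web
    image: nginx
```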
 <!--
