Commit 22ddf4f

translate content/zh/docs/reference/scheduling/policies.md
1 parent 15011a6 commit 22ddf4f

File tree

2 files changed: +238 -0 lines changed
Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
---
title: Scheduling
weight: 70
toc-hide: true
---
Lines changed: 233 additions & 0 deletions
@@ -0,0 +1,233 @@
---
title: Scheduling Policies
content_type: concept
weight: 10
---

<!-- overview -->

A scheduling Policy can be used to specify the *predicates* and *priorities*
that the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}
runs to [filter and score nodes](/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation),
respectively.

You can set a scheduling policy by running
`kube-scheduler --policy-config-file <filename>` or
`kube-scheduler --policy-configmap <ConfigMap>`
and using the [Policy type](https://pkg.go.dev/k8s.io/[email protected]/config/v1?tab=doc#Policy).

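For illustration only, a minimal policy file could look like the sketch below.
The predicate and priority names come from the lists later on this page; which
ones you enable, and the weights you assign to the priorities, are purely
example choices here.

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsHostPorts"},
    {"name": "PodFitsResources"},
    {"name": "MatchNodeSelector"},
    {"name": "PodToleratesNodeTaints"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1},
    {"name": "NodeAffinityPriority", "weight": 1}
  ]
}
```

You would then point the scheduler at this file with
`kube-scheduler --policy-config-file <filename>`, or store the same JSON in a
ConfigMap and reference it with `--policy-configmap`.
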
<!-- body -->

## Predicates

The following *predicates* implement filtering:

- `PodFitsHostPorts`: Checks if a Node has free ports (the network protocol kind)
  for the Pod ports the Pod is requesting.

- `PodFitsHost`: Checks if a Pod specifies a specific Node by its hostname.

- `PodFitsResources`: Checks if the Node has free resources (e.g., CPU and Memory)
  to meet the requirement of the Pod.

- `MatchNodeSelector`: Checks if a Pod's Node {{< glossary_tooltip term_id="selector" >}}
  matches the Node's {{< glossary_tooltip text="label(s)" term_id="label" >}}.

- `NoVolumeZoneConflict`: Evaluates if the {{< glossary_tooltip text="Volumes" term_id="volume" >}}
  that a Pod requests are available on the Node, given the failure zone restrictions for
  that storage.

- `NoDiskConflict`: Evaluates if a Pod can fit on a Node due to the volumes it requests,
  and those that are already mounted.

- `MaxCSIVolumeCount`: Decides how many {{< glossary_tooltip text="CSI" term_id="csi" >}}
  volumes should be attached, and whether that's over a configured limit.

- `CheckNodeMemoryPressure`: If a Node is reporting memory pressure, and there's no
  configured exception, the Pod won't be scheduled there.

- `CheckNodePIDPressure`: If a Node is reporting that process IDs are scarce, and
  there's no configured exception, the Pod won't be scheduled there.

- `CheckNodeDiskPressure`: If a Node is reporting storage pressure (a filesystem that
  is full or nearly full), and there's no configured exception, the Pod won't be
  scheduled there.

- `CheckNodeCondition`: Nodes can report that they have a completely full filesystem,
  that networking isn't available, or that kubelet is otherwise not ready to run Pods.
  If such a condition is set for a Node, and there's no configured exception, the Pod
  won't be scheduled there.

- `PodToleratesNodeTaints`: Checks if a Pod's {{< glossary_tooltip text="tolerations" term_id="toleration" >}}
  can tolerate the Node's {{< glossary_tooltip text="taints" term_id="taint" >}}.

- `CheckVolumeBinding`: Evaluates if a Pod can fit due to the volumes it requests.
  This applies for both bound and unbound
  {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}}.

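Most of these predicates read ordinary fields of the Pod spec. As a purely
illustrative example (the names and values below are made up), `PodFitsHostPorts`
looks at `ports[].hostPort`, `PodFitsResources` at `resources.requests`,
`MatchNodeSelector` at `nodeSelector`, and `PodToleratesNodeTaints` at
`tolerations`:

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {"name": "example-pod"},
  "spec": {
    "nodeSelector": {"disktype": "ssd"},
    "tolerations": [
      {"key": "example-key", "operator": "Exists", "effect": "NoSchedule"}
    ],
    "containers": [
      {
        "name": "app",
        "image": "nginx:1.19",
        "ports": [{"containerPort": 80, "hostPort": 8080}],
        "resources": {
          "requests": {"cpu": "500m", "memory": "256Mi"}
        }
      }
    ]
  }
}
```
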
## Priorities

The following *priorities* implement scoring:

- `SelectorSpreadPriority`: Spreads Pods across hosts, considering Pods that
  belong to the same {{< glossary_tooltip text="Service" term_id="service" >}},
  {{< glossary_tooltip term_id="statefulset" >}} or
  {{< glossary_tooltip term_id="replica-set" >}}.

- `InterPodAffinityPriority`: Implements preferred
  [inter pod affinity and antiaffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).

- `LeastRequestedPriority`: Favors nodes with fewer requested resources. In other
  words, the more Pods that are placed on a Node, and the more resources those
  Pods use, the lower the ranking this policy will give.

- `MostRequestedPriority`: Favors nodes with most requested resources. This policy
  will fit the scheduled Pods onto the smallest number of Nodes needed to run your
  overall set of workloads.

- `RequestedToCapacityRatioPriority`: Creates a requestedToCapacity based
  ResourceAllocationPriority using default resource scoring function shape
  (see the sketch after this list).

- `BalancedResourceAllocation`: Favors nodes with balanced resource usage.

- `NodePreferAvoidPodsPriority`: Prioritizes nodes according to the node annotation
  `scheduler.alpha.kubernetes.io/preferAvoidPods`. You can use this to hint that
  two different Pods shouldn't run on the same Node.

- `NodeAffinityPriority`: Prioritizes nodes according to node affinity scheduling
  preferences indicated in PreferredDuringSchedulingIgnoredDuringExecution.
  You can read more about this in [Assigning Pods to Nodes](/docs/concepts/scheduling-eviction/assign-pod-node/).

- `TaintTolerationPriority`: Prepares the priority list for all the nodes, based on
  the number of intolerable taints on the node. This policy adjusts a node's rank
  taking that list into account.

- `ImageLocalityPriority`: Favors nodes that already have the
  {{< glossary_tooltip text="container images" term_id="image" >}} for that
  Pod cached locally.

- `ServiceSpreadingPriority`: For a given Service, this policy aims to make sure that
  the Pods for the Service run on different nodes. It favours scheduling onto nodes
  that don't have Pods for the service already assigned there. The overall outcome is
  that the Service becomes more resilient to a single Node failure.

- `EqualPriority`: Gives an equal weight of one to all nodes.

- `EvenPodsSpreadPriority`: Implements preferred
  [pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).

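Each priority you enable carries a weight: the scheduler multiplies the score a
priority gives a node by that weight and sums the results, so a larger weight
makes that priority more influential. The sketch below is illustrative only;
the weights are arbitrary, and the `requestedToCapacityRatioArguments` block
follows the argument layout documented for the Policy type, so verify the field
names against the release you are running.

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsResources"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {
      "name": "RequestedToCapacityRatioPriority",
      "weight": 2,
      "argument": {
        "requestedToCapacityRatioArguments": {
          "shape": [
            {"utilization": 0, "score": 0},
            {"utilization": 100, "score": 10}
          ],
          "resources": [
            {"name": "cpu", "weight": 1},
            {"name": "memory", "weight": 1}
          ]
        }
      }
    }
  ]
}
```
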
## {{% heading "whatsnext" %}}

* Learn about [scheduling](/docs/concepts/scheduling-eviction/kube-scheduler/)
* Learn about [kube-scheduler Configuration](/docs/reference/scheduling/config/)
