
Commit 4c047a7

Merge pull request #29423 from mengjiao-liu/sync-scheduling-1.22
[zh] Concept files to sync for 1.22 - (9) Scheduling
2 parents: bb3e36d + ec405cc

8 files changed: +111 additions, -117 deletions


content/zh/docs/concepts/scheduling-eviction/assign-pod-node.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -579,7 +579,7 @@ must be satisfied for the pod to be scheduled onto a node.
 -->
 #### 名字空间选择算符

-{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.22" state="beta" >}}

 <!--
 Users can also select matching namespaces using `namespaceSelector`, which is a label query over the set of namespaces.
@@ -595,14 +595,14 @@ null `namespaceSelector` means "this pod's namespace".
 `namespaces` 列表以及 null 值 `namespaceSelector` 意味着“当前 Pod 的名字空间”。

 <!--
-This feature is alpha and disabled by default. You can enable it by setting the
+This feature is beta and enabled by default. You can disable it via the
 [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
 `PodAffinityNamespaceSelector` in both kube-apiserver and kube-scheduler.
 -->
-此功能特性是 Alpha 版本的,默认是被禁用的。你可以通过针对 kube-apiserver 和
+此功能特性是 Beta 版本的,默认是被启用的。你可以通过针对 kube-apiserver 和
 kube-scheduler 设置
 [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
-`PodAffinityNamespaceSelector` 来启用此特性
+`PodAffinityNamespaceSelector` 来禁用此特性

 <!--
 #### More Practical Use-cases
````
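
As context for the feature-state bump above: `namespaceSelector` is a field of the Pod affinity term in the Pod spec. A minimal sketch of its shape, with illustrative names and labels (not taken from the diff):

```yaml
# Hypothetical Pod: co-schedule with "app: web" Pods from any namespace
# carrying the label "team: frontend". All names and labels are made up.
apiVersion: v1
kind: Pod
metadata:
  name: namespace-selector-demo
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web
        # Beta in v1.22: select namespaces by label rather than by name.
        namespaceSelector:
          matchLabels:
            team: frontend
        topologyKey: kubernetes.io/hostname
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.5
```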

content/zh/docs/concepts/scheduling-eviction/eviction-policy.md

Lines changed: 0 additions & 47 deletions
This file was deleted.

content/zh/docs/concepts/scheduling-eviction/kube-scheduler.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -95,7 +95,7 @@ the API server about this decision in a process called _binding_.
 kube-apiserver,这个过程叫做 _绑定_

 <!--
-Factors that need taken into account for scheduling decisions include
+Factors that need to be taken into account for scheduling decisions include
 individual and collective resource requirements, hardware / software /
 policy constraints, affinity and anti-affinity specifications, data
 locality, inter-workload interference, and so on.
@@ -173,15 +173,15 @@ of the scheduler:
 * Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
 * Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
 * Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler
-* Read the [kube-scheduler config (v1beta1)](/docs/reference/config-api/kube-scheduler-config.v1beta1/) reference
+* Read the [kube-scheduler config (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/) reference
 * Learn about [configuring multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
 * Learn about [topology management policies](/docs/tasks/administer-cluster/topology-manager/)
 * Learn about [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
 -->
 * 阅读关于 [调度器性能调优](/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
 * 阅读关于 [Pod 拓扑分布约束](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
 * 阅读关于 kube-scheduler 的 [参考文档](/zh/docs/reference/command-line-tools-reference/kube-scheduler/)
-* 阅读 [kube-scheduler 配置参考 (v1beta1)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta1/)
+* 阅读 [kube-scheduler 配置参考 (v1beta2)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta2/)
 * 了解关于 [配置多个调度器](/zh/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) 的方式
 * 了解关于 [拓扑结构管理策略](/zh/docs/tasks/administer-cluster/topology-manager/)
 * 了解关于 [Pod 额外开销](/zh/docs/concepts/scheduling-eviction/pod-overhead/)
````

content/zh/docs/concepts/scheduling-eviction/pod-priority-preemption.md

Lines changed: 14 additions & 13 deletions
````diff
@@ -432,17 +432,17 @@ the Node is not considered for preemption.
 {{< /note >}}

 <!--
-If a pending Pod has inter-pod affinity to one or more of the lower-priority
-Pods on the Node, the inter-Pod affinity rule cannot be satisfied in the absence
-of those lower-priority Pods. In this case, the scheduler does not preempt any
-Pods on the Node. Instead, it looks for another Node. The scheduler might find a
-suitable Node or it might not. There is no guarantee that the pending Pod can be
-scheduled.
+If a pending Pod has inter-pod {{< glossary_tooltip text="affinity" term_id="affinity" >}}
+to one or more of the lower-priority Pods on the Node, the inter-Pod affinity
+rule cannot be satisfied in the absence of those lower-priority Pods. In this case,
+the scheduler does not preempt any Pods on the Node. Instead, it looks for another
+Node. The scheduler might find a suitable Node or it might not. There is no
+guarantee that the pending Pod can be scheduled.

 Our recommended solution for this problem is to create inter-Pod affinity only
 towards equal or higher priority Pods.
 -->
-如果悬决 Pod 与节点上的一个或多个较低优先级 Pod 具有 Pod 间亲和性,
+如果悬决 Pod 与节点上的一个或多个较低优先级 Pod 具有 Pod 间{{< glossary_tooltip text="亲和性" term_id="affinity" >}},
 则在没有这些较低优先级 Pod 的情况下,无法满足 Pod 间亲和性规则。
 在这种情况下,调度程序不会抢占节点上的任何 Pod。
 相反,它寻找另一个节点。调度程序可能会找到合适的节点,
@@ -620,33 +620,34 @@ Pod 优先级和 {{<glossary_tooltip text="QoS 类" term_id="qos-class" >}}
 或者最低优先级的 Pod 受 PodDisruptionBudget 保护时,才会考虑优先级较高的 Pod。

 <!--
-The kubelet uses Priority to determine pod order for [out-of-resource eviction](/docs/tasks/administer-cluster/out-of-resource/).
+The kubelet uses Priority to determine pod order for [node-pressure eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/).
 You can use the QoS class to estimate the order in which pods are most likely
 to get evicted. The kubelet ranks pods for eviction based on the following factors:

 1. Whether the starved resource usage exceeds requests
 1. Pod Priority
 1. Amount of resource usage relative to requests

-See [evicting end-user pods](/docs/tasks/administer-cluster/out-of-resource/#evicting-end-user-pods)
+See [evicting end-user pods](/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction)
 for more details.

-kubelet out-of-resource eviction does not evict Pods when their
+kubelet node-pressure eviction does not evict Pods when their
 usage does not exceed their requests. If a Pod with lower priority is not
 exceeding its requests, it won't be evicted. Another Pod with higher priority
 that exceeds its requests may be evicted.
 -->
 kubelet 使用优先级来确定
-[资源不足时驱逐](/zh/docs/tasks/administer-cluster/out-of-resource/) Pod 的顺序。
+[节点压力驱逐](/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/) Pod 的顺序。
 你可以使用 QoS 类来估计 Pod 最有可能被驱逐的顺序。kubelet 根据以下因素对 Pod 进行驱逐排名:

 1. 对紧俏资源的使用是否超过请求值
 1. Pod 优先级
 1. 相对于请求的资源使用量

-有关更多详细信息,请参阅[驱逐最终用户的 Pod](/zh/docs/tasks/administer-cluster/out-of-resource/#evicting-end-user-pods)。
+有关更多详细信息,请参阅
+[kubelet 驱逐时 Pod 的选择](/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction)。

-当某 Pod 的资源用量未超过其请求时,kubelet 资源不足驱逐不会驱逐该 Pod。
+当某 Pod 的资源用量未超过其请求时,kubelet 节点压力驱逐不会驱逐该 Pod。
 如果优先级较低的 Pod 没有超过其请求,则不会被驱逐。
 另一个优先级高于其请求的 Pod 可能会被驱逐。
````
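
As context for the priority discussion in this file: a Pod gets its priority by referencing a `PriorityClass`. A minimal sketch, with an illustrative name and value (not taken from the diff):

```yaml
# Hypothetical PriorityClass; the name, value, and description are made up.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "For workloads that may preempt lower-priority Pods."
---
# A Pod opts in by naming the class; the admission controller resolves it
# to spec.priority, which drives preemption and eviction ordering.
apiVersion: v1
kind: Pod
metadata:
  name: important-app
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx
```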

content/zh/docs/concepts/scheduling-eviction/resource-bin-packing.md

Lines changed: 45 additions & 35 deletions
````diff
@@ -32,60 +32,70 @@ The kube-scheduler can be configured to enable bin packing of resources along wi
 <!--
 ## Enabling Bin Packing using RequestedToCapacityRatioResourceAllocation

-Before Kubernetes 1.15, Kube-scheduler used to allow scoring nodes based on the request to capacity ratio of primary resources like CPU and Memory. Kubernetes 1.16 added a new parameter to the priority function that allows the users to specify the resources along with weights for each resource to score nodes based on the request to capacity ratio. This allows users to bin pack extended resources by using appropriate parameters and improves the utilization of scarce resources in large clusters. The behavior of the `RequestedToCapacityRatioResourceAllocation` priority function can be controlled by a configuration option called `requestedToCapacityRatioArguments`. This argument consists of two parameters `shape` and `resources`. Shape allows the user to tune the function as least requested or most requested based on `utilization` and `score` values. Resources
-consists of `name` which specifies the resource to be considered during scoring and `weight` specify the weight of each resource.
+Kubernetes allows the users to specify the resources along with weights for
+each resource to score nodes based on the request to capacity ratio. This
+allows users to bin pack extended resources by using appropriate parameters
+and improves the utilization of scarce resources in large clusters. The
+behavior of the `RequestedToCapacityRatioResourceAllocation` priority function
+can be controlled by a configuration option called `RequestedToCapacityRatioArgs`.
+This argument consists of two parameters `shape` and `resources`. The `shape`
+parameter allows the user to tune the function as least requested or most
+requested based on `utilization` and `score` values. The `resources` parameter
+consists of `name` of the resource to be considered during scoring and `weight`
+specify the weight of each resource.
+
 -->

 ## 使用 RequestedToCapacityRatioResourceAllocation 启用装箱

-在 Kubernetes 1.15 之前,Kube-scheduler 通常允许根据对主要资源(如 CPU 和内存)
-的请求数量和可用容量 之比率对节点评分。
-Kubernetes 1.16 在优先级函数中添加了一个新参数,该参数允许用户指定资源以及每类资源的权重,
+Kubernetes 允许用户指定资源以及每类资源的权重,
 以便根据请求数量与可用容量之比率为节点评分。
 这就使得用户可以通过使用适当的参数来对扩展资源执行装箱操作,从而提高了大型集群中稀缺资源的利用率。
 `RequestedToCapacityRatioResourceAllocation` 优先级函数的行为可以通过名为
-`requestedToCapacityRatioArguments` 的配置选项进行控制。
+`RequestedToCapacityRatioArgs` 的配置选项进行控制。
 该标志由两个参数 `shape` 和 `resources` 组成。
-`shape` 允许用户根据 `utilization` 和 `score` 值将函数调整为最少请求
-(least requested)或
-最多请求(most requested)计算。
+`shape` 允许用户根据 `utilization` 和 `score` 值将函数调整为
+最少请求(least requested)或最多请求(most requested)计算。
 `resources` 包含由 `name` 和 `weight` 组成,`name` 指定评分时要考虑的资源,
 `weight` 指定每种资源的权重。

 <!--
-Below is an example configuration that sets `requestedToCapacityRatioArguments` to bin packing behavior for extended resources `intel.com/foo` and `intel.com/bar`
+Below is an example configuration that sets
+`requestedToCapacityRatioArguments` to bin packing behavior for extended
+resources `intel.com/foo` and `intel.com/bar`.
 -->

 以下是一个配置示例,该配置将 `requestedToCapacityRatioArguments` 设置为对扩展资源
 `intel.com/foo` 和 `intel.com/bar` 的装箱行为

-```json
-{
-    "kind": "Policy",
-    "apiVersion": "v1",
-    ...
-    "priorities": [
-        ...
-        {
-            "name": "RequestedToCapacityRatioPriority",
-            "weight": 2,
-            "argument": {
-                "requestedToCapacityRatioArguments": {
-                    "shape": [
-                        {"utilization": 0, "score": 0},
-                        {"utilization": 100, "score": 10}
-                    ],
-                    "resources": [
-                        {"name": "intel.com/foo", "weight": 3},
-                        {"name": "intel.com/bar", "weight": 5}
-                    ]
-                }
-            }
-        }
-    ],
-}
+```yaml
+apiVersion: kubescheduler.config.k8s.io/v1beta1
+kind: KubeSchedulerConfiguration
+profiles:
+# ...
+  pluginConfig:
+  - name: RequestedToCapacityRatio
+    args:
+      shape:
+      - utilization: 0
+        score: 10
+      - utilization: 100
+        score: 0
+      resources:
+      - name: intel.com/foo
+        weight: 3
+      - name: intel.com/bar
+        weight: 5
 ```

+<!--
+Referencing the `KubeSchedulerConfiguration` file with the kube-scheduler
+flag `--config=/path/to/config/file` will pass the configuration to the
+scheduler.
+-->
+使用 kube-scheduler 标志 `--config=/path/to/config/file`
+引用 `KubeSchedulerConfiguration` 文件将配置传递给调度器。
+
 <!--
 **This feature is disabled by default**
 -->
````
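
For intuition about the `shape` in the new example above: each resource's requested-to-capacity utilization is mapped through the piecewise-linear shape function, and the per-resource scores are combined as a weighted average. A hypothetical calculation under the least-requested shape shown (0% scores 10, 100% scores 0); all numbers are illustrative, not from the diff:

```yaml
# Hypothetical node:
# intel.com/foo: requested/capacity = 50% -> shape(50) = 5.0, weight 3
# intel.com/bar: requested/capacity = 25% -> shape(25) = 7.5, weight 5
# node score = (3 * 5.0 + 5 * 7.5) / (3 + 5) = 52.5 / 8, roughly 6.6
```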

content/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -81,11 +81,11 @@ kube-scheduler 的表现等价于设置值为 100。

 <!--
 To change the value, edit the
-[kube-scheduler configuration file](/docs/reference/config-api/kube-scheduler-config.v1beta1/)
+[kube-scheduler configuration file](/docs/reference/config-api/kube-scheduler-config.v1beta2/)
 and then restart the scheduler.
 In many cases, the configuration file can be found at `/etc/kubernetes/config/kube-scheduler.yaml`
 -->
-要修改这个值,先编辑 [kube-scheduler 的配置文件](/zh/docs/reference/config-api/kube-scheduler-config.v1beta1/)
+要修改这个值,先编辑 [kube-scheduler 的配置文件](/zh/docs/reference/config-api/kube-scheduler-config.v1beta2/)
 然后重启调度器。
 大多数情况下,这个配置文件是 `/etc/kubernetes/config/kube-scheduler.yaml`

@@ -298,6 +298,6 @@ After going over all the Nodes, it goes back to Node 1.

 ## {{% heading "whatsnext" %}}

-<!-- * Check the [kube-scheduler configuration reference (v1beta1)](/docs/reference/config-api/kube-scheduler-config.v1beta1/) -->
+<!-- * Check the [kube-scheduler configuration reference (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/) -->

-* 参见 [kube-scheduler 配置参考 (v1beta1)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta1/)
+* 参见 [kube-scheduler 配置参考 (v1beta2)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta2/)
````
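
For reference, the value this file discusses sits at the top level of the scheduler configuration. A minimal sketch; the `50` is an illustrative value, not from the diff:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
# Score only this share of the nodes that pass filtering; lower values
# reduce scheduling latency on large clusters at some cost to placement
# optimality.
percentageOfNodesToScore: 50
```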

content/zh/docs/concepts/scheduling-eviction/scheduling-framework.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -16,7 +16,7 @@ weight: 90

 <!-- overview -->

-{{< feature-state for_k8s_version="1.15" state="alpha" >}}
+{{< feature-state for_k8s_version="1.19" state="stable" >}}

 <!--
 The scheduling framework is a pluggable architecture for the Kubernetes scheduler.
````
