Commit 86a9bfd

Merge pull request #39677 from Zhuzhenghao/2-26
[zh] resync page in scheduling-eviction (Part II)
2 parents (b5dfc75 + 3d9ef70), commit 86a9bfd

4 files changed, +49 -51 lines changed

content/zh-cn/docs/concepts/scheduling-eviction/_index.md

Lines changed: 2 additions & 2 deletions
@@ -15,8 +15,8 @@ weight: 95
 content_type: concept
 description: >
   In Kubernetes, scheduling refers to making sure that Pods are matched to Nodes
-  so that the kubelet can run them. Preemption is the process of terminating
-  Pods with lower Priority so that Pods with higher Priority can schedule on
+  so that the kubelet can run them. Preemption is the process of terminating
+  Pods with lower Priority so that Pods with higher Priority can schedule on
   Nodes. Eviction is the process of proactively terminating one or more Pods on
   resource-starved Nodes.
 no_list: true

content/zh-cn/docs/concepts/scheduling-eviction/api-eviction.md

Lines changed: 23 additions & 21 deletions
@@ -3,26 +3,24 @@ title: API 发起的驱逐
 content_type: concept
 weight: 110
 ---
-<!--
----
+<!--
 title: API-initiated Eviction
 content_type: concept
 weight: 110
----
 -->
 {{< glossary_definition term_id="api-eviction" length="short" >}} </br>

-<!--
+<!--
 You can request eviction by calling the Eviction API directly, or programmatically
 using a client of the {{<glossary_tooltip term_id="kube-apiserver" text="API server">}}, like the `kubectl drain` command. This
 creates an `Eviction` object, which causes the API server to terminate the Pod.

 API-initiated evictions respect your configured [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)
-and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).
+and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).

 Using the API to create an Eviction object for a Pod is like performing a
 policy-controlled [`DELETE` operation](/docs/reference/kubernetes-api/workload-resources/pod-v1/#delete-delete-a-pod)
-on the Pod.
+on the Pod.
 -->
 你可以通过直接调用 Eviction API 发起驱逐,也可以通过编程的方式使用
 {{<glossary_tooltip term_id="kube-apiserver" text="API 服务器">}}的客户端来发起驱逐,
@@ -53,8 +51,10 @@ POST the attempted operation, similar to the following example:
 {{< tabs name="Eviction_example" >}}
 {{% tab name="policy/v1" %}}
 {{< note >}}
-<!-- `policy/v1` Eviction is available in v1.22+. Use `policy/v1beta1` with prior releases. -->
-`policy/v1` 版本的 Eviction 在 v1.22 以及更高的版本中可用,之前的发行版本使用 `policy/v1beta1` 版本。
+<!--
+`policy/v1` Eviction is available in v1.22+. Use `policy/v1beta1` with prior releases.
+-->
+`policy/v1` 版本的 Eviction 在 v1.22 以及更高的版本中可用,之前的发行版本使用 `policy/v1beta1` 版本。
 {{< /note >}}

 ```json
@@ -70,7 +70,9 @@ POST the attempted operation, similar to the following example:
 {{% /tab %}}
 {{% tab name="policy/v1beta1" %}}
 {{< note >}}
-<!-- Deprecated in v1.22 in favor of `policy/v1` -->
+<!--
+Deprecated in v1.22 in favor of `policy/v1`
+-->
 在 v1.22 版本废弃以支持 `policy/v1`
 {{< /note >}}

@@ -87,7 +89,7 @@ POST the attempted operation, similar to the following example:
 {{% /tab %}}
 {{< /tabs >}}

-<!--
+<!--
 Alternatively, you can attempt an eviction operation by accessing the API using
 `curl` or `wget`, similar to the following example:
 -->
@@ -97,7 +99,7 @@ Alternatively, you can attempt an eviction operation by accessing the API using
 curl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.example/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json
 ```

-<!--
+<!--
 ## How API-initiated eviction works

 When you request an eviction using the API, the API server performs admission
@@ -108,13 +110,13 @@ checks and responds in one of the following ways:

 当你使用 API 来请求驱逐时,API 服务器将执行准入检查,并通过以下方式之一做出响应:

-<!--
+<!--
 * `200 OK`: the eviction is allowed, the `Eviction` subresource is created, and
 the Pod is deleted, similar to sending a `DELETE` request to the Pod URL.
 * `429 Too Many Requests`: the eviction is not currently allowed because of the
 configured {{<glossary_tooltip term_id="pod-disruption-budget" text="PodDisruptionBudget">}}.
 You may be able to attempt the eviction again later. You might also see this
-response because of API rate limiting.
+response because of API rate limiting.
 * `500 Internal Server Error`: the eviction is not allowed because there is a
 misconfiguration, like if multiple PodDisruptionBudgets reference the same Pod.
 -->
@@ -128,7 +130,7 @@ checks and responds in one of the following ways:
 <!--
 If the Pod you want to evict isn't part of a workload that has a
 PodDisruptionBudget, the API server always returns `200 OK` and allows the
-eviction.
+eviction.

 If the API server allows the eviction, the Pod is deleted as follows:
 -->
@@ -158,18 +160,18 @@ API 服务器总是返回 `200 OK` 并且允许驱逐。
 1. 本地运行状态的 Pod 所处的节点上的 {{<glossary_tooltip term_id="kubelet" text="kubelet">}}
 注意到 `Pod` 资源被标记为终止,并开始优雅停止本地 Pod。
 1. 当 kubelet 停止 Pod 时,控制面从 {{<glossary_tooltip term_id="endpoint" text="Endpoint">}}
-和 {{<glossary_tooltip term_id="endpoint-slice" text="EndpointSlice">}}
+和 {{<glossary_tooltip term_id="endpoint-slice" text="EndpointSlice">}}
 对象中移除该 Pod。因此,控制器不再将此 Pod 视为有用对象。
 1. Pod 的宽限期到期后,kubelet 强制终止本地 Pod。
 1. kubelet 告诉 API 服务器删除 `Pod` 资源。
 1. API 服务器删除 `Pod` 资源。

-<!--
+<!--
 ## Troubleshooting stuck evictions

 In some cases, your applications may enter a broken state, where the Eviction
-API will only return `429` or `500` responses until you intervene. This can
-happen if, for example, a ReplicaSet creates pods for your application but new
+API will only return `429` or `500` responses until you intervene. This can
+happen if, for example, a ReplicaSet creates pods for your application but new
 pods do not enter a `Ready` state. You may also notice this behavior in cases
 where the last evicted Pod had a long termination grace period.
 -->
@@ -181,8 +183,8 @@ where the last evicted Pod had a long termination grace period.
 但新的 Pod 没有进入 `Ready` 状态,就会发生这种情况。
 在最后一个被驱逐的 Pod 有很长的终止宽限期的情况下,你可能也会注意到这种行为。

-<!--
-If you notice stuck evictions, try one of the following solutions:
+<!--
+If you notice stuck evictions, try one of the following solutions:

 * Abort or pause the automated operation causing the issue. Investigate the stuck
 application before you restart the operation.
@@ -196,7 +198,7 @@ If you notice stuck evictions, try one of the following solutions:

 ## {{% heading "whatsnext" %}}

-<!--
+<!--
 * Learn how to protect your applications with a [Pod Disruption Budget](/docs/tasks/run-application/configure-pdb/).
 * Learn about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/).
 * Learn about [Pod Priority and Preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/).
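For background on the eviction flow this page describes, the following is a minimal, illustrative client-go sketch of requesting an eviction programmatically. It is not part of this commit; the kubeconfig path is an assumption, and the `default`/`quux` names mirror the curl example above.

```go
// Illustrative sketch only: evict a Pod through the Eviction API with client-go.
// The kubeconfig path below is an assumption.
package main

import (
	"context"
	"fmt"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a local kubeconfig (assumed path).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Creating the Eviction subresource is equivalent to POSTing
	// .../namespaces/default/pods/quux/eviction, as in the curl example above.
	eviction := &policyv1.Eviction{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "quux",
			Namespace: "default",
		},
	}
	// EvictV1 returns an error when the API server rejects the eviction,
	// for example the 429 (PodDisruptionBudget) or 500 cases described above.
	if err := clientset.CoreV1().Pods("default").EvictV1(context.TODO(), eviction); err != nil {
		fmt.Println("eviction not allowed:", err)
		return
	}
	fmt.Println("eviction created; the API server will terminate the Pod")
}
```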

content/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node.md

Lines changed: 22 additions & 24 deletions
@@ -8,7 +8,6 @@ weight: 20
 reviewers:
 - davidopp
 - kevin-wangzefeng
-- bsalamat
 - alculquicondor
 title: Assigning Pods to Nodes
 content_type: concept
@@ -18,13 +17,13 @@ weight: 20
 <!-- overview -->

 <!--
-You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it is
+You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it is
 _restricted_ to run on particular {{< glossary_tooltip text="node(s)" term_id="node" >}},
 or to _prefer_ to run on particular nodes.
 There are several ways to do this and the recommended approaches all use
 [label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
 Often, you do not need to set any such constraints; the
-{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} will automatically do a reasonable placement
+{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} will automatically do a reasonable placement
 (for example, spreading your Pods across nodes so as not place Pods on a node with insufficient free resources).
 However, there are some circumstances where you may want to control which node
 the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it,
@@ -46,10 +45,10 @@ Pod 被部署到哪个节点。例如,确保 Pod 最终落在连接了 SSD 的
 You can use any of the following methods to choose where Kubernetes schedules
 specific Pods:

-* [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels)
-* [Affinity and anti-affinity](#affinity-and-anti-affinity)
-* [nodeName](#nodename) field
-* [Pod topology spread constraints](#pod-topology-spread-constraints)
+* [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels)
+* [Affinity and anti-affinity](#affinity-and-anti-affinity)
+* [nodeName](#nodename) field
+* [Pod topology spread constraints](#pod-topology-spread-constraints)
 -->
 你可以使用下列方法中的任何一种来选择 Kubernetes 对特定 Pod 的调度:

@@ -90,7 +89,7 @@ and a different value in other environments.
 Adding labels to nodes allows you to target Pods for scheduling on specific
 nodes or groups of nodes. You can use this functionality to ensure that specific
 Pods only run on nodes with certain isolation, security, or regulatory
-properties.
+properties.
 -->
 ## 节点隔离/限制 {#node-isolation-restriction}

@@ -110,7 +109,7 @@ itself so that the scheduler schedules workloads onto the compromised node.
 <!--
 The [`NodeRestriction` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
 prevents the kubelet from setting or modifying labels with a
-`node-restriction.kubernetes.io/` prefix.
+`node-restriction.kubernetes.io/` prefix.

 To make use of that label prefix for node isolation:
 -->
@@ -138,7 +137,7 @@ kubelet 使用 `node-restriction.kubernetes.io/` 前缀设置或修改标签。
 You can add the `nodeSelector` field to your Pod specification and specify the
 [node labels](#built-in-node-labels) you want the target node to have.
 Kubernetes only schedules the Pod onto nodes that have each of the labels you
-specify.
+specify.
 -->
 `nodeSelector` 是节点选择约束的最简单推荐形式。你可以将 `nodeSelector` 字段添加到
 Pod 的规约中设置你希望目标节点所具有的[节点标签](#built-in-node-labels)
@@ -182,7 +181,7 @@ define. Some of the benefits of affinity and anti-affinity include:
 The affinity feature consists of two types of affinity:

 * *Node affinity* functions like the `nodeSelector` field but is more expressive and
-allows you to specify soft rules.
+allows you to specify soft rules.
 * *Inter-pod affinity/anti-affinity* allows you to constrain Pods against labels
 on other Pods.
 -->
@@ -263,7 +262,7 @@ interpreting the rules. You can use `In`, `NotIn`, `Exists`, `DoesNotExist`,

 <!--
 `NotIn` and `DoesNotExist` allow you to define node anti-affinity behavior.
-Alternatively, you can use [node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/)
+Alternatively, you can use [node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/)
 to repel Pods from specific nodes.
 -->
 `NotIn``DoesNotExist` 可用来实现节点反亲和性行为。
@@ -323,7 +322,7 @@ The final sum is added to the score of other priority functions for the node.
 Nodes with the highest total score are prioritized when the scheduler makes a
 scheduling decision for the Pod.

-For example, consider the following Pod spec:
+For example, consider the following Pod spec:
 -->
 最终的加和值会添加到该节点的其他优先级函数的评分之上。
 在调度器为 Pod 作出调度决定时,总分最高的节点的优先级也最高。
@@ -550,7 +549,7 @@ The affinity rule says that the scheduler can only schedule a Pod onto a node if
 the node is in the same zone as one or more existing Pods with the label
 `security=S1`. More precisely, the scheduler must place the Pod on a node that has the
 `topology.kubernetes.io/zone=V` label, as long as there is at least one node in
-that zone that currently has one or more Pods with the Pod label `security=S1`.
+that zone that currently has one or more Pods with the Pod label `security=S1`.
 -->
 亲和性规则表示,仅当节点和至少一个已运行且有 `security=S1` 的标签的
 Pod 处于同一区域时,才可以将该 Pod 调度到节点上。
@@ -615,7 +614,7 @@ affinity/anti-affinity definition appears.
 -->
 除了 `labelSelector` 和 `topologyKey`,你也可以指定 `labelSelector`
 要匹配的命名空间列表,方法是在 `labelSelector` 和 `topologyKey`
-所在层同一层次上设置 `namespaces`。
+所在层同一层次上设置 `namespaces`。
 如果 `namespaces` 被忽略或者为空,则默认为 Pod 亲和性/反亲和性的定义所在的命名空间。

 <!--
@@ -628,7 +627,7 @@ affinity/anti-affinity definition appears.
 <!--
 You can also select matching namespaces using `namespaceSelector`, which is a label query over the set of namespaces.
 The affinity term is applied to namespaces selected by both `namespaceSelector` and the `namespaces` field.
-Note that an empty `namespaceSelector` ({}) matches all namespaces, while a null or empty `namespaces` list and
+Note that an empty `namespaceSelector` ({}) matches all namespaces, while a null or empty `namespaces` list and
 null `namespaceSelector` matches the namespace of the Pod where the rule is defined.
 -->
 用户也可以使用 `namespaceSelector` 选择匹配的名字空间,`namespaceSelector`
@@ -641,7 +640,7 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi
 #### More practical use-cases

 Inter-pod affinity and anti-affinity can be even more useful when they are used with higher
-level collections such as ReplicaSets, StatefulSets, Deployments, etc. These
+level collections such as ReplicaSets, StatefulSets, Deployments, etc. These
 rules allow you to configure that a set of workloads should
 be co-located in the same defined topology; for example, preferring to place two related
 Pods onto the same node.
@@ -664,7 +663,7 @@ affinity and anti-affinity to co-locate the web servers with the cache as much a
 你可以使用 Pod 间的亲和性和反亲和性来尽可能地将该 Web 服务器与缓存并置。

 <!--
-In the following example Deployment for the redis cache, the replicas get the label `app=store`. The
+In the following example Deployment for the Redis cache, the replicas get the label `app=store`. The
 `podAntiAffinity` rule tells the scheduler to avoid placing multiple replicas
 with the `app=store` label on a single node. This creates each cache in a
 separate node.
@@ -764,9 +763,9 @@ where each web server is co-located with a cache, on three separate nodes.
 | *webserver-1* | *webserver-2* | *webserver-3* |
 | *cache-1* | *cache-2* | *cache-3* |

-<!--
+<!--
 The overall effect is that each cache instance is likely to be accessed by a single client, that
-is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
+is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
 -->
 总体效果是每个缓存实例都非常可能被在同一个节点上运行的某个客户端访问。
 这种方法旨在最大限度地减少偏差(负载不平衡)和延迟。
@@ -849,7 +848,7 @@ The above Pod will only run on the node `kube-01`.
 -->
 上面的 Pod 只能运行在节点 `kube-01` 之上。

-<!--
+<!--
 ## Pod topology spread constraints

 You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}}
@@ -858,7 +857,7 @@ topology domains that you define. You might do this to improve performance, expe
 overall utilization.

 Read [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
-to learn more about how these work.
+to learn more about how these work.
 -->
 ## Pod 拓扑分布约束 {#pod-topology-spread-constraints}

@@ -877,7 +876,7 @@ to learn more about how these work.
 * Read the design docs for [node affinity](https://git.k8s.io/design-proposals-archive/scheduling/nodeaffinity.md)
 and for [inter-pod affinity/anti-affinity](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md).
 * Learn about how the [topology manager](/docs/tasks/administer-cluster/topology-manager/) takes part in node-level
-resource allocation decisions.
+resource allocation decisions.
 * Learn how to use [nodeSelector](/docs/tasks/configure-pod-container/assign-pods-nodes/).
 * Learn how to use [affinity and anti-affinity](/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/).
 -->
@@ -888,4 +887,3 @@ to learn more about how these work.
 * 了解[拓扑管理器](/zh-cn/docs/tasks/administer-cluster/topology-manager/)如何参与节点层面资源分配决定。
 * 了解如何使用 [nodeSelector](/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/)。
 * 了解如何使用[亲和性和反亲和性](/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/)。
-
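For background on the `nodeSelector` and node affinity fields discussed in this page, here is a minimal sketch using the k8s.io/api Go types. It is illustrative only and not part of this commit; the `disktype=ssd` label, the zone values, and the `nginx` image are assumptions.

```go
// Illustrative sketch only: a Pod spec that combines nodeSelector with a
// required node affinity term, expressed with the k8s.io/api Go types.
// The label keys, values, and image below are assumptions.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func examplePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "nginx", Image: "nginx"},
			},
			// nodeSelector: the node must carry every listed label.
			NodeSelector: map[string]string{
				"disktype": "ssd",
			},
			// Node affinity: a hard rule (requiredDuringScheduling...) that the
			// node must sit in one of the listed zones.
			Affinity: &corev1.Affinity{
				NodeAffinity: &corev1.NodeAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
						NodeSelectorTerms: []corev1.NodeSelectorTerm{
							{
								MatchExpressions: []corev1.NodeSelectorRequirement{
									{
										Key:      "topology.kubernetes.io/zone",
										Operator: corev1.NodeSelectorOpIn,
										Values:   []string{"antarctica-east1", "antarctica-west1"},
									},
								},
							},
						},
					},
				},
			},
		},
	}
}

func main() {
	pod := examplePod()
	fmt.Println(pod.Name, pod.Spec.NodeSelector, pod.Spec.Affinity != nil)
}
```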

content/zh-cn/docs/concepts/scheduling-eviction/resource-bin-packing.md

Lines changed: 2 additions & 4 deletions
@@ -4,21 +4,19 @@ content_type: concept
 weight: 80
 ---
 <!--
----
 reviewers:
 - bsalamat
 - k82cn
 - ahg-g
 title: Resource Bin Packing
 content_type: concept
 weight: 80
----
 -->

 <!-- overview -->

 <!--
-In the [scheduling-plugin](/docs/reference/scheduling/config/#scheduling-plugins) `NodeResourcesFit` of kube-scheduler, there are two
+In the [scheduling-plugin](/docs/reference/scheduling/config/#scheduling-plugins) `NodeResourcesFit` of kube-scheduler, there are two
 scoring strategies that support the bin packing of resources: `MostAllocated` and `RequestedToCapacityRatio`.
 -->
 在 kube-scheduler 的[调度插件](/zh-cn/docs/reference/scheduling/config/#scheduling-plugins)
@@ -85,7 +83,7 @@ the `NodeResourcesFit` score function can be controlled by the
 Within the `scoringStrategy` field, you can configure two parameters: `requestedToCapacityRatio` and
 `resources`. The `shape` in the `requestedToCapacityRatio`
 parameter allows the user to tune the function as least requested or most
-requested based on `utilization` and `score` values. The `resources` parameter
+requested based on `utilization` and `score` values. The `resources` parameter
 consists of `name` of the resource to be considered during scoring and `weight`
 specify the weight of each resource.
 -->
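For background on the `shape` tuning mentioned in this page, the following is a rough, self-contained sketch of the piecewise-linear idea behind `requestedToCapacityRatio` scoring. It is illustrative only, not part of this commit and not the scheduler's actual code; the sample shape points are assumptions.

```go
// Illustrative sketch only (not the scheduler's implementation): how a
// requestedToCapacityRatio "shape" maps node utilization to a score by linear
// interpolation. A rising shape favors bin packing (most requested); a
// falling shape favors spreading (least requested).
package main

import "fmt"

type shapePoint struct {
	utilization int // percent of capacity requested, 0-100
	score       int // score assigned at that utilization
}

// scoreFor interpolates linearly between the surrounding shape points.
func scoreFor(utilization int, shape []shapePoint) float64 {
	if utilization <= shape[0].utilization {
		return float64(shape[0].score)
	}
	for i := 1; i < len(shape); i++ {
		if utilization <= shape[i].utilization {
			lo, hi := shape[i-1], shape[i]
			frac := float64(utilization-lo.utilization) / float64(hi.utilization-lo.utilization)
			return float64(lo.score) + frac*float64(hi.score-lo.score)
		}
	}
	return float64(shape[len(shape)-1].score)
}

func main() {
	// A "most requested" style shape (assumed sample): fuller nodes score higher.
	binPacking := []shapePoint{{utilization: 0, score: 0}, {utilization: 100, score: 10}}
	for _, u := range []int{0, 30, 75, 100} {
		fmt.Printf("utilization %3d%% -> score %.1f\n", u, scoreFor(u, binPacking))
	}
}
```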
