Commit 72130f7 (2 parents: 4d5ddc5 + 55e17b8)

Merge pull request #20487 from adambkaplan/scheduling-eviction-2

mv "Assign Pods" and "Taints and Tolerations" concepts to "Scheduling and Eviction"

File tree: 23 files changed, +35 −33 lines

content/en/blog/_posts/2018-04-13-local-persistent-volumes-beta.md (1 addition, 1 deletion)

```diff
@@ -144,7 +144,7 @@ The local persistent volume beta feature is not complete by far. Some notable en
 
 [Pod disruption budget](/docs/concepts/workloads/pods/disruptions/) is also very important for those workloads that must maintain quorum. Setting a disruption budget for your workload ensures that it does not drop below quorum due to voluntary disruption events, such as node drains during upgrade.
 
-[Pod affinity and anti-affinity](/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature) ensures that your workloads stay either co-located or spread out across failure domains. If you have multiple local persistent volumes available on a single node, it may be preferable to specify an pod anti-affinity policy to spread your workload across nodes. Note that if you want multiple pods to share the same local persistent volume, you do not need to specify a pod affinity policy. The scheduler understands the locality constraints of the local persistent volume and schedules your pod to the correct node.
+[Pod affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature) ensures that your workloads stay either co-located or spread out across failure domains. If you have multiple local persistent volumes available on a single node, it may be preferable to specify an pod anti-affinity policy to spread your workload across nodes. Note that if you want multiple pods to share the same local persistent volume, you do not need to specify a pod affinity policy. The scheduler understands the locality constraints of the local persistent volume and schedules your pod to the correct node.
 
 ## Getting involved
```
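The paragraph touched in this file recommends a pod anti-affinity policy to spread pods using local persistent volumes across nodes. A minimal sketch of such a policy (the label `app: my-db`, the pod name, and the image are illustrative, not from the commit):

```yaml
# Sketch: require that no two pods labeled app=my-db land on the same node,
# so each pod's local persistent volume sits on a distinct node.
apiVersion: v1
kind: Pod
metadata:
  name: db-0
  labels:
    app: my-db        # illustrative label
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["my-db"]
        topologyKey: kubernetes.io/hostname   # "same node" = same hostname
  containers:
  - name: db
    image: k8s.gcr.io/pause:3.2               # placeholder image
```

With `topologyKey: kubernetes.io/hostname`, the scheduler refuses to co-locate two matching pods on one node; a softer `preferredDuringSchedulingIgnoredDuringExecution` variant would only discourage it.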

content/en/blog/_posts/2018-10-10-runtimeclass.md (1 addition, 1 deletion)

```diff
@@ -27,7 +27,7 @@ Why is RuntimeClass a pod level concept? The Kubernetes resource model expects c
 
 ## What's next?
 
-The RuntimeClass resource is an important foundation for surfacing runtime properties to the control plane. For example, to implement scheduler support for clusters with heterogeneous nodes supporting different runtimes, we might add [NodeAffinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) terms to the RuntimeClass definition. Another area to address is managing the variable resource requirements to run pods of different runtimes. The [Pod Overhead proposal](https://docs.google.com/document/d/1EJKT4gyl58-kzt2bnwkv08MIUZ6lkDpXcxkHqCvvAp4/preview) was an early take on this that aligns nicely with the RuntimeClass design, and may be pursued further.
+The RuntimeClass resource is an important foundation for surfacing runtime properties to the control plane. For example, to implement scheduler support for clusters with heterogeneous nodes supporting different runtimes, we might add [NodeAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) terms to the RuntimeClass definition. Another area to address is managing the variable resource requirements to run pods of different runtimes. The [Pod Overhead proposal](https://docs.google.com/document/d/1EJKT4gyl58-kzt2bnwkv08MIUZ6lkDpXcxkHqCvvAp4/preview) was an early take on this that aligns nicely with the RuntimeClass design, and may be pursued further.
 
 Many other RuntimeClass extensions have also been proposed, and will be revisited as the feature continues to develop and mature. A few more extensions that are being considered include:
```

content/en/docs/concepts/architecture/nodes.md (1 addition, 1 deletion)

```diff
@@ -191,7 +191,7 @@ all the Pod objects running on the node to be deleted from the API server, and f
 names.
 
 The node lifecycle controller automatically creates
-[taints](/docs/concepts/configuration/taint-and-toleration/) that represent conditions.
+[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that represent conditions.
 The scheduler takes the Node's taints into consideration when assigning a Pod to a Node.
 Pods can also have tolerations which let them tolerate a Node's taints.
```

content/en/docs/concepts/containers/runtime-class.md (1 addition, 1 deletion)

```diff
@@ -163,7 +163,7 @@ with the pod's tolerations in admission, effectively taking the union of the set
 by each.
 
 To learn more about configuring the node selector and tolerations, see [Assigning Pods to
-Nodes](/docs/concepts/configuration/assign-pod-node/).
+Nodes](/docs/concepts/scheduling-eviction/assign-pod-node/).
 
 [RuntimeClass admission controller]: /docs/reference/access-authn-authz/admission-controllers/#runtimeclass
```
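The docs page edited here covers RuntimeClass scheduling via a node selector and tolerations. A minimal sketch of a RuntimeClass that steers its pods onto matching nodes (the name `gvisor`, the `runsc` handler, and the `runtime` label/taint key are illustrative; `scheduling` exists in the `node.k8s.io/v1beta1` API of this era):

```yaml
# Sketch: pods that set runtimeClassName: gvisor get this nodeSelector and
# toleration merged in at admission time, steering them to prepared nodes.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor          # illustrative name
handler: runsc          # CRI handler configured on the runtime-capable nodes
scheduling:
  nodeSelector:
    runtime: gvisor     # nodes must carry this label
  tolerations:
  - key: runtime
    operator: Equal
    value: gvisor
    effect: NoSchedule  # lets pods onto nodes tainted runtime=gvisor:NoSchedule
```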

content/en/docs/concepts/overview/working-with-objects/labels.md (1 addition, 1 deletion)

```diff
@@ -226,6 +226,6 @@ selector:
 #### Selecting sets of nodes
 
 One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule.
-See the documentation on [node selection](/docs/concepts/configuration/assign-pod-node/) for more information.
+See the documentation on [node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information.
 
 {{% /capture %}}
```
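The node-selection use case mentioned in this file is the simplest form of label-based scheduling. A minimal sketch (the `disktype=ssd` label and pod name are illustrative):

```yaml
# Sketch: this pod only schedules onto nodes carrying the label disktype=ssd,
# e.g. applied with:  kubectl label nodes node1 disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: with-node-selector
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2   # placeholder image
```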

content/en/docs/concepts/configuration/assign-pod-node.md → content/en/docs/concepts/scheduling-eviction/assign-pod-node.md (renamed; 2 additions, 2 deletions)

```diff
@@ -155,7 +155,7 @@ value is `another-node-label-value` should be preferred.
 
 You can see the operator `In` being used in the example. The new node affinity syntax supports the following operators: `In`, `NotIn`, `Exists`, `DoesNotExist`, `Gt`, `Lt`.
 You can use `NotIn` and `DoesNotExist` to achieve node anti-affinity behavior, or use
-[node taints](/docs/concepts/configuration/taint-and-toleration/) to repel pods from specific nodes.
+[node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) to repel pods from specific nodes.
 
 If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied for the pod
 to be scheduled onto a candidate node.
@@ -392,7 +392,7 @@ The above pod will run on the node kube-01.
 
 {{% capture whatsnext %}}
 
-[Taints](/docs/concepts/configuration/taint-and-toleration/) allow a Node to *repel* a set of Pods.
+[Taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) allow a Node to *repel* a set of Pods.
 
 The design documents for
 [node affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)
```
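The first hunk in this renamed file mentions using `NotIn` to get node anti-affinity behavior. A minimal sketch of that pattern (the pod name and the choice of `kubernetes.io/arch` as the key are illustrative):

```yaml
# Sketch: hard node anti-affinity via the NotIn operator -- this pod may
# schedule onto any node EXCEPT those labeled kubernetes.io/arch=arm64.
apiVersion: v1
kind: Pod
metadata:
  name: not-on-arm
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: NotIn
            values: ["arm64"]
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2   # placeholder image
```

`DoesNotExist` works the same way but matches on the absence of the label key rather than on its value.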

content/en/docs/concepts/scheduling-eviction/kube-scheduler.md (1 addition, 1 deletion)

```diff
@@ -1,7 +1,7 @@
 ---
 title: Kubernetes Scheduler
 content_template: templates/concept
-weight: 50
+weight: 10
 ---
 
 {{% capture overview %}}
```

content/en/docs/concepts/configuration/taint-and-toleration.md → content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md (renamed; 1 addition, 1 deletion)

```diff
@@ -10,7 +10,7 @@ weight: 40
 
 
 {{% capture overview %}}
-[_Node affinity_](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity),
+[_Node affinity_](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity),
 is a property of {{< glossary_tooltip text="Pods" term_id="pod" >}} that *attracts* them to
 a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (either as a preference or a
 hard requirement). _Taints_ are the opposite -- they allow a node to repel a set of pods.
```
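This renamed page describes taints repelling pods and tolerations letting pods through. A minimal sketch of the pairing (the `dedicated=gpu` key/value, node name, and pod name are illustrative):

```yaml
# Sketch: after tainting a node so ordinary pods are repelled, e.g.
#   kubectl taint nodes node1 dedicated=gpu:NoSchedule
# only pods with a matching toleration may schedule there:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"   # must match the taint's effect
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2   # placeholder image
```

Note the toleration only *permits* scheduling onto the tainted node; to *force* the pod there as well, combine it with node affinity or a node selector.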

content/en/docs/concepts/services-networking/service.md (1 addition, 1 deletion)

```diff
@@ -905,7 +905,7 @@ the NLB Target Group's health check on the auto-assigned
 `.spec.healthCheckNodePort` and not receive any traffic.
 
 In order to achieve even traffic, either use a DaemonSet or specify a
-[pod anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
+[pod anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
 to not locate on the same node.
 
 You can also use NLB Services with the [internal load balancer](/docs/concepts/services-networking/service/#internal-load-balancer)
```

content/en/docs/concepts/storage/storage-classes.md (2 additions, 2 deletions)

```diff
@@ -169,9 +169,9 @@ will delay the binding and provisioning of a PersistentVolume until a Pod using
 PersistentVolumes will be selected or provisioned conforming to the topology that is
 specified by the Pod's scheduling constraints. These include, but are not limited to, [resource
 requirements](/docs/concepts/configuration/manage-compute-resources-container),
-[node selectors](/docs/concepts/configuration/assign-pod-node/#nodeselector),
+[node selectors](/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector),
 [pod affinity and
-anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity),
+anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity),
 and [taints and tolerations](/docs/concepts/configuration/taint-and-toleration).
 
 The following plugins support `WaitForFirstConsumer` with dynamic provisioning:
```
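The surrounding passage describes `WaitForFirstConsumer`, which defers volume binding until a consuming Pod is scheduled so the volume's topology can honor that Pod's scheduling constraints. A minimal StorageClass sketch enabling this mode (the class name is illustrative; `no-provisioner` is the static-provisioning case used with local volumes):

```yaml
# Sketch: binding of PVs from this class is delayed until a Pod that uses
# the PVC is scheduled, so the chosen PV matches the Pod's node topology.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage           # illustrative name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```

Without this (the default `Immediate` mode), the PV is bound when the PVC is created, before the scheduler has seen the Pod's node selectors, affinity rules, or tolerations.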
