
Commit 65d9f57

Merge pull request #78674 from abrennan89/OSDOCS-4526
OSDOCS-4526: Improvement of the docs about Pod Topology Spread Constraints under Nodes
2 parents ed521cf + 00a3c93 commit 65d9f57

5 files changed (+91, -106 lines)

modules/nodes-scheduler-pod-topology-spread-constraints-about.adoc

Lines changed: 0 additions & 15 deletions
This file was deleted.

modules/nodes-scheduler-pod-topology-spread-constraints-configuring.adoc

Lines changed: 0 additions & 70 deletions
This file was deleted.

modules/nodes-scheduler-pod-topology-spread-constraints-examples.adoc

Lines changed: 48 additions & 15 deletions
@@ -2,18 +2,53 @@
 //
 // * nodes/scheduling/nodes-scheduler-pod-topology-spread-constraints
 
+:_mod-docs-content-type: REFERENCE
 [id="nodes-scheduler-pod-topology-spread-constraints-examples_{context}"]
-= Example pod topology spread constraints
+= Example configurations for pod topology spread constraints
 
-The following examples demonstrate pod topology spread constraint configurations.
-
-[id="nodes-scheduler-pod-topology-spread-constraints-example-single_{context}"]
-== Single pod topology spread constraint example
+You can specify which pods to group together, which topology domains they are spread among, and the acceptable skew.
 
-// TODO: Add a diagram?
+The following examples demonstrate pod topology spread constraint configurations.
 
-This example `Pod` spec defines one pod topology spread constraint. It matches on pods labeled `region: us-east`, distributes among zones, specifies a skew of `1`, and does not schedule the pod if it does not meet these requirements.
+.Example to distribute pods that match the specified labels based on their zone
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-pod
+  labels:
+    region: us-east
+spec:
+  securityContext:
+    runAsNonRoot: true
+    seccompProfile:
+      type: RuntimeDefault
+  topologySpreadConstraints:
+  - maxSkew: 1 <1>
+    topologyKey: topology.kubernetes.io/zone <2>
+    whenUnsatisfiable: DoNotSchedule <3>
+    labelSelector: <4>
+      matchLabels:
+        region: us-east <5>
+    matchLabelKeys:
+      - my-pod-label <6>
+  containers:
+  - image: "docker.io/ocpqe/hello-pod"
+    name: hello-pod
+    securityContext:
+      allowPrivilegeEscalation: false
+      capabilities:
+        drop: [ALL]
+----
+<1> The maximum difference in number of pods between any two topology domains. The default is `1`, and you cannot specify a value of `0`.
+<2> The key of a node label. Nodes with this key and identical value are considered to be in the same topology.
+<3> How to handle a pod if it does not satisfy the spread constraint. The default is `DoNotSchedule`, which tells the scheduler not to schedule the pod. Set to `ScheduleAnyway` to still schedule the pod, but the scheduler prioritizes honoring the skew to not make the cluster more imbalanced.
+<4> Pods that match this label selector are counted and recognized as a group when spreading to satisfy the constraint. Be sure to specify a label selector, otherwise no pods can be matched.
+<5> Be sure that this `Pod` spec also sets its labels to match this label selector if you want it to be counted properly in the future.
+<6> A list of pod label keys to select which pods to calculate spreading over.
 
+.Example demonstrating a single pod topology spread constraint
 [source,yaml]
 ----
 kind: Pod
@@ -43,15 +78,9 @@ spec:
         drop: [ALL]
 ----
 
-[id="nodes-scheduler-pod-topology-spread-constraints-example-multiple_{context}"]
-== Multiple pod topology spread constraints example
-
-// TODO: Add a diagram?
-
-This example `Pod` spec defines two pod topology spread constraints. Both match on pods labeled `region: us-east`, specify a skew of `1`, and do not schedule the pod if it does not meet these requirements.
-
-The first constraint distributes pods based on a user-defined label `node`, and the second constraint distributes pods based on a user-defined label `rack`. Both constraints must be met for the pod to be scheduled.
+The previous example defines a `Pod` spec with one pod topology spread constraint. It matches on pods labeled `region: us-east`, distributes among zones, specifies a skew of `1`, and does not schedule the pod if it does not meet these requirements.
 
+.Example demonstrating multiple pod topology spread constraints
 [source,yaml]
 ----
 kind: Pod

@@ -86,3 +115,7 @@ spec:
       capabilities:
         drop: [ALL]
 ----
+
+The previous example defines a `Pod` spec with two pod topology spread constraints. Both match on pods labeled `region: us-east`, specify a skew of `1`, and do not schedule the pod if it does not meet these requirements.
+
+The first constraint distributes pods based on a user-defined label `node`, and the second constraint distributes pods based on a user-defined label `rack`. Both constraints must be met for the pod to be scheduled.
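
The middle of the multiple-constraints example is collapsed in this diff view. Based on the surrounding text (two constraints on the user-defined `node` and `rack` labels, each with `maxSkew: 1` and `DoNotSchedule`), the committed spec likely resembles the following sketch; the exact YAML is not shown in this view:

[source,yaml]
----
kind: Pod
apiVersion: v1
metadata:
  name: my-pod
  labels:
    region: us-east
spec:
  topologySpreadConstraints:
  # First constraint: spread across nodes via the user-defined "node" label.
  - maxSkew: 1
    topologyKey: node
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        region: us-east
  # Second constraint: also spread across the user-defined "rack" label.
  # Both constraints must be satisfied before the scheduler places the pod.
  - maxSkew: 1
    topologyKey: rack
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        region: us-east
  containers:
  - image: "docker.io/ocpqe/hello-pod"
    name: hello-pod
----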
modules/pod-topology-spread-constraints-max-skew.adoc

Lines changed: 23 additions & 0 deletions
This file was added.

@@ -0,0 +1,23 @@
+// Module included in the following assemblies:
+//
+// * nodes/scheduling/nodes-scheduler-pod-topology-spread-constraints
+
+:_mod-docs-content-type: CONCEPT
+[id="pod-topology-spread-constraints-max-skew_{context}"]
+= Understanding skew and maxSkew
+
+Skew refers to the difference in the number of pods that match a specified label selector across different topology domains, such as zones or nodes.
+
+The skew is calculated for each domain by taking the absolute difference between the number of pods in that domain and the number of pods in the domain with the fewest pods scheduled. Setting a `maxSkew` value guides the scheduler to maintain a balanced pod distribution.
+
+[id="pod-topology-spread-constraints-max-skew-calculation_{context}"]
+== Example skew calculation
+
+You have three zones (A, B, and C), and you want to distribute your pods evenly across these zones. If zone A has 5 pods, zone B has 3 pods, and zone C has 2 pods, you can find the skew by subtracting the number of pods in the domain with the fewest pods scheduled from the number of pods in each zone. This means that the skew for zone A is 3, the skew for zone B is 1, and the skew for zone C is 0.
+
+[id="pod-topology-spread-constraints-max-skew-parameter_{context}"]
+== The maxSkew parameter
+
+The `maxSkew` parameter defines the maximum allowable difference, or skew, in the number of pods between any two topology domains. If `maxSkew` is set to `1`, the number of pods in any topology domain should not differ by more than 1 from any other domain. If the skew exceeds `maxSkew`, the scheduler attempts to place new pods in a way that reduces the skew, adhering to the constraints.
+
+Using the previous example skew calculation, the skew values exceed the default `maxSkew` value of `1`. The scheduler places new pods in zone B and zone C to reduce the skew and achieve a more balanced distribution, ensuring that no topology domain exceeds a skew of 1.
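
The calculation that the new module describes can be written compactly. This is a sketch using notation not present in the commit, where latexmath:[n_d] is the number of matching pods in domain latexmath:[d]:

[latexmath]
++++
\mathrm{skew}(d) = n_d - \min_{d'} n_{d'},
\qquad
\mathrm{skew}(A) = 5 - 2 = 3,\quad
\mathrm{skew}(B) = 3 - 2 = 1,\quad
\mathrm{skew}(C) = 2 - 2 = 0
++++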

nodes/scheduling/nodes-scheduler-pod-topology-spread-constraints.adoc

Lines changed: 20 additions & 6 deletions
@@ -6,15 +6,29 @@ include::_attributes/common-attributes.adoc[]
 
 toc::[]
 
-You can use pod topology spread constraints to control the placement of your pods across nodes, zones, regions, or other user-defined topology domains.
+You can use pod topology spread constraints to provide fine-grained control over the placement of your pods across nodes, zones, regions, or other user-defined topology domains. Distributing pods across failure domains can help to achieve high availability and more efficient resource utilization.
 
-// About pod topology spread constraints
-include::modules/nodes-scheduler-pod-topology-spread-constraints-about.adoc[leveloffset=+1]
+[id="nodes-scheduler-pod-topology-spread-constraints-example-use-cases"]
+== Example use cases
 
-// Configuring pod topology spread constraints
-include::modules/nodes-scheduler-pod-topology-spread-constraints-configuring.adoc[leveloffset=+1]
+* As an administrator, I want my workload to automatically scale between two and fifteen pods. I want to ensure that when there are only two pods, they are not placed on the same node, to avoid a single point of failure.
 
-// Sample pod topology spread constraints
+* As an administrator, I want to distribute my pods evenly across multiple infrastructure zones to reduce latency and network costs. I want to ensure that my cluster can self-heal if issues arise.
+
+[id="nodes-scheduler-pod-topology-spread-constraints-considerations"]
+== Important considerations
+
+* Pods in an {product-title} cluster are managed by _workload controllers_ such as deployments, stateful sets, or daemon sets. These controllers define the desired state for a group of pods, including how they are distributed and scaled across the nodes in the cluster. You should set the same pod topology spread constraints on all pods in a group to avoid confusion. When you use a workload controller, such as a deployment, the pod template typically handles this for you.
+
+* Mixing different pod topology spread constraints can make {product-title} behavior confusing and troubleshooting more difficult. You can avoid this by ensuring that all nodes in a topology domain are consistently labeled. {product-title} automatically populates well-known labels, such as `kubernetes.io/hostname`, which provide essential topology information and help avoid the need for manual labeling of nodes.
+
+* Only pods within the same namespace are matched and grouped together when spreading due to a constraint.
+
+* You can specify multiple pod topology spread constraints, but you must ensure that they do not conflict with each other. All pod topology spread constraints must be satisfied for a pod to be placed.
+
+include::modules/pod-topology-spread-constraints-max-skew.adoc[leveloffset=+1]
+
+// Example pod topology spread constraints
 include::modules/nodes-scheduler-pod-topology-spread-constraints-examples.adoc[leveloffset=+1]
 
 ifndef::openshift-rosa,openshift-dedicated[]
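
The first item under "Important considerations" says that a workload controller's pod template applies the same constraint to every pod in the group. A minimal sketch of what that looks like in a deployment; the name, labels, and image here are illustrative, not from the commit:

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # hypothetical name, for illustration
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Every replica the deployment creates carries this same constraint,
      # so the whole group is spread consistently across zones.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: my-app
        image: registry.example.com/my-app:latest   # placeholder image
----

You can apply a manifest like this with `oc apply -f deployment.yaml`; the scheduler then evaluates the constraint for each replica as it is placed.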
