@@ -307,15 +307,7 @@ required) or even code snippets. If there's any ambiguity about HOW your
 proposal will be implemented, this is the place to discuss them.
 -->
 
-A new field named `MatchLabelKeys` will be added to `TopologySpreadConstraint`.
-Currently, when scheduling a pod, the `LabelSelector` defined in the pod is used
-to identify the group of pods over which spreading will be calculated.
-`MatchLabelKeys` adds another constraint to how this group of pods is identified:
-the scheduler will use those keys to look up label values from the incoming pod;
-and those key-value labels are ANDed with `LabelSelector` to select the group of
-existing pods over which spreading will be calculated.
-
-A new field named `MatchLabelKeys` will be introduced to `TopologySpreadConstraint`:
+A new optional field named `MatchLabelKeys` will be introduced to `TopologySpreadConstraint`.
 
 ```go
 type TopologySpreadConstraint struct {
 	MaxSkew int32
@@ -333,22 +325,53 @@ type TopologySpreadConstraint struct {
 }
 ```
 
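+For reference, the struct body elided by this hunk already contains the new
+field. A minimal sketch of its shape (comment wording assumed, other fields
+omitted):
+
+```go
+package sketch
+
+// TopologySpreadConstraint is trimmed here to the field relevant to this KEP.
+type TopologySpreadConstraint struct {
+	// MatchLabelKeys is a set of pod label keys to select the pods over
+	// which spreading will be calculated; values are looked up from the
+	// incoming pod's labels and ANDed with LabelSelector.
+	MatchLabelKeys []string
+}
+```
+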
-Examples of use are as follows:
+When a Pod is created, kube-apiserver will look up the label values on the pod
+by the keys in `matchLabelKeys` and merge `key in (value)` requirements into
+the `LabelSelector` of `TopologySpreadConstraint`.
+
+For example, when this sample Pod is created,
+
 ```yaml
-topologySpreadConstraints:
-  - maxSkew: 1
-    topologyKey: kubernetes.io/hostname
-    whenUnsatisfiable: DoNotSchedule
-    matchLabelKeys:
-      - app
-      - pod-template-hash
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sample
+  labels:
+    app: sample
+...
+topologySpreadConstraints:
+  - maxSkew: 1
+    topologyKey: kubernetes.io/hostname
+    whenUnsatisfiable: DoNotSchedule
+    labelSelector: {}
+    matchLabelKeys: # ADDED
+      - app
+```
+
+kube-apiserver modifies the `labelSelector` like the following:
+
+```diff
+ topologySpreadConstraints:
+   - maxSkew: 1
+     topologyKey: kubernetes.io/hostname
+     whenUnsatisfiable: DoNotSchedule
+     labelSelector:
++      matchExpressions:
++        - key: app
++          operator: In
++          values:
++            - sample
+     matchLabelKeys:
+       - app
 ```
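+
+A minimal sketch of this merge at pod-creation time (helper name and placement
+are assumptions; not the actual kube-apiserver code):
+
+```go
+package sketch
+
+import (
+	v1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+// mergeMatchLabelKeys ANDs a `key in (value)` requirement into the
+// constraint's LabelSelector for every matchLabelKeys entry that exists
+// in the incoming pod's labels; absent keys are skipped.
+func mergeMatchLabelKeys(c *v1.TopologySpreadConstraint, podLabels map[string]string) {
+	if c.LabelSelector == nil {
+		c.LabelSelector = &metav1.LabelSelector{}
+	}
+	for _, key := range c.MatchLabelKeys {
+		value, ok := podLabels[key]
+		if !ok {
+			continue
+		}
+		c.LabelSelector.MatchExpressions = append(c.LabelSelector.MatchExpressions,
+			metav1.LabelSelectorRequirement{
+				Key:      key,
+				Operator: metav1.LabelSelectorOpIn,
+				Values:   []string{value},
+			})
+	}
+}
+```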
 
-The scheduler plugin `PodTopologySpread` will obtain the labels from the pod
-labels by the keys in `matchLabelKeys`. The obtained labels will be merged
-to `labelSelector` of `topologySpreadConstraints` to filter and group pods.
-The pods belonging to the same group will be part of the spreading in
-`PodTopologySpread`.
+For now, the scheduler will also be aware of `matchLabelKeys` and will
+gracefully handle the same labels.
+This is because, in the previous implementation, only the scheduler
+obtained the pod labels by the keys in `matchLabelKeys` and used
+them with `labelSelector` to filter and group pods for spreading.
+In the near future, the handling of `matchLabelKeys` in the
+scheduler implementation will be removed.
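+
+One way the scheduler could "gracefully handle the same labels" is to skip
+requirements that kube-apiserver has already merged, keeping the
+scheduler-side merge idempotent. A sketch under that assumption (not the
+actual plugin code):
+
+```go
+package sketch
+
+import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+
+// hasRequirement reports whether an identical `key in (value)` requirement
+// is already present, so the scheduler does not add it a second time.
+func hasRequirement(sel *metav1.LabelSelector, key, value string) bool {
+	for _, req := range sel.MatchExpressions {
+		if req.Key == key && req.Operator == metav1.LabelSelectorOpIn &&
+			len(req.Values) == 1 && req.Values[0] == value {
+			return true
+		}
+	}
+	return false
+}
+```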
 
 Finally, the feature will be guarded by a new feature flag. If the feature is
 disabled, the field `matchLabelKeys` is preserved if it was already set in the
@@ -532,7 +555,9 @@ enhancement:
 
 In the event of an upgrade, kube-apiserver will start to accept and store the field `MatchLabelKeys`.
 
-In the event of a downgrade, kube-scheduler will ignore `MatchLabelKeys` even if it was set.
+In the event of a downgrade, kube-apiserver will reject pod creation with `matchLabelKeys` in `TopologySpreadConstraint`.
+However, for existing pods, `matchLabelKeys` and the generated `LabelSelector` are kept even after the downgrade.
+kube-scheduler will ignore `MatchLabelKeys` even if it was set.
 
 ### Version Skew Strategy
 
@@ -659,7 +684,7 @@ feature flags will be enabled on some API servers and not others during the
 rollout. Similarly, consider large clusters and how enablement/disablement
 will rollout across nodes.
 -->
-It won't impact already running workloads because it is an opt-in feature in scheduler.
+It won't impact already running workloads because it is an opt-in feature in apiserver and scheduler.
 But during a rolling upgrade, if some apiservers have not enabled the feature, they will not
 be able to accept and store the field "MatchLabelKeys" and the pods associated with these
 apiservers will not be able to use this feature. As a result, pods belonging to the