@@ -87,6 +87,8 @@ tags, and then generate with `hack/update-toc.sh`.
   - [Story 1](#story-1)
 - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
 - [Risks and Mitigations](#risks-and-mitigations)
+  - [Possible misuse](#possible-misuse)
+  - [The update to labels specified at <code>matchLabelKeys</code> isn't supported](#the-update-to-labels-specified-at-matchlabelkeys-isnt-supported)
 - [Design Details](#design-details)
   - [[v1.33] design change and a safe upgrade path](#v133-design-change-and-a-safe-upgrade-path)
 - [Test Plan](#test-plan)
@@ -296,23 +298,32 @@ How will UX be reviewed, and by whom?
 
 Consider including folks who also work outside the SIG or subproject.
 -->
--
-In addition to using `pod-template-hash` added by the Deployment controller,
-users can also provide the customized key in `MatchLabelKeys` to identify
-which pods should be grouped. If so, the user needs to ensure that it is
-correct and not duplicated with other unrelated workloads.
--
-`MatchLabelKeys` may be broken when the pod's label values corresponding to
-`MatchLabelKeys` are updated after the pod is created and unscheduled.
-`LabelSelector` will not be updated even if the pod's label values are updated,
-because kube-apiserver merges key-value labels into `LabelSelector` and persists
-them in the pod object when the pod is created.
-But, it is not general that the pod's labels are updated at that timing, and
-this means the pod will be scheduled without satisfying the `TopologySpreadConstraint`,
-not be unschedulable.
-In the first place, it is deprecated to define `MatchLabelKeys` with label keys
-whose value may change dynamically, and we should document it.
-
+#### Possible misuse
+
+In addition to using `pod-template-hash` added by the Deployment controller,
+users can also provide customized keys in `MatchLabelKeys` to identify
+which pods should be grouped. If so, the user needs to ensure that such a key
+is correct and is not shared with other unrelated workloads.
+
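+To illustrate, a minimal sketch of a customized key (the `topologySpreadConstraints` fields are from the actual API; the `my-app-rollout-id` key is a hypothetical user-managed label):
+
+```yaml
+topologySpreadConstraints:
+- maxSkew: 1
+  topologyKey: topology.kubernetes.io/zone
+  whenUnsatisfiable: ScheduleAnyway
+  matchLabelKeys:
+  # User-managed key: it must be set correctly on the pods and must not be
+  # shared with unrelated workloads, or their pods are grouped together too.
+  - my-app-rollout-id
+```
+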
+#### The update to labels specified at `matchLabelKeys` isn't supported
+
+`MatchLabelKeys` is handled and merged into `LabelSelector` at _a pod's creation_.
+This means the feature doesn't support label updates: users could, of course,
+update a label specified in `matchLabelKeys` after the pod's creation,
+but the update is not reflected in the merged `LabelSelector`,
+even though users might expect it to be.
+In the documentation, we'll state that using `matchLabelKeys` with frequently updated labels is not recommended.
+
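+A minimal sketch of this merging behavior (the constraint fields are from the actual API; the `app: web` label and the `abc123` value are hypothetical, and the exact merged form shown is illustrative):
+
+```yaml
+# Pod spec as written by the user:
+topologySpreadConstraints:
+- maxSkew: 1
+  topologyKey: kubernetes.io/hostname
+  whenUnsatisfiable: DoNotSchedule
+  labelSelector:
+    matchLabels:
+      app: web
+  matchLabelKeys:
+  - pod-template-hash
+# At pod creation, assuming the pod carries the label pod-template-hash=abc123
+# at that moment, kube-apiserver persists a merged selector along the lines of:
+#   labelSelector:
+#     matchLabels:
+#       app: web
+#     matchExpressions:
+#     - key: pod-template-hash
+#       operator: In
+#       values: ["abc123"]
+# Updating pod-template-hash on the pod afterwards does NOT update this
+# merged selector.
+```
+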
+Also, we assume the risk is acceptably low because:
+1. It is unlikely to happen: pods are usually managed by another resource (e.g., a Deployment),
+and an update to the pod template's labels on a Deployment recreates the pods instead of updating the labels on existing pods in place.
+Even if users somehow use bare pods (which is not recommended in the first place),
+there is usually only a tiny window between a pod's creation and its scheduling, which makes this risk even rarer,
+unless many pods often get stuck unschedulable for a long time in the cluster (which is not recommended)
+or the labels specified in `matchLabelKeys` are frequently updated (which we'll declare as not recommended).
+2. Even if it happens, the topology spread constraint on the pod is simply ignored (since the merged `labelSelector` no longer matches the pod itself)
+and the pod can still be scheduled.
+The unfortunate pods would not be unschedulable forever.
 
 ## Design Details
 