
Commit 8f470ed ("fix1")

Parent: 14bd4e5


content/en/docs/concepts/scheduling-eviction/assign-pod-node.md

Lines changed: 9 additions & 3 deletions
@@ -377,8 +377,11 @@ when satisfying the Pod (anti)affinity.
 The keys are used to look up values from the pod labels; those key-value labels are combined
 (using `AND`) with the match restrictions defined using the `labelSelector` field. The combined
 filtering selects the set of existing pods that will be taken into Pod (anti)affinity calculation.
+
+{{< warning >}}
 It's not recommended to use `matchLabelKeys` with labels that might be updated
-because the update of the label isn't reflected onto the merged `LabelSelector`.
+because the update of the label isn't reflected onto the merged `labelSelector`.
+{{< /warning >}}
 
 A common use case is to use `matchLabelKeys` with `pod-template-hash` (set on Pods
 managed as part of a Deployment, where the value is unique for each revision).
@@ -426,8 +429,11 @@ When you want to disable it, you have to disable it explicitly via the
 Kubernetes includes an optional `mismatchLabelKeys` field for Pod affinity
 or anti-affinity. The field specifies keys for the labels that should **not** match with the incoming Pod's labels,
 when satisfying the Pod (anti)affinity.
-It's not recommended to use `mismatchLabelKeys` with labels that might be updated
-because the update of the label isn't reflected onto the merged `LabelSelector`.
+
+{{< warning >}}
+It's not recommended to use `matchLabelKeys` with labels that might be updated
+because the update of the label isn't reflected onto the merged `labelSelector`.
+{{< /warning >}}
 
 One example use case is to ensure Pods go to the topology domain (node, zone, etc) where only Pods from the same tenant or team are scheduled in.
 In other words, you want to avoid running Pods from two different tenants on the same topology domain at the same time.
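
For readers of this diff, here is a minimal, hypothetical sketch of the `matchLabelKeys` pattern described in the first hunk: a Deployment whose Pods only count other Pods from the same revision (same `pod-template-hash`) toward the affinity rule. The Deployment name, the `app: web` label, the zone topology key, and the pause image are illustrative assumptions, not part of the commit; on older clusters the field may also sit behind the `MatchLabelKeysInPodAffinity` feature gate.

```yaml
# Hypothetical example, not part of the commit.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-server   # assumed name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web               # assumed label
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: topology.kubernetes.io/zone
            # Explicit selector: consider existing Pods with app=web ...
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web
            # ... AND, via matchLabelKeys, only those whose pod-template-hash
            # equals this Pod's pod-template-hash (i.e. the same revision).
            matchLabelKeys:
            - pod-template-hash
      containers:
      - name: web
        image: registry.k8s.io/pause:3.9   # placeholder image
```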
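
Similarly, a hedged sketch of the tenant-isolation use case mentioned at the end of the second hunk, assuming a `tenant` label and a zone topology key (both illustrative): `matchLabelKeys: [tenant]` pulls the Pod toward domains already running the same tenant, while `mismatchLabelKeys: [tenant]` keeps it away from domains running a different tenant.

```yaml
# Hypothetical example, not part of the commit.
apiVersion: v1
kind: Pod
metadata:
  name: tenant-a-app          # assumed name
  labels:
    tenant: tenant-a          # assumed tenant label
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # Schedule onto domains that already host Pods whose "tenant" value
      # equals this Pod's value.
      - topologyKey: topology.kubernetes.io/zone
        matchLabelKeys:
        - tenant
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # Avoid domains hosting Pods whose "tenant" value differs from this
      # Pod's value. The Exists selector limits the rule to Pods that carry
      # a tenant label at all, so unlabeled Pods don't trigger it.
      - topologyKey: topology.kubernetes.io/zone
        labelSelector:
          matchExpressions:
          - key: tenant
            operator: Exists
        mismatchLabelKeys:
        - tenant
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
```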
