Kafka pods spread over different zones in k8s #11853
Unanswered
DavidPeyrton asked this question in Q&A
Replies: 1 comment · 5 replies
-
The YAML you are sharing is badly formatted, so it is not clear what it really is. If you enabled rack awareness, there is a default preferred affinity that you can combine with additional affinity and topology spread constraint rules.
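To illustrate, here is a minimal sketch of how the pieces fit together in the Kafka CR (the cluster name `my-cluster` is a placeholder, and required fields such as listeners and storage are omitted). Custom scheduling rules belong under `template.pod`, and they are combined with the default preferred anti-affinity that rack awareness injects:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster            # placeholder name
spec:
  kafka:
    # Rack awareness: the operator injects a *preferred* pod anti-affinity
    # on this topology key and configures broker.rack for each broker.
    rack:
      topologyKey: topology.kubernetes.io/zone
    template:
      pod:
        # Your own rules go here; they are merged with the defaults above.
        topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: topology.kubernetes.io/zone
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                strimzi.io/cluster: my-cluster
                strimzi.io/name: my-cluster-kafka
```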
-
Hello everyone,
I'm trying to deploy a Kafka 3.9.0 instance with Strimzi operator 0.45 on an OpenShift cluster.
I have 6 workers whose topology.kubernetes.io/zone label is set to zone1, zone2, or zone3.
I'm trying to spread the 6 Kafka pods equally across the zones with:
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    rack:
      topologyKey: topology.kubernetes.io/zone
    pod:
      affinity: {}  # explicitly empty
      metadata:
        labels:
          app: strimzi-kafka-topology
      priorityClassName: core-normal-priority-class
      topologySpreadConstraints:
        - labelSelector:
            matchLabels:
              app: strimzi-kafka-topology
          maxSkew: 1
          nodeAffinityPolicy: Honor
          nodeTaintsPolicy: Honor
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
        - labelSelector:
            matchLabels:
              app: strimzi-kafka-topology
          maxSkew: 1
          nodeAffinityPolicy: Honor
          nodeTaintsPolicy: Honor
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
```
When deployed, the topologySpreadConstraints end up in the StrimziPodSet as well as in the Pod objects. But there is also an affinity section:
```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: Exists
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                strimzi.io/cluster: strimzi
                strimzi.io/name: strimzi-kafka
            topologyKey: topology.kubernetes.io/zone
          weight: 100
```
I set no affinity in the Kafka object (affinity: {}), so I don't understand why it is added to the created objects.
Is there a reason for this?
Initially, the pods are spread evenly across the zones.
But if I cordon all the nodes of a given zone and restart a pod from one of those nodes, it respawns in another zone. I was expecting it to wait for a node in its original zone to become available again.
Did I misunderstand the purpose of the zones, or did I not configure this properly?
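In case it helps frame the question: the affinity above is only preferred (weight 100), so nothing hard-forbids rescheduling into another zone. The strict behaviour I was expecting would, as far as I understand, need a required zone affinity, e.g. one KafkaNodePool per zone, something like this (pool name, replica count, and storage are placeholders):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: brokers-zone1          # placeholder: one pool per zone
  labels:
    strimzi.io/cluster: strimzi
spec:
  replicas: 2
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi            # placeholder size
        deleteClaim: false
  template:
    pod:
      affinity:
        nodeAffinity:
          # Required (not preferred): the pod stays Pending while all
          # zone1 nodes are cordoned instead of moving to another zone.
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - zone1
```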