Static Loadbalancer IPs for broker services with KafkaNodePools #9680
I am using the KafkaNodePools feature in an AKS cluster and I am trying to set static IPs for the loadbalancer services for each broker. I can get the IP set for the first broker service in each pool, but I am having trouble getting it to work for multiple brokers. Here is my YAML for the CRDs:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: kafkanodepool1
  labels:
    strimzi.io/cluster: kafkacl2
  annotations:
    strimzi.io/next-node-ids: "[100-199]"
spec:
  replicas: 2
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 256Gi
        deleteClaim: true
        class: default
  template:
    pod:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - eastus-1
    perPodService:
      metadata:
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
          service.beta.kubernetes.io/azure-load-balancer-ipv4: x.x.x.x
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: kafkanodepool2
  labels:
    strimzi.io/cluster: kafkacl2
  annotations:
    strimzi.io/next-node-ids: "[200-299]"
spec:
  replicas: 2
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 256Gi
        deleteClaim: true
        class: default
  template:
    pod:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - eastus-2
    perPodService:
      metadata:
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
          service.beta.kubernetes.io/azure-load-balancer-ipv4: x.x.x.x
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: kafkanodepool3
  labels:
    strimzi.io/cluster: kafkacl2
  annotations:
    strimzi.io/next-node-ids: "[300-399]"
spec:
  replicas: 2
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 256Gi
        deleteClaim: true
        class: default
  template:
    pod:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - eastus-3
    perPodService:
      metadata:
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
          service.beta.kubernetes.io/azure-load-balancer-ipv4: x.x.x.x
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: kafkacl2
  annotations:
    strimzi.io/node-pools: enabled
spec:
  kafka:
    version: 3.6.1
    # The replicas field is required by the Kafka CRD schema while the
    # KafkaNodePools feature gate is in alpha phase, but it is ignored
    # when Kafka Node Pools are used.
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: externaltls
        port: 19092
        type: loadbalancer
        tls: true
        authentication:
          type: tls
        configuration:
          bootstrap:
            annotations:
              service.beta.kubernetes.io/azure-load-balancer-internal: "true"
              service.beta.kubernetes.io/azure-load-balancer-ipv4: x.x.x.x
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.6"
      replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector
    # The storage field is required by the Kafka CRD schema while the
    # KafkaNodePools feature gate is in alpha phase, but it is ignored
    # when Kafka Node Pools are used.
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 64Gi
          deleteClaim: false
    rack:
      topologyKey: topology.kubernetes.io/zone
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 64Gi
      deleteClaim: true
  entityOperator:
    topicOperator: {}
    userOperator: {}
```
Answered by scholzj (Feb 14, 2024)
Can you please format the code to make it readable?
Right, so the problem is that you configured your node pools to use the node IDs 100, 101, 200, 201, 300, and 301. That is fine. But you need to reflect it in your listener configuration: instead of `broker: 0` you need `broker: 100`, `broker: 101`, and so on. This way the listener configuration is valid. The annotations for `broker: 0` would only apply once you actually create a broker with the ID 0.
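To illustrate, here is a minimal sketch of what the `externaltls` listener from the question could look like with per-broker overrides in `configuration.brokers`, keyed by the node IDs the pools actually assign (the `x.x.x.x` placeholders stand in for the real addresses, and only the first two broker IDs are written out):

```yaml
    listeners:
      - name: externaltls
        port: 19092
        type: loadbalancer
        tls: true
        authentication:
          type: tls
        configuration:
          bootstrap:
            annotations:
              service.beta.kubernetes.io/azure-load-balancer-internal: "true"
              service.beta.kubernetes.io/azure-load-balancer-ipv4: x.x.x.x
          brokers:
            # One entry per node ID assigned by the node pools,
            # each with its own static IP
            - broker: 100
              annotations:
                service.beta.kubernetes.io/azure-load-balancer-internal: "true"
                service.beta.kubernetes.io/azure-load-balancer-ipv4: x.x.x.x
            - broker: 101
              annotations:
                service.beta.kubernetes.io/azure-load-balancer-internal: "true"
                service.beta.kubernetes.io/azure-load-balancer-ipv4: x.x.x.x
            # ...and likewise for brokers 200, 201, 300, and 301
```

With a distinct IP assigned per broker ID here, the shared `perPodService` annotation in the node-pool templates, which applies the same IP to every broker service in a pool, should no longer be needed.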