Hello everyone. I'm starting to play with Kafka on Kubernetes, adding features little by little, and now I'm stuck on SSL access via an Ingress. I've set up a GKE cluster on GCP with an nginx ingress controller exposed as a LoadBalancer service, plus the A records pointing at its IP that are needed for DNS lookup (bootstrap.example.com, broker-{0,1,2}.example.com). I've installed the Strimzi operator v0.37 and deployed this Kafka:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  labels:
    app: my-cluster
  name: my-cluster
  namespace: kafka
spec:
  kafka:
    version: 3.5.1
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: ingress
        tls: true
        authentication:
          type: tls
        configuration:
          bootstrap:
            host: bootstrap.example.com
          brokers:
            - broker: 0
              host: broker-0.example.com
            - broker: 1
              host: broker-1.example.com
            - broker: 2
              host: broker-2.example.com
          class: nginx
    authorization:
      type: simple
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.5"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 10Gi
          deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
```
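Once the operator reconciles this, it should create one Ingress per broker plus one for bootstrap, each carrying the corresponding external host; a quick way to confirm they exist (a sketch):

```bash
# Expect a bootstrap Ingress plus one Ingress per broker, each with its external host
kubectl get ingress -n kafka
```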
I've deployed this topic:

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  partitions: 12
```
I've deployed this user ("god", i.e. with full rights):

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: "*"
          patternType: literal
        operations:
          - All
      - resource:
          type: cluster
          name: "*"
          patternType: literal
        operations:
          - All
      - resource:
          type: group
          name: "*"
          patternType: literal
        operations:
          - Read
```
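For a TLS user, the User Operator materializes the client keystore in a Secret named after the KafkaUser; it can be sanity-checked before building the client config (a sketch):

```bash
# Should list user.p12, user.password, user.crt, user.key and ca.crt entries
kubectl describe secret my-user -n kafka
```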
I set up a client.properties with this little script:

```bash
# NB: the secret names derive from the Kafka CR name ("my-cluster"), so use that here
CLUSTER_NAME=my-cluster
SECRET_NAME=my-user
NAMESPACE=kafka
kubectl get secret $SECRET_NAME -n $NAMESPACE -o jsonpath='{.data.user\.p12}' | base64 -d > user.p12
password_user=$(kubectl get secret $SECRET_NAME -n $NAMESPACE -o jsonpath='{.data.user\.password}' | base64 -d)
SECRET_NAME=$CLUSTER_NAME-cluster-ca-cert
kubectl get secret $SECRET_NAME -n $NAMESPACE -o jsonpath='{.data.ca\.p12}' | base64 -d > ca.p12
password_ca=$(kubectl get secret $SECRET_NAME -n $NAMESPACE -o jsonpath='{.data.ca\.password}' | base64 -d)

# Create client.properties
cat <<EOF > client.properties
bootstrap.servers=$CLUSTER_NAME-kafka-bootstrap:9093
security.protocol=SSL
ssl.truststore.location=/tmp/ca.p12
ssl.truststore.password=$password_ca
ssl.keystore.location=/tmp/user.p12
ssl.keystore.password=$password_user
EOF
```
Then I tested a producer and a consumer internally in the Kubernetes cluster, which worked perfectly:

```bash
kubectl run kafka-producer -n kafka --restart='Never' \
  --image=quay.io/strimzi/kafka:0.37.0-kafka-3.5.1 --command -- sleep infinity
kubectl cp client.properties kafka/kafka-producer:/tmp/client.properties
kubectl cp ./user.p12 kafka/kafka-producer:/tmp/user.p12
kubectl cp ./ca.p12 kafka/kafka-producer:/tmp/ca.p12
kubectl exec --tty -i kafka-producer --namespace kafka -- bash

# In the container
BOOTSTRAP_SERVER=my-cluster-kafka-bootstrap:9093
TOPIC="my-topic"
bin/kafka-console-producer.sh --producer.config /tmp/client.properties \
  --bootstrap-server $BOOTSTRAP_SERVER --topic $TOPIC
```
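The consumer test was symmetric (a sketch, reusing the same client config and topic):

```bash
# Same TLS client config, reading the topic back from the beginning
bin/kafka-console-consumer.sh --consumer.config /tmp/client.properties \
  --bootstrap-server $BOOTSTRAP_SERVER --topic $TOPIC --from-beginning
```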
But when I change the bootstrap server to bootstrap.example.com, the external access fails. I've been looking for a solution to my problem without success. What do you see as my problem?
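For reference, this is the external client config I'm aiming at (a sketch; with a `type: ingress` listener the client connects through the nginx controller's TLS port, so the endpoint is the ingress host on port 443):

```properties
# client-external.properties -- same TLS material, ingress endpoint instead
bootstrap.servers=bootstrap.example.com:443
security.protocol=SSL
ssl.truststore.location=/tmp/ca.p12
ssl.truststore.password=<ca password>
ssl.keystore.location=/tmp/user.p12
ssl.keystore.password=<user password>
```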
---
You should check what certificate it returns. You can do it, for example, with `openssl s_client`.
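Something like this (a sketch; the `-servername` SNI value is what nginx uses to route passthrough connections):

```bash
# Print the certificate chain the ingress presents for the bootstrap host
openssl s_client -connect bootstrap.example.com:443 \
  -servername bootstrap.example.com -showcerts
```

If SSL passthrough is not enabled, nginx terminates TLS itself and you will typically see its default "Kubernetes Ingress Controller Fake Certificate" instead of the broker certificate.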
---
You're right again! I didn't remember that it had to be activated in the ingress controller; I thought it was the default. For an installation with the ingress-nginx Helm chart, just add this property: `--set controller.extraArgs.enable-ssl-passthrough=""`. Thanks for your responsiveness @scholzj, it's great. I'll be able to move towards a clean Kafka configuration with Strimzi.
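In full, the redeploy might look like this (a sketch; the release name and namespace are assumptions):

```bash
# Upgrade an existing ingress-nginx release with SSL passthrough enabled
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.extraArgs.enable-ssl-passthrough=""
```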
---

The Ingress annotations are fine. But you also need to enable it in the Ingress controller deployment / command:
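A sketch of the relevant part of the ingress-nginx controller Deployment (field names assume the standard ingress-nginx manifests):

```yaml
# Excerpt of the ingress-nginx controller Deployment: SSL passthrough is
# disabled by default and must be switched on with this flag
spec:
  template:
    spec:
      containers:
        - name: controller
          args:
            - /nginx-ingress-controller
            - --enable-ssl-passthrough
```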