I posted a question 3 weeks ago about the same problem, but the context is a little different.

Kubernetes is on an old version: 1.21.12. I installed the Nginx ingress controller with passthrough enabled:

```
NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                    AGE
ingress-nginx-controller   NodePort   172.19.59.82   <none>        80:31323/TCP,443:32477/TCP,9094:30768/TCP,9095:32740/TCP,9096:30410/TCP   46m
```

An A record was made to point to one of the cluster nodes. I installed the operator v0.38 and deployed this Kafka:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  labels:
    app: kafka-cluster
  name: kafka-cluster
  namespace: kafka2
spec:
  kafkaExporter:
    topicRegex: ".*"
    groupRegex: ".*"
  kafka:
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: strimzi.io/name
                      operator: In
                      values:
                        - kafka-cluster-kafka
                topologyKey: kubernetes.io/hostname
    version: 3.6.0
    replicas: 3
    # resources:
    #   requests:
    #     memory: 1Gi
    #     cpu: "300m"
    #   limits:
    #     memory: 1.5Gi # 50% allocated to the JVM
    #     cpu: "300m"
    # rack:
    #   topologyKey: kubernetes.io/hostname
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: ingress
        tls: true
        authentication:
          type: tls
        configuration:
          bootstrap:
            host: bootstrap-kafka-rci.example.priv
          brokers:
            - broker: 0
              host: broker-kafka-rci-0.example.priv
            - broker: 1
              host: broker-kafka-rci-1.example.priv
            - broker: 2
              host: broker-kafka-rci-2.example.priv
          class: nginx
    authorization:
      type: simple
    config:
      offsets.topic.replication.factor: 2
      transaction.state.log.replication.factor: 2
      transaction.state.log.min.isr: 2
      default.replication.factor: 2
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.6"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 10Gi
          deleteClaim: false
  zookeeper:
    # resources:
    #   requests:
    #     memory: 1Gi
    #     cpu: "500m"
    #   limits:
    #     memory: 1Gi # 75% allocated to the JVM
    #     cpu: "500m"
    replicas: 1
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

I set up a client.properties with this little script:

```bash
CLUSTER_NAME=my-cluster-kafka
SECRET_NAME=my-user
NAMESPACE=kafka
kubectl get secret $SECRET_NAME -n $NAMESPACE -o jsonpath='{.data.user\.p12}' | base64 -d > user.p12
password_user=$(kubectl get secret $SECRET_NAME -n $NAMESPACE -o jsonpath='{.data.user\.password}' | base64 --decode)
SECRET_NAME=$CLUSTER_NAME-cluster-ca-cert
kubectl get secret $SECRET_NAME -n $NAMESPACE -o jsonpath='{.data.ca\.p12}' | base64 -d > ca.p12
password_ca=$(kubectl get secret $SECRET_NAME -n $NAMESPACE -o jsonpath='{.data.ca\.password}' | base64 -d)
# Create client.properties
cat <<EOF > client.properties
bootstrap.servers=$CLUSTER_NAME-kafka-bootstrap:9093
security.protocol=SSL
ssl.truststore.location=/tmp/ca.p12
ssl.truststore.password=$password_ca
ssl.keystore.location=/tmp/user.p12
ssl.keystore.password=$password_user
EOF
```
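As a quick sanity check on top of the script, the extracted PKCS#12 stores can be opened locally with openssl (1.1.x, as in the question) before copying them into the client pod; the file names and password variables are the ones defined above:

```bash
# Verify the keystores open with the passwords pulled from the Kubernetes secrets.
openssl pkcs12 -info -in user.p12 -passin "pass:$password_user" -nokeys -noout \
  && echo "user.p12 OK"
openssl pkcs12 -info -in ca.p12 -passin "pass:$password_ca" -nokeys -noout \
  && echo "ca.p12 OK"
```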
Then I tested a producer and a consumer internally in the Kubernetes cluster, which worked perfectly.
producer test:
```bash
kubectl run kafka-producer -n kafka --restart='Never' --image=quay.io/strimzi/kafka:0.37.0-kafka-3.5.1 --namespace kafka --command -- sleep infinity
kubectl cp client.properties kafka/kafka-producer:/tmp/client.properties
kubectl cp ./user.p12 kafka/kafka-producer:/tmp/user.p12
kubectl cp ./ca.p12 kafka/kafka-producer:/tmp/ca.p12
kubectl exec --tty -i kafka-producer --namespace kafka -- bash
# In container
BOOTSTRAP_SERVER=my-cluster-kafka-bootstrap:9093
TOPIC="my-topic"
bin/kafka-console-producer.sh --producer.config /tmp/client.properties \
  --bootstrap-server $BOOTSTRAP_SERVER --topic $TOPIC
```

But when I use the external access with BOOTSTRAP_SERVER=bootstrap.example.priv:32477, I get this stacktrace:
I checked the certificate with openssl:

```
openssl s_client -showcerts -connect bootstrap-kafka-rci.example.priv:32477
CONNECTED(000001B0)
depth=1 O = io.strimzi, CN = cluster-ca v0
verify error:num=19:self signed certificate in certificate chain
verify return:1
depth=1 O = io.strimzi, CN = cluster-ca v0
verify return:1
depth=0 O = io.strimzi, CN = kafka-cluster-kafka
verify return:1
---
Certificate chain
0 s:O = io.strimzi, CN = kafka-cluster-kafka
i:O = io.strimzi, CN = cluster-ca v0
-----BEGIN CERTIFICATE-----
XXX
-----END CERTIFICATE-----
1 s:O = io.strimzi, CN = cluster-ca v0
i:O = io.strimzi, CN = cluster-ca v0
-----BEGIN CERTIFICATE-----
XXXX
-----END CERTIFICATE-----
---
Server certificate
subject=O = io.strimzi, CN = kafka-cluster-kafka
issuer=O = io.strimzi, CN = cluster-ca v0
---
Acceptable client certificate CA names
O = io.strimzi, CN = clients-ca v0
Requested Signature Algorithms: ECDSA+SHA256:ECDSA+SHA384:ECDSA+SHA512:Ed25519:Ed448:RSA-PSS+SHA256:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA-PSS+SHA256:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA1:RSA+SHA1
Shared Requested Signature Algorithms: ECDSA+SHA256:ECDSA+SHA384:ECDSA+SHA512:Ed25519:Ed448:RSA-PSS+SHA256:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA-PSS+SHA256:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA+SHA256:RSA+SHA384:RSA+SHA512
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 3574 bytes and written 447 bytes
Verification error: self signed certificate in certificate chain
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 19 (self signed certificate in certificate chain)
---
11992:error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:../openssl-1.1.1j/ssl/record/rec_layer_s3.c:1543:SSL alert number 42
```

There is an error at the end; I don't know if that's normal behavior with self-signed certificates. When I decode the first certificate block returned by openssl, I get this SAN:
I don't have the SANs that correspond to instances 0 and 1... I don't understand what's causing my problem. Do you have any clues? Thanks in advance.
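One way to double-check which SANs Strimzi actually put into the broker certificates is to read them straight from the brokers secret instead of going through the ingress. This is only a sketch, assuming the usual Strimzi secret naming for the cluster above (`kafka-cluster-kafka-brokers` in namespace `kafka2`, with one `<cluster>-kafka-<n>.crt` entry per broker):

```bash
# Assumption: broker certificates live in the <cluster>-kafka-brokers secret.
for i in 0 1 2; do
  echo "--- broker $i ---"
  kubectl get secret kafka-cluster-kafka-brokers -n kafka2 \
    -o jsonpath="{.data.kafka-cluster-kafka-$i\.crt}" | base64 -d \
    | openssl x509 -noout -ext subjectAltName
done
```

This shows exactly which names each broker certificate carries, independent of which broker happens to answer the `openssl s_client` connection.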
Replies: 1 comment 1 reply
The `type: ingress` listener expects the Ingress controller to run on port 443. That is normally the case when you use a load balancer. But as you use NodePorts, your port number for 443 is 32477. So you need to configure this in the listener configuration as well, because if you look at the error, you can see that it tries to connect to the broker on port 443 and likely gets a different certificate than your `openssl s_client` command connecting to 32477. So what you need to do is to use the `advertisedPort` option to override the default 443 port with the actual 32477.
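For illustration, a minimal sketch of what that `advertisedPort` override could look like on the external listener from the question (the 32477 value is the NodePort shown in the question and is an assumption about your setup; check it against your own ingress-nginx-controller service):

```yaml
    # Sketch only: advertise the NodePort (32477) instead of the default 443
    # for each broker reached through the ingress.
    - name: external
      port: 9094
      type: ingress
      tls: true
      authentication:
        type: tls
      configuration:
        bootstrap:
          host: bootstrap-kafka-rci.example.priv
        brokers:
          - broker: 0
            host: broker-kafka-rci-0.example.priv
            advertisedPort: 32477
          - broker: 1
            host: broker-kafka-rci-1.example.priv
            advertisedPort: 32477
          - broker: 2
            host: broker-kafka-rci-2.example.priv
            advertisedPort: 32477
        class: nginx
```

The client still bootstraps against bootstrap-kafka-rci.example.priv:32477, but the metadata it receives then points to each broker host on 32477 instead of 443.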