-
My English is not so good, sorry for that. I used kafka-operator 0.30.0, and there was a problem with hostname verification when the ZooKeeper pods started, which I fixed by changing a setting.

Now I have another problem: the Kafka pods redeploy all the time. They all start successfully, but after a few minutes they restart one by one, and a few minutes later they restart again. Does anyone know about this issue?

By the way, why are all the Kafka pods deployed on the same node? Is there any configuration to deal with this situation? Thanks a lot.
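On the scheduling question: by default nothing forces the broker pods onto different nodes, but anti-affinity can be set through the Kafka CR's pod template. Below is a minimal sketch; the cluster name `my-cluster` and the soft ("preferred") rule are illustrative assumptions, not taken from this thread.

```yaml
# Sketch: spread Kafka broker pods across nodes with pod anti-affinity,
# configured via the Strimzi Kafka CR's pod template. The cluster name
# "my-cluster" and the soft ("preferred") rule are illustrative assumptions.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    template:
      pod:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  topologyKey: kubernetes.io/hostname
                  labelSelector:
                    matchLabels:
                      strimzi.io/cluster: my-cluster
                      strimzi.io/name: my-cluster-kafka
    # listeners, storage, zookeeper, etc. omitted: add the affinity block
    # to your existing Kafka spec rather than replacing it
```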
Operator's log:
-
I found a reason for that problem:
so I edited the ClusterRole.
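The exact permission that was missing is not shown in the thread, so purely as a hypothetical illustration, adding a rule to the operator's ClusterRole generally looks like this (the ClusterRole name, resource, and verbs below are placeholders, not the actual fix):

```yaml
# Hypothetical illustration only: the rule that was actually missing is not
# shown in this thread. This is just the general shape of granting an extra
# permission; append the rule to the existing rules list of the ClusterRole
# from your Strimzi install files instead of replacing it.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-cluster-operator-global   # name depends on your install files
rules:
  - apiGroups: [""]
    resources: ["nodes"]                   # placeholder resource
    verbs: ["get", "list", "watch"]        # placeholder verbs
```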
Here is more of the operator's log:
-
@scholzj here's the Kafka pod's log:
-
Finally, I fixed it!
-
TBH, it is a bit confusing what the actual issue you are asking about right now is, because you went through multiple different things. So it is not clear whether you managed to fix it or not. The
Finally, I fixed it!
I set a value for the environment variable `KUBERNETES_SERVICE_DNS_DOMAIN` in the operator's Deployment and redeployed the operator, that's it!
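For reference, a minimal sketch of what that change looks like in the cluster operator's Deployment; the value `cluster.local` is only an example (the usual default), so substitute whatever DNS domain your cluster actually uses. Editing the pod template rolls out a new operator pod, which matches the "redeployed operator" step above.

```yaml
# Sketch: setting KUBERNETES_SERVICE_DNS_DOMAIN on the Strimzi cluster
# operator Deployment. "cluster.local" is an example value; use the DNS
# domain your cluster is actually configured with. Other Deployment fields
# from the Strimzi install YAML are omitted here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            - name: KUBERNETES_SERVICE_DNS_DOMAIN
              value: "cluster.local"
            # keep the other STRIMZI_* environment variables as they are
```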