-
You probably have to share more information. In particular:
-
Hi @scholzj, thanks for your very quick reply. Here are the events unfolding along the timeline:
I was expecting the operator to notice that the Kafka and ZooKeeper StatefulSets are at zero replicas, and to scale them back up to the desired number of replicas declared in the Kafka custom resource. Here are the data I've been able to collect; I hope that is enough. All data below is from
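For reference, a quick way to compare the replica counts declared in the Kafka custom resource against what the StatefulSets currently have; the cluster name my-cluster and namespace kafka below are illustrative placeholders:
```sh
# Desired replicas as declared in the Kafka custom resource (placeholder names).
kubectl get kafka my-cluster -n kafka \
  -o jsonpath='{.spec.kafka.replicas} {.spec.zookeeper.replicas}{"\n"}'

# Current replicas on the StatefulSets the operator manages.
kubectl get statefulset my-cluster-kafka my-cluster-zookeeper -n kafka \
  -o custom-columns='NAME:.metadata.name,DESIRED:.spec.replicas,READY:.status.readyReplicas'
```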
-
Something I also forgot to say:
-
Bug Description
The Strimzi operator cannot reconcile an existing Kafka/ZooKeeper cluster, and the StatefulSet replicas are not scaled up.
Steps to reproduce
No response
Expected behavior
I would expect the Strimzi operator to scale the StatefulSets back up.
Strimzi version
0.29.0
Kubernetes version
v1.24.14-gke.1400
Installation method
Yaml file via Kustomize
Infrastructure
Google GKE
Configuration files and logs
Log files:
Additional context
We're using the kube-green project to implement nightly cluster shutdowns in our non-production environments.
As kube-green does not manage StatefulSets, we implemented a simple solution: shutting them down with CronJobs. It works as expected.
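A minimal sketch of such a shutdown CronJob, assuming namespace kafka, cluster name my-cluster, and a pre-existing ServiceAccount sts-scaler allowed to get StatefulSets and patch statefulsets/scale (all of these names and the schedule are illustrative):
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kafka-nightly-shutdown
  namespace: kafka            # illustrative namespace
spec:
  schedule: "0 20 * * 1-5"    # illustrative: weekday evenings
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sts-scaler   # assumed to have scale permissions
          restartPolicy: OnFailure
          containers:
            - name: scale-down
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - |
                  # Scale the Strimzi-managed StatefulSets to zero for the night.
                  kubectl scale statefulset my-cluster-kafka --replicas=0
                  kubectl scale statefulset my-cluster-zookeeper --replicas=0
```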
When the cluster wakes up, all Deployments are scaled back to their original number of replicas. Among them are some operator pods (e.g. Strimzi, Prometheus, etc.).
Other operators can successfully scale up the StatefulSets they manage, but it seems that Strimzi can't: it loops over this error:
Exceeded timeout of 300000ms while waiting for Pods...
While searching for similar bugs in this repo, I stumbled upon some interesting ones (for example: https://github.com/orgs/strimzi/discussions/8533).
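For what it's worth, the 300000 ms in that message matches the Cluster Operator's default operation timeout, configurable via the STRIMZI_OPERATION_TIMEOUT_MS environment variable. Raising it, sketched below as a Kustomize strategic-merge patch with an illustrative value, only helps if the pods are merely slow to come up after wake-up; it would not fix a genuine reconciliation bug:
```yaml
# Kustomize strategic-merge patch (illustrative value): raise the
# Cluster Operator's operation timeout from the 300000 ms default.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            - name: STRIMZI_OPERATION_TIMEOUT_MS
              value: "600000"
```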
Can anyone help? Thanks!