First of all, you should not be running two cluster operators watching the same CR. If you do, they will fight with each other and cause problems. A single cluster should always be operated by only one cluster operator. The error you pasted is fairly normal: it just means the Kubernetes resource watch is too old and needs to be recreated. How often that happens depends on the size of your cluster and how busy it is, and on its own it should not cause any issues. However, the first issue can in theory make this error happen more often and can cause other problems as well. So the first thing you should do is make sure you have only one operator pod monitoring a given custom resource.
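As a minimal sketch of what that means in practice, assuming the default names from the standard Strimzi install files (adjust to your own deployment), the operator Deployment should be pinned to a single replica:

```yaml
# Sketch: scale the cluster operator down to a single replica.
# Names follow the default Strimzi install YAML; adjust to your setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
spec:
  replicas: 1  # exactly one operator pod per watched set of custom resources
  selector:
    matchLabels:
      name: strimzi-cluster-operator
  template:
    metadata:
      labels:
        name: strimzi-cluster-operator
    # container spec unchanged from the standard install files
```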
Description -
While trying to add a disk to the currently running Kafka cluster, the change was applied to the Kafka CR, where we can see the newly added disk, but it was not applied to the Kafka StatefulSets.
Environment -
AKS K8s version - 1.22.6
Strimzi version - 0.27
Kafka CR -
The change we made to the existing cluster was to add a new disk with id 3, with the configuration below -
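(The original snippet is not reproduced here; the following is an illustrative sketch of what such a JBOD storage change typically looks like in a Strimzi Kafka CR. Volume ids 0-2 stand in for the pre-existing disks, and the sizes are placeholders.)

```yaml
# Illustrative sketch only -- not the original CR.
# New disk with id 3 appended under spec.kafka.storage.volumes.
spec:
  kafka:
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi          # placeholder size
          deleteClaim: false
        # ... existing volumes 1 and 2 ...
        - id: 3                # the newly added disk
          type: persistent-claim
          size: 100Gi          # placeholder size
          deleteClaim: false
```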
Note - For our current cluster operator deployment we are running two pods; the logs from both pods at the time we hit this issue are below.
After restarting both pods, the reconciliation for the storage change was triggered and everything was set up correctly. Any help on why this happened, and on whether there is a way to maintain and observe the status of the cluster operator, would be really appreciated.
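On observing the operator's status, one option is to rely on the health endpoints the operator container exposes, so Kubernetes restarts a stuck pod automatically. A sketch of such probes, assuming the endpoint paths and port from the standard install YAML (verify them against your Strimzi version):

```yaml
# Sketch: liveness/readiness probes on the cluster operator container.
# Paths and port are assumed from the standard install files.
livenessProbe:
  httpGet:
    path: /healthy
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30
```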