The second part is unreadable because of bad formatting, so I'm afraid I cannot comment much on it.
Hello,
We have an issue that appears in the operator: it logs a WARN indicating that resource versions mismatch. Our cluster is an AKS cluster on the Free pricing tier.
Watch for resource Kafka in namespace bibecoffee with selector null failed and will be reconnected io.fabric8.kubernetes.client.WatcherException: too old resource version: 469439583 (535947780)
Watch for resource KafkaConnect in namespace bibecoffee with selector null failed and will be reconnected io.fabric8.kubernetes.client.WatcherException: too old resource version: 505640676 (535902146)
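For context, the "too old resource version" WARN reflects the generic Kubernetes watch contract: the API server only retains a limited window of recent resource versions, so a watcher that reconnects with a version older than that window is rejected and must re-list to obtain a fresh resourceVersion before watching again (which is why the client logs "will be reconnected"). A minimal, self-contained sketch of that relist-and-rewatch pattern is below; all class and method names here are illustrative stand-ins, not actual fabric8 or Strimzi APIs.

```java
// Sketch of the generic "watch reconnect on too-old resourceVersion" pattern.
// All names are hypothetical; real clients (e.g. fabric8 informers) implement
// this relist logic internally.
public class WatchReconnectSketch {

    // Thrown when the requested resourceVersion has been compacted away.
    static class TooOldResourceVersionException extends RuntimeException {
        final long oldestAvailable;
        TooOldResourceVersionException(long oldestAvailable) {
            super("too old resource version (oldest available: " + oldestAvailable + ")");
            this.oldestAvailable = oldestAvailable;
        }
    }

    // Fake API server that only serves watches from versions inside its window.
    static class FakeApiServer {
        final long latestVersion;
        final long oldestRetainedVersion;
        FakeApiServer(long latestVersion, long oldestRetainedVersion) {
            this.latestVersion = latestVersion;
            this.oldestRetainedVersion = oldestRetainedVersion;
        }
        // LIST always succeeds and reports the current resourceVersion.
        long list() { return latestVersion; }
        // WATCH fails if the client's version predates the retained window.
        void watch(long fromVersion) {
            if (fromVersion < oldestRetainedVersion) {
                throw new TooOldResourceVersionException(oldestRetainedVersion);
            }
        }
    }

    // Reconnect loop: on "too old", fall back to a fresh LIST, then re-watch.
    static long watchWithRelist(FakeApiServer server, long lastSeenVersion) {
        try {
            server.watch(lastSeenVersion);
            return lastSeenVersion;        // watch resumed from the cached version
        } catch (TooOldResourceVersionException e) {
            long fresh = server.list();    // re-list to get a current version
            server.watch(fresh);           // re-watch from there
            return fresh;
        }
    }

    public static void main(String[] args) {
        // Client last saw 469439583; the server's window starts at 535947780,
        // mirroring the numbers in the WARN above.
        FakeApiServer server = new FakeApiServer(535947790L, 535947780L);
        long resumed = watchWithRelist(server, 469439583L);
        // Relist succeeds and the watch resumes at the server's current version.
        System.out.println("resumed watch at resourceVersion " + resumed);
    }
}
```

So on its own this WARN is usually benign housekeeping; it only points at a problem if it fires constantly or the relist itself fails.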
The operator also restarts unexpectedly, and this is a single-pod deployment.
ERROR LeaderElector:132 - Exception occurred while releasing lock 'LeaseLock: bibecoffee - strimzi-cluster-operator (strimzi-cluster-operator-6cdbbbcd77-c7p7q)' on cancel
io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Lease] with name: [strimzi-cluster-operator] in namespace: [bibecoffee] failed.
	at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:159) ~[io.fabric8.kubernetes-client-api-6.9.0.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.requireFromServer(BaseOperation.java:194) ~[io.fabric8.kubernetes-client-6.9.0.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.get(BaseOperation.java:148) ~[io.fabric8.kubernetes-client-6.9.0.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.get(BaseOperation.java:97) ~[io.fabric8.kubernetes-client-6.9.0.jar:?]
	at io.fabric8.kubernetes.client.extended.leaderelection.resourcelock.ResourceLock.get(ResourceLock.java:49) ~[io.fabric8.kubernetes-client-api-6.9.0.jar:?]
	at io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector.release(LeaderElector.java:148) ~[io.fabric8.kubernetes-client-api-6.9.0.jar:?]
	at io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector.stopLeading(LeaderElector.java:126) ~[io.fabric8.kubernetes-client-api-6.9.0.jar:?]
	at io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector.lambda$null$1(LeaderElector.java:96) ~[io.fabric8.kubernetes-client-api-6.9.0.jar:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) ~[?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) ~[?:?]
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
	at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) ~[?:?]
	at io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector.lambda$renewWithTimeout$6(LeaderElector.java:194) ~[io.fabric8.kubernetes-client-api-6.9.0.jar:?]
	at io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector.lambda$loop$8(LeaderElector.java:282) ~[io.fabric8.kubernetes-client-api-6.9.0.jar:?]
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
	at java.lang.Thread.run(Thread.java:840) ~[?:?]
Caused by: java.io.IOException: request timed out
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.waitForResult(OperationSupport.java:504) ~[io.fabric8.kubernetes-client-6.9.0.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handleResponse(OperationSupport.java:524) ~[io.fabric8.kubernetes-client-6.9.0.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handleGet(OperationSupport.java:467) ~[io.fabric8.kubernetes-client-6.9.0.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handleGet(BaseOperation.java:791) ~[io.fabric8.kubernetes-client-6.9.0.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.requireFromServer(BaseOperation.java:192) ~[io.fabric8.kubernetes-client-6.9.0.jar:?]
	... 16 more
Caused by: java.net.http.HttpTimeoutException: request timed out
	at jdk.internal.net.http.ResponseTimerEvent.handle(ResponseTimerEvent.java:63) ~[java.net.http:?]
	at jdk.internal.net.http.HttpClientImpl.purgeTimeoutsAndReturnNextDeadline(HttpClientImpl.java:1270) ~[java.net.http:?]
	at jdk.internal.net.http.HttpClientImpl$SelectorManager.run(HttpClientImpl.java:899) ~[java.net.http:?]
2024-05-16 16:04:53 INFO LeaderElectionManager:112 - Stopped being a leader
2024-05-16 16:04:53 WARN Main:244 - Stopped being a leader => exiting
2024-05-16 16:04:53 INFO ShutdownHook:35 - Shutdown hook started
2024-05-16 16:04:53 INFO ShutdownHook:90 - Shutting down Vert.x verticle 6402a701-62b2-48b8-91ec-a2c50d34b3ab
2024-05-16 16:04:53 INFO ClusterOperator:180 - Stopping ClusterOperator for namespace bibecoffee
2024-05-16 16:04:53 INFO StrimziPodSetController:591 - Requesting the StrimziPodSet controller to stop
2024-05-16 16:04:53 INFO StrimziPodSetController:574 - Stopping StrimziPodSet controller
2024-05-16 16:04:53 INFO InformerUtils:63 - Stopping informers
2024-05-16 16:04:53 INFO InformerUtils:51 - StrimziPodSet informer stopped
2024-05-16 16:04:53 INFO InformerUtils:51 - Pod informer stopped
2024-05-16 16:04:53 INFO InformerUtils:51 - Kafka informer stopped
2024-05-16 16:04:53 INFO InformerUtils:51 - KafkaConnect informer stopped
2024-05-16 16:04:53 INFO InformerUtils:51 - KafkaMirrorMaker2 informer stopped
2024-05-16 16:04:53 INFO StrimziPodSetController:599 - StrimziPodSet controller stopped
2024-05-16 16:04:53 INFO ShutdownHook:108 - Shutdown of Vert.x verticle 6402a701-62b2-48b8-91ec-a2c50d34b3ab is complete
2024-05-16 16:04:53 INFO LeaderElectionManager:80 - Leader Elector is already stopped
2024-05-16 16:04:53 INFO ShutdownHook:60 - Shutting down Vert.x
2024-05-16 16:04:53 INFO ShutdownHook:79 - Shutdown of Vert.x is complete
2024-05-16 16:04:53 INFO ShutdownHook:41 - Shutdown hook completed
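For reference, the restarts look like a consequence of the timeout above: the operator could not renew (or release) its leader-election Lease because the GET against the API server timed out, so it stopped being the leader and exited by design ("Stopped being a leader => exiting"), after which Kubernetes restarts the pod. If the Free-tier API server is simply slow to respond, one mitigation to investigate is relaxing the leader-election timings on the operator Deployment. The sketch below is an assumption to verify against the Strimzi documentation for your operator version, not a tested fix; the environment variable names are the documented Strimzi leader-election settings, and the millisecond values are illustrative:

```yaml
# Hypothetical fragment of the strimzi-cluster-operator Deployment spec.
# Verify variable names and defaults against your Strimzi version's docs.
env:
  - name: STRIMZI_LEADER_ELECTION_ENABLED
    value: "true"
  - name: STRIMZI_LEADER_ELECTION_LEASE_DURATION_MS
    value: "30000"   # illustrative: longer lease tolerates slower API responses
  - name: STRIMZI_LEADER_ELECTION_RENEW_DEADLINE_MS
    value: "20000"   # illustrative: more time for each renewal round-trip
  - name: STRIMZI_LEADER_ELECTION_RETRY_PERIOD_MS
    value: "5000"    # illustrative: retry interval between renewal attempts
```

Alternatively, for a single-pod deployment where fast failover between replicas is not needed, disabling leader election entirely (if your version supports it) would remove this failure mode.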
It seems this might be related to the Kubernetes API server. Any ideas why this is happening?
Thanks,
Thomas