[Bug]: Error deleting StatefulSet during Strimzi and Kafka upgrades. Conversion from StatefulSet to StrimziPodSet #8352
charris-ca started this conversation in General
Replies: 1 comment
You did not provide any logs or other details. You would need to check the operator logs and/or the Kubernetes logs (assuming there is no error in the operator log) to see why the StatefulSet was not deleted. The operator does a non-cascading delete. So if that is failing even when done manually, you probably have some tooling in your system which is blocking this operation, and you will need to fix it so that the operator can do it.
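For anyone hitting the same hang, a minimal way to investigate is sketched below; the cluster name `my-cluster` and namespace `kafka` are assumptions, substitute your own:

```sh
# Check the Cluster Operator log for errors around the StatefulSet deletion
kubectl logs deployment/strimzi-cluster-operator -n kafka

# A delete that hangs is often a resource stuck in Terminating because of a
# finalizer; print any finalizers set on the StatefulSet
kubectl get statefulset my-cluster-zookeeper -n kafka \
  -o jsonpath='{.metadata.finalizers}'

# Recent events can reveal admission webhooks or other tooling interfering
# with the delete
kubectl get events -n kafka --sort-by=.lastTimestamp
```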
Bug Description
Strimzi upgrade from 0.28 to 0.33.2 errors when trying to convert pods from StatefulSet to StrimziPodSet. I also received the error on the ZooKeeper pod set and worked past it by manually deleting the StatefulSet. The StrimziPodSet took over after that, but all ZooKeeper pods went down and then restarted.
I am worried that deleting the Kafka StatefulSet will do the same and cause an outage for our Kafka instances. How do we get around the operator not being able to delete the StatefulSet? I have tried issuing a StatefulSet delete with --cascade=orphan, but that command hangs and never finishes.
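For reference, this is the kind of non-cascading delete the operator performs during the migration; a sketch with an assumed cluster name `my-cluster` and namespace `kafka`:

```sh
# Delete the StatefulSet object but orphan its pods, i.e. leave them running
# so the StrimziPodSet can take them over
kubectl delete statefulset my-cluster-kafka -n kafka --cascade=orphan

# If the command hangs, check whether the object is stuck in Terminating
# with a finalizer holding the deletion
kubectl get statefulset my-cluster-kafka -n kafka -o yaml
```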
Steps to reproduce
No response
Expected behavior
No response
Strimzi version
0.33.2
Kubernetes version
Kubernetes 1.22
Installation method
No response
Infrastructure
No response
Configuration files and logs
No response
Additional context
No response