Replies: 2 comments 2 replies
-
Yes, that is expected. If you want to delete them, you can do it yourself. You can also use the following command to delete all Strimzi resources in given namespace:
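A sketch of that kind of command, assuming the strimzi CRD category that groups all Strimzi custom resources (Kafka, KafkaTopic, KafkaUser, ...) and a placeholder namespace myproject; the exact invocation meant here may differ:
# delete every Strimzi custom resource in the namespace in one go
kubectl -n myproject delete $(kubectl -n myproject get strimzi -o name)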
-
Thank you Jakub for the quick response!
-
Hi, and thanks in advance for your attention.
I am using Helm to install my Strimzi cluster through 3 Helm releases:
On installation everything works as expected:
ssm-user@ip-10-159-193-187:~$ helm list -n strimzi
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
strimzi strimzi 1 2023-09-07 16:18:12.159401543 +0000 UTC deployed strimzi-1.1.0 0.35.1
strimzioperator strimzi 1 2023-09-07 15:32:18.7929155 +0000 UTC deployed strimzioperator-1.1.0-SNAPSHOT 0.35.1
strimziresources strimzi 1 2023-09-08 07:38:06.521193111 +0000 UTC deployed strimziresources-1.1.0-SNAPSHOT 0.35.1
*****Resource: pods
NAME READY STATUS
grafana-7f4dbb995f-mmrzn 1/1 Running
prometheus-operator-9665f996b-7pwl8 1/1 Running
prometheus-prometheus-0 2/2 Running
strimzi-cluster-operator-58fd7cb5f5-pv6hz 1/1 Running
strimzi-entity-operator-588d9d7cbd-l8ggn 3/3 Running
strimzi-kafka-0 1/1 Running
strimzi-kafka-1 1/1 Running
strimzi-kafka-2 1/1 Running
strimzi-kafka-exporter-56c67d64f9-nm28r 1/1 Running
strimzi-zookeeper-0 1/1 Running
strimzi-zookeeper-1 1/1 Running
strimzi-zookeeper-2 1/1 Running
*****Resource: kafkas.kafka.strimzi.io
NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS READY WARNINGS
strimzi 3 3 True
*****Resource: persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES
data-strimzi-kafka-0 Bound pvc-96011d63-a11d-499e-9095-57b5a30912d8 300Gi RWO
data-strimzi-kafka-1 Bound pvc-12169db8-95b4-4bac-84ec-d0e4d79c7ef1 300Gi RWO
data-strimzi-kafka-2 Bound pvc-b6a15ee6-ad96-4232-a317-2ec4f092c9df 300Gi RWO
data-strimzi-zookeeper-0 Bound pvc-bce9d60d-2e54-4be6-9372-9d0b33571db5 100Gi RWO
data-strimzi-zookeeper-1 Bound pvc-6ad7def6-adb1-431d-8fa8-3a9d62811297 100Gi RWO
data-strimzi-zookeeper-2 Bound pvc-f3d88215-602a-48d6-ac60-a2df92ca8f96 100Gi RWO
ssm-user@ip-10-159-193-187:~$ k -n strimzi get kafkatopics
NAME CLUSTER PARTITIONS REPLICATION FACTOR READY
consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a strimzi 50 3 True
strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 strimzi 1 3 True
strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b strimzi 1 1 True
topic1 strimzi 2 3 True
topic2 strimzi 2 3 True
topic3 strimzi 2 3 True
ssm-user@ip-10-159-193-187:~$ k -n strimzi get kafkausers
NAME CLUSTER AUTHENTICATION AUTHORIZATION READY
akhq-strimzi-user strimzi tls-external simple True
arq-int-admin-user strimzi tls-external simple True
arq-int-test-user strimzi tls-external simple True
strimzi-superuser strimzi tls-external simple True
When uninstalling the Helm releases, everything gets cleaned up correctly except for these 3 internal topics:
NAME CLUSTER PARTITIONS REPLICATION FACTOR READY
consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a strimzi 50 3 True
strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 strimzi 1 3 True
strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b strimzi 1 1 True
even after the strimzi-entity-operator has been deleted:
ssm-user@ip-10-159-193-187:~$ k -n strimzi get pods
NAME READY STATUS RESTARTS AGE
strimzi-cluster-operator-58fd7cb5f5-pv6hz 1/1 Running 0 13h
The only way I have found to clean up those topics is to delete them manually. I was just wondering whether this is intentional, and whether there is any configuration on the topicOperator that can force them to be removed when the strimzi-entity-operator pod is finalized.
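By manual deletion I mean something along these lines (just a sketch, using the leftover topic names from the listing above):
# remove the leftover internal KafkaTopic resources by hand
k -n strimzi delete kafkatopic \
  consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a \
  strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 \
  strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b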
Thank you!!