Describe the bug
Two RabbitMQ pods are stuck in Terminating status during a namespace deletion.
To Reproduce
Steps to reproduce the behavior:
- Create a stream queue.
- Publish some messages.
- Remove the leader replica with `rabbitmq-streams delete_replica my_stream my_leader_node`.
- Restart the leader node.
- The bug should then occur (it is not systematic). A shell sketch of these steps follows this list.
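A minimal sketch of these steps, assuming the `tls` cluster from the manifest below, a `tls-server-*` pod naming scheme, and that `rabbitmqadmin` is available inside the container; `my_stream` and the node/pod names are hypothetical placeholders, and authentication flags are omitted (the operator stores credentials in the `tls-default-user` secret):

```bash
NS=stream-clients-test

# 1. Create a stream queue (x-queue-type: stream).
kubectl exec -n "$NS" tls-server-0 -- rabbitmqadmin \
  declare queue name=my_stream durable=true \
  arguments='{"x-queue-type":"stream"}'

# 2. Pump some messages through the default exchange.
for i in $(seq 1 100); do
  kubectl exec -n "$NS" tls-server-0 -- rabbitmqadmin \
    publish routing_key=my_stream payload="msg-$i"
done

# 3. Find the leader node, then remove its replica.
kubectl exec -n "$NS" tls-server-0 -- rabbitmq-streams stream_status my_stream
kubectl exec -n "$NS" tls-server-0 -- rabbitmq-streams \
  delete_replica my_stream my_leader_node   # use the node name reported by stream_status

# 4. Restart the (former) leader node by deleting its pod.
kubectl delete pod -n "$NS" tls-server-1
```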
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: stream-clients-test
---
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: tls
  namespace: stream-clients-test
spec:
  replicas: 3
  image: rabbitmq:3.13-rc-management
  service:
    type: LoadBalancer
  tls:
    secretName: tls-secret
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 800m
      memory: 1Gi
  rabbitmq:
    additionalPlugins:
      - rabbitmq_stream
      - rabbitmq_stream_management
```
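To hit the failure mode from the description, one would apply this manifest, run the reproduction steps above, and then delete the namespace; a sketch, assuming the manifest is saved as `cluster.yaml` (hypothetical filename):

```bash
kubectl apply -f cluster.yaml

# ...reproduction steps above, then trigger the bug...
kubectl delete namespace stream-clients-test

# When the bug occurs, some pods remain in Terminating indefinitely.
kubectl get pods -n stream-clients-test
```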
Expected behavior
All pods should stop when the namespace is deleted.
Version and environment information
- RabbitMQ: `rabbitmq:3.13-rc-management`
- GCP
Additional context
Per a conversation with @mkuratczyk, the problem is:

```
queue 'BenchmarkDotNet0' in vhost '/' will become unavailable if node [email protected] stops
rabbit_stream_coordinator will become unavailable if node [email protected] stops
```
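Those messages match the output of RabbitMQ's quorum-critical health check. The cluster operator's default pod template runs `rabbitmq-upgrade await_online_quorum_plus_one` in a preStop hook, so a node that is the last one hosting the stream coordinator (or a stream replica) blocks pod termination, which would leave the pods stuck in Terminating. A sketch of how one might confirm this state before deleting the namespace; pod and stream names are the placeholders from above:

```bash
NS=stream-clients-test

# Reports an error if stopping this node would make quorum queues or
# the stream coordinator unavailable (the messages quoted above).
kubectl exec -n "$NS" tls-server-0 -- \
  rabbitmq-diagnostics check_if_node_is_quorum_critical

# Inspect the stream's remaining replicas after delete_replica.
kubectl exec -n "$NS" tls-server-0 -- \
  rabbitmq-streams stream_status my_stream
```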