Replies: 3 comments
-
Use durable queues and durable exchanges. When a node is restarted, transient queues hosted on it are deleted on other nodes, and so are their bindings. When a client reconnects and starts declaring the same queue, or performing operations on it (such as binding it to something), there is a natural race condition between the nodes deleting transient queues and the client re-declaring and operating on them. This is further exacerbated by the fact that clients can connect to different nodes. Durable queues are not deleted on node restart, so you won't run into this race condition.
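For illustration, a minimal sketch of durable declarations with the .NET client, assuming RabbitMQ.Client 6.x and an already-open channel; the exchange, queue, and routing key names are just the ones used in the test below:
// Assumes: using RabbitMQ.Client; and an open IModel named "channel".
// Durable entities survive a broker restart, so topology recovery can re-bind
// and re-subscribe without hitting a not_found channel exception.
channel.ExchangeDeclare(exchange: "A", type: ExchangeType.Direct, durable: true, autoDelete: false);
channel.QueueDeclare(queue: "B", durable: true, exclusive: false, autoDelete: false, arguments: null);
channel.QueueBind(queue: "B", exchange: "A", routingKey: "B");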
-
Since there is no way clients and server nodes could coordinate over something like that when clients
Starting with RabbitMQ 4.0, transient entities will be gone (the property will be ignored). In fact,
-
My questions were:
So in summary, this is not a bug; it was always intended to work like this. It would be great to add that information to the Topology Recovery Tutorial Page (https://www.rabbitmq.com/api-guide.html#recovery) and explain that it will only work with durable queues. Best regards
-
Describe the bug
Dear all
I have been testing topology recovery on network failures and broker restarts.
Goal of the test: bidirectional communication between Client A and Client B that recovers after a failure. At the moment the message contents are not important.
VERSIONS
Server: RabbitMQ 3.11.12 + Erlang 25.3 (just installed on ubuntu server minimal)
Client: C# library version 6.4.0 (.NET Framework 4.7.2)
TESTING
Only one client is used.
1. Open 2 connections to the server: one for Tx, one for Rx (automatic connection and topology recovery enabled on both).
2. Open 2 channels (one for Tx, one for Rx).
3. Create Exchange A of type direct.
3.1) Create Queue B (non-durable, not exclusive, no auto-delete).
3.2) Bind Queue B to Exchange A with routing key B.
4. Create Queue A (non-durable, not exclusive, no auto-delete).
4.1) Subscribe a basic consumer to Queue A.
5. Send a message to Queue B. (A condensed C# sketch of this setup follows the list.)
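A condensed sketch of these steps, assuming RabbitMQ.Client 6.x; the host name, client-provided names, and message body are placeholders taken from the logs below:
// Requires: using System.Text; using RabbitMQ.Client; using RabbitMQ.Client.Events;

// Factory with automatic connection and topology recovery enabled (steps 1-2).
var factory = new ConnectionFactory
{
    HostName = "192.168.30.122",          // placeholder broker address
    AutomaticRecoveryEnabled = true,
    TopologyRecoveryEnabled = true
};

var txConn = factory.CreateConnection("app:XXXX component:test-A-B-Tx");
var rxConn = factory.CreateConnection("app:XXXX component:test-A-B-Rx");
var txChannel = txConn.CreateModel();
var rxChannel = rxConn.CreateModel();

// Step 3: direct exchange A, non-durable queue B bound with routing key "B".
txChannel.ExchangeDeclare(exchange: "A", type: ExchangeType.Direct, durable: false, autoDelete: false);
txChannel.QueueDeclare(queue: "B", durable: false, exclusive: false, autoDelete: false, arguments: null);
txChannel.QueueBind(queue: "B", exchange: "A", routingKey: "B");

// Step 4: non-durable queue A with a basic consumer subscribed to it.
rxChannel.QueueDeclare(queue: "A", durable: false, exclusive: false, autoDelete: false, arguments: null);
var consumer = new EventingBasicConsumer(rxChannel);
consumer.Received += (sender, ea) => { /* handle delivery */ };
rxChannel.BasicConsume(queue: "A", autoAck: true, consumer: consumer);

// Step 5: publish a message routed to queue B.
txChannel.BasicPublish(exchange: "A", routingKey: "B", basicProperties: null,
                       body: Encoding.UTF8.GetBytes("test"));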
During normal operation everything works correctly.
First test: network failure --> both connections recover perfectly.
Second test: server restart --> sometimes they recover perfectly; other times the connections fail and I get the following logs on the server.
In the case of the exchange binding:
2023-03-31 11:39:54.679680+00:00 [info] <0.667.0> accepting AMQP connection <0.667.0> (172.16.200.3:61549 -> 192.168.30.122:5672)
2023-03-31 11:39:54.724980+00:00 [info] <0.667.0> connection <0.667.0> (172.16.200.3:61549 -> 192.168.30.122:5672) has a client-provided name: app:XXXX component:test-A-B-Tx
2023-03-31 11:39:54.726701+00:00 [info] <0.667.0> connection <0.667.0> (172.16.200.3:61549 -> 192.168.30.122:5672 - app:XXXX component:test-A-B-Tx): user 'XXXX' authenticated and granted access to vhost 'XXXX'
2023-03-31 11:39:54.728913+00:00 [error] <0.689.0> Channel error on connection <0.667.0> (172.16.200.3:61549 -> 192.168.30.122:5672, vhost: 'XXXX', user: 'XXXX'), channel 1:
2023-03-31 11:39:54.728913+00:00 [error] <0.689.0> operation queue.bind caused a channel exception not_found: no queue 'B' in vhost 'XXXX'
2023-03-31 11:39:54.732347+00:00 [info] <0.667.0> closing AMQP connection <0.667.0> (172.16.200.3:61549 -> 192.168.30.122:5672 - app:XXXX component:test-A-B-Tx, vhost: 'XXXX', user: 'XXXX')
In the case of the consumer binding:
2023-03-31 11:39:54.680335+00:00 [info] <0.670.0> accepting AMQP connection <0.670.0> (172.16.200.3:61550 -> 192.168.30.122:5672)
2023-03-31 11:39:54.724717+00:00 [info] <0.670.0> connection <0.670.0> (172.16.200.3:61550 -> 192.168.30.122:5672) has a client-provided name: app:XXXX component:test-A-B-Rx
2023-03-31 11:39:54.726385+00:00 [info] <0.670.0> connection <0.670.0> (172.16.200.3:61550 -> 192.168.30.122:5672 - app:XXXX component:test-A-B-Rx): user 'XXXX' authenticated and granted access to vhost 'XXXX'
2023-03-31 11:39:54.729183+00:00 [error] <0.696.0> Channel error on connection <0.670.0> (172.16.200.3:61550 -> 192.168.30.122:5672, vhost: 'XXXX', user: 'XXXX'), channel 1:
2023-03-31 11:39:54.729183+00:00 [error] <0.696.0> operation basic.consume caused a channel exception not_found: no queue 'A' in vhost 'XXXX'
2023-03-31 11:39:54.732663+00:00 [info] <0.670.0> closing AMQP connection <0.670.0> (172.16.200.3:61550 -> 192.168.30.122:5672 - app:XXXX component:test-A-B-Rx, vhost: 'XXXX', user: 'XXXX')
DIAGNOSTICS
From what I understand, the topology recovery algorithm tries to re-bind (queue B) or re-subscribe to (queue A) a queue that is not there yet; this raises an exception, the recovery process fails, and the connection is dropped.
I can bypass the issue by making both queues (A & B) durable. Then they survive a server restart and the problem is solved, BUT...
I do not want to make them durable just for that.
QUESTIONS
So my questions are:
By the way, I have event listeners on all steps of the process to catch as much information as possible on the client side.
conn.ConnectionShutdown += Conn_ConnectionShutdown;
conn.RecoverySucceeded += CommManager_RecoverySucceeded;
conn.ConnectionRecoveryError += Conn_ConnectionRecoveryError;
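For reference, handler signatures matching those events in RabbitMQ.Client 6.x look roughly like this; the Console.WriteLine calls are placeholders for real logging:
// Requires: using System; using RabbitMQ.Client; using RabbitMQ.Client.Events;
void Conn_ConnectionShutdown(object sender, ShutdownEventArgs e)
{
    // Fired when the connection is closed, cleanly or not.
    Console.WriteLine($"Connection shut down: {e.ReplyCode} {e.ReplyText} (initiator: {e.Initiator})");
}

void CommManager_RecoverySucceeded(object sender, EventArgs e)
{
    // Fired after connection and topology recovery complete successfully.
    Console.WriteLine("Recovery succeeded.");
}

void Conn_ConnectionRecoveryError(object sender, ConnectionRecoveryErrorEventArgs e)
{
    // Fired when an automatic connection recovery attempt fails.
    Console.WriteLine($"Connection recovery error: {e.Exception}");
}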
Should the test code be required, I can upload it.
Thank you for your help and best regards.
Reproduction steps
Same as the steps listed under TESTING above.
Expected behavior
The topology should recover, but due to an exception in the recovery process the connection is dropped and recovery has to start again.
Additional context
No response