Replies: 4 comments
-
@piotrR5 RabbitMQ 3.12 is out of community support. I am highly confident that RabbitMQ does not "confuse queues": either the issue is on your applications' end, or you misunderstand how certain exchange types perform routing, or there is queue churn happening in parallel whose concurrency hazards only become visible under certain load. Specifically, the expectation in your report that messages from exchange A "pile up" on the first queue and "wait for ack" suggests that you need to revisit the basics.

Exchanges in RabbitMQ do not store messages. Messages cannot "pile up" in an exchange. Exchanges do not participate in consumer acknowledgements in any way (in fact, they do not participate in publisher confirms either). They are just routing tables: bindings are the data inputs, and an exchange implementation module applies certain logic to a list of bindings to produce a list of queues. Exchanges are not a load balancing mechanism, although the consistent hashing exchange can be used to partition messages across a set of queues.

Queues in RabbitMQ do not coordinate with each other, including when they are bound to the same exchange or set of exchanges. Again, exchanges are not a load balancing mechanism. If consumers on queue A cannot keep up with the ingress rate, queue B has nothing to do with it beyond sharing CPU and disk I/O resources on the same node.

You are welcome to provide an executable way to reproduce this against 3.13.3, e.g. using PerfTest, which can simulate a very wide range of workloads, including using pre-declared topologies.
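For illustration, here is a minimal sketch of the routing behavior described above, assuming a local broker with default credentials and the rabbitmq/amqp091-go client (the exchange and queue names are made up): two independent fanout exchanges, each with its own queue, and a single message published to one of them. Only the queue bound to that exchange receives a copy.

```go
package main

import (
	"context"
	"fmt"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// Assumed: a local broker with default credentials.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// Put the channel into confirm mode so we know when the broker has routed the message.
	if err := ch.Confirm(false); err != nil {
		log.Fatal(err)
	}
	confirms := ch.NotifyPublish(make(chan amqp.Confirmation, 1))

	// Two independent fanout exchanges, each with its own, distinctly named queue.
	for _, name := range []string{"a", "b"} {
		if err := ch.ExchangeDeclare("exchange-"+name, "fanout", true, false, false, false, nil); err != nil {
			log.Fatal(err)
		}
		if _, err := ch.QueueDeclare("queue-"+name, true, false, false, false, nil); err != nil {
			log.Fatal(err)
		}
		if err := ch.QueueBind("queue-"+name, "", "exchange-"+name, false, nil); err != nil {
			log.Fatal(err)
		}
	}

	// Publish a single message to exchange-a only.
	if err := ch.PublishWithContext(context.Background(), "exchange-a", "", false, false,
		amqp.Publishing{Body: []byte("hello")}); err != nil {
		log.Fatal(err)
	}
	<-confirms // wait until the broker has accepted (and routed) the message

	// Only queue-a receives a copy; queue-b stays empty.
	for _, q := range []string{"queue-a", "queue-b"} {
		_, ok, err := ch.Get(q, true)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s has a message: %v\n", q, ok)
	}
}
```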
-
While not relevant to the (so far very vaguely defined) question at hand, the streadway/amqp client (https://github.com/streadway/amqp) has been deprecated for a few years now in favor of https://github.com/rabbitmq/amqp091-go.
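For anyone migrating, the switch is mostly a matter of changing the import path, since the maintained client is a fork with a largely source-compatible API. A minimal sketch, assuming a local broker with default credentials:

```go
package main

import (
	"log"

	amqp "github.com/rabbitmq/amqp091-go" // replaces the deprecated client's import path
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
	log.Println("connected")
}
```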
-
You cannot have two queues named exactly the same in the same virtual host. You can declare a queue twice, but that does not create two queues. This is trivial to verify. You can declare two queues with the same name in two different virtual hosts, but then you cannot bind them to the same exchange, because virtual hosts exist to offer logical isolation of queues, exchanges, bindings, and so on.

You can bind a queue N times to the same exchange, or to two different exchanges. Binding a queue to an exchange multiple times does not make much sense and makes no difference for virtually all exchange types, except for the Consistent Hashing exchange, which has a stateful consistent hashing ring where bindings have a special meaning.

For the third time, I must mention that exchanges, including fanouts, are not a load balancing mechanism. A fanout will put a complete copy of every message into every bound queue, regardless of how many times the queue was bound. This is the opposite of "load balancing" between queues.

Too many things in this description do not add up, and some claims are factually incorrect. Let's cut the guesswork and see what your code is actually doing.
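A small sketch of the two points above (queue declaration being idempotent, and duplicate bindings making no difference for a fanout), assuming a local broker, the rabbitmq/amqp091-go client, and made-up names:

```go
package main

import (
	"context"
	"fmt"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// Confirm mode, so we only inspect the queue after the broker has routed the message.
	if err := ch.Confirm(false); err != nil {
		log.Fatal(err)
	}
	confirms := ch.NotifyPublish(make(chan amqp.Confirmation, 1))

	if err := ch.ExchangeDeclare("demo-fanout", "fanout", true, false, false, false, nil); err != nil {
		log.Fatal(err)
	}

	// Declaring queue "X" twice is idempotent: both calls refer to the same queue.
	for i := 0; i < 2; i++ {
		if _, err := ch.QueueDeclare("X", true, false, false, false, nil); err != nil {
			log.Fatal(err)
		}
	}

	// Two bindings to the same fanout, with different routing keys.
	for _, key := range []string{"key-1", "key-2"} {
		if err := ch.QueueBind("X", key, "demo-fanout", false, nil); err != nil {
			log.Fatal(err)
		}
	}

	// Publish one message and wait for the broker to confirm it.
	if err := ch.PublishWithContext(context.Background(), "demo-fanout", "", false, false,
		amqp.Publishing{Body: []byte("one message")}); err != nil {
		log.Fatal(err)
	}
	<-confirms

	// Despite the duplicate bindings, the queue holds exactly one copy.
	_, first, err := ch.Get("X", true)
	if err != nil {
		log.Fatal(err)
	}
	_, second, err := ch.Get("X", true)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("first Get found a message: %v, second Get found a message: %v\n", first, second)
}
```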
-
@piotrR5 I would just like to add the following, since I see that you are a CS student, and this could be a "teachable moment". In the future, before filing an issue as a bug on GitHub for any software project, ask yourself whether you can actually reproduce the behavior, whether you have read the relevant documentation, and whether the problem could be in your own code rather than in the project.

Don't forget that your entire CS career is made possible by the existence of open-source and free software, so it is your duty not to waste maintainers' time unless you are certain you have found a real issue.
-
Describe the bug
A queue named X is connected to exchange A, and another queue named X is connected to exchange B. Under normal load they correctly distribute messages: from exchange A to the first queue and from exchange B to the second queue. Once a high load is reached, the server tends to mix up the queues and sends messages from exchange A at random to either the first or the second queue.
Reproduction steps
Have two fanout exchanges and two queues named exactly the same, each pointing to a different exchange. Create a 'heavy load', for example two messages sent at once through the first exchange. One message is delivered to the first queue and one to the second, which causes a data leak.
Expected behavior
Messages from exchange A pile up in the first queue and wait for an ack, while the second queue is left empty because exchange B didn't send anything.
Additional context
I'll try to get some more info from our devops about the exact infra we are using, but for now that's all I have to say.
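For reference, one possible reading of the steps above as a minimal sketch, assuming a single virtual host, a local broker, and the rabbitmq/amqp091-go client (none of which the report specifies): within one virtual host the second declaration of a queue named X is a no-op, so both bindings end up on one and the same queue, and messages published to either exchange land there.

```go
package main

import (
	"context"
	"fmt"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// Two fanout exchanges, as described in the report.
	for _, ex := range []string{"A", "B"} {
		if err := ch.ExchangeDeclare(ex, "fanout", true, false, false, false, nil); err != nil {
			log.Fatal(err)
		}
	}

	// "Two queues named exactly the same": in one virtual host the second
	// declaration is idempotent and refers to the very same queue.
	for i := 0; i < 2; i++ {
		if _, err := ch.QueueDeclare("X", true, false, false, false, nil); err != nil {
			log.Fatal(err)
		}
	}

	// Consequently both bindings attach to a single queue.
	for _, ex := range []string{"A", "B"} {
		if err := ch.QueueBind("X", "", ex, false, nil); err != nil {
			log.Fatal(err)
		}
	}

	// A message published to A therefore ends up in X, exactly where a
	// message published to B would also end up.
	if err := ch.PublishWithContext(context.Background(), "A", "", false, false,
		amqp.Publishing{Body: []byte("from A")}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("published to A; any consumer of queue X will see it")
}
```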