-
Describe the bug

We just migrated our RabbitMQ environment to version 4 and converted every classic mirrored queue into a quorum queue. We have a couple of queues whose consumers hold messages as unacked for a variable amount of time, using an unlimited prefetch. Switching to quorum queues caused some messages to remain ready in the queue instead of being consumed, because each consumer can now hold at most 2000 unacknowledged messages.

Reproduction steps
Expected behavior

According to the docs, setting prefetch to a value of 0 should mean an unlimited prefetch. This is not the case: in this file we can see that a prefetch of 0 is effectively treated as 2000. Also, the maximum prefetch that can be set is 2^16 - 1, so that is not a solution for us either. Either document that 0 is just a way to set the prefetch to 2000, or make it truly infinite.

Additional context

No response
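For concreteness, here is a minimal sketch of the consumer setup described above, using the RabbitMQ Java client with manual acknowledgements; the host and queue name are placeholders:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class UnlimitedPrefetchConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // prefetch_count = 0 is documented as "no specific limit";
        // against a quorum queue it behaves as a limit of 2000.
        channel.basicQos(0);

        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            // Messages stay unacked for a variable amount of time here,
            // so at most 2000 deliveries per consumer are outstanding
            // on a quorum queue; the rest remain ready in the queue.
            process(delivery.getBody());
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("work-queue", false, onDeliver, consumerTag -> {});
    }

    private static void process(byte[] body) { /* application-specific work */ }
}
```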
Replies: 4 comments
-
@Davian99 you are the first person to care about this in quorum queues' ≈ seven years of existence. For all intents and purposes, a prefetch higher than a few hundred makes virtually no difference to consumer throughput. If throughput is the only thing you care about, use automatic acknowledgement or try a stream.

And please stop the upvote fest on a 22-minute-old discussion @davlopez @jriolopez; those are absolutely not appreciated by our team (or any open source software maintainer).
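To illustrate the automatic acknowledgement suggestion, a hedged sketch reusing the channel and queue name from the report's setup: with autoAck enabled the broker settles each delivery as it is sent, so no prefetch window (and no 2000-message cap) gates dispatch. The trade-off is that in-flight messages are lost if the consumer crashes mid-processing.

```java
// autoAck = true: deliveries are settled on send, so channel prefetch
// (and the quorum queue's 2000-message cap) no longer gates dispatch.
// Trade-off: a crash mid-processing loses the in-flight messages.
channel.basicConsume("work-queue", true,
        (consumerTag, delivery) -> process(delivery.getBody()),
        consumerTag -> {});
```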
-
If this is some kind of batch processing consumer, then using a stream would be the optimal way to achieve this in modern RabbitMQ: you can read until either you've got enough entries or enough time has elapsed, do the processing, then commit a single offset (see the sketch below).

Historically, quorum queues have not performed well with very large prefetch limits, and as @michaelklishin mentioned, this 2000 value has been in place since day 1. The main reason is that the prefetch determines the flow control between the channel and the queue, and flow control in AMQP 0-9-1 is a bit primitive.

You could consider moving to AMQP proper (1.0), where the settlement of entries and the credits that control transfers are decoupled. Using AMQP 1.0 you can thus have a potentially infinite number of pending messages for a queue by granting credits without settling the entries. I still wouldn't recommend this way of processing, as pending entries have a higher memory overhead than ready entries, and acking large batches can cause resource spikes. That said, this type of practically unlimited batch processing can be done using AMQP.
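A sketch of that batch-processing pattern using the RabbitMQ Stream Java client (com.rabbitmq.stream); the stream name, consumer name, and batch size are illustrative, and the "enough time has elapsed" flush is omitted for brevity:

```java
import com.rabbitmq.stream.Environment;
import com.rabbitmq.stream.OffsetSpecification;
import java.util.ArrayList;
import java.util.List;

public class BatchStreamConsumer {
    private static final int BATCH_SIZE = 1000; // illustrative batch size

    public static void main(String[] args) {
        Environment environment = Environment.builder()
                .uri("rabbitmq-stream://localhost:5552") // placeholder URI
                .build();

        List<byte[]> batch = new ArrayList<>();
        environment.consumerBuilder()
                .stream("events")                // placeholder stream name
                .name("batch-processor")         // a name is required for server-side offset tracking
                .offset(OffsetSpecification.first())
                .manualTrackingStrategy().builder()
                .messageHandler((context, message) -> {
                    batch.add(message.getBodyAsBinary());
                    if (batch.size() >= BATCH_SIZE) {
                        processBatch(batch);     // application-specific work
                        batch.clear();
                        context.storeOffset();   // commit a single offset for the whole batch
                    }
                })
                .build();
    }

    private static void processBatch(List<byte[]> batch) { /* ... */ }
}
```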
-
I have updated the most relevant doc guide, the one on Publisher Confirms, Consumer Acknowledgements and Channel Prefetch, to mention that even if the channel prefetch (in AMQP 0-9-1 terms) is unlimited, other parts of the system can introduce limits as they see fit.
-
The AMQP 0-9-1 spec (for the prefetch-count field), the MQTT 5.0 spec (for the Receive Maximum property), and the AMQP 1.0 spec (for link credit) all address this point.
So, all three messaging protocols very explicitly state that the broker doesn't have to send messages to the consumer even if messages are available at the broker and the consumer can tolerate more messages. It's fine for the broker to require the consumer to settle messages before dispatching further messages. |
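Concretely, a consumer can keep the broker dispatching by settling promptly, for example in batches with the multi-ack flag. A sketch against the Java client setup from earlier; the batch size is illustrative, and a final flush for any remainder is omitted:

```java
// Delivery tags increase monotonically per channel, so acking every
// ACK_EVERY-th tag with multiple = true settles that delivery and all
// earlier unsettled ones, returning credit so the queue keeps dispatching.
final int ACK_EVERY = 100; // illustrative batch size
DeliverCallback onDeliver = (consumerTag, delivery) -> {
    process(delivery.getBody());
    long tag = delivery.getEnvelope().getDeliveryTag();
    if (tag % ACK_EVERY == 0) {
        channel.basicAck(tag, true); // multiple = true: batch settlement
    }
};
channel.basicConsume("work-queue", false, onDeliver, consumerTag -> {});
```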