If this is some kind of batch-processing consumer, then using a stream would be the optimal way to achieve this in modern RabbitMQ. You can read until either you have enough entries or enough time has elapsed, do the processing, then commit a single offset.
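The read-until-count-or-timeout loop described above can be sketched independently of any particular client library. This is a conceptual sketch only: `process_batch` and `commit_offset` are hypothetical stand-ins for whatever the real stream client provides, and the entry iterator would in practice come from the broker.

```python
import time
from typing import Callable, Iterator, List, Optional, Tuple

def consume_in_batches(
    entries: Iterator[Tuple[int, bytes]],          # (offset, payload) pairs
    process_batch: Callable[[List[bytes]], None],  # hypothetical processing hook
    commit_offset: Callable[[int], None],          # hypothetical offset store
    max_batch: int = 2000,
    max_wait_s: float = 5.0,
) -> None:
    """Accumulate entries until either max_batch entries have arrived or
    max_wait_s has elapsed, process them, then commit a single offset."""
    batch: List[bytes] = []
    last_offset: Optional[int] = None
    deadline = time.monotonic() + max_wait_s
    for offset, payload in entries:
        batch.append(payload)
        last_offset = offset
        if len(batch) >= max_batch or time.monotonic() >= deadline:
            process_batch(batch)
            commit_offset(last_offset)  # one commit per batch, not per entry
            batch = []
            deadline = time.monotonic() + max_wait_s
    if batch:  # flush a final partial batch
        process_batch(batch)
        commit_offset(last_offset)
```

The key point is the single `commit_offset` call per batch: on restart the consumer resumes from the last committed offset, so a batch is either fully reprocessed or not at all.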

Historically, quorum queues have not performed well with very large prefetch limits, and as @michaelklishin mentioned, this 2000 value has been in place since day one. The main reason is that the prefetch determines the flow control between the channel and the queue, and flow control in AMQP 0-9-1 is a bit primitive.

You could consider moving to AMQP proper (1.0) where the settlement of entries and the credits (that c…
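The link-credit model that AMQP 1.0 uses for flow control can be illustrated with a toy simulation. This is a conceptual sketch, not the protocol wire format or any client library API: the receiver grants credit, the sender may only transfer while credit remains, and settling a delivery lets the receiver top the credit back up.

```python
from collections import deque

class Link:
    """Toy model of AMQP 1.0 link credit: the sender may only transfer
    a message while the receiver has granted it credit."""

    def __init__(self) -> None:
        self.credit = 0
        self.in_flight: deque = deque()

    def flow(self, credit: int) -> None:
        """Receiver grants the sender permission for more transfers."""
        self.credit += credit

    def transfer(self, msg) -> bool:
        """Sender attempts a transfer; refused when credit is exhausted."""
        if self.credit == 0:
            return False
        self.credit -= 1
        self.in_flight.append(msg)
        return True

    def settle(self, replenish: int = 1) -> None:
        """Receiver settles the oldest delivery and tops up credit, so
        settlement is what drives the flow-control window forward."""
        self.in_flight.popleft()
        self.flow(replenish)
```

Compared with AMQP 0-9-1 prefetch, the receiver here controls the window dynamically per link, which is why moving to AMQP 1.0 gives finer-grained flow control between consumer and queue.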

Answer selected by michaelklishin
This discussion was converted from issue #14486 on September 02, 2025 13:52.