How to Increase chunk size for stream messages ingressed via mqtt plugin #14908
The chunk size itself can't be tuned: the amount of data that ends up in a chunk is however much data the stream writer has in its mailbox at a point in time. So the higher the publishing throughput, the larger the chunk size. Smaller chunks cause lower consumer throughput because the stream is read in smaller sections. You should try upgrading to 4.2.0, as https://www.rabbitmq.com/blog/2025/09/26/stream-delivery-optimization specifically improves this scenario by reading ahead in the stream. I was looking into this for a somewhat similar use case in #14877. In 4.2.1 you should be able to tune this read-ahead size to trade memory for higher consumer throughput. I will try to reproduce this scenario with MQTT. I expect that 4.2.0 will make a big improvement.
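The mailbox-driven batching described above can be illustrated with a small, self-contained Python sketch. This is a conceptual model only, not RabbitMQ internals: the writer drains whatever is queued at flush time and writes it as one chunk, so messages trickling in one at a time (as with low-rate MQTT ingress) produce chunks of size 1, while a high-throughput producer builds up a backlog that becomes a single larger chunk.

```python
# Conceptual model of stream chunk formation (not RabbitMQ internals):
# the stream writer drains its mailbox and writes the contents as one chunk.

def write_chunks(mailbox_snapshots):
    """Each snapshot is the writer's mailbox content at flush time.

    Returns the size of each chunk written; empty snapshots write nothing.
    """
    return [len(batch) for batch in mailbox_snapshots if batch]

# Low-rate MQTT ingress: one message in the mailbox per flush -> chunk size 1.
mqtt_like = [["m1"], ["m2"], ["m3"]]

# High-throughput stream producer: messages queue up between flushes.
batched = [["m1", "m2", "m3", "m4"], ["m5", "m6"]]

print(write_chunks(mqtt_like))  # [1, 1, 1]
print(write_chunks(batched))    # [4, 2]
```

The same publishing workload can therefore yield very different chunk sizes depending on how much data is in flight between flushes, which is why per-message MQTT ingress tends to produce size-1 chunks.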
Community Support Policy
RabbitMQ version used
4.1.2
Erlang version used
27.3.x
Operating system (distribution) used
Linux
How is RabbitMQ deployed?
Bitnami Helm chart
rabbitmq-diagnostics status output
not applicable
Logs from node 1 (with sensitive values edited out)
not applicable
Logs from node 2 (if applicable, with sensitive values edited out)
No response
Logs from node 3 (if applicable, with sensitive values edited out)
No response
rabbitmq.conf
For the configuration, see the Kubernetes sections below.
Steps to deploy RabbitMQ cluster
Helm deploy via pipeline
Steps to reproduce the behavior in question
Messages are sent to RabbitMQ via MQTT message producers. Based on their routing keys, they are eventually forwarded to streams.
The resulting message flow is described below under "What problem are you trying to solve?".
advanced.config
No response
Application code
No response
Kubernetes deployment file
Kubernetes values yaml
What problem are you trying to solve?
Is there a way to increase the chunk size of messages stored in a stream when they are ingressed via the MQTT plugin? We suspect that this is the reason for the slow performance.
More detail:
With the setup described in "Steps to reproduce the behavior in question" the messages can be successfully consumed by the clients.
So the message flow:
MQTT > amq.topic exchange > super-stream exchange > stream partition > client

works as desired.

What we noticed is that, with this setup, the message throughput is rather slow. We used RabbitMQ Stream PerfTest and our internal OpenTelemetry metrics to compare different setups and found that messages sent directly to a stream are consumed much faster than messages ingressed via MQTT.
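As background on the first hop of that flow: the RabbitMQ MQTT plugin publishes inbound messages to the amq.topic exchange, translating MQTT topic separators (`/`) into AMQP routing-key dots (`.`) and the MQTT wildcard `+` into AMQP `*`. A minimal, simplified sketch of that translation (illustrative only; the real plugin also handles edge cases such as dots inside topic segments):

```python
def mqtt_topic_to_routing_key(topic: str) -> str:
    """Simplified sketch of how the RabbitMQ MQTT plugin maps an MQTT
    topic to an AMQP routing key on amq.topic:
      '/' becomes '.', '+' becomes '*', '#' is unchanged."""
    return topic.replace("/", ".").replace("+", "*")

print(mqtt_topic_to_routing_key("sensors/temp/room1"))  # sensors.temp.room1
print(mqtt_topic_to_routing_key("sensors/+/room1"))     # sensors.*.room1
print(mqtt_topic_to_routing_key("sensors/#"))           # sensors.#
```

These translated routing keys are what the super-stream exchange bindings match against when forwarding into stream partitions.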
The reason seems to be that messages ingressed via MQTT are stored in individual chunks. When consuming from a stream that holds these messages, the PerfTest tool always reports a chunk size of 1.
Throughput of our system when consuming from a stream that was filled directly by a stream producer (instead of via the MQTT plugin) is roughly at the level reported here:
https://www.rabbitmq.com/blog/2025/09/26/stream-delivery-optimization