
Commit 7aef123

Increase default rabbit.max_link_credit
from 128 to 170. See code comments for rationale.

On an Ubuntu box, run

```
quiver //host.docker.internal//queues/my-quorum-queue --durable --count 100k --duration 10m --body-size 12 --credit 10000
```

Before this commit:

```
RESULTS

Count ............................................... 100,000 messages
Duration ............................................... 11.0 seconds
Sender rate ........................................... 9,077 messages/s
Receiver rate ......................................... 9,097 messages/s
End-to-end rate ....................................... 9,066 messages/s
```

After this commit:

```
RESULTS

Count ............................................... 100,000 messages
Duration ................................................ 6.2 seconds
Sender rate .......................................... 16,215 messages/s
Receiver rate ........................................ 16,271 messages/s
End-to-end rate ...................................... 16,166 messages/s
```

That's because more `#enqueue{}` Ra commands can be batched before fsyncing.

So, this commit brings the performance of the scenario "a single connection publishing to a quorum queue with a large number (>200) of unconfirmed publishes" in AMQP 1.0 closer to AMQP 0.9.1.

(cherry picked from commit 55e6d58)
1 parent be351bd commit 7aef123
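For operators who want a different value, here is a minimal sketch of overriding the default via `advanced.config`. It assumes the `rabbit` application environment key `max_link_credit` named in the commit title is read at session setup; verify the key against your RabbitMQ version before relying on it.

```
%% advanced.config sketch: override the link-credit default.
%% Assumption: the rabbit app env key `max_link_credit` from the
%% commit title is honored; confirm for your RabbitMQ version.
[
 {rabbit, [
   {max_link_credit, 170}
 ]}
].
```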

File tree

1 file changed: +9, -1 lines changed


deps/rabbit/src/rabbit_amqp_session.erl

Lines changed: 9 additions & 1 deletion
```
@@ -36,12 +36,20 @@
 %% 32 for quorum queues
 %% 256 for streams
 %% 400 for classic queues
+%% Note however that rabbit_channel can easily overshoot quorum queues' soft limit by 300 due to
+%% higher credit_flow_default_credit setting.
 %% If link target is a queue (rather than an exchange), we could use one of these depending
 %% on target queue type. For the time being just use a static value that's something in between.
 %% In future, we could dynamically grow (or shrink) the link credit we grant depending on how fast
 %% target queue(s) actually confirm messages: see paper "Credit-Based Flow Control for ATM Networks"
 %% from 1995, section 4.2 "Static vs. adaptive credit control" for pros and cons.
--define(DEFAULT_MAX_LINK_CREDIT, 128).
+%% We choose a default of 170 because 170 x 1.5 = 255 which is still below DEFAULT_MAX_QUEUE_CREDIT of 256.
+%% We use "x 1.5" in this calculation because we grant 170 new credit half way through leading to maximum
+%% 85 + 170 = 255 unconfirmed in-flight messages to the target queue.
+%% By staying below DEFAULT_MAX_QUEUE_CREDIT, we avoid situations where a single client is able to enqueue
+%% faster to a quorum queue than to consume from it. (Remember that a quorum queue fsyncs each credit top
+%% up and batch of enqueues.)
+-define(DEFAULT_MAX_LINK_CREDIT, 170).
 %% Initial and maximum link credit that we grant to a sending queue.
 %% Only when we sent sufficient messages to the writer proc, we will again grant
 %% credits to the sending queue. We have this limit in place to ensure that our
```
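The new comments encode a halfway replenish policy: grant a fresh batch of 170 credits once the sender has used half of the previous grant, so at most 85 + 170 = 255 messages are in flight, just under DEFAULT_MAX_QUEUE_CREDIT (256). Below is a minimal Erlang sketch of that arithmetic; the module and function names are hypothetical, and the real logic lives inside `rabbit_amqp_session.erl`.

```
%% link_credit_sketch.erl -- illustrative only; maybe_topup/1 is a
%% hypothetical helper, not the actual rabbit_amqp_session function.
-module(link_credit_sketch).
-export([maybe_topup/1]).

-define(DEFAULT_MAX_LINK_CREDIT, 170).

%% Grant a fresh batch of link credit once the sender has consumed at
%% least half of the previous grant. Worst case in flight:
%%   85 (remaining) + 170 (new grant) = 255 < DEFAULT_MAX_QUEUE_CREDIT (256).
maybe_topup(CreditLeft) when CreditLeft =< ?DEFAULT_MAX_LINK_CREDIT div 2 ->
    {grant, ?DEFAULT_MAX_LINK_CREDIT};
maybe_topup(_CreditLeft) ->
    no_grant.
```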
