Dynamic shovel publisher not blocked on low disk alarm #7752
Replies: 8 comments
-
It must be the "direct connection" part of Shovel, as all network clients would be blocked the same way.
-
There is a whole bunch of code in the amqp091 shovel that handles blocked connections, for example. That shovel also seems to be using a normal amqp_connection.
-
Another potentially interesting thing is that when the destination connection is blocked, messages are added to a pending list, which looks unbounded:

```erlang
add_pending(Elem, State = #{dest := Dest}) ->
    Pending = maps:get(pending, Dest, queue:new()),
    State#{dest => Dest#{pending => queue:in(Elem, Pending)}}.
```
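The concern can be made concrete with a short Python sketch (hypothetical code, only mimicking the shape of the Erlang above): nothing in `add_pending` itself caps the queue, so while the destination stays blocked the pending buffer grows with every element added.

```python
from collections import deque

def add_pending(elem, state):
    # Mimics the Erlang add_pending/2 above: append elem to the
    # destination's pending queue, creating the queue if absent.
    dest = state["dest"]
    pending = dest.get("pending", deque())
    pending.append(elem)
    dest["pending"] = pending
    return state

state = {"dest": {}}
for i in range(1000):   # destination blocked: nothing ever drains pending
    state = add_pending(i, state)
print(len(state["dest"]["pending"]))  # 1000 -- no cap in this function
```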
-
Hi @omonnier-swlabs, there are many lines of code concerning flow/blocking state management in both the amqp091 shovel and its tests. So at this point I would like to ask you for reproduction steps: please describe what exactly you are doing and what you are seeing, and provide more logs.
-
IIRC it is bounded by the prefetch count on the source side. (The pending list is only used with ack-mode: on-publish or on-confirm.)
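That bound can be illustrated with a toy Python model (hypothetical, not the shovel's actual code): the source only delivers while unacked messages stay below the prefetch count, and in on-confirm mode a message is only acked back to the source after the destination confirms it, so a blocked destination caps the pending list at the prefetch.

```python
PREFETCH = 10

pending = []          # forwarded messages not yet confirmed by the destination
unacked = 0           # messages delivered by the source, not yet acked back
source_backlog = list(range(100))
dest_blocked = True   # destination connection blocked by a resource alarm

# The source stops delivering once PREFETCH messages are unacked.
while source_backlog and unacked < PREFETCH:
    msg = source_backlog.pop(0)
    unacked += 1
    pending.append(msg)        # ack-mode on-confirm: ack only after confirm
    if not dest_blocked:
        pending.pop(0)         # destination confirms -> ack to source
        unacked -= 1

print(len(pending))  # 10: bounded by the prefetch count
```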
-
As far as I know it is a known characteristic/feature/limitation that direct connections are not blocked by resource alarms; that also includes shovels with a local destination. If you would like to shovel between two clusters, you can consider defining the shovel on the source cluster as a workaround, so that you get proper flow control on the destination connection (which goes over the network), as Michael and Iliia mentioned.
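As a sketch of that workaround (the destination host name and most parameter values here are illustrative, adapted from the config shown in this thread), the shovel would be defined on the source cluster with a local source URI and a full network URI for the destination, so the destination leg is a real network connection subject to blocking:

```
rabbitmqctl set_parameter shovel my-shovel '{
  "src-protocol": "amqp091",
  "src-uri": "amqp://",
  "src-queue": "src_queue",
  "dest-protocol": "amqp091",
  "dest-uri": "amqp://dest-cluster-host:5672/%2F",
  "ack-mode": "on-confirm",
  "src-delete-after": "never"
}'
```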
-
Indeed, using full URIs (not
-
Thanks @michaelklishin for your feedback. Indeed, I tested a full destination URI in the shovel config, and the shovel connection now becomes "blocked" as expected when the low disk alarm fires.

Here are the details of the reproduction steps. The setup is composed of two RabbitMQ servers: rabbitmq-src and rabbitmq-dest.

Start rabbitmq-src and create src_queue:

```
$ docker run -d --hostname localhost --name rabbitmq-src -p 5672:5672 -p 8080:15672 rabbitmq:3-management
```

Start rabbitmq-dest and create dest_ex, dest_queue and a binding:

```
$ docker run -d --hostname localhost --name rabbitmq-dest -p 5673:5672 -p 8081:15672 rabbitmq:3-management
```

Connect both RabbitMQ containers through a network bridge:

```
$ docker network create rabbitmq-net
```

On rabbitmq-dest, enable the rabbitmq_shovel_management plugin:

```
$ docker exec rabbitmq-dest rabbitmq-plugins enable rabbitmq_shovel_management
```

CASE 1: dest-uri="amqp://"

Create the shovel:

```
$ docker exec rabbitmq-dest rabbitmqctl set_parameter shovel my-shovel '{"ack-mode": "on-confirm","dest-add-forward-headers": true,"dest-add-timestamp-header": true,"dest-exchange": "dest_ex","dest-exchange-key": "fromshovel","dest-protocol": "amqp091","dest-uri": "amqp://","src-delete-after": "never","src-protocol": "amqp091","src-queue": "src_queue","src-uri": "amqp://rabbitmq-src:5672/%2F"}'
```

Check that the shovel status is OK:

```
$ docker exec rabbitmq-dest rabbitmqctl shovel_status --formatter=pretty_table
```

Simulate a full disk on rabbitmq-dest: check the current free disk space and limit, then set the disk free limit to a value higher than the free disk space:

```
$ docker exec rabbitmq-dest rabbitmqctl set_disk_free_limit 1000GB
```

And ensure in the logs that the alarm fires:

```
$ docker logs rabbitmq-dest
```

Now publish a message to rabbitmq-src:src_queue.

-> The shovel gets the message and republishes it into rabbitmq-dest:dest_ex, despite the low disk alarm.

CASE 2: full shovel destination URI, dest-uri="amqp://rabbitmq-dest:5672/%2F"

Clear the previously created shovel and recreate it with the full destination URI:

```
$ docker exec rabbitmq-dest rabbitmqctl clear_parameter shovel my-shovel
$ docker exec rabbitmq-dest rabbitmqctl set_parameter shovel my-shovel '{"ack-mode": "on-confirm","dest-add-forward-headers": true,"dest-add-timestamp-header": true,"dest-exchange": "dest_ex","dest-exchange-key": "fromshovel","dest-protocol": "amqp091","dest-uri": "amqp://rabbitmq-dest:5672/%2F","src-delete-after": "never","src-protocol": "amqp091","src-queue": "src_queue","src-uri": "amqp://rabbitmq-src:5672/%2F"}'
```

Check that the shovel status is OK:

```
$ docker exec rabbitmq-dest rabbitmqctl shovel_status --formatter=pretty_table
```

Publish a message to rabbitmq-src:src_queue.

CASE 3: dest-uri="amqp://localhost:5672"

-> Same expected behavior as CASE 2.
-
This issue is related to the following rabbitmq-users discussion: https://groups.google.com/g/rabbitmq-users/c/iVdixNm4-Jk
Versions:
Configure a dynamic shovel like this:
```json
{
  "ack-mode": "on-confirm",
  "dest-add-forward-headers": true,
  "dest-add-timestamp-header": true,
  "dest-exchange": "outputexchange",
  "dest-exchange-key": "fromshovel",
  "dest-protocol": "amqp091",
  "dest-uri": "amqp://",
  "src-delete-after": "never",
  "src-protocol": "amqp091",
  "src-queue": "inputqueue",
  "src-uri": "amqp://172.17.0.1:5673/%2F"
}
```
When the RabbitMQ low disk alarm fires, publishers are expected to be blocked, as we can see in the logs:
2023-01-31 13:54:18.494092+00:00 [info] <0.297.0> Free disk space is insufficient. Free bytes: 566611968. Limit: 629145600
2023-01-31 13:54:18.494174+00:00 [warn] <0.293.0> *** Publishers will be blocked until this alarm clears ***
But the shovel continues to publish to the destination exchange. Only the other kinds of publisher connections are in the "blocked" state; the shovel connection is still in the "Idle" state.