Memory footprint in environments where clients repeatedly run into channel exceptions #8199
Replies: 5 comments
-
What version of RabbitMQ and Erlang are you using?
-
When the issue appeared, the versions were:
-
Reasoning About Memory Use explains how to produce a breakdown of what uses the memory. According to the log above, your code
and then… according to the log, the connection is leaked. Connection churn is another resource-intensive scenario that has nothing to do with "RabbitMQ garbage collection". See Connection Lifecycle Events for another source of information about what is going on in your system. Clients can leak resources. In some cases RabbitMQ cannot do anything about that (an open connection is an open connection) and cannot be blamed for the resource leak.
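For reference, both of those checks can be done with the CLI tools that ship with the broker. A minimal sketch (the column list for list_connections is illustrative and may vary between versions):

```
# Per-category breakdown of what the node itself thinks uses the memory
rabbitmq-diagnostics memory_breakdown

# List open connections to spot a leak or excessive churn
rabbitmqctl list_connections name peer_host channels connected_at
```

A connection count that keeps growing here usually points at a client-side leak rather than at the broker.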
-
@denis-walther without an executable way to reproduce this (even one that takes a few hours to run), all we can suggest is using the doc guides above to narrow down whether this may be as trivial as a connection leak or a case of high connection churn. Please put together an example repo we can run.
-
Specifically, this part
makes me think that it is a good old connection leak by your own apps that you blame on RabbitMQ's "garbage collector". FTR, you can trigger a system-wide GC run (for all Erlang processes on the node) using CLI tools:
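For example (a sketch, assuming a reasonably recent rabbitmqctl):

```
# Force a garbage collection run on all Erlang processes on the target node
rabbitmqctl force_gc
```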
If that does not help even after a brief period of time, the issue is not with garbage collection.
-
Describe the bug
If the vhost(s) of the clients do not have the correct exchange set up:
and there are repeated connection attempts (e.g. due to a faulty configuration), it looks like some memory leftovers are produced.
Those no longer seem to be visible to RabbitMQ itself.
Log lines (every 5 seconds) from a service connecting to RabbitMQ:
Log lines from the RabbitMQ host:
The error appeared roughly every 6 hours with 8 hosts connecting to 1 RabbitMQ instance/server.
After roughly 6 hours the memory was exhausted (the machine's OOM killer fired), and RabbitMQ did not report that it was using that much memory.
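One way to see the discrepancy described above is to compare what the OS reports for the RabbitMQ process with what the broker reports about itself (a sketch, assuming shell access to the broker host; beam.smp is the Erlang VM process running RabbitMQ):

```
# Resident memory of the Erlang VM process as seen by the OS
ps -o pid,rss,comm -p "$(pgrep -f beam.smp)"

# Memory use as reported by RabbitMQ itself
rabbitmq-diagnostics memory_breakdown
```

A large gap between the two numbers would match the "leftovers not seen by RabbitMQ" observation.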
Reproduction steps
Expected behavior
The expected behaviour is that either RabbitMQ reports these memory leftovers so that an admin can take action, or that they are covered by garbage collection and cleaned up automatically.
Additional context
No response