-
The configuration is badly formatted and hard to read. It might depend on many things, but in general, yes: part of the memory will be taken by Java, and the rest will likely be filled by the disk page cache, which keeps recently written data in memory so it can be read back quickly. So, given enough time, it will probably end up consuming all the memory you give it. Whether it really *needs* that memory is a different question and ultimately depends on your use cases, the performance you expect, and so on.
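To see how that split looks in practice, here is a minimal sketch (my own illustration, not something from this thread) that compares the JVM's heap and non-heap usage with the total memory charged to the container's cgroup; the gap between the two is largely the page cache. The cgroup file paths are assumptions and differ between cgroup v1 and v2.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.nio.file.Files;
import java.nio.file.Path;

public class BrokerMemoryBreakdown {
    public static void main(String[] args) throws Exception {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();

        // JVM-managed memory: heap (bounded by -Xmx) plus non-heap (metaspace, code cache, ...).
        System.out.printf("JVM heap used:      %d MiB%n", heap.getUsed() / (1024 * 1024));
        System.out.printf("JVM non-heap used:  %d MiB%n", nonHeap.getUsed() / (1024 * 1024));

        // Total memory charged to the container's cgroup. This includes the page cache
        // the kernel fills with recently written log segments, so it will normally be
        // much larger than the JVM figures and will keep growing toward the limit.
        // Paths are assumptions: cgroup v2 exposes memory.current, cgroup v1 exposes
        // memory/memory.usage_in_bytes.
        Path v2 = Path.of("/sys/fs/cgroup/memory.current");
        Path v1 = Path.of("/sys/fs/cgroup/memory/memory.usage_in_bytes");
        Path usageFile = Files.exists(v2) ? v2 : v1;
        long cgroupUsed = Long.parseLong(Files.readString(usageFile).trim());
        System.out.printf("cgroup memory used: %d MiB%n", cgroupUsed / (1024 * 1024));
    }
}
```

In line with the answer above, a cgroup figure that keeps climbing toward the limit while the JVM numbers stay flat is usually the page cache doing its job rather than the broker heap growing.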
-
This is my cluster specification. I am constantly writing test data into a Kafka topic with 120 partitions on this cluster. When I check the metrics, the memory consumption of each broker either goes above the limit or climbs slowly but steadily. I have attached a Grafana dashboard screenshot as well. Any guidance would be highly appreciated.
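For reference, a constant test load like the one described can be generated with the standard Kafka producer API. This is only a sketch with assumed bootstrap address, topic name, and message size, not the poster's actual test harness.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ConstantTestLoad {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed bootstrap address; adjust to your cluster's listener.
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        String topic = "test-topic";           // assumed name; the thread's topic has 120 partitions
        String payload = "x".repeat(1024);     // ~1 KiB dummy value per record

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            long i = 0;
            while (true) {
                // Varying keys spread records across all partitions. The brokers' page
                // cache fills with the written log segments, which is the memory growth
                // discussed in the answer above.
                producer.send(new ProducerRecord<>(topic, Long.toString(i++), payload));
            }
        }
    }
}
```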
