Replies: 2 comments
-
You are asking us to document an "expected real world workload" breakdown, but there is no such thing as one real-world workload. There are tens or hundreds, with different profiles. Which is why the documentation guide is not just a single screenshot but is what it is.
-
I can think of one additional section in the Reasoning About Memory Usage guide that would take this idea further. That said, it likely will be stating something very obvious in some cases ("a large footprint here means you have a lot of connections"), and something overly generic in others (a large binary heap can be the result of N different factors). The docs currently mention two specific scenarios where we are confident that the effects are correlated with a specific feature:
There is no rule-of-thumb explanation like that for the binary heap. In fact, sometimes it is a matter of runtime behavior that depends on less-than-obvious OS settings. So I am not sure what specifically can be improved. Like I said, there is no such thing as "a standard workload", and therefore no obvious definition of an "anomaly" in a node's memory footprint.
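For readers who want to see where their own node's memory goes before deciding whether anything is anomalous, a minimal sketch using the standard CLI tooling, assuming a locally running node and a reasonably recent RabbitMQ version; the categories and totals will differ from workload to workload:

```shell
# Print the per-category memory breakdown of the local node in megabytes
# (categories include binary, connection processes, quorum queue tables,
# allocated-but-unused memory, and so on)
rabbitmq-diagnostics memory_breakdown --unit "MB"
```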
-
The findings from this and a couple of similar discussions with Kubernetes users have been briefly documented.
This discussion now attracts all kinds of "here is my memory footprint profile" comments that are not related to streams, to the kernel page cache, or even to modern versions [....]. New discussions should be started instead => closing.
Originally posted by @michaelklishin in #7362 (comment)
My problem with understanding this documentation and comparing it to the behaviour I saw on my machine is that the only memory footprint shown there seems to be from a machine with 256 MB of memory (if the memory watermark is 0.5).
This is somewhat contradictory to the default WAL size setting of 512 MB per queue and the statement that 3 to 4 times the WAL size is used. Even after reducing the WAL size to 64 MB, as suggested in some posts, a real-world application with 10 queues would need 5120 MB of memory.
1.) So I am missing information on how far the WAL size could/should be lowered to reduce the memory footprint. Lowering it to 16 MB would lead to 1280 MB of memory needed, which is still over 4 times more than the memory footprint of the example machine. I didn't notice any problems lowering it to 16 MB, but should we go even lower? (A config sketch for this setting follows point 3 below.)
2.) On a machine with 2 GB to 8 GB of memory (8 to 32 times the memory of the example), the memory footprint looks quite different from the example and is difficult to compare.
If in the example the "binary" section is eating up 23 MB of memory, I don't care about it on a 2 GB machine. But if it is eating up 650 MB - which seems not to be uncommon under load - then that is the thing that catches the eye first.
The same goes for the "quorum queue tables".
3.) Setting the high memory watermark
I understand that on a small machine with 256 MB of memory and perhaps only one queue, this should be set down to 0.5 to leave room for garbage collection. But what about a real-world machine with 2 GB of memory and multiple queues? I didn't have a problem using 0.75, but that might just be because of my limited testing.
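To make points 1 and 3 concrete, here is a minimal rabbitmq.conf sketch with the two settings discussed above. The values are the ones from my own testing, not recommendations; `raft.wal_max_size_bytes` and `vm_memory_high_watermark.relative` are the standard RabbitMQ configuration keys for the WAL size limit and the memory watermark:

```ini
# Cap the quorum queue WAL at 16 MB instead of the 512 MB default
# (the value I experimented with in point 1; whether lower is safe is the open question)
raft.wal_max_size_bytes = 16777216

# Raise the high memory watermark from the guide's 0.5 to 0.75
# (caused no problems in my limited testing on a 2 GB machine, point 3)
vm_memory_high_watermark.relative = 0.75
```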
To improve the documentation, I would suggest adding at least a screenshot of the memory footprint of a real-world case with 2 or 4 GB of memory and multiple queues under load, and explaining which of the bigger portions of that memory can be reduced (garbage collected) in which cases.
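Until such an example exists in the docs, a sketch of how anyone could capture a comparable breakdown for their own node via the management plugin's HTTP API, assuming the plugin is enabled; the `guest:guest` credentials, host, and node name `rabbit@myhost` are placeholders to substitute:

```shell
# Fetch the per-category memory breakdown for one node as JSON;
# replace credentials, host, and node name with your own
curl -s -u guest:guest \
  http://localhost:15672/api/nodes/rabbit@myhost/memory
```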