When using Iceoryx2's dynamic data mode, how can overall shared memory utilization be improved? In scenarios where subscriber-max-buffer-size is large and the payload size varies greatly, it's easy to encounter memory fragmentation or inefficient allocation. Are there any recommended strategies or best practices for optimization? I would like to know:
Looking forward to everyone's suggestions and shared experiences!
iceoryx2 pre-allocates memory even in the fully dynamic case. It starts with the hinted size and, if that does not suffice, pre-allocates a larger segment. Every segment is handled by a pool allocator with a uniform bucket size, so memory fragmentation and inefficient allocation (performance-wise) should never be a problem.
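For reference, here is a minimal publisher-side sketch of the fully dynamic case, loosely based on iceoryx2's dynamic-data publish-subscribe example. The builder methods (`initial_max_slice_len`, `allocation_strategy`) and the `AllocationStrategy` variants are assumptions that may differ between iceoryx2 releases, so please check them against the examples shipped with the version you use:

```rust
use iceoryx2::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let node = NodeBuilder::new().create::<ipc::Service>()?;

    // A service with a dynamically sized byte-slice payload.
    let service = node
        .service_builder(&"MyApp/DynamicData".try_into()?)
        .publish_subscribe::<[u8]>()
        .open_or_create()?;

    let publisher = service
        .publisher_builder()
        // hint for the initial segment: sized for the typical payload
        .initial_max_slice_len(128)
        // if a loan exceeds the current segment, a larger one is
        // pre-allocated (grown in powers of two)
        .allocation_strategy(AllocationStrategy::PowerOfTwo)
        .create()?;

    // A payload larger than the hint: triggers growth instead of an error.
    let payload_len = 4096;
    let sample = publisher.loan_slice_uninit(payload_len)?;
    let sample = sample.write_from_fn(|i| (i % 255) as u8);
    sample.send()?;

    Ok(())
}
```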
What do you mean here? That you send a sample with a size of 1 GB and then a sample of 1 byte, so you have huge size differences? If so, I would recommend splitting it up into two services if possible. iceoryx2 always pre-allocates the worst-case size, so if you rarely have 1 GB samples and mostly 1-byte samples, you might waste memory; on the other hand, with this strategy iceoryx2 can guarantee that it never runs out of memory. Since we are also aiming for mission-critical systems, this is an important use case. A buddy allocator is on our roadmap as well; it would be much more memory efficient, but it would also be significantly slower, introduce fragmentation issues, and your system could end up in a state where no further communication via this service is possible because no memory is left.
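To make the two-service suggestion concrete, here is a hedged sketch; the service names, payload types, and sizes are made up for illustration, and `subscriber_max_buffer_size`, `initial_max_slice_len`, and the other builder calls should be verified against the iceoryx2 version you use:

```rust
use iceoryx2::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let node = NodeBuilder::new().create::<ipc::Service>()?;

    // Frequent, tiny messages get their own service with a small fixed
    // payload type; a deep per-subscriber buffer is cheap here because the
    // pre-allocated worst case is buffer_size * size_of::<u64>() per subscriber.
    let small_service = node
        .service_builder(&"MyApp/SmallSamples".try_into()?)
        .publish_subscribe::<u64>()
        .subscriber_max_buffer_size(64)
        .open_or_create()?;
    let small_publisher = small_service.publisher_builder().create()?;

    // The rare, huge blobs get a separate service; only this one pays the
    // large worst-case pre-allocation, and it keeps a shallow buffer.
    const BLOB_WORST_CASE: usize = 1024 * 1024 * 1024; // the rare 1 GB case
    let large_service = node
        .service_builder(&"MyApp/LargeBlobs".try_into()?)
        .publish_subscribe::<[u8]>()
        .subscriber_max_buffer_size(2)
        .open_or_create()?;
    let large_publisher = large_service
        .publisher_builder()
        .initial_max_slice_len(BLOB_WORST_CASE)
        .create()?;

    // Frequent small sample: comes from the cheaply sized small segment.
    small_publisher.loan_uninit()?.write_payload(42).send()?;

    // Rare large sample: loaned from the separately sized large segment.
    let blob = large_publisher.loan_slice_uninit(BLOB_WORST_CASE)?;
    let blob = blob.write_from_fn(|_| 0u8);
    blob.send()?;

    Ok(())
}
```

This way the worst-case reservation of the large-blob service is decoupled from the buffer depth of the small one, which matters most when subscriber-max-buffer-size is large, since the buffer depth multiplies the per-sample worst case.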
No, but you could serialize and compress your payload.
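As an illustration of the serialize-and-compress idea, a sketch that compresses the payload on the sending side and loans a slice of exactly the compressed length. The `lz4_flex` crate is used here purely as an example compressor (it is not part of iceoryx2), and the builder calls carry the same version caveat as in the sketches above:

```rust
use iceoryx2::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let node = NodeBuilder::new().create::<ipc::Service>()?;
    let service = node
        .service_builder(&"MyApp/CompressedBlobs".try_into()?)
        .publish_subscribe::<[u8]>()
        .open_or_create()?;
    let publisher = service
        .publisher_builder()
        .initial_max_slice_len(4096)
        .allocation_strategy(AllocationStrategy::PowerOfTwo)
        .create()?;

    // Any serialization scheme works; here the payload is already raw bytes.
    let raw: Vec<u8> = vec![0u8; 1_000_000];

    // lz4_flex prepends the uncompressed size, so the receiver can call
    // lz4_flex::decompress_size_prepended(sample.payload()) to restore it.
    let compressed = lz4_flex::compress_prepend_size(&raw);

    // Loan only as much shared memory as the compressed payload needs.
    let sample = publisher.loan_slice_uninit(compressed.len())?;
    let sample = sample.write_from_fn(|i| compressed[i]);
    sample.send()?;

    Ok(())
}
```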
It would be helpful if you could share your use case/problem or the project you are working on; then we can provide you with some practical, hands-on tips or best practices.