Best Practice on Dynamic Allocation #1033
Does dynamic memory allocation have any drawbacks compared to a fixed-size data type? Is it recommended to use dynamic allocation as the default mode for one's app? For instance, let's say an app generates flatbuffers whose total length fluctuates between 500-1500 KB. Perhaps not on every iteration, but often. Also, for some reason, the message name needs to be the same across multiple different messages, i.e.:
In this case, if I use this:

```cpp
const auto required_memory_size = CHANGING_FB_LENGTH;
auto sample = publisher.loan_slice_uninit(required_memory_size).expect("acquire sample");
auto* dst = sample.payload_mut().data();
// auto initialized_sample =
//     sample.write_from_fn([&](auto byte_idx) { return (byte_idx + counter) % 255; }); // NOLINT
memcpy(dst, long_data.data(), required_memory_size);
```

would it be any slower per iteration compared to this?

```cpp
auto sample = publisher.loan_uninit().expect("acquire sample");
sample.write_payload(some_data_struct_fixed_data_at_1.5KB_wasting_memory_for_1000_different_messages);
```
Replies: 1 comment
Dynamic memory allocation means that one can start with low memory and the payload data segment is dynamically increased, depending on the required memory size. The payload data segment does not shrink, though. So, once the first example has reached the memory size of the second, there is no difference. While allocating more memory, there will be a latency spike in the first example. In general, the first example is ideal when you do not know your final memory size, e.g. during development, and the second example is ideal for production.