shared_multi_heap: Rework framework #44371
Conversation
This needs to be rebased.
Is it necessary to implement platform code? Why not add generic code to the shared_multi_heap component that would register memory regions based on the devicetree?
    
          
Because different platforms can have different needs. The interface to register the memory regions is now generic; the information can be retrieved from the DT, but this is no longer mandatory. For example, a platform might want to have one single big region in the DT and allocate several heaps in this region according to some peculiar scheme (see the sketch below). I also feel like cramming the DT with even more software configuration parameters could be a bit too much.
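A hedged sketch of that scheme, using the registration names from this PR (the base address, size, and the 50/50 split are made up for illustration, and include paths may differ between Zephyr versions):

```c
#include <stddef.h>
#include <multi_heap/shared_multi_heap.h>

/* Hypothetical single big region, e.g. described by one DT node */
#define BIG_BASE 0x60000000UL
#define BIG_SIZE 0x20000UL /* 128 KiB */

/* Split the big region into two heaps with different capabilities,
 * according to a platform-specific scheme.
 */
static struct shared_multi_heap_region heaps[] = {
	{
		.attr = SMH_REG_ATTR_CACHEABLE,
		.addr = BIG_BASE,
		.size = BIG_SIZE / 2,
	},
	{
		.attr = SMH_REG_ATTR_NON_CACHEABLE,
		.addr = BIG_BASE + BIG_SIZE / 2,
		.size = BIG_SIZE / 2,
	},
};

void platform_register_heaps(void)
{
	/* Init the pool once, then register each carved-out heap */
	shared_multi_heap_pool_init();

	for (size_t i = 0; i < sizeof(heaps) / sizeof(heaps[0]); i++) {
		shared_multi_heap_add(&heaps[i], NULL);
	}
}
```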
    
Entirely rework the shared_multi_heap framework. Refer to the documentation for more information.

Signed-off-by: Carlo Caione <[email protected]>
The review comments below refer to this snippet from the updated documentation:

```rst
.. code-block:: c

   // Init the shared multi-heap pool
```
I think we need to avoid C99 comments.
Yeah, I mean, this is a documentation file, so maybe it's not that important. Anyway, I'll wait for more comments before fixing this.
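For reference, the pre-C99 form of the comment under discussion would simply be:

```c
/* Init the shared multi-heap pool */
```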
No objections from me to removing the shared-multi-heap binding and leaving the decisions entirely in code, helped out as needed by zephyr,memory-region nodes.
However, this seems like yet another API that doesn't show up in the API overview page: https://docs.zephyrproject.org/latest/develop/api/overview.html
This means we cannot track its stability status, which makes me unhappy whenever I see breaking changes :(
This seems very reasonable to me, and in fact the included examples map rather better to the original internal use case for which multi_heap was added, so maybe we should consider merging the two once the opportunity presents itself.
No opinion on the API stability angle, except to say that I think iterative development like this is an objectively good thing and freezing APIs too soon has the effect of discouraging exactly this kind of rework.
          
I can add this as experimental/unstable, but in a different PR because I do not want to lose Andy's precious ACK :)
This PR is about entirely reworking the `shared_multi_heap` framework and making it useful again (tm).

**What the heck is the `shared_multi_heap` framework?**

Yeah I know, it didn't have much success. The `shared_multi_heap` framework is an extremely niche framework introduced by yours truly in #39071. It is basically used when the application or a driver needs some dynamic memory with certain characteristics. Think for example of drivers that need a certain amount of non-cacheable memory at run time: this framework can be used to allocate that memory from heap pools of non-cacheable memory.

**Why are you reworking it?**
Well, there are several problems with the current state of the framework, chief among them its reliance on `reserved-memory` DT nodes.

**Why are you reworking it now?**
Because with #43119 in place we can now obtain all the information we need from the DT about the memory regions (and their attributes) without having to rely on any `reserved-memory` magic (which I plan to remove soon because it heavily overlaps with the `zephyr,memory-region` mechanism). Please note though that the reworked version is now decoupled and no longer strictly relies on DT.

**Ok, I'm sold. How does this work?**
For a deeper analysis please refer to the official documentation updated in this PR; here is a quick overview.
First, some platform code (possibly at SoC or board level) registers with the framework a set of memory regions to be used as heaps, using an opaque attribute parameter that carries some meaning about the capability of the region (i.e. cacheable, non-cacheable, external, etc.), as in the sketch below.
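A minimal sketch of this registration step, assuming the `shared_multi_heap_pool_init()` / `shared_multi_heap_add()` names from this PR; the `res_nc` node label is hypothetical and the include paths may differ between Zephyr versions:

```c
#include <devicetree.h>
#include <multi_heap/shared_multi_heap.h>

/* Hypothetical zephyr,memory-region node describing a non-cacheable region */
#define RES_NC_NODE DT_NODELABEL(res_nc)

static struct shared_multi_heap_region nc_region = {
	.attr = SMH_REG_ATTR_NON_CACHEABLE,
	.addr = DT_REG_ADDR(RES_NC_NODE),	/* region base taken from the DT */
	.size = DT_REG_SIZE(RES_NC_NODE),	/* region size taken from the DT */
};

void platform_smh_init(void)
{
	/* Init the shared multi-heap pool, then register the region */
	shared_multi_heap_pool_init();
	shared_multi_heap_add(&nc_region, NULL);
}
```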
When a driver or application needs some dynamic memory with a certain capability, it can request the memory specifying its requirements using the opaque parameter. The framework takes care of selecting the correct heap (and thus memory region) to carve memory from, based on the opaque parameter and the runtime state of the heaps (available memory, heap state, etc.).
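And a minimal consumer-side sketch, again assuming the allocator names from this PR:

```c
#include <multi_heap/shared_multi_heap.h>

void driver_get_dma_buffer(void)
{
	/* Ask for 4 KiB of non-cacheable memory; the framework picks a
	 * suitable heap among the registered regions.
	 */
	void *buf = shared_multi_heap_alloc(SMH_REG_ATTR_NON_CACHEABLE, 4096);

	if (buf == NULL) {
		/* No registered heap could satisfy the request */
		return;
	}

	/* ... use the buffer, e.g. for DMA descriptors ... */

	shared_multi_heap_free(buf);
}
```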
**I'm dull. Can you show me a real example?**
Sure, in this PR a test is provided showing how the framework is used on MMU-enabled platforms (`qemu_cortex_a53`) and MPU-enabled platforms (`mps2_an521`). In both cases we use two overlay files to add a set of memory regions to the DT with different capabilities, leveraging the `zephyr,memory-region` compatible and the `zephyr,memory-region-mpu` attribute, so that the opaque parameters (`SMH_REG_ATTR_CACHEABLE` and `SMH_REG_ATTR_NON_CACHEABLE`) are derived from the devicetree. A test application then tries to allocate memory from the different sets of heaps, playing with attributes and sizes.
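A hedged sketch of what such an overlay node can look like (the node label, unit address, and size are made up; the compatible and properties are the ones named above):

```dts
/ {
	res_nc: memory@42000000 {
		compatible = "zephyr,memory-region";
		reg = <0x42000000 0x1000>;
		zephyr,memory-region = "RES_NC";
		/* On MPU platforms, request a non-cacheable mapping */
		zephyr,memory-region-mpu = "RAM_NOCACHE";
	};
};
```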