
Conversation

@carlocaione (Contributor) commented Mar 30, 2022:

This PR is about entirely reworking the shared_multi_heap framework and making it useful again (tm).

What the heck is the shared_multi_heap framework?

Yeah I know, it didn't have much success. The shared_multi_heap framework is an extremely niche framework introduced by yours truly in #39071. It is basically used when the application or a driver needs some dynamic memory with certain characteristics. Think, for example, of drivers that need a certain amount of non-cacheable memory at run time: this framework can be used to allocate that memory from heap pools of non-cacheable memory.

Why are you reworking it?

Well, there are several problems with the current state of the framework:

  • It is a bit cumbersome to use given that it relies on reserved-memory DT nodes
  • It is using a standalone compatible in the DT that feels a bit weird to use for a framework / library
  • It doesn't really scale and it is not flexible enough

Why are you reworking it now?

Because with #43119 in place we can now obtain all the information we need from the DT about the memory regions (and their attributes) without having to rely on any reserved-memory magic (which I plan to remove soon because it heavily overlaps with the zephyr,memory-region mechanism). Please note, though, that the reworked version is now decoupled from the DT and no longer strictly relies on it.

Ok, I'm sold. How does this work?

For a deeper analysis please refer to the official documentation updated in this PR; here is a quick overview.

At first some platform code (possibly at SoC or board level) registers with the framework a set of memory regions to be used as heaps, using an opaque attribute parameter that carries some meaning about the capability of the region (e.g. cacheable, non-cacheable, external, etc.).

struct shared_multi_heap_region region_cacheable = {
    .addr = region_cacheable_address,
    .size = region_cacheable_size,
    .attr = SMH_REG_ATTR_CACHEABLE,
};
shared_multi_heap_add(&region_cacheable, NULL);

struct shared_multi_heap_region region_noncacheable = {
    .addr = region_noncacheable_address,
    .size = region_noncacheable_size,
    .attr = SMH_REG_ATTR_NON_CACHEABLE,
};
shared_multi_heap_add(&region_noncacheable, NULL);
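
Before any region is added, the heap pool itself has to be initialized. A minimal sketch, assuming the pool-init call is the shared_multi_heap_pool_init() referenced in the documentation touched by this PR, and with an assumed header path that may differ between Zephyr versions:

#include <multi_heap/shared_multi_heap.h>   /* header path assumed; newer trees use the zephyr/ prefix */

void smh_setup(void)
{
    /* Initialize the shared multi-heap pool before registering any region */
    shared_multi_heap_pool_init();

    /* ... then add the cacheable / non-cacheable regions as shown above ... */
}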

When a driver or application needs some dynamic memory with a certain capability, it can request it by specifying the requirement through the opaque parameter. The framework takes care of selecting the correct heap (and thus memory region) to carve the memory from, based on the opaque parameter and the runtime state of the heaps (available memory, heap state, etc.).

void *buf;

// Allocate 4K of cacheable memory
buf = shared_multi_heap_alloc(SMH_REG_ATTR_CACHEABLE, 0x1000);

// Allocate 8K of non-cacheable memory
buf = shared_multi_heap_alloc(SMH_REG_ATTR_NON_CACHEABLE, 0x2000);

// Allocate 4K of non-cacheable memory with 4-byte alignment
buf = shared_multi_heap_aligned_alloc(SMH_REG_ATTR_NON_CACHEABLE, 4, 0x1000);
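
Buffers obtained this way are handed back to the heap they were carved from. A minimal sketch, assuming the shared_multi_heap_free() counterpart exposed by this API:

// Release the buffer back to its originating heap
shared_multi_heap_free(buf);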

I'm dull. Can you show me a real example?

Sure, this PR provides a test showing how the framework is used on MMU-enabled platforms (qemu_cortex_a53) and MPU-enabled platforms (mps2_an521).

In both cases we use two overlay files to add a set of memory regions with different capabilities to the DT, leveraging the zephyr,memory-region compatible and the zephyr,memory-region-mpu property, so that the opaque parameters (SMH_REG_ATTR_CACHEABLE and SMH_REG_ATTR_NON_CACHEABLE) are derived from the devicetree. A test application then tries to allocate memory from the different heaps, playing with attributes and sizes.
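
For illustration, this is roughly how a region described by a zephyr,memory-region node could be fed into the framework; the node label res1, the include paths, and the attribute choice are assumptions for this sketch, not the exact code used by the test:

#include <devicetree.h>                      /* include paths as of this PR; newer trees use the zephyr/ prefix */
#include <multi_heap/shared_multi_heap.h>

#define RES1_NODE DT_NODELABEL(res1)         /* hypothetical zephyr,memory-region node from the overlay */

void smh_register_from_dt(void)
{
    /* Base address and size come straight from the node's reg property */
    static struct shared_multi_heap_region nc_region = {
        .addr = DT_REG_ADDR(RES1_NODE),
        .size = DT_REG_SIZE(RES1_NODE),
        .attr = SMH_REG_ATTR_NON_CACHEABLE,  /* chosen to match the region's zephyr,memory-region-mpu setting */
    };

    shared_multi_heap_add(&nc_region, NULL);
}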

@github-actions bot added the area: API, area: Devicetree, area: Devicetree Binding, area: Documentation, area: Kernel, and area: Tests labels Mar 30, 2022
@carlocaione added and then removed the DNM (Do Not Merge) label Mar 30, 2022
@carlocaione marked this pull request as ready for review April 5, 2022 10:16
@carlescufi requested a review from hubertmis April 7, 2022 11:34
@carlescufi (Member) commented:

@dcpleung and @andyross reviews would be very appreciated, thanks!

@dcpleung (Member) commented Apr 7, 2022:

This needs to be rebased.

@hubertmis (Member) commented, quoting the description:

At first some platform code (possibly at SoC or board level) registers with the framework a set of memory regions to be used as heaps, using an opaque attribute parameter that carries some meaning about the capability of the region (e.g. cacheable, non-cacheable, external, etc.)

Is it necessary to implement platform code? Why not add generic code to shared_multi_heap component which would register memory regions based on device tree?

@carlocaione (Contributor, Author) replied:

Is it necessary to implement platform code? Why not add generic code to shared_multi_heap component which would register memory regions based on device tree?

Because different platforms can have different needs. The interface to register the memory regions is now generic: the information can be retrieved from the DT, but that is no longer mandatory.

For example, a platform might want to have one single big region in the DT and carve several heaps out of it according to some platform-specific scheme (see the sketch below), etc.

I also feel that cramming the DT with even more software configuration parameters could be a bit too much.
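
A minimal sketch of that idea; the function and variable names, the region arriving as a single block, and the 50/50 split are assumptions rather than code from this PR:

#include <stddef.h>
#include <stdint.h>
#include <multi_heap/shared_multi_heap.h>   /* header path assumed; newer trees use the zephyr/ prefix */

/* Hypothetical: carve two non-cacheable heaps out of one big region.
 * The framework can then pick whichever heap has room at alloc time.
 */
static struct shared_multi_heap_region first_half = {
    .attr = SMH_REG_ATTR_NON_CACHEABLE,
};
static struct shared_multi_heap_region second_half = {
    .attr = SMH_REG_ATTR_NON_CACHEABLE,
};

void smh_split_big_region(uintptr_t base, size_t size)
{
    first_half.addr = base;
    first_half.size = size / 2;

    second_half.addr = base + size / 2;
    second_half.size = size - size / 2;

    shared_multi_heap_add(&first_half, NULL);
    shared_multi_heap_add(&second_half, NULL);
}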

Entirely rework the shared_multi_heap framework. Refer to the
documentation for more information.

Signed-off-by: Carlo Caione <[email protected]>

A reviewer (Member) commented on this documentation snippet:

.. code-block:: c

   // Init the shared multi-heap pool

I think we need to avoid C99 comments.
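
For reference, the same snippet with a C89-style comment instead (a minimal rewording, not the final wording of the docs):

.. code-block:: c

   /* Init the shared multi-heap pool */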

@carlocaione (Contributor, Author) replied:

Yeah, I mean, this is a documentation file, so maybe it's not that important. Anyway, I'll wait for more comments before fixing this.

@mbolivar-nordic (Contributor) left a comment:

No objections from me to removing the shared-multi-heap binding and leaving the decisions entirely in code, helped out as needed by zephyr,memory-region nodes.

However, this seems like yet another API that doesn't show up in the API overview page: https://docs.zephyrproject.org/latest/develop/api/overview.html

This means we cannot track its stability status, which makes me unhappy whenever I see breaking changes :(

@andyross (Contributor) left a comment:

This seems very reasonable to me, and in fact the included examples map rather better to the original internal use case for which multi_heap was added, so maybe we should consider merging the two once the opportunity presents itself.

No opinion on the API stability angle, except to say that I think iterative development like this is an objectively good thing and freezing APIs too soon has the effect of discouraging exactly this kind of rework.

@carlocaione (Contributor, Author) replied:

This means we cannot track its stability status, which makes me unhappy whenever I see breaking changes :(

I can add this as experimental / unstable, but in a different PR, because I do not want to lose Andy's precious ACK :)

@carlescufi merged commit 1dcea25 into zephyrproject-rtos:main Apr 21, 2022
@carlocaione deleted the rework_multi_heap_upstream branch April 21, 2022 11:20