
Conversation

@carlocaione
Contributor

@carlocaione carlocaione commented Sep 23, 2023

This is a natural evolution of the work started with #61009

The idea is that it is possible to leverage the memory attribute property zephyr,memory-attr to define and create a set of memory heaps from which the user can allocate memory with certain attributes / capabilities.

When the CONFIG_MEM_ATTR_HEAP option is set, every region marked with one of the memory attributes listed in include/zephyr/dt-bindings/memory-attr/memory-attr-sw.h is added to a pool of memory heaps used for dynamic allocation of memory buffers with certain attributes.

Here is a non-exhaustive list of possible attributes:

DT_MEM_SW_ALLOC_CACHE
DT_MEM_SW_ALLOC_NON_CACHE
DT_MEM_SW_ALLOC_DMA

For example, we can define several memory regions with different attributes and use the appropriate attribute to indicate that memory can be dynamically allocated from those regions:

   mem_cacheable: memory@10000000 {
       compatible = "mmio-sram";
       reg = <0x10000000 0x1000>;
       zephyr,memory-attr = <( DT_MEM_CACHEABLE | DT_MEM_SW_ALLOC_CACHE )>;
   };

   mem_non_cacheable: memory@20000000 {
       compatible = "mmio-sram";
       reg = <0x20000000 0x1000>;
       zephyr,memory-attr = <( DT_MEM_NON_CACHEABLE | DT_MEM_SW_ALLOC_NON_CACHE )>;
   };

   mem_cacheable_big: memory@30000000 {
       compatible = "mmio-sram";
       reg = <0x30000000 0x10000>;
       zephyr,memory-attr = <( DT_MEM_CACHEABLE | DT_MEM_OOO | DT_MEM_SW_ALLOC_CACHE )>;
   };

The user can then dynamically carve memory out of those regions using the provided functions. The library takes care of allocating memory from the correct heap depending on the provided attribute and size:

   // Init the pool
   mem_attr_heap_pool_init();

   // Allocate 0x100 bytes of cacheable memory
   block = mem_attr_heap_alloc(DT_MEM_SW_ALLOC_CACHE, 0x100);

   // Allocate 0x100 bytes of non-cacheable memory aligned to 32 bytes
   block = mem_attr_heap_aligned_alloc(DT_MEM_SW_ALLOC_NON_CACHE, 0x100, 32);
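
A minimal error-checked sketch of the same calls, assuming mem_attr_heap_pool_init() returns 0 on success and that a matching mem_attr_heap_free() is provided (both assumptions, not spelled out above):

   // Init the pool and bail out if no heap could be created
   int ret = mem_attr_heap_pool_init();
   if (ret != 0) {
       return ret;
   }

   // Allocations can fail when no matching region has enough space left
   void *block = mem_attr_heap_alloc(DT_MEM_SW_ALLOC_NON_CACHE, 0x100);
   if (block == NULL) {
       return -ENOMEM;
   }

   // ... use the buffer ...

   mem_attr_heap_free(block);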

When several regions are marked with the same attributes, the memory is allocated:

  1. From the regions whose zephyr,memory-attr property includes at least the requested attribute (other attributes can be present as well).

  2. Among the regions from point 1, from the smallest region that still has enough unallocated space for the requested size.

  3. If there is not enough space, from the next bigger region able to accommodate the requested size.

The following example shows points 2 and 3:

   // This memory is allocated from `mem_cacheable`
   block = mem_attr_heap_alloc(DT_MEM_SW_ALLOC_CACHE, 0x100);

   // This memory is allocated from `mem_cacheable_big`
   block = mem_attr_heap_alloc(DT_MEM_SW_ALLOC_CACHE, 0x5000);

Important

The framework assumes that the memory regions used to create the heaps are usable by the code and available at init time. The user must take care of initializing and setting up the memory regions before calling mem_attr_heap_pool_init().
That means that the region must be correctly configured in terms of MPU / MMU (if needed) and that an actual heap can be created out of it, for example by leveraging the zephyr,memory-region property to create a proper linker section to accommodate the heap.
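
One way to respect this ordering is sketched below, assuming the platform configures the MPU / MMU for those regions before the APPLICATION init level and that the API is exposed through a header under include/zephyr/mem_mgmt/ (both assumptions, for the sake of the example):

   #include <zephyr/init.h>
   #include <zephyr/mem_mgmt/mem_attr_heap.h>

   static int app_mem_attr_heap_init(void)
   {
       /* By this point the memory regions are expected to be configured and
        * usable, so the heap pool can be safely created out of them.
        */
       return mem_attr_heap_pool_init();
   }

   SYS_INIT(app_mem_attr_heap_init, APPLICATION, CONFIG_APPLICATION_INIT_PRIORITY);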

@carlocaione carlocaione changed the title from "wip" to "Introducing a memory attributes based heap allocator" Sep 23, 2023
@carlocaione carlocaione marked this pull request as ready for review September 25, 2023 07:42
@zephyrbot zephyrbot added the area: Kernel and area: Base OS labels Sep 25, 2023
@dcpleung
Member

Wonder if this can be made to work with sys_multi_heap_*?

@carlocaione
Contributor Author

carlocaione commented Sep 25, 2023

Wonder if this can be made to work with sys_multi_heap_*?

@dcpleung This is using sys_multi_heap_* already as a backend.
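
For context, a rough sketch of the general pattern (not the actual code in this PR): sys_multi_heap_* lets each registered heap carry a user_data pointer, which can hold the region's software attributes, and a user-supplied choice function can then pick a qualifying heap at allocation time. The helper names and the two-heap setup below are invented for illustration, and the heaps are assumed to be already initialized with sys_heap_init() over the desired regions:

   #include <stddef.h>
   #include <stdint.h>
   #include <zephyr/sys/multi_heap.h>
   #include <zephyr/sys/sys_heap.h>
   #include <zephyr/dt-bindings/memory-attr/memory-attr-sw.h>

   static struct sys_multi_heap attr_mheap;

   static void *attr_choice(struct sys_multi_heap *mheap, void *cfg,
                            size_t align, size_t size)
   {
       uintptr_t want = (uintptr_t)cfg;

       for (unsigned int i = 0; i < mheap->nheaps; i++) {
           uintptr_t have = (uintptr_t)mheap->heaps[i].user_data;

           /* A heap qualifies if it has at least the requested attributes */
           if ((have & want) != want) {
               continue;
           }

           void *blk = (align == 0) ?
               sys_heap_alloc(mheap->heaps[i].heap, size) :
               sys_heap_aligned_alloc(mheap->heaps[i].heap, align, size);

           if (blk != NULL) {
               return blk;
           }
       }

       return NULL;
   }

   void attr_heap_setup(struct sys_heap *cache_heap, struct sys_heap *nocache_heap)
   {
       sys_multi_heap_init(&attr_mheap, attr_choice);
       sys_multi_heap_add_heap(&attr_mheap, cache_heap,
                               (void *)(uintptr_t)DT_MEM_SW_ALLOC_CACHE);
       sys_multi_heap_add_heap(&attr_mheap, nocache_heap,
                               (void *)(uintptr_t)DT_MEM_SW_ALLOC_NON_CACHE);
   }

   void *attr_heap_alloc(uintptr_t attr, size_t bytes)
   {
       /* The attribute travels to the choice function through the cfg pointer */
       return sys_multi_heap_alloc(&attr_mheap, (void *)attr, bytes);
   }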

@talih0
Contributor

talih0 commented Oct 21, 2023

To clarify my concerns, say we define a memory region like this with the new attribute:

sram1: sram@64000000 {
	status = "okay";
	compatible = "zephyr,memory-region", "mmio-sram";
	device_type = "memory";
	reg = <0x64000000 DT_SIZE_M(2)>;
	zephyr,memory-region = "SRAM1";
	zephyr,memory-attr = <( DT_MEM_SW_ALLOC_CACHE )>;
};

It means that the whole region will be used by the heap, but the linker script will still just treat it as an empty memory region:

Memory region         Used Size  Region Size  %age Used
           FLASH:      238996 B       512 KB     45.58%
             RAM:       40672 B       128 KB     31.03%
           SRAM1:          0 GB         2 MB      0.00%
        IDT_LIST:          0 GB         2 KB      0.00%

So the user could move data into SRAM1, which will give incorrect results, but the linker will still be happy:

static uint8_t slow_ram[1024] Z_GENERIC_SECTION("SRAM1");

These types of errors could not be made with the previous implementation of the shared multi-heap.

It would be nice if the SRAM could be partitioned into a sub-region for the heap.

andyross
andyross previously approved these changes Nov 2, 2023
@andyross andyross left a comment

Carles pulled me over. I absolutely love the API.

One nitpick on the old API removal just so we can say that someone mentioned "deprecation" during review.

Not really expert enough to talk much about the DTS layer work, except to parrot @evgeniy-paltsev's general concern that the memory region specification at the platform layer is high complexity and a little confusing. But that's an area where we can tolerate more complexity, so not sure it's disqualifying? And it's sort of a hard problem anyway: a given allocation is always going to end up matching potentially multiple regions and there isn't always going to be a Correct Choice regardless. Probably best to merge whatever seems straightforward and then tweak as platforms run into difficulty.

Contributor

Pedantry: are we absolutely sure this doesn't have any out-of-tree users? I'll let the experts chime in on whether this needs formal deprecation or not, just calling it out. I personally don't mind the deletion.

Contributor Author

Apparently we have at least one user (@talih0) so I'll leave that in place so that we don't need to deprecate anything.

teburd
teburd previously approved these changes Nov 2, 2023
@teburd
Contributor

teburd commented Nov 2, 2023

I like it!

cfriedt
cfriedt previously approved these changes Nov 7, 2023
Comment on lines +104 to +106
Contributor

It's worth mentioning here whether it is possible to specify multiple DT_MEM_SW_ALLOC* attributes, like DT_MEM_SW_ALLOC_DMA | DT_MEM_SW_ALLOC_NON_CACHE.

As I understand it, that wouldn't work (please correct me if I'm wrong), so we should mention that in the docs, and it would be useful to add a compile-time assertion against such a combination.

Alternatively, we can just define the DT_MEM_SW_ALLOC* attributes not as individual bits but as an enum:

-#define  ATTR_SW_ALLOC_CACHE           BIT(0)
-#define  ATTR_SW_ALLOC_NON_CACHE       BIT(1)
-#define  ATTR_SW_ALLOC_DMA             BIT(2)
+#define  ATTR_SW_ALLOC_CACHE           1
+#define  ATTR_SW_ALLOC_NON_CACHE       2
+#define  ATTR_SW_ALLOC_DMA             3

Contributor Author

As I understand it, that wouldn't work (please correct me if I'm wrong)

It would work. From the allocator perspective, DT_MEM_SW_ALLOC_CACHE is different from DT_MEM_SW_ALLOC_CACHE | DT_MEM_SW_ALLOC_DMA, so you can allocate from one region or from the other depending on the parameter passed to mem_attr_heap_alloc().
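
For example, given one region tagged with <( DT_MEM_SW_ALLOC_CACHE )> and another tagged with <( DT_MEM_SW_ALLOC_CACHE | DT_MEM_SW_ALLOC_DMA )> (hypothetical regions, just to illustrate the point):

   // Served from a region tagged with DT_MEM_SW_ALLOC_CACHE
   void *buf = mem_attr_heap_alloc(DT_MEM_SW_ALLOC_CACHE, 0x100);

   // Served from the region tagged with both attributes
   void *dma_buf = mem_attr_heap_alloc(DT_MEM_SW_ALLOC_CACHE |
                                       DT_MEM_SW_ALLOC_DMA, 0x100);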

I'll add a new test for this case and fix the documentation, which indeed is a bit unclear on this point.

Using this new library it is now possible to leverage the memory
attribute property 'zephyr,memory-attr' to define and create a set of
memory heaps from which the user can allocate memory with certain
attributes / capabilities.

When the CONFIG_MEM_ATTR_HEAP option is set, every region marked with
one of the memory attributes listed in
include/zephyr/dt-bindings/memory-attr/memory-attr-sw.h is added to a
pool of memory heaps used for dynamic allocation of memory buffers with
certain attributes.

Signed-off-by: Carlo Caione <[email protected]>

Add new documentation for the new attribute-based memory allocator.

Signed-off-by: Carlo Caione <[email protected]>

Add a new test for the attribute-based memory allocator.

Signed-off-by: Carlo Caione <[email protected]>
@carlocaione carlocaione dismissed stale reviews from cfriedt, teburd, and andyross via 9f3f9d3 November 13, 2023 12:38
@zephyrbot zephyrbot added the area: Devicetree Binding label Nov 13, 2023
@carlocaione
Contributor Author

Pushed new revision.

  • @andyross @talih0 I left the old shared_multi_heap framework in place so we don't need to deprecate anything.
  • @evgeniy-paltsev I clarified the multi-attribute case in the documentation and added a new test case in the test app for that.
  • @cfriedt @teburd @povergoing and others: new respin to fix conflicts and small changes, nothing major to report.

@andyross andyross left a comment

Refresh +1

compatible = "mmio-sram";
reg = <0x20000000 0x1000>;
zephyr,memory-attr = <( DT_MEM_NON_CACHEABLE | ATTR_SW_ALLOC_NON_CACHE )>;
};
Contributor

ATTR_SW_ALLOC_NON_CACHE should be DT_MEM_SW_ALLOC_NON_CACHE?

Contributor Author

huuu, yeah, I'll fix that in a later commit. It would suck to lose all the ACKs to fix spelling errors.

// Allocate 0x100 bytes of cacheable memory from `mem_cacheable`
block = mem_attr_heap_alloc(DT_MEM_SW_ALLOC_CACHE, 0x100);
// Allocate 0x200 bytes of non-cacheable memory aligned to 32 bytes
Contributor

Allocate 0x100?

Contributor Author

ditto

* @brief Init the memory pool
*
* This must be the first function to be called to initialize the memory pools
* from all the memory regions with the a software attribute.
Contributor

nit: "the a software attribute"

Contributor Author

ditto

@carlocaione
Contributor Author

@evgeniy-paltsev can you take a look again?

@carlescufi carlescufi merged commit 9025789 into zephyrproject-rtos:main Nov 27, 2023
@carlocaione carlocaione deleted the mem_attr_heap branch November 29, 2023 19:09
