dt/mpu: Generate MPU regions from SRAM DT nodes #41223
Conversation
A few concerns with this:
- partitioning sram nodes in this way loses the actual hardware SRAM node, and thus creates possible issues in the future with knowing how to power manage SRAM banks
- use of strings for `zephyr,memory-region` and `zephyr,memory-region-mpu` feels awkward, and there is a concern about how/where string values are validated and how these are specific to a given "MPU" implementation (Arm v7, v8, NXP, non-Arm MPUs, etc.)
Isn't this raised a bit too late? Also, this PR is literally introducing nothing new; it is just closing a gap between two pieces that are already in place (the `zephyr,memory-region` property and the `mpu_config` struct).
We can discuss naming, but the code is flexible enough to be adapted to every MPU implementation.
Some initial thoughts. I'll try to understand current status in Zephyr to do a 2nd review.
dts/bindings/base/mem-region.yaml
I understand the reasons behind this, but it's a bit ugly. Have you thought about other options? Maybe extra DT tooling? Using an extra compatible (not sure if feasible)?
Using an extra compatible is technically feasible because `DT_FOREACH_STATUS_OKAY` works even for compatibles that do not have matching bindings.
For example, there is no binding for compatible `abcd` in this example:

```
$ cat samples/hello_world/app.overlay
/ {
	foo {
		compatible = "gpio-keys", "abcd";
	};

	bar {
		compatible = "pwm-leds", "abcd";
	};
};
```
```
$ cat samples/hello_world/src/main.c
/*
 * Copyright (c) 2012-2014 Wind River Systems, Inc.
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <zephyr.h>
#include <sys/printk.h>

#define PATH_COMMA(node_id) DT_NODE_PATH(node_id),

void main(void)
{
	const char *paths[] = {
		DT_FOREACH_STATUS_OKAY(abcd, PATH_COMMA)
	};

	for (int i = 0; i < ARRAY_SIZE(paths); i++) {
		printk("%s\n", paths[i]);
	}
}
```
But it still works:
```
$ west build -b native_posix -t run samples/hello_world
[...]
*** Booting Zephyr OS build v2.7.99-2322-gb326aabf73ce ***
/foo
/bar
```
I'm not saying this is the right way to go, just saying no extra tooling is needed if you want to just use an extra compatible, regardless of whether you define a matching binding for it.
If you do write a matching binding, then the specific binding should come first, and the generic binding second, as usual (https://docs.zephyrproject.org/latest/guides/dts/bindings.html#other-ways-nodes-are-matched-to-bindings).
So, what's the preference here? If I have to rework this I'd like to have a precise way forward.
Any solution that avoids this:

```c
/*
 * List of compatibles whose YAML files are including mem-region.yaml to
 * support the 'zephyr,memory-region-mpu' property.
 */
#define _DT_COMPATIBLES mmio_sram, arm_dtcm, arm_itcm, nxp_imx_dtcm, nxp_imx_itcm, \
			st_stm32_ccm, st_stm32_backup_sram
```

would be welcome, if technically possible.
Alright, going for the additional compatible.
dts/bindings/base/mem-region.yaml
Note: this could likely be an int, with a header providing the valid attributes (each SoC could provide its own set if necessary).
If it has no impact on code size, I would prefer to use a string, at least in the devicetree. The reason is simple: better readability.
2 votes here for the strings.
Another comment: I wonder if drivers should be the ones responsible for configuring the MPU in some cases, e.g. `zephyr/soc/arm/st_stm32/stm32h7/mpu_regions.c` (lines 15 to 22 in 3194f75). It could be the Ethernet driver that configures SRAM3 appropriately (still getting the information from DT).
Also need to see what is going on with system devicetree and "firewall" devicetree specs for this type of configuration.
Codecov Report

```diff
@@            Coverage Diff             @@
##             main   #41223      +/-   ##
==========================================
+ Coverage   51.41%   51.45%   +0.04%
==========================================
  Files         609      616       +7
  Lines       71379    72356     +977
  Branches    16427    16651     +224
==========================================
+ Hits        36698    37234     +536
- Misses      28641    29003     +362
- Partials     6040     6119      +79
```

Continue to review full report at Codecov.
I do not agree with that. Since Zephyr mainly uses statically allocated memory, it is simpler for developers to partition it directly in the devicetree and provide the necessary MPU configurations. If each driver configured the MPU itself, the system could very quickly run out of available MPU regions; as far as I know, the number of MPU regions is limited.
@gmarull @Nukersson can you look at this again?
Looks pretty good now! Left a few comments.
Currently the only way for a BOARD/SOC to configure at compile time the
MPU regions is to add, in a soc-specific file, the 'mpu_config' struct
adding static entries for the new regions with the needed attributes
(cacheable, non-cacheable, etc...). This exported struct is then read by
the MPU driver at boot time and used to properly set up the MPU regions.
At the same time it is now possible to introduce new memory regions in
the DT using the newly introduced 'zephyr,memory-region' attribute.
What is missing is the link between these two solutions: that is how to
declare the memory regions in the DT and automatically configure these
regions in the MPU with the correct attributes.
This patch is trying to address exactly this problem.
It is now possible to declare the memory regions in the DT and define
the MPU attributes for the regions using the 'zephyr,memory-region-mpu'
property. When this new property is present together with the
'zephyr,memory-region' property and a new "mpu-region" compatible, the
'mpu_config' struct is automatically extended at compile-time to host
the DT defined regions with the correct MPU attributes.
So for example in the DT we can now have:
sram_cache: memory@20200000 {
compatible = "mpu-region", "mmio-sram";
reg = <0x20200000 0x100000>;
zephyr,memory-region = "SRAM_CACHE";
zephyr,memory-region-mpu = "REGION_RAM_ATTR";
};
and a new region will be created called "SRAM_CACHE" and a new MPU
region will be configured at boot time with the attribute
"REGION_RAM_ATTR".
This patch is also introducing a trivial test.
Signed-off-by: Carlo Caione <[email protected]>
No more objections, LGTM.
Naming nit; let's avoid having to do something like d1eee6a again.
```c
 * };
 * @endcode
 */
#define DT_REGION_MPU(mpu_fn) _CHECK_APPLY_FN(_DT_COMPATIBLE, _EXPAND_MPU_FN, mpu_fn)
```
The `DT_` namespace is reserved for the `devicetree.h` header and its sub-headers.
```yaml
zephyr,memory-region:
  required: true

zephyr,memory-region-mpu:
```
What is the reasoning behind adding this as a new compatible instead of adding the `zephyr,memory-region-mpu` property to `mem-region.yaml`? Is it solely to enable the usage of `DT_FOREACH_STATUS_OKAY_VARGS` to iterate over usages of `zephyr,memory-region-mpu`?
If so, I wonder if it would make more sense to add a DT macro for iterating over all nodes that share a given property (maybe limited to the `zephyr,` prefix).
This would also enable automatic generation of the linker regions as well:
`zephyr/include/arch/arm/aarch32/cortex_m/scripts/linker.ld`, lines 85 to 90 in 449c378:

```
LINKER_DT_REGION_FROM_NODE(DT_NODELABEL(ti_ccfg_partition), rwx)
/* Data & Instruction Tightly Coupled Memory */
LINKER_DT_REGION_FROM_NODE(DT_CHOSEN(zephyr_itcm), rw)
LINKER_DT_REGION_FROM_NODE(DT_CHOSEN(zephyr_dtcm), rw)
/* STM32 Core Coupled Memory */
LINKER_DT_REGION_FROM_NODE(DT_CHOSEN(zephyr_ccm), rw)
```
> What is the reasoning behind adding this as a new compatible instead of adding the `zephyr,memory-region-mpu` property to `mem-region.yaml`? Is it solely to enable the usage of `DT_FOREACH_STATUS_OKAY_VARGS` to iterate over usages of `zephyr,memory-region-mpu`?
Yes. In the first version of this PR it was done like that but that was changed when the usage of the additional compatible was proposed, see #41223 (comment)
> Yes. In the first version of this PR it was done like that but that was changed when the usage of the additional compatible was proposed, see #41223 (comment)
Right, that's where the compatible was proposed, but there was no discussion of its merits vs alternatives.
The iteration you really want is over anything that sets `zephyr,memory-region-mpu`, with the compatible being a bit of a hack to get that. The code I linked wants to do a similar thing eventually: iterate over all `zephyr,memory-region`.
Given that both systems want to do the same thing, their solution should probably be the same. A new compatible could be added to every SRAM node so that it does the same thing as this PR, or we can introduce a way to do `DT_FOREACH_NODE_WITH_PROP(zephyr_memory_...)` directly.
@mbolivar-nordic what's your opinion on this?
@JordanYates the problem I see with the `DT_FOREACH_NODE_WITH_PROP()` macro is that you have no control over the properties of the nodes you are iterating over. At least when using an additional compatible you can use a YAML file to define exactly which properties you expect (or need) to find and use.
I'll need some more time to look at this, but I'm firing off a quick response in case it's helpful.
The Zephyr DT tooling supports multiple compatibles in a single node, and unless I am mistaken, `DT_FOREACH_STATUS_OKAY` will work on a node that has multiple compatibles, even for compatibles that aren't the "main" compatible that matches a binding.
So for example if you have:

```
foo { compatible = "a", "x"; };
bar { compatible = "b", "x"; };
```

Then you should be able to do `DT_FOREACH_STATUS_OKAY(x, my_fn)` even if there isn't a binding for `x`.
So maybe you want to have a 'marker' compatible that you can put on the nodes you are interested in, even if they have their own individual bindings?
If that's tedious or nonsense, feel free to ignore it.
I opened #42008 to move to the multiple compatibles solution more gradually.
Moving this to draft until #42008 is merged / reviewed.
Thank you for asking for review and for your trust in my work. I'll try to find time for the review as soon as possible. Anyway, I am very happy you are pushing this approach forward! Thank you!
Thank you for taking the time to review :) I just pushed #42008, which is a more gentle step toward this goal, so you probably want to take a look at that before this one.
Currently the only way for a BOARD/SOC to configure at compile time the MPU regions is to add, in a soc-specific file, the `mpu_config` struct adding static entries for the new regions with the needed attributes (cacheable, non-cacheable, etc...). This exported struct is then read by the MPU driver at boot time and used to properly set up the MPU regions. See for example this or this.

At the same time it is now possible to introduce new memory regions in the DT using the newly introduced `zephyr,memory-region` attribute, see #37279.

What is missing is the link between these two solutions: that is, how to declare the memory regions in the DT and automatically configure these regions in the MPU with the correct attributes. This was also requested a long time ago in #30403 (comment).

This patch is trying to address exactly this problem.

It is now possible to declare the memory regions in the DT and define the MPU attributes for the regions using the `zephyr,memory-region-mpu` property. When this new property is present together with the `zephyr,memory-region` property and the new compatible `mpu-region`, the `mpu_config` struct is automatically extended at compile-time to host the DT-defined regions with the correct MPU attributes.

So for example in the DT we can now have:

```
sram_cache: memory@20200000 {
	compatible = "mpu-region", "mmio-sram";
	reg = <0x20200000 0x100000>;
	zephyr,memory-region = "SRAM_CACHE";
	zephyr,memory-region-mpu = "REGION_RAM_ATTR";
};
```

and a new region will be created called `SRAM_CACHE` and a new MPU region will be configured at boot time with the attribute `REGION_RAM_ATTR`.

This patch is also introducing a trivial test.