
Conversation

@carlocaione (Contributor) commented Dec 15, 2021

Currently the only way for a BOARD/SOC to configure the MPU regions at compile time is to add, in a SoC-specific file, an mpu_config struct with static entries for the new regions and the needed attributes (cacheable, non-cacheable, etc.). This exported struct is then read by the MPU driver at boot time and used to properly set up the MPU regions.

See for example this or this

At the same time, it is now possible to introduce new memory regions in the DT using the zephyr,memory-region property introduced in #37279.

What is missing is the link between these two solutions: that is, how to declare the memory regions in the DT and have these regions automatically configured in the MPU with the correct attributes. This was also requested a long time ago in #30403 (comment).

This patch is trying to address exactly this problem.

It is now possible to declare the memory regions in the DT and define the MPU attributes for those regions using the zephyr,memory-region-mpu property. When this new property is present together with the zephyr,memory-region property and the new mpu-region compatible, the mpu_config struct is automatically extended at compile time to host the DT-defined regions with the correct MPU attributes.

So for example in the DT we can now have:

       sram_cache: memory@20200000 {
                compatible = "mpu-region", "mmio-sram";
                reg = <0x20200000 0x100000>;
                zephyr,memory-region = "SRAM_CACHE";
                zephyr,memory-region-mpu = "REGION_RAM_ATTR";
        };

and a new region called SRAM_CACHE will be created, and a new MPU region will be configured at boot time with the attribute REGION_RAM_ATTR.

This patch is also introducing a trivial test.
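
For context, this is roughly what the manual, SoC-specific approach looks like today. The code below is only a sketch of such a file, assuming the Arm v7-M flavour of the MPU macros, and is not taken from this PR:

#include <arch/arm/aarch32/mpu/arm_mpu.h>

/* Every extra region has to be listed here by hand, together with its
 * attributes; the attribute/size macro names vary per MPU implementation.
 */
static const struct arm_mpu_region mpu_regions[] = {
	MPU_REGION_ENTRY("SRAM_CACHE",
			 0x20200000,
			 REGION_RAM_ATTR(REGION_1M)),
};

const struct arm_mpu_config mpu_config = {
	.num_regions = ARRAY_SIZE(mpu_regions),
	.mpu_regions = mpu_regions,
};

With this patch, the equivalent entry for a DT-declared region is appended to mpu_config automatically instead of being hard-coded in a file like this.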

@galak (Contributor) previously requested changes Dec 15, 2021

A few concerns with this:

  • Partitioning SRAM nodes in this way loses the actual hardware SRAM node and thus creates possible issues in the future with knowing how to power manage SRAM banks.
  • The use of strings for zephyr,memory-region and zephyr,memory-region-mpu feels awkward, and there is a concern about how/where the string values are validated and how they are specific to a given "MPU" implementation (Arm v7, v8, NXP, non-Arm MPUs, etc.).

@carlocaione (Contributor Author)

  • Partitioning SRAM nodes in this way loses the actual hardware SRAM node and thus creates possible issues in the future with knowing how to power manage SRAM banks.

Isn't this being raised a bit too late? We already have zephyr,memory-region in place (see #37279) and a lot of cases where the SRAM is partitioned in the DT for different use cases (IPC, for example).

Also, this PR introduces literally nothing new; it just closes a gap between two pieces that are already in place: the zephyr,memory-region support and the MPU configuration. It is just a helper to automatically extend mpu_config using a macro instead of having to carry and manually modify a SoC-specific file every time we want to tweak the MPU configuration.

  • The use of strings for zephyr,memory-region and zephyr,memory-region-mpu feels awkward, and there is a concern about how/where the string values are validated and how they are specific to a given "MPU" implementation (Arm v7, v8, NXP, non-Arm MPUs, etc.).

We can discuss the naming, but the code is flexible enough to be adapted to every MPU implementation.

@gmarull (Member) left a comment

Some initial thoughts. I'll try to understand the current status in Zephyr before doing a second review.

Comment on lines 5 to 6

Member:

I understand the reasons behind this, but it's a bit ugly. Have you thought about other options? Maybe extra dt tooling? using an extra compatible (not sure if feasible)?

Contributor:

Using an extra compatible is technically feasible because DT_FOREACH_STATUS_OKAY works even for compatibles that do not have matching bindings.

For example, there is no binding for compatible abcd in this example:

$ cat samples/hello_world/app.overlay
/ {
	foo {
		compatible = "gpio-keys", "abcd";
	};

	bar {
		compatible = "pwm-leds", "abcd";
	};
};

$ cat samples/hello_world/src/main.c
/*
 * Copyright (c) 2012-2014 Wind River Systems, Inc.
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <zephyr.h>
#include <sys/printk.h>

#define PATH_COMMA(node_id) DT_NODE_PATH(node_id),

void main(void)
{
	const char *paths[] = {
		DT_FOREACH_STATUS_OKAY(abcd, PATH_COMMA)
	};
	for (int i = 0; i < ARRAY_SIZE(paths); i++) {
		printk("%s\n", paths[i]);
	}
}

But it still works:

$ west build -b native_posix -t run samples/hello_world
[...]
*** Booting Zephyr OS build v2.7.99-2322-gb326aabf73ce  ***
/foo
/bar

I'm not saying this is the right way to go, just that no extra tooling is needed if you want to use an extra compatible, regardless of whether you define a matching binding for it.

If you do write a matching binding, then the specific binding should come first, and the generic binding second, as usual (https://docs.zephyrproject.org/latest/guides/dts/bindings.html#other-ways-nodes-are-matched-to-bindings).

@carlocaione (Contributor Author):

So, what's the preference here? If I have to rework this I'd like to have a precise way forward.

Member:

Any solution that avoids this:

/*
 * List of compatibles whose YAML files are including mem-region.yaml to
 * support the 'zephyr,memory-region-mpu' property.
 */
#define _DT_COMPATIBLES mmio_sram, arm_dtcm, arm_itcm, nxp_imx_dtcm, nxp_imx_itcm,	\
			st_stm32_ccm, st_stm32_backup_sram

would be welcome, if technically possible.

@carlocaione (Contributor Author):

Alright, going for the additional compatible.
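
For illustration only (these are not the macros from this PR): with the extra compatible in place, the driver side can expand every matching node in one go via the existing DT_FOREACH_STATUS_OKAY() helper instead of maintaining a list of compatibles. How the zephyr,memory-region-mpu string is turned into an attribute value (e.g. via something like DT_STRING_TOKEN()) is left out of the sketch.

#include <devicetree.h>

/* Hypothetical helper: emit one region initializer per status-okay node
 * with the "mpu-region" compatible. DT_FOREACH_STATUS_OKAY(), DT_PROP()
 * and DT_REG_ADDR() are real devicetree.h macros; the field layout is
 * sketched after the Arm mpu_regions arrays.
 */
#define MPU_REGION_FROM_DT(node_id)				\
	{ .name = DT_PROP(node_id, zephyr_memory_region),	\
	  .base = DT_REG_ADDR(node_id),				\
	  /* .attr: derived from zephyr,memory-region-mpu */ },

#define MPU_REGIONS_FROM_DT \
	DT_FOREACH_STATUS_OKAY(mpu_region, MPU_REGION_FROM_DT)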

Member:

Note: this could likely be an int, and a header with the valid attributes could be provided (each SoC could provide its own set if necessary).

Contributor:

If it has no impact on code size, I would prefer to use strings, at least in the device tree. The reason is simple: better readability.

@carlocaione (Contributor Author):

2 votes here for the strings.

@gmarull (Member) commented Dec 15, 2021

Another comment: I wonder if drivers should be the ones responsible for configuring the MPU in some cases, e.g.

#if DT_NODE_HAS_STATUS(DT_NODELABEL(sram3), okay) && \
    DT_NODE_HAS_STATUS(DT_NODELABEL(mac), okay)
MPU_REGION_ENTRY("SRAM3_ETH_BUF",
                 DT_REG_ADDR(DT_NODELABEL(sram3)),
                 REGION_RAM_NOCACHE_ATTR(REGION_16K)),
MPU_REGION_ENTRY("SRAM3_ETH_DESC",
                 DT_REG_ADDR(DT_NODELABEL(sram3)),
                 REGION_PPB_ATTR(REGION_256B)),

It could be the ethernet driver that configures SRAM3 appropriately (still getting the information from the DT).

@galak (Contributor) commented Dec 15, 2021

  • Partitioning SRAM nodes in this way loses the actual hardware SRAM node and thus creates possible issues in the future with knowing how to power manage SRAM banks.

Isn't this being raised a bit too late? We already have zephyr,memory-region in place (see #37279) and a lot of cases where the SRAM is partitioned in the DT for different use cases (IPC, for example).

Also, this PR introduces literally nothing new; it just closes a gap between two pieces that are already in place: the zephyr,memory-region support and the MPU configuration. It is just a helper to automatically extend mpu_config using a macro instead of having to carry and manually modify a SoC-specific file every time we want to tweak the MPU configuration.

  • The use of strings for zephyr,memory-region and zephyr,memory-region-mpu feels awkward, and there is a concern about how/where the string values are validated and how they are specific to a given "MPU" implementation (Arm v7, v8, NXP, non-Arm MPUs, etc.).

We can discuss the naming, but the code is flexible enough to be adapted to every MPU implementation.

We also need to see what is going on with the system devicetree and "firewall" devicetree specs for this type of configuration.

@codecov-commenter commented Dec 15, 2021

Codecov Report

Merging #41223 (449c378) into main (3457ebc) will increase coverage by 0.04%.
The diff coverage is 41.86%.

❗ Current head 449c378 differs from pull request most recent head ae57e9c. Consider uploading reports for the commit ae57e9c to get more accurate results

@@            Coverage Diff             @@
##             main   #41223      +/-   ##
==========================================
+ Coverage   51.41%   51.45%   +0.04%     
==========================================
  Files         609      616       +7     
  Lines       71379    72356     +977     
  Branches    16427    16651     +224     
==========================================
+ Hits        36698    37234     +536     
- Misses      28641    29003     +362     
- Partials     6040     6119      +79     
Impacted Files Coverage Δ
arch/posix/core/cpuhalt.c 100.00% <ø> (ø)
arch/x86/core/acpi.c 41.43% <0.00%> (-5.15%) ⬇️
boards/posix/native_posix/irq_handler.c 90.12% <ø> (ø)
drivers/console/uart_console.c 76.47% <ø> (ø)
drivers/interrupt_controller/intc_loapic.c 47.50% <ø> (ø)
drivers/interrupt_controller/intc_system_apic.c 81.25% <ø> (ø)
drivers/net/ppp.c 55.82% <0.00%> (ø)
drivers/serial/uart_ns16550.c 34.94% <ø> (ø)
drivers/timer/native_posix_timer.c 76.47% <ø> (ø)
drivers/can/can_common.c 45.61% <66.66%> (+6.14%) ⬆️
... and 95 more


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@KozhinovAlexander (Contributor)

Another comment: I wonder if drivers should be the ones responsible for configuring the MPU in some cases, e.g.

#if DT_NODE_HAS_STATUS(DT_NODELABEL(sram3), okay) && \
    DT_NODE_HAS_STATUS(DT_NODELABEL(mac), okay)
MPU_REGION_ENTRY("SRAM3_ETH_BUF",
                 DT_REG_ADDR(DT_NODELABEL(sram3)),
                 REGION_RAM_NOCACHE_ATTR(REGION_16K)),
MPU_REGION_ENTRY("SRAM3_ETH_DESC",
                 DT_REG_ADDR(DT_NODELABEL(sram3)),
                 REGION_PPB_ATTR(REGION_256B)),

It could be the ethernet driver that configures SRAM3 appropriately (still getting the information from the DT).

I do not agree with this. Since Zephyr mainly uses statically allocated memory, it is simpler for developers to partition it directly in the device tree and provide the necessary MPU configurations. If each driver configured the MPU, the system could very quickly run out of available MPU regions. As far as I know, there is only a limited number of MPU regions.

@github-actions bot added the "area: Tests" label Dec 22, 2021
@carlocaione (Contributor Author)

@gmarull @Nukersson can you look at this again?

@gmarull (Member) left a comment

Looks pretty good now! Left a few comments.

Currently the only way for a BOARD/SOC to configure at compile time the
MPU regions is to add, in a soc-specific file, the 'mpu_config' struct
adding static entries for the new regions with the needed attributes
(cacheable, non-cacheable, etc...). This exported struct is then read by
the MPU driver at boot time and used to properly setup the MPU regions.

At the same time it is now possible to introduce new memory regions in
the DT using the newly introduced 'zephyr,memory-region' attribute.

What is missing is the link between these two solutions: that is how to
declare the memory regions in the DT and automatically configure these
regions in the MPU with the correct attributes.

This patch is trying to address exactly this problem.

It is now possible to declare the memory regions in the DT and define
the MPU attributes for the regions using the 'zephyr,memory-region-mpu'
property. When this new property is present together with the
'zephyr,memory-region' property and a new "mpu-region" compatible, the
'mpu_config' struct is automatically extended at compile-time to host
the DT defined regions with the correct MPU attributes.

So for example in the DT we can now have:

       sram_cache: memory@20200000 {
                compatible = "mpu-region", "mmio-sram";
                reg = <0x20200000 0x100000>;
                zephyr,memory-region = "SRAM_CACHE";
                zephyr,memory-region-mpu = "REGION_RAM_ATTR";
        };

and a new region will be created called "SRAM_CACHE" and a new MPU
region will be configured at boot time with the attribute
"REGION_RAM_ATTR".

This patch is also introducing a trivial test.

Signed-off-by: Carlo Caione <[email protected]>
@gmarull (Member) left a comment

No more objections, LGTM.

@mbolivar-nordic (Contributor) left a comment

Naming nit; let's avoid having to do something like d1eee6a again.

* };
* @endcode
*/
#define DT_REGION_MPU(mpu_fn) _CHECK_APPLY_FN(_DT_COMPATIBLE, _EXPAND_MPU_FN, mpu_fn)
Contributor:

The DT_ namespace is reserved for the devicetree.h header and its sub-headers.

zephyr,memory-region:
  required: true

zephyr,memory-region-mpu:
Contributor:

What is the reasoning behind adding this as a new compatible instead of adding the zephyr,memory-region-mpu property to mem-region.yaml? Is it solely to enable the usage of DT_FOREACH_STATUS_OKAY_VARGS to iterate over usages of zephyr,memory-region-mpu?

If so, I wonder if it would make more sense to add a DT macro for iterating over all nodes that share a given property (maybe limited to the zephyr, prefix).
This would also enable automatic generation of the linker regions:

LINKER_DT_REGION_FROM_NODE(DT_NODELABEL(ti_ccfg_partition), rwx)
/* Data & Instruction Tightly Coupled Memory */
LINKER_DT_REGION_FROM_NODE(DT_CHOSEN(zephyr_itcm), rw)
LINKER_DT_REGION_FROM_NODE(DT_CHOSEN(zephyr_dtcm), rw)
/* STM32 Core Coupled Memory */
LINKER_DT_REGION_FROM_NODE(DT_CHOSEN(zephyr_ccm), rw)

@carlocaione (Contributor Author):

What is the reasoning behind adding this as a new compatible instead of adding the zephyr,memory-region-mpu property to mem-region.yaml? Is it solely to enable the usage of DT_FOREACH_STATUS_OKAY_VARGS to iterate over usages of zephyr,memory-region-mpu?

Yes. In the first version of this PR it was done like that but that was changed when the usage of the additional compatible was proposed, see #41223 (comment)

Contributor:

Yes. In the first version of this PR it was done like that but that was changed when the usage of the additional compatible was proposed, see #41223 (comment)

Right, that's where the compatible was proposed, but there was no discussion of its merits vs alternatives.
The iteration you really want is over anything that sets zephyr,memory-region-mpu, with the compatible being a bit of a hack to get that. The code I linked wants to do a similar thing eventually: iterate over all zephyr,memory-region.

Given that both systems want to do the same thing, their solutions should probably be the same. A new compatible could be added to every SRAM node so that it does the same thing as this PR, or we could introduce a way to do DT_FOREACH_NODE_WITH_PROP(zephyr_memory_...) directly.
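
As a purely hypothetical sketch of that second option (the macro does not exist at this point, and the helper name is taken from the snippet quoted above), iterating by property rather than by compatible could look like:

/* Hypothetical: visit every node that sets zephyr,memory-region,
 * whatever its compatibles are, and emit a linker region for it.
 */
#define GEN_REGION(node_id) LINKER_DT_REGION_FROM_NODE(node_id, rw)

DT_FOREACH_NODE_WITH_PROP(zephyr_memory_region, GEN_REGION)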

@carlocaione (Contributor Author):

@mbolivar-nordic what's your opinion on this?

@JordanYates the problem I see with the DT_FOREACH_NODE_WITH_PROP() macro is that you have no control over the properties of the nodes you are cycling through. At least when using an additional compatible, you can use a YAML file to define exactly which properties you expect (or need) to find and use.

Contributor:

I'll need some more time to look at this, but I'm firing off a quick response in case it's helpful.

The zephyr DT tooling supports multiple compatibles in a single node, and unless I am mistaken, DT_FOREACH_STATUS_OKAY will work on a node that has multiple compatibles, even for compatibles that aren't the "main" compatible that matches a binding.

So for example if you have

foo { compatible = "a", "x"; };
bar { compatible = "b", "x"; };

Then you should be able to do DT_FOREACH_STATUS_OKAY(x, my_fn) even if there isn't a binding for x.

So maybe you want to have a 'marker' compatible that you can put on the nodes you are interested in, even if they have their own individual bindings?

If that's tedious or nonsense, feel free to ignore

@carlocaione (Contributor Author):

I opened #42008 to move to the multiple compatibles solution more gradually.

@carlocaione (Contributor Author)

Moving this to draft until #42008 is merged / reviewed.

@carlocaione added the "DNM" (Do Not Merge) label Jan 21, 2022
@KozhinovAlexander (Contributor)

@gmarull @Nukersson can you look at this again?

Thank you for asking for a review and for your trust in my work. I'll try to find time for the review as soon as possible. Anyway, I am very happy you are pushing this approach forward! Thank you!

@carlocaione (Contributor Author) commented Jan 21, 2022

Thank you for asking for a review and for your trust in my work. I'll try to find time for the review as soon as possible. Anyway, I am very happy you are pushing this approach forward! Thank you!

Thank you for taking the time to review :) I just pushed #42008, which is a gentler step toward this goal, so you probably want to take a look at that one before this one.

@carlocaione marked this pull request as draft January 21, 2022 08:17

Labels

area: API, area: Devicetree Binding, area: Devicetree, area: Samples, area: Tests, DNM (Do Not Merge)
