Conversation

fsammoura1980
Contributor

Remove the temporary PMP entry for the MPRV catch-all and the setting of MSTATUS_MPRV from the early `z_riscv_pmp_init` function.

This early, broad catch-all setup is not strictly necessary. The MPRV catch-all configuration required for stack protection in multithreading scenarios is already handled within `z_riscv_pmp_stackguard_prepare`. That function is called during thread creation, providing a more localized and appropriate context for setting up thread-specific stack guard PMP entries, including the MPRV catch-all.

This change consolidates the stack-guard-related PMP setup logic within the thread creation path.

Signed-off-by: Firas Sammoura <[email protected]>
Contributor

jimmyzhe commented Oct 13, 2025

Before the scheduler starts, each core uses its own IRQ stack during kernel initialization.

This is necessary to ensure kernel initialization does not overflow that stack.

Update:
Sorry for my misunderstanding; the current IRQ stack is already protected by a locked PMP entry.

Setting up mstatus.MPRV and the catch-all entry is necessary for the non-locked PMP entries to take effect during kernel initialization.

Labels

area: RISCV RISCV Architecture (32-bit & 64-bit)


6 participants