17 changes: 13 additions & 4 deletions arch/arm/core/aarch32/cortex_m/fault.c
@@ -230,9 +230,10 @@ static uint32_t mem_manage_fault(z_arch_esf_t *esf, int from_hard_fault,
* Software must follow this sequence because another higher
* priority exception might change the MMFAR value.
*/
- mmfar = SCB->MMFAR;
+ uint32_t temp = SCB->MMFAR;
Contributor


I think you could fold this into line 236 and update mmfar directly from SCB->MMFAR there.

Member Author: @ioannisg (Jan 14, 2021)


Well, for the reasons explained in the inline comment above, I'd rather keep the design as is, because theoretically the MemManage handler could be preempted right after reading the MMARVALID flag, and there would be cases where the flag would no longer be valid.

By design, Zephyr's fault handlers have the highest configurable priority, but I wanted a design that would be robust against changes :)

Contributor


Hrm, yeah, I guess one could have the MemManage exception at a lower priority than another ISR, get interrupted here, and fault from that ISR. That would be a tricky crash to investigate! :)

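For reference, a minimal, self-contained sketch of the read ordering discussed above (not the PR's code): the MMFAR value is saved into a local before MMARVALID is tested, following the sequence the inline comment describes, so a later change to MMFAR by a higher-priority exception cannot corrupt the saved address. The CMSIS device header is assumed to provide SCB, SCB_CFSR_MMARVALID_Msk and friends; the helper name read_mmfar_if_valid() and the -EINVAL sentinel are illustrative only.

#include <stdint.h>
#include <errno.h>
/* The CMSIS core header (e.g. core_cm4.h, pulled in by the device header)
 * is assumed to define SCB->MMFAR, SCB->CFSR and SCB_CFSR_MMARVALID_Msk.
 */

/* Illustrative helper: return the faulting address if MMARVALID is set,
 * otherwise an invalid-address sentinel.
 */
static uint32_t read_mmfar_if_valid(void)
{
	/* 1. Save MMFAR first; a preempting higher-priority fault may
	 *    overwrite the register at any point after this read.
	 */
	uint32_t latched = SCB->MMFAR;

	/* 2. Only then test MMARVALID; if it is set, the saved value is
	 *    the address that belongs to this fault.
	 */
	if ((SCB->CFSR & SCB_CFSR_MMARVALID_Msk) != 0) {
		return latched;
	}

	return (uint32_t)-EINVAL;
}

This is the same ordering the inline comment in the hunk above mandates: save first, validate second, which closes the window the discussion describes between the two register reads.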

if ((SCB->CFSR & SCB_CFSR_MMARVALID_Msk) != 0) {
+ mmfar = temp;
PR_EXC(" MMFAR Address: 0x%x", mmfar);
if (from_hard_fault) {
/* clear SCB_MMAR[VALID] to reset */
@@ -248,15 +249,23 @@ static uint32_t mem_manage_fault(z_arch_esf_t *esf, int from_hard_fault,
PR_FAULT_INFO(
" Floating-point lazy state preservation error");
}
- #endif /* !defined(CONFIG_ARMV7_M_ARMV8_M_FP) */
+ #endif /* CONFIG_ARMV7_M_ARMV8_M_FP */

/* When stack protection is enabled, we need to assess
* if the memory violation error is a stack corruption.
*
* By design, being a Stacking MemManage fault is a necessary
* and sufficient condition for a thread stack corruption.
+ * [Cortex-M process stack pointer is always descending and
+ * is never modified by code (except for the context-switch
+ * routine), therefore, a stacking error implies the PSP has
+ * crossed into an area beyond the thread stack.]
+ *
+ * Data Access Violation errors may or may not be caused by
+ * thread stack overflows.
*/
- if (SCB->CFSR & SCB_CFSR_MSTKERR_Msk) {
+ if ((SCB->CFSR & SCB_CFSR_MSTKERR_Msk) ||
+     (SCB->CFSR & SCB_CFSR_DACCVIOL_Msk)) {
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
/* MemManage Faults are always banked between security
* states. Therefore, we can safely assume the fault
@@ -309,7 +318,7 @@ static uint32_t mem_manage_fault(z_arch_esf_t *esf, int from_hard_fault,

reason = K_ERR_STACK_CHK_FAIL;
} else {
- __ASSERT(0,
+ __ASSERT(!(SCB->CFSR & SCB_CFSR_MSTKERR_Msk),
"Stacking error not a stack fail\n");
}
}
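For clarity, a condensed sketch of the classification flow that results from these hunks (not the literal fault.c code): a Stacking error or a Data Access Violation triggers the stack-guard assessment, and only an unexplained stacking error still trips the assertion. is_thread_stack_corruption() is a hypothetical stand-in for the assessment logic elided between the hunks above; SCB and SCB_CFSR_* come from CMSIS, while K_ERR_* and __ASSERT() are assumed from the Zephyr kernel headers.

#include <stdint.h>
#include <stdbool.h>
/* Zephyr's <kernel.h> and <sys/__assert.h> are assumed for K_ERR_* and
 * __ASSERT(); the CMSIS device header is assumed for SCB and SCB_CFSR_*.
 */

/* Hypothetical stand-in for the stack-guard assessment elided between the
 * hunks above (it is not part of this PR).
 */
extern bool is_thread_stack_corruption(uint32_t fault_addr);

static uint32_t classify_mem_manage_fault(uint32_t mmfar)
{
	uint32_t reason = K_ERR_CPU_EXCEPTION;

	if ((SCB->CFSR & SCB_CFSR_MSTKERR_Msk) ||
	    (SCB->CFSR & SCB_CFSR_DACCVIOL_Msk)) {
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
		if (is_thread_stack_corruption(mmfar)) {
			/* Stacking errors and data access violations can both
			 * indicate a thread stack overflow.
			 */
			reason = K_ERR_STACK_CHK_FAIL;
		} else {
			/* A data access violation that is not a stack
			 * corruption is an ordinary MPU violation; only an
			 * unexplained stacking error is unexpected here.
			 */
			__ASSERT(!(SCB->CFSR & SCB_CFSR_MSTKERR_Msk),
				 "Stacking error not a stack fail\n");
		}
#endif
	}

	return reason;
}

Relaxing __ASSERT(0, ...) to __ASSERT(!(SCB->CFSR & SCB_CFSR_MSTKERR_Msk), ...) keeps the original guarantee for stacking errors while tolerating data access violations that turn out not to be stack overflows.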