Commit d1b9b06

JiafeiPan authored and kartben committed
arm64: reset: flush D-Cache before it is disabled
Commit 573a712 ("arm64: reset: disable cache and MMU for safety") disables the D-Cache and MMU for safety. In some cases, however, for example when the code is loaded into memory by a hardware debugger, the D-Cache must be flushed before it is disabled to keep the data coherent across the system; otherwise a "Synchronous Abort" is reported once the D-Cache is disabled. Signed-off-by: Jiafei Pan <[email protected]>
1 parent ad0dfc5 commit d1b9b06
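For background, "flush" here means a clean-and-invalidate of the data cache: dirty lines (such as an image written into RAM through the cache by a debugger) are written back to memory before caching is switched off, so that later non-cacheable accesses observe coherent data. The patch relies on Zephyr's arch_dcache_flush_and_invd_all() for this; the snippet below is only a minimal sketch of the same idea applied to a virtual-address range, with the helper name flush_dcache_range() invented for illustration, an AArch64 target assumed, and by-VA maintenance used instead of the set/way walk an all-cache operation performs.

#include <stdint.h>

/* Sketch: clean+invalidate the D-Cache by VA over [start, end).
 * flush_dcache_range() is a hypothetical helper, for illustration only.
 */
static void flush_dcache_range(uintptr_t start, uintptr_t end)
{
	uint64_t ctr;
	uintptr_t line, va;

	/* CTR_EL0.DminLine[19:16] = log2(words) of the smallest D-cache line */
	__asm__ volatile("mrs %0, ctr_el0" : "=r"(ctr));
	line = 4UL << ((ctr >> 16) & 0xf);

	for (va = start & ~(line - 1); va < end; va += line) {
		/* Clean and invalidate this line to the Point of Coherency */
		__asm__ volatile("dc civac, %0" : : "r"(va) : "memory");
	}
	/* Complete all cache maintenance before the cache is turned off */
	__asm__ volatile("dsb sy; isb" : : : "memory");
}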

File tree

2 files changed: +25 -0 lines changed

arch/arm64/core/Kconfig

Lines changed: 9 additions & 0 deletions
@@ -364,4 +364,13 @@ config ARM64_DCACHE_ALL_OPS
 	  Enable this option to provide the data cache APIs to flush or
 	  invalidate all data caches.
 
+config ARM64_BOOT_DISABLE_DCACHE
+	bool "Disable data cache before enabling the MMU when booting from EL2"
+	depends on ARM64_DCACHE_ALL_OPS
+	help
+	  Enable this option if the data cache may already be enabled when
+	  Zephyr boots from EL2. It cleans and invalidates all data caches,
+	  then disables the data cache; the cache is re-enabled after the
+	  MMU is configured and enabled.
+
 endif # CPU_CORTEX_A || CPU_AARCH64_CORTEX_R
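As a usage note, an application that can be loaded by a debugger with the D-Cache left on would opt in through its project configuration. The fragment below is a hypothetical prj.conf, assuming the SoC allows ARM64_DCACHE_ALL_OPS to be enabled:

# prj.conf (hypothetical application fragment)
CONFIG_ARM64_DCACHE_ALL_OPS=y
CONFIG_ARM64_BOOT_DISABLE_DCACHE=y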

arch/arm64/core/reset.c

Lines changed: 16 additions & 0 deletions
@@ -5,6 +5,7 @@
  */
 
 #include <kernel_internal.h>
+#include <zephyr/arch/cache.h>
 #include <zephyr/sys/barrier.h>
 #include "boot.h"
 
@@ -122,6 +123,21 @@ void z_arm64_el2_init(void)
 	uint64_t reg;
 
 	reg = read_sctlr_el2();
+#ifdef CONFIG_ARM64_BOOT_DISABLE_DCACHE
+	/* Disable the D-Cache if it is enabled; it will be re-enabled once the MMU is enabled at EL1 */
+	if (reg & SCTLR_C_BIT) {
+		/* Clean and invalidate the data cache before disabling it to
+		 * ensure memory remains coherent.
+		 */
+		arch_dcache_flush_and_invd_all();
+		barrier_isync_fence_full();
+		/* Disable the D-Cache and MMU for EL2 */
+		reg &= ~(SCTLR_C_BIT | SCTLR_M_BIT);
+		write_sctlr_el2(reg);
+		/* Invalidate TLB entries */
+		__asm__ volatile("dsb ishst; tlbi alle2; dsb ish; isb" : : : "memory");
+	}
+#endif
 	reg |= (SCTLR_EL2_RES1 |	/* RES1 */
 		SCTLR_I_BIT |		/* Enable i-cache */
 		SCTLR_SA_BIT);		/* Enable SP alignment check */
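The closing inline assembly is the usual Arm barrier/TLB-maintenance sequence. As an annotated sketch, the hypothetical standalone helper below spells out what each instruction contributes; it is equivalent to the one-liner in the patch and, like the patch, must run at EL2:

/* Hypothetical helper, for illustration only */
static inline void el2_invalidate_tlb_all(void)
{
	__asm__ volatile(
		"dsb ishst\n"	/* make prior stores visible before TLB maintenance */
		"tlbi alle2\n"	/* invalidate all EL2 TLB entries */
		"dsb ish\n"	/* wait for the invalidation to complete */
		"isb\n"		/* resynchronize the pipeline with the new state */
		: : : "memory");
}

Invalidating the EL2 TLB right after the MMU is turned off ensures no stale translations survive into the reconfigured state.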
