
Commit d42e6c2

Ada Couprie Diaz authored and Will Deacon (willdeacon) committed
arm64/entry: Mask DAIF in cpu_switch_to(), call_on_irq_stack()
`cpu_switch_to()` and `call_on_irq_stack()` manipulate SP to change to different stacks, along with the Shadow Call Stack if it is enabled. Those two stack changes cannot be done atomically and both functions can be interrupted by SErrors or Debug Exceptions which, though unlikely, is very much broken: if interrupted, we can end up with a mismatched stack and Shadow Call Stack, leading to clobbered stacks.

In `cpu_switch_to()`, it can happen when SP_EL0 points to the new task, but x18 still points to the old task's SCS. When the interrupt handler tries to save the task's SCS pointer, it will save the old task SCS pointer (x18) into the new task struct (pointed to by SP_EL0), clobbering it.

In `call_on_irq_stack()`, it can happen when switching from the task stack to the IRQ stack and when switching back. In both cases, we can be interrupted when the SCS pointer points to the IRQ SCS, but SP points to the task stack. The nested interrupt handler pushes its return addresses on the IRQ SCS. It then detects that SP points to the task stack, calls `call_on_irq_stack()` and clobbers the task SCS pointer with the IRQ SCS pointer, which it will also use!

This leads to tasks returning to addresses on the wrong SCS, or even on the IRQ SCS, triggering kernel panics via CONFIG_VMAP_STACK or FPAC if enabled. This is possible on a default config, but unlikely.

However, when enabling CONFIG_ARM64_PSEUDO_NMI, DAIF is unmasked and instead the GIC is responsible for filtering which interrupts the CPU should receive based on priority. Given the goal of emulating NMIs, pseudo-NMIs can be received by the CPU even in `cpu_switch_to()` and `call_on_irq_stack()`, possibly *very* frequently depending on the system configuration and workload, leading to unpredictable kernel panics.

Completely mask DAIF in `cpu_switch_to()` and restore it when returning. Do the same in `call_on_irq_stack()`, but restore and mask around the branch.

Mask DAIF even if CONFIG_SHADOW_CALL_STACK is not enabled, for consistency of behaviour between all configurations.

Introduce and use an assembly macro for saving and masking DAIF, as the existing one saves but only masks IF.

Cc: <[email protected]>
Signed-off-by: Ada Couprie Diaz <[email protected]>
Reported-by: Cristian Prundeanu <[email protected]>
Fixes: 59b37fe ("arm64: Stash shadow stack pointer in the task struct on interrupt")
Tested-by: Cristian Prundeanu <[email protected]>
Acked-by: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
1 parent ab16122 commit d42e6c2

File tree

2 files changed: +11 −0 lines

arch/arm64/include/asm/assembler.h

Lines changed: 5 additions & 0 deletions
@@ -41,6 +41,11 @@
 /*
  * Save/restore interrupts.
  */
+	.macro	save_and_disable_daif, flags
+	mrs	\flags, daif
+	msr	daifset, #0xf
+	.endm
+
 	.macro	save_and_disable_irq, flags
 	mrs	\flags, daif
 	msr	daifset, #3

arch/arm64/kernel/entry.S

Lines changed: 6 additions & 0 deletions
@@ -825,6 +825,7 @@ SYM_CODE_END(__bp_harden_el1_vectors)
  *
  */
 SYM_FUNC_START(cpu_switch_to)
+	save_and_disable_daif x11
 	mov	x10, #THREAD_CPU_CONTEXT
 	add	x8, x0, x10
 	mov	x9, sp
@@ -848,6 +849,7 @@ SYM_FUNC_START(cpu_switch_to)
 	ptrauth_keys_install_kernel x1, x8, x9, x10
 	scs_save x0
 	scs_load_current
+	restore_irq x11
 	ret
 SYM_FUNC_END(cpu_switch_to)
 NOKPROBE(cpu_switch_to)
@@ -874,6 +876,7 @@ NOKPROBE(ret_from_fork)
  * Calls func(regs) using this CPU's irq stack and shadow irq stack.
  */
 SYM_FUNC_START(call_on_irq_stack)
+	save_and_disable_daif x9
 #ifdef CONFIG_SHADOW_CALL_STACK
 	get_current_task x16
 	scs_save x16
@@ -888,15 +891,18 @@ SYM_FUNC_START(call_on_irq_stack)

 	/* Move to the new stack and call the function there */
 	add	sp, x16, #IRQ_STACK_SIZE
+	restore_irq x9
 	blr	x1

+	save_and_disable_daif x9
 	/*
 	 * Restore the SP from the FP, and restore the FP and LR from the frame
 	 * record.
 	 */
 	mov	sp, x29
 	ldp	x29, x30, [sp], #16
 	scs_load_current
+	restore_irq x9
 	ret
 SYM_FUNC_END(call_on_irq_stack)
 NOKPROBE(call_on_irq_stack)
