
Commit 832dd63

mrutland-arm authored and willdeacon committed
arm64: entry: fix ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
Currently the ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD workaround isn't
quite right, as it is supposed to be applied after the last explicit
memory access, but is immediately followed by an LDR.

The ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD workaround is used to
handle Cortex-A520 erratum 2966298 and Cortex-A510 erratum 3117295,
which are described in:

* https://developer.arm.com/documentation/SDEN2444153/0600/?lang=en
* https://developer.arm.com/documentation/SDEN1873361/1600/?lang=en

In both cases the workaround is described as:

| If pagetable isolation is disabled, the context switch logic in the
| kernel can be updated to execute the following sequence on affected
| cores before exiting to EL0, and after all explicit memory accesses:
|
| 1. A non-shareable TLBI to any context and/or address, including
|    unused contexts or addresses, such as a `TLBI VALE1 Xzr`.
|
| 2. A DSB NSH to guarantee completion of the TLBI.

The important part being that the TLBI+DSB must be placed "after all
explicit memory accesses".

Unfortunately, as-implemented, the TLBI+DSB is immediately followed by
an LDR, as we have:

| alternative_if ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
| 	tlbi	vale1, xzr
| 	dsb	nsh
| alternative_else_nop_endif
| alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
| 	ldr	lr, [sp, #S_LR]
| 	add	sp, sp, #PT_REGS_SIZE		// restore sp
| 	eret
| alternative_else_nop_endif
|
| [ ... KPTI exception return path ... ]

This patch fixes this by reworking the logic to place the TLBI+DSB
immediately before the ERET, after all explicit memory accesses.

The ERET is currently in a separate alternative block, and alternatives
cannot be nested. To account for this, the alternative block for
ARM64_UNMAP_KERNEL_AT_EL0 is replaced with a single alternative branch
to skip the KPTI logic, with the new shape of the logic being:

| alternative_insn "b .L_skip_tramp_exit_\@", nop, ARM64_UNMAP_KERNEL_AT_EL0
|
| [ ... KPTI exception return path ... ]
|
| .L_skip_tramp_exit_\@:
|
| 	ldr	lr, [sp, #S_LR]
| 	add	sp, sp, #PT_REGS_SIZE		// restore sp
|
| alternative_if ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
| 	tlbi	vale1, xzr
| 	dsb	nsh
| alternative_else_nop_endif
| 	eret

The new structure means that the workaround is only applied when KPTI is
not in use; this is fine as noted in the documented implications of the
erratum:

| Pagetable isolation between EL0 and higher level ELs prevents the
| issue from occurring.

... and as per the workaround description quoted above, the workaround
is only necessary "If pagetable isolation is disabled".

Fixes: 471470b ("arm64: errata: Add Cortex-A520 speculative unprivileged load workaround")
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: James Morse <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
1 parent f827bcd commit 832dd63

File tree

1 file changed: +13 −9 lines changed


arch/arm64/kernel/entry.S

Lines changed: 13 additions & 9 deletions

@@ -428,16 +428,9 @@ alternative_else_nop_endif
 	ldp	x28, x29, [sp, #16 * 14]

 	.if	\el == 0
-alternative_if ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
-	tlbi	vale1, xzr
-	dsb	nsh
-alternative_else_nop_endif
-alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
-	ldr	lr, [sp, #S_LR]
-	add	sp, sp, #PT_REGS_SIZE		// restore sp
-	eret
-alternative_else_nop_endif
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	alternative_insn "b .L_skip_tramp_exit_\@", nop, ARM64_UNMAP_KERNEL_AT_EL0
+
 	msr	far_el1, x29

 	ldr_this_cpu	x30, this_cpu_vector, x29
@@ -446,7 +439,18 @@ alternative_else_nop_endif
 	ldr	lr, [sp, #S_LR]		// restore x30
 	add	sp, sp, #PT_REGS_SIZE	// restore sp
 	br	x29
+
+.L_skip_tramp_exit_\@:
 #endif
+	ldr	lr, [sp, #S_LR]
+	add	sp, sp, #PT_REGS_SIZE	// restore sp
+
+	/* This must be after the last explicit memory access */
+alternative_if ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
+	tlbi	vale1, xzr
+	dsb	nsh
+alternative_else_nop_endif
+	eret
 	.else
 	ldr	lr, [sp, #S_LR]
 	add	sp, sp, #PT_REGS_SIZE	// restore sp

0 commit comments
