
Commit 94bb804

soleen authored and willdeacon committed
arm64: uaccess: Ensure PAN is re-enabled after unhandled uaccess fault
A number of our uaccess routines ('__arch_clear_user()' and
'__arch_copy_{in,from,to}_user()') fail to re-enable PAN if they
encounter an unhandled fault whilst accessing userspace.

For CPUs implementing both hardware PAN and UAO, this bug has no effect
when both extensions are in use by the kernel.

For CPUs implementing hardware PAN but not UAO, this means that a kernel
using hardware PAN may execute portions of code with PAN inadvertently
disabled, opening us up to potential security vulnerabilities that rely
on userspace access from within the kernel which would usually be
prevented by this mechanism. In other words, parts of the kernel run the
same way as they would on a CPU without PAN implemented/emulated at all.

For CPUs not implementing hardware PAN and instead relying on software
emulation via 'CONFIG_ARM64_SW_TTBR0_PAN=y', the impact is unfortunately
much worse. Calling 'schedule()' with software PAN disabled means that
the next task will execute in the kernel using the page-table and ASID
of the previous process even after 'switch_mm()', since the actual
hardware switch is deferred until return to userspace. At this point, or
if there is an intermediate call to 'uaccess_enable()', the page-table
and ASID of the new process are installed. Sadly, due to the changes
introduced by KPTI, this is not an atomic operation and there is a very
small window (two instructions) where the CPU is configured with the
page-table of the old task and the ASID of the new task; a speculative
access in this state is disastrous because it would corrupt the TLB
entries for the new task with mappings from the previous address space.

As Pavel explains:

  | I was able to reproduce memory corruption problem on Broadcom's SoC
  | ARMv8-A like this:
  |
  | Enable software perf-events with PERF_SAMPLE_CALLCHAIN so userland's
  | stack is accessed and copied.
  |
  | The test program performed the following on every CPU and forking
  | many processes:
  |
  | unsigned long *map = mmap(NULL, PAGE_SIZE, PROT_READ|PROT_WRITE,
  |                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
  | map[0] = getpid();
  | sched_yield();
  | if (map[0] != getpid()) {
  |         fprintf(stderr, "Corruption detected!");
  | }
  | munmap(map, PAGE_SIZE);
  |
  | From time to time I was getting map[0] to contain pid for a
  | different process.

Ensure that PAN is re-enabled when returning after an unhandled user
fault from our uaccess routines.

Cc: Catalin Marinas <[email protected]>
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>
Cc: <[email protected]>
Fixes: 338d4f4 ("arm64: kernel: Add support for Privileged Access Never")
Signed-off-by: Pavel Tatashin <[email protected]>
[will: rewrote commit message]
Signed-off-by: Will Deacon <[email protected]>
1 parent 65e1f38 commit 94bb804
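For reference, below is a minimal, self-contained sketch of a userspace reproducer along the lines Pavel describes. Only the mmap()/getpid()/sched_yield()/munmap() core comes from the commit message; the forking loop, the NPROC/NITER counts and the headers are illustrative assumptions rather than the original test harness, and software perf-events sampling with PERF_SAMPLE_CALLCHAIN must still be enabled separately (e.g. via 'perf record -g -a') so that the kernel walks and copies the user stack.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

#ifndef PAGE_SIZE
#define PAGE_SIZE	4096
#endif

#define NPROC	64		/* assumption: number of forked workers */
#define NITER	100000		/* assumption: iterations per worker */

static void worker(void)
{
	for (int i = 0; i < NITER; i++) {
		unsigned long *map = mmap(NULL, PAGE_SIZE,
					  PROT_READ | PROT_WRITE,
					  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
		if (map == MAP_FAILED)
			exit(1);

		map[0] = getpid();
		sched_yield();			/* encourage a reschedule */
		if (map[0] != (unsigned long)getpid())
			fprintf(stderr, "Corruption detected!\n");

		munmap(map, PAGE_SIZE);
	}
}

int main(void)
{
	/* Fork many competing processes, as in the original description. */
	for (int i = 0; i < NPROC; i++) {
		if (fork() == 0) {
			worker();
			return 0;
		}
	}

	while (wait(NULL) > 0)
		;
	return 0;
}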

4 files changed: 4 additions & 0 deletions

arch/arm64/lib/clear_user.S

Lines changed: 1 addition & 0 deletions
@@ -48,5 +48,6 @@ EXPORT_SYMBOL(__arch_clear_user)
 	.section .fixup,"ax"
 	.align	2
 9:	mov	x0, x2			// return the original size
+	uaccess_disable_not_uao x2, x3
 	ret
 	.previous

arch/arm64/lib/copy_from_user.S

Lines changed: 1 addition & 0 deletions
@@ -66,5 +66,6 @@ EXPORT_SYMBOL(__arch_copy_from_user)
 	.section .fixup,"ax"
 	.align	2
 9998:	sub	x0, end, dst			// bytes not copied
+	uaccess_disable_not_uao x3, x4
 	ret
 	.previous

arch/arm64/lib/copy_in_user.S

Lines changed: 1 addition & 0 deletions
@@ -68,5 +68,6 @@ EXPORT_SYMBOL(__arch_copy_in_user)
 	.section .fixup,"ax"
 	.align	2
 9998:	sub	x0, end, dst			// bytes not copied
+	uaccess_disable_not_uao x3, x4
 	ret
 	.previous

arch/arm64/lib/copy_to_user.S

Lines changed: 1 addition & 0 deletions
@@ -65,5 +65,6 @@ EXPORT_SYMBOL(__arch_copy_to_user)
 	.section .fixup,"ax"
 	.align	2
 9998:	sub	x0, end, dst			// bytes not copied
+	uaccess_disable_not_uao x3, x4
 	ret
 	.previous
