Commit 91309a7

x86: use cmov for user address masking
This was a suggestion by David Laight, and while I was slightly worried that some micro-architecture would predict cmov like a conditional branch, there is little reason to actually believe any core would be that broken.

Intel documents that their existing cores treat CMOVcc as a data dependency that will constrain speculation in their "Speculative Execution Side Channel Mitigations" whitepaper:

  "Other instructions such as CMOVcc, AND, ADC, SBB and SETcc can also be used to prevent bounds check bypass by constraining speculative execution on current family 6 processors (Intel® Core™, Intel® Atom™, Intel® Xeon® and Intel® Xeon Phi™ processors)"

and while that leaves the future uarch issues open, that's certainly true of our traditional SBB usage too.

Any core that predicts CMOV will be unusable for various crypto algorithms that need data-independent timing stability, so let's just treat CMOV as the safe choice that simplifies the address masking by avoiding an extra instruction and doesn't need a temporary register.

Suggested-by: David Laight <[email protected]>
Link: https://www.intel.com/content/dam/develop/external/us/en/documents/336996-speculative-execution-side-channel-mitigations.pdf
Signed-off-by: Linus Torvalds <[email protected]>
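The effect of the two masking schemes can be sketched in plain C (a sketch, not kernel code: `USER_PTR_MAX` below is a hypothetical stand-in for the kernel's runtime-patched constant, and the ternaries model what the asm sequences compute):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the runtime-patched USER_PTR_MAX constant:
 * the highest user-space address, itself guaranteed to fault on access. */
#define USER_PTR_MAX 0x00007ffffffff000ULL

/* Old scheme ("cmp; sbb; or"): build an all-ones mask when the pointer
 * exceeds the limit, then OR it in, turning a bad pointer into an
 * all-ones non-canonical address. Needs a temporary register. */
static uint64_t mask_user_address_sbb(uint64_t ptr)
{
	uint64_t mask = (ptr > USER_PTR_MAX) ? ~0ULL : 0;	/* cmp + sbb */
	return mask | ptr;					/* or */
}

/* New scheme ("cmp; cmova"): clamp an out-of-range pointer to
 * USER_PTR_MAX itself, which likewise faults. One instruction
 * shorter, and no temporary register needed. */
static uint64_t mask_user_address_cmov(uint64_t ptr)
{
	return (ptr > USER_PTR_MAX) ? USER_PTR_MAX : ptr;	/* cmp + cmova */
}
```

Either way, a valid user pointer passes through unchanged, while an out-of-range pointer is rewritten to an address that is guaranteed to fault, without a conditional branch that speculation could bypass.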
1 parent 027ea4f commit 91309a7

File tree

2 files changed: +8 −9 lines changed

arch/x86/include/asm/uaccess_64.h

Lines changed: 6 additions & 6 deletions
@@ -63,13 +63,13 @@ static inline unsigned long __untagged_addr_remote(struct mm_struct *mm,
  */
 static inline void __user *mask_user_address(const void __user *ptr)
 {
-	unsigned long mask;
+	void __user *ret;
 	asm("cmp %1,%0\n\t"
-	    "sbb %0,%0"
-		:"=r" (mask)
-		:"r" (ptr),
-		 "0" (runtime_const_ptr(USER_PTR_MAX)));
-	return (__force void __user *)(mask | (__force unsigned long)ptr);
+	    "cmova %1,%0"
+		:"=r" (ret)
+		:"r" (runtime_const_ptr(USER_PTR_MAX)),
+		 "0" (ptr));
+	return ret;
 }
 #define masked_user_access_begin(x) ({ \
 	__auto_type __masked_ptr = (x); \

arch/x86/lib/getuser.S

Lines changed: 2 additions & 3 deletions
@@ -44,9 +44,8 @@
 	.pushsection runtime_ptr_USER_PTR_MAX,"a"
 	.long 1b - 8 - .
 	.popsection
-	cmp %rax, %rdx
-	sbb %rdx, %rdx
-	or %rdx, %rax
+	cmp %rdx, %rax
+	cmova %rdx, %rax
 .else
 	cmp $TASK_SIZE_MAX-\size+1, %eax
 	jae .Lbad_get_user
