
Commit 3376136

youquan-song authored and suryasaimadhu committed
x86/mce: Reduce number of machine checks taken during recovery
When any of the copy functions in arch/x86/lib/copy_user_64.S take a fault, the fixup code copies the remaining byte count from %ecx to %edx and unconditionally jumps to .Lcopy_user_handle_tail to continue the copy in case any more bytes can be copied.

If the fault was #PF, this may copy more bytes (because the page fault handler might have fixed the fault). But when the fault is a machine check, the original copy code will have copied all the way to the poisoned cache line, so .Lcopy_user_handle_tail will just take another machine check for no good reason.

Every code path to .Lcopy_user_handle_tail comes from an exception fixup path, so add a check there for the trap type (in %eax) and simply return the count of remaining bytes if the trap was a machine check.

Doing this reduces the number of machine checks taken during synthetic tests from four to three.

As well as reducing the number of machine checks, this also allows Skylake generation Xeons to recover some cases that currently fail. This is because REP; MOVSB is only recoverable when source and destination are well aligned and the byte count is large. That useless call to .Lcopy_user_handle_tail may violate one or more of these conditions and generate a fatal machine check.

[ Tony: Add more details to commit message. ]

[ bp: Fixup comment. Also, another tip patchset which is adding straight-line speculation mitigation changes the "ret" instruction to an all-caps macro "RET". But, since gas is case-insensitive, use "RET" in the newly added asm block already in order to simplify tip branch merging on its way upstream. ]

Signed-off-by: Youquan Song <[email protected]>
Signed-off-by: Tony Luck <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
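The decision the patch adds can be modeled in plain C: if the trap that triggered the fixup was a machine check, report the remaining byte count without touching the (poisoned) source again; for any other trap, retry the tail copy. This is a minimal sketch, not the kernel code itself — the function name, the `memcpy` standing in for `rep movsb`, and the assumption that the retried copy succeeds are all illustrative.

```c
#include <stddef.h>
#include <string.h>

#define X86_TRAP_PF 14  /* page-fault vector */
#define X86_TRAP_MC 18  /* machine-check vector */

/*
 * Hypothetical model of the fixed .Lcopy_user_handle_tail logic.
 * 'trap' models the trap number that ex_handler_copy() leaves in %eax.
 * Returns the number of uncopied bytes, 0 on full success.
 */
static size_t copy_user_handle_tail(int trap, unsigned char *dst,
                                    const unsigned char *src,
                                    size_t remaining)
{
    if (trap == X86_TRAP_MC)
        return remaining;        /* don't re-read poison: no second #MC */

    memcpy(dst, src, remaining); /* stands in for "rep movsb" retry */
    return 0;                    /* all remaining bytes copied */
}
```

For a #PF the retry can make progress because the page fault handler may have fixed the fault; for a #MC the source line is still poisoned, so the early return is the only safe outcome.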
1 parent de76841 commit 3376136

File tree

1 file changed (+9, -0)


arch/x86/lib/copy_user_64.S

Lines changed: 9 additions & 0 deletions
@@ -225,6 +225,7 @@ EXPORT_SYMBOL(copy_user_enhanced_fast_string)
  * Don't try to copy the tail if machine check happened
  *
  * Input:
+ * eax trap number written by ex_handler_copy()
  * rdi destination
  * rsi source
  * rdx count
@@ -233,12 +234,20 @@ EXPORT_SYMBOL(copy_user_enhanced_fast_string)
  * eax uncopied bytes or 0 if successful.
  */
 SYM_CODE_START_LOCAL(.Lcopy_user_handle_tail)
+	cmp $X86_TRAP_MC,%eax
+	je 3f
 	movl %edx,%ecx
 1:	rep movsb
 2:	mov %ecx,%eax
 	ASM_CLAC
 	ret

+3:
+	movl %edx,%eax
+	ASM_CLAC
+	RET
+
 	_ASM_EXTABLE_CPY(1b, 2b)
 SYM_CODE_END(.Lcopy_user_handle_tail)