
Commit 55e00fb

yyu-intel-com authored and suryasaimadhu committed
x86/fpu/xstate: Restore supervisor states for signal return
The signal return fast path directly restores user states from the user
buffer. Once that succeeds, restore supervisor states (but only when they
are not yet restored). For the slow path, save supervisor states to
preserve them across context switches, and restore after the user states
are restored.

The previous version had the overhead of an XSAVES in both the fast and
the slow paths. It is addressed as follows:

- In the fast path, only do an XRSTORS.

- In the slow path, do a supervisor-state-only XSAVES, and relocate the
  buffer contents.

Some thoughts on the implementation:

- In the slow path, can any supervisor state become stale between
  save/restore?

  Answer: set_thread_flag(TIF_NEED_FPU_LOAD) protects the xstate buffer.

- In the slow path, can any code reference a stale supervisor state
  register between save/restore?

  Answer: In the current lazy-restore scheme, any reference to xstate
  registers needs fpregs_lock()/fpregs_unlock() and
  __fpregs_load_activate().

- Are there other options?

  One other option is eagerly restoring all supervisor states.

  Currently, CET user-mode states and ENQCMD's PASID do not need to be
  eagerly restored. The upcoming CET kernel-mode states (24 bytes) need
  to be eagerly restored. To me, eagerly restoring all supervisor states
  adds more overhead than benefit at this point.

Signed-off-by: Yu-cheng Yu <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Dave Hansen <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
1 parent 98265c1 commit 55e00fb

File tree

1 file changed: +39 -5 lines


arch/x86/kernel/fpu/signal.c

Lines changed: 39 additions & 5 deletions
@@ -347,6 +347,23 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 	ret = copy_user_to_fpregs_zeroing(buf_fx, user_xfeatures, fx_only);
 	pagefault_enable();
 	if (!ret) {
+
+		/*
+		 * Restore supervisor states: previous context switch
+		 * etc has done XSAVES and saved the supervisor states
+		 * in the kernel buffer from which they can be restored
+		 * now.
+		 *
+		 * We cannot do a single XRSTORS here - which would
+		 * be nice - because the rest of the FPU registers are
+		 * being restored from a user buffer directly. The
+		 * single XRSTORS happens below, when the user buffer
+		 * has been copied to the kernel one.
+		 */
+		if (test_thread_flag(TIF_NEED_FPU_LOAD) &&
+		    xfeatures_mask_supervisor())
+			copy_kernel_to_xregs(&fpu->state.xsave,
+					     xfeatures_mask_supervisor());
 		fpregs_mark_activate();
 		fpregs_unlock();
 		return 0;
@@ -364,14 +381,25 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 	}

 	/*
-	 * The current state of the FPU registers does not matter. By setting
-	 * TIF_NEED_FPU_LOAD unconditionally it is ensured that the our xstate
-	 * is not modified on context switch and that the xstate is considered
+	 * By setting TIF_NEED_FPU_LOAD it is ensured that our xstate is
+	 * not modified on context switch and that the xstate is considered
 	 * to be loaded again on return to userland (overriding last_cpu avoids
 	 * the optimisation).
 	 */
-	set_thread_flag(TIF_NEED_FPU_LOAD);
+	fpregs_lock();
+
+	if (!test_thread_flag(TIF_NEED_FPU_LOAD)) {
+
+		/*
+		 * Supervisor states are not modified by user space input. Save
+		 * current supervisor states first and invalidate the FPU regs.
+		 */
+		if (xfeatures_mask_supervisor())
+			copy_supervisor_to_kernel(&fpu->state.xsave);
+		set_thread_flag(TIF_NEED_FPU_LOAD);
+	}
 	__fpu_invalidate_fpregs_state(fpu);
+	fpregs_unlock();

 	if (use_xsave() && !fx_only) {
 		u64 init_bv = xfeatures_mask_user() & ~user_xfeatures;
@@ -393,7 +421,13 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 		fpregs_lock();
 		if (unlikely(init_bv))
 			copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
-		ret = copy_kernel_to_xregs_err(&fpu->state.xsave, user_xfeatures);
+
+		/*
+		 * Restore previously saved supervisor xstates along with
+		 * copied-in user xstates.
+		 */
+		ret = copy_kernel_to_xregs_err(&fpu->state.xsave,
+					       user_xfeatures | xfeatures_mask_supervisor());

 	} else if (use_fxsr()) {
 		ret = __copy_from_user(&fpu->state.fxsave, buf_fx, state_size);
