
Commit f9bf352

userfaultfd: simplify fault handling
Instead of waiting in a loop for the userfaultfd condition to become
true, just wait once and return VM_FAULT_RETRY.

We've already dropped the mmap lock, we know we can't really
successfully handle the fault at this point and the caller will have to
retry anyway. So there's no point in making the wait any more
complicated than it needs to be - just schedule away.

And once you don't have that complexity with explicit looping, you can
also just lose all the 'userfaultfd_signal_pending()' complexity,
because once we've set the correct process sleeping state, and don't
loop, the act of scheduling itself will be checking if there are any
pending signals before going to sleep.

We can also drop the VM_FAULT_MAJOR games, since we'll be treating all
retried faults as major soon anyway (series to regularize and share
more of fault handling across architectures in a separate series by
Peter Xu, and in the meantime we won't worry about the possible minor -
I'll be here all week, try the veal - accounting difference).

Cc: Andrea Arcangeli <[email protected]>
Cc: Peter Xu <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 3208167 commit f9bf352
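
The signal handling the message leans on is the scheduler's own: once a
task has set an interruptible or killable sleeping state, the scheduler
core checks signal_pending_state() and keeps a signal-pending task
runnable instead of descheduling it. A minimal kernel-style sketch of
that pattern, not taken from this patch (cond is a hypothetical wake
condition):

        set_current_state(TASK_INTERRUPTIBLE);  /* or TASK_KILLABLE */
        if (!READ_ONCE(cond))
                /*
                 * schedule() will not actually sleep here if a signal
                 * is already pending for this task state; it returns
                 * and the caller re-evaluates, so no explicit
                 * signal_pending() test is needed before sleeping.
                 */
                schedule();
        __set_current_state(TASK_RUNNING);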


fs/userfaultfd.c

Lines changed: 1 addition & 38 deletions
@@ -339,7 +339,6 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
         return ret;
 }
 
-/* Should pair with userfaultfd_signal_pending() */
 static inline long userfaultfd_get_blocking_state(unsigned int flags)
 {
         if (flags & FAULT_FLAG_INTERRUPTIBLE)
@@ -351,18 +350,6 @@ static inline long userfaultfd_get_blocking_state(unsigned int flags)
         return TASK_UNINTERRUPTIBLE;
 }
 
-/* Should pair with userfaultfd_get_blocking_state() */
-static inline bool userfaultfd_signal_pending(unsigned int flags)
-{
-        if (flags & FAULT_FLAG_INTERRUPTIBLE)
-                return signal_pending(current);
-
-        if (flags & FAULT_FLAG_KILLABLE)
-                return fatal_signal_pending(current);
-
-        return false;
-}
-
 /*
  * The locking rules involved in returning VM_FAULT_RETRY depending on
  * FAULT_FLAG_ALLOW_RETRY, FAULT_FLAG_RETRY_NOWAIT and
@@ -516,33 +503,9 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
                        vmf->flags, reason);
         mmap_read_unlock(mm);
 
-        if (likely(must_wait && !READ_ONCE(ctx->released) &&
-                   !userfaultfd_signal_pending(vmf->flags))) {
+        if (likely(must_wait && !READ_ONCE(ctx->released))) {
                 wake_up_poll(&ctx->fd_wqh, EPOLLIN);
                 schedule();
-                ret |= VM_FAULT_MAJOR;
-
-                /*
-                 * False wakeups can orginate even from rwsem before
-                 * up_read() however userfaults will wait either for a
-                 * targeted wakeup on the specific uwq waitqueue from
-                 * wake_userfault() or for signals or for uffd
-                 * release.
-                 */
-                while (!READ_ONCE(uwq.waken)) {
-                        /*
-                         * This needs the full smp_store_mb()
-                         * guarantee as the state write must be
-                         * visible to other CPUs before reading
-                         * uwq.waken from other CPUs.
-                         */
-                        set_current_state(blocking_state);
-                        if (READ_ONCE(uwq.waken) ||
-                            READ_ONCE(ctx->released) ||
-                            userfaultfd_signal_pending(vmf->flags))
-                                break;
-                        schedule();
-                }
         }
 
         __set_current_state(TASK_RUNNING);
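
Putting the kept lines of the final hunk back together, the wait path
in handle_userfault() now reads as below (assembled from the context
and '+' lines above; the blocking state is still set earlier in the
function via set_current_state(), which this diff does not show):

        mmap_read_unlock(mm);

        if (likely(must_wait && !READ_ONCE(ctx->released))) {
                wake_up_poll(&ctx->fd_wqh, EPOLLIN);
                /* wait once; the caller retries the fault via VM_FAULT_RETRY */
                schedule();
        }

        __set_current_state(TASK_RUNNING);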
