
Commit 5521de7

weiny2 authored and akpm00 committed
mm/userfaultfd: replace kmap/kmap_atomic() with kmap_local_page()
kmap() and kmap_atomic() are being deprecated in favor of kmap_local_page(), which is appropriate for any thread-local context.[1]

A recent locking bug report with userfaultfd showed that the conversion of the kmap_atomic()'s in those code flows requires care with regard to the prevention of deadlock.[2]

git archaeology implied that the recursion may not be an actual bug.[3] However, depending on the implementation of the mmap_lock and the condition of the call there may still be a deadlock.[4] So this is not purely a lockdep issue. Considering a single-threaded call stack there are 3 options:

1) Different mm's are in play (no issue)
2) Readlock implementation is recursive and same mm is in play (no issue)
3) Readlock implementation is _not_ recursive (issue)

The mmap_lock is recursive, so with a single thread there is no issue. However, Matthew pointed out a deadlock scenario when additional processes and threads are considered:

"The readlock implementation is only recursive if nobody else has taken a write lock. If you have a multithreaded process, one of the other threads can call mmap() and that will prevent recursion (due to fairness). Even if it's a different process that you're trying to acquire the mmap read lock on, you can still get into a deadly embrace. eg:

	process A thread 1 takes read lock on own mmap_lock
	process A thread 2 calls mmap, blocks taking write lock
	process B thread 1 takes page fault, read lock on own mmap lock
	process B thread 2 calls mmap, blocks taking write lock
	process A thread 1 blocks taking read lock on process B
	process B thread 1 blocks taking read lock on process A

Now all four threads are blocked waiting for each other."

Regardless, using pagefault_disable() ensures that no matter what locking implementation is used, a deadlock will not occur.

Complete the kmap conversion in userfaultfd by replacing the kmap() and kmap_atomic() calls with kmap_local_page(). When replacing the kmap_atomic() call, ensure page faults continue to be disabled to support the correct fallback behavior, and add a comment to inform future souls of the requirement.

[1] https://lore.kernel.org/all/[email protected]/
[2] https://lore.kernel.org/all/Y1Mh2S7fUGQ%2FiKFR@iweiny-desk3/
[3] https://lore.kernel.org/all/Y1MymJ%2FINb45AdaY@iweiny-desk3/
[4] https://lore.kernel.org/lkml/Y1bXBtGTCym77%[email protected]/

[[email protected]: v2]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ira Weiny <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: Axel Rasmussen <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent 78a498c commit 5521de7

File tree

1 file changed

+21
-4
lines changed


mm/userfaultfd.c

Lines changed: 21 additions & 4 deletions
@@ -157,11 +157,28 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 		if (!page)
 			goto out;
 
-		page_kaddr = kmap_atomic(page);
+		page_kaddr = kmap_local_page(page);
+		/*
+		 * The read mmap_lock is held here.  Despite the
+		 * mmap_lock being read recursive a deadlock is still
+		 * possible if a writer has taken a lock.  For example:
+		 *
+		 * process A thread 1 takes read lock on own mmap_lock
+		 * process A thread 2 calls mmap, blocks taking write lock
+		 * process B thread 1 takes page fault, read lock on own mmap lock
+		 * process B thread 2 calls mmap, blocks taking write lock
+		 * process A thread 1 blocks taking read lock on process B
+		 * process B thread 1 blocks taking read lock on process A
+		 *
+		 * Disable page faults to prevent potential deadlock
+		 * and retry the copy outside the mmap_lock.
+		 */
+		pagefault_disable();
 		ret = copy_from_user(page_kaddr,
 				     (const void __user *) src_addr,
 				     PAGE_SIZE);
-		kunmap_atomic(page_kaddr);
+		pagefault_enable();
+		kunmap_local(page_kaddr);
 
 		/* fallback to copy_from_user outside mmap_lock */
 		if (unlikely(ret)) {
@@ -646,11 +663,11 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 		mmap_read_unlock(dst_mm);
 		BUG_ON(!page);
 
-		page_kaddr = kmap(page);
+		page_kaddr = kmap_local_page(page);
 		err = copy_from_user(page_kaddr,
 				     (const void __user *) src_addr,
 				     PAGE_SIZE);
-		kunmap(page);
+		kunmap_local(page_kaddr);
 		if (unlikely(err)) {
 			err = -EFAULT;
 			goto out;
