Commit 7c7b962

apopple-nvidia authored and akpm00 committed
mm: take a page reference when removing device exclusive entries
Device exclusive page table entries are used to prevent CPU access to a page whilst it is being accessed from a device. Typically this is used to implement atomic operations when the underlying bus does not support atomic access. When a CPU thread encounters a device exclusive entry it locks the page and restores the original entry after calling mmu notifiers to signal drivers that exclusive access is no longer available.

The device exclusive entry holds a reference to the page, making it safe to access the struct page whilst the entry is present. However, the fault handling code does not hold the PTL when taking the page lock. This means that if there are multiple threads faulting concurrently on the device exclusive entry, one will remove the entry whilst others will wait on the page lock without holding a reference.

This can lead to threads locking or waiting on a folio with a zero refcount. Whilst mmap_lock prevents the pages getting freed via munmap(), they may still be freed by a migration. This leads to warnings such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked when the refcount drops to zero.

Fix this by trying to take a reference on the folio before locking it. The code already checks the PTE under the PTL and aborts if the entry is no longer there. It is also possible the folio has been unmapped, freed and re-allocated, allowing a reference to be taken on an unrelated folio. This case is also detected by the PTE check and the folio is unlocked without further changes.

Link: https://lkml.kernel.org/r/[email protected]
Fixes: b756a3b ("mm: device exclusive memory access")
Signed-off-by: Alistair Popple <[email protected]>
Reviewed-by: Ralph Campbell <[email protected]>
Reviewed-by: John Hubbard <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent f349b15 commit 7c7b962

File tree: 1 file changed (+15, -1 lines)


mm/memory.c

Lines changed: 15 additions & 1 deletion
@@ -3563,8 +3563,21 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
+	/*
+	 * We need a reference to lock the folio because we don't hold
+	 * the PTL so a racing thread can remove the device-exclusive
+	 * entry and unmap it. If the folio is free the entry must
+	 * have been removed already. If it happens to have already
+	 * been re-allocated after being freed all we do is lock and
+	 * unlock it.
+	 */
+	if (!folio_try_get(folio))
+		return 0;
+
+	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+		folio_put(folio);
 		return VM_FAULT_RETRY;
+	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
 				(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3577,6 +3590,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	folio_unlock(folio);
+	folio_put(folio);
 
 	mmu_notifier_invalidate_range_end(&range);
 	return 0;
