Commit 8d3c106

thejhakpm00 authored and committed
mm/khugepaged: take the right locks for page table retraction
Pagetable walks on address ranges mapped by VMAs can be done under the mmap lock, the lock of an anon_vma attached to the VMA, or the lock of the VMA's address_space. Only one of these needs to be held, and it does not need to be held in exclusive mode.

Under those circumstances, the rules for concurrent access to page table entries are:

- Terminal page table entries (entries that don't point to another page table) can be arbitrarily changed under the page table lock, with the exception that they always need to be consistent for hardware page table walks and lockless_pages_from_mm(). This includes that they can be changed into non-terminal entries.

- Non-terminal page table entries (which point to another page table) cannot be modified; readers are allowed to READ_ONCE() an entry, verify that it is non-terminal, and then assume that its value will stay as-is.

Retracting a page table involves modifying a non-terminal entry, so page-table-level locks are insufficient to protect against concurrent page table traversal; it requires taking, in exclusive mode, all the higher-level locks under which it is possible to start a page walk in the relevant range.

The collapse_huge_page() path for anonymous THP already follows this rule, but the shmem/file THP path was getting it wrong, making it possible for concurrent rmap-based operations to cause corruption.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 27e1f82 ("khugepaged: enable collapse pmd for pte-mapped THP")
Signed-off-by: Jann Horn <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
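The reader-side rule for non-terminal entries can be made concrete with a minimal sketch. This is illustrative code written for this note, not part of the commit: walk_one_pmd() is an invented name, while READ_ONCE(), pmd_none(), pmd_trans_huge() and pte_offset_map() are existing kernel primitives, and the snapshot-then-map pattern mirrors what GUP-fast does.

#include <linux/mm.h>

/*
 * Hypothetical reader following the rule above. The caller must hold at
 * least one of: the mmap lock, the VMA's anon_vma lock, or the i_mmap
 * lock of the VMA's address_space -- any of them, in any mode.
 */
static pte_t *walk_one_pmd(pmd_t *pmdp, unsigned long addr)
{
	/* Snapshot the potentially non-terminal entry exactly once. */
	pmd_t pmd = READ_ONCE(*pmdp);

	/*
	 * Empty or terminal (huge) entries can still change under the
	 * page table lock, so do not dereference them here.
	 */
	if (pmd_none(pmd) || pmd_trans_huge(pmd))
		return NULL;

	/*
	 * The entry is non-terminal, so under the rules above it cannot
	 * be modified while our high-level lock is held: the page table
	 * it points to is stable and can be mapped from the snapshot.
	 */
	return pte_offset_map(&pmd, addr);
}

A writer that wants to change such a non-terminal entry, as page table retraction does, therefore has to hold every lock in that list in exclusive mode, which is what the patch below enforces for the shmem/file path.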
1 parent 829ae0f commit 8d3c106

1 file changed: +51 −4 lines changed

mm/khugepaged.c

Lines changed: 51 additions & 4 deletions
@@ -1379,16 +1379,37 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
 	return SCAN_SUCCEED;
 }
 
+/*
+ * A note about locking:
+ * Trying to take the page table spinlocks would be useless here because those
+ * are only used to synchronize:
+ *
+ * - modifying terminal entries (ones that point to a data page, not to another
+ *   page table)
+ * - installing *new* non-terminal entries
+ *
+ * Instead, we need roughly the same kind of protection as free_pgtables() or
+ * mm_take_all_locks() (but only for a single VMA):
+ * The mmap lock together with this VMA's rmap locks covers all paths towards
+ * the page table entries we're messing with here, except for hardware page
+ * table walks and lockless_pages_from_mm().
+ */
 static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 				  unsigned long addr, pmd_t *pmdp)
 {
-	spinlock_t *ptl;
 	pmd_t pmd;
 
 	mmap_assert_write_locked(mm);
-	ptl = pmd_lock(vma->vm_mm, pmdp);
+	if (vma->vm_file)
+		lockdep_assert_held_write(&vma->vm_file->f_mapping->i_mmap_rwsem);
+	/*
+	 * All anon_vmas attached to the VMA have the same root and are
+	 * therefore locked by the same lock.
+	 */
+	if (vma->anon_vma)
+		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
+
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
-	spin_unlock(ptl);
 	mm_dec_nr_ptes(mm);
 	page_table_check_pte_clear_range(mm, addr, pmd);
 	pte_free(mm, pmd_pgtable(pmd));
@@ -1439,6 +1460,14 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false))
 		return SCAN_VMA_CHECK;
 
+	/*
+	 * Symmetry with retract_page_tables(): Exclude MAP_PRIVATE mappings
+	 * that got written to. Without this, we'd have to also lock the
+	 * anon_vma if one exists.
+	 */
+	if (vma->anon_vma)
+		return SCAN_VMA_CHECK;
+
 	/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
 	if (userfaultfd_wp(vma))
 		return SCAN_PTE_UFFD_WP;
@@ -1472,6 +1501,20 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		goto drop_hpage;
 	}
 
+	/*
+	 * We need to lock the mapping so that from here on, only GUP-fast and
+	 * hardware page walks can access the parts of the page tables that
+	 * we're operating on.
+	 * See collapse_and_free_pmd().
+	 */
+	i_mmap_lock_write(vma->vm_file->f_mapping);
+
+	/*
+	 * This spinlock should be unnecessary: Nobody else should be accessing
+	 * the page tables under spinlock protection here, only
+	 * lockless_pages_from_mm() and the hardware page walker can access page
+	 * tables while all the high-level locks are held in write mode.
+	 */
 	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
 	result = SCAN_FAIL;
 
@@ -1526,6 +1569,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	/* step 4: remove pte entries */
 	collapse_and_free_pmd(mm, vma, haddr, pmd);
 
+	i_mmap_unlock_write(vma->vm_file->f_mapping);
+
 maybe_install_pmd:
 	/* step 5: install pmd entry */
 	result = install_pmd
@@ -1539,6 +1584,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 
 abort:
 	pte_unmap_unlock(start_pte, ptl);
+	i_mmap_unlock_write(vma->vm_file->f_mapping);
 	goto drop_hpage;
 }
 
@@ -1595,7 +1641,8 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
 		 * An alternative would be drop the check, but check that page
 		 * table is clear before calling pmdp_collapse_flush() under
 		 * ptl. It has higher chance to recover THP for the VMA, but
-		 * has higher cost too.
+		 * has higher cost too. It would also probably require locking
+		 * the anon_vma.
 		 */
 		if (vma->anon_vma) {
 			result = SCAN_PAGE_ANON;
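Condensing the diff above, the lock ordering that collapse_pte_mapped_thp() now follows for file-backed VMAs looks roughly like the sketch below. This is a simplified illustration, not a literal excerpt: retract_file_pmd() is an invented wrapper, it calls the collapse_and_free_pmd() helper as modified by this patch, and the hpage and uffd-wp handling from the real function is omitted.

#include <linux/mm.h>
#include <linux/fs.h>

/*
 * Illustrative wrapper (not from the patch) showing the locking this
 * commit enforces: the mmap lock is already held for writing, and the
 * file's rmap lock is taken for writing around the actual retraction,
 * so no rmap walk can start a page table walk in this range.
 */
static void retract_file_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
			     unsigned long haddr, pmd_t *pmd)
{
	mmap_assert_write_locked(mm);

	/* Exclude rmap-based walks of the file mapping. */
	i_mmap_lock_write(vma->vm_file->f_mapping);

	/*
	 * Only GUP-fast and hardware walkers can still see the old page
	 * table; pmdp_collapse_flush() inside collapse_and_free_pmd()
	 * deals with those before the table is freed.
	 */
	collapse_and_free_pmd(mm, vma, haddr, pmd);

	i_mmap_unlock_write(vma->vm_file->f_mapping);
}

The anon_vma case never reaches this path: collapse_pte_mapped_thp() now bails out with SCAN_VMA_CHECK when vma->anon_vma is set, mirroring the existing check in retract_page_tables().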
