Commit 723a80d

Hugh Dickins authored and torvalds committed
khugepaged: collapse_pte_mapped_thp() flush the right range
pmdp_collapse_flush() should be given the start address at which the huge page is mapped, haddr: it was given addr, which at that point has been used as a local variable, incremented to the end address of the extent.

Found by source inspection while chasing a hugepage locking bug, which I then could not explain by this. At first I thought this was very bad; then saw that all of the page translations that were not flushed would actually still point to the right pages afterwards, so harmless; then realized that I know nothing of how different architectures and models cache intermediate paging structures, so maybe it matters after all - particularly since the page table concerned is immediately freed. Much easier to fix than to think about.

Fixes: 27e1f82 ("khugepaged: enable collapse pmd for pte-mapped THP")
Signed-off-by: Hugh Dickins <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Song Liu <[email protected]>
Cc: <[email protected]> [5.4+]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 75802ca commit 723a80d

File tree

1 file changed (+1, -1)


mm/khugepaged.c

Lines changed: 1 addition & 1 deletion
@@ -1502,7 +1502,7 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
 
 	/* step 4: collapse pmd */
 	ptl = pmd_lock(vma->vm_mm, pmd);
-	_pmd = pmdp_collapse_flush(vma, addr, pmd);
+	_pmd = pmdp_collapse_flush(vma, haddr, pmd);
 	spin_unlock(ptl);
 	mm_dec_nr_ptes(mm);
 	pte_free(mm, pmd_pgtable(_pmd));
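
For context, here is a minimal sketch of why addr no longer equals haddr by step 4. This is standalone userspace C, not the kernel function; HPAGE_PMD_NR, the mask constants and the loop shape are assumptions made to illustrate the commit message.

/*
 * Sketch (assumptions, not kernel source): a per-PTE loop that starts
 * from the huge-page-aligned address haddr reuses addr as its cursor,
 * leaving it at the end of the extent, so the later flush must be
 * given haddr, not addr.
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define HPAGE_PMD_NR	512				/* assumed: 512 * 4KB = 2MB */
#define HPAGE_PMD_SIZE	(PAGE_SIZE * HPAGE_PMD_NR)
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))

int main(void)
{
	unsigned long addr = 0x201000UL;		/* arbitrary address inside the extent */
	unsigned long haddr = addr & HPAGE_PMD_MASK;	/* 2MB-aligned start of the huge page */
	int i;

	/* analogue of the per-PTE walk: addr is reused as the loop cursor */
	for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE)
		;					/* per-page work elided */

	/* by "step 4", addr points one huge page past haddr */
	printf("haddr = %#lx\n", haddr);
	printf("addr  = %#lx (haddr + %#lx)\n", addr, addr - haddr);
	return 0;
}

Running it prints addr as haddr + 0x200000, i.e. the end of the extent rather than the start address that pmdp_collapse_flush() needs.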
