
Commit 67e4eb0

Yang Shi authored and torvalds committed
mm: thp: don't need to drain lru cache when splitting and mlocking THP
Since commit 8f18227 ("mm/swap.c: flush lru pvecs on compound page arrival"), a THP no longer stays in a pagevec. The optimization made by commit d965432 ("thp: increase split_huge_page() success rate"), which tried to unpin munlocked THPs by draining the pagevec, therefore no longer makes sense.

Draining the lru cache before isolating a THP in the mlock path is likewise unnecessary. Commit b676b29 ("mm, thp: fix mapped pages avoiding unevictable list on mlock") added it, and commit 9a73f61 ("thp, mlock: do not mlock PTE-mapped file huge pages") accidentally carried it over after the above optimization went in.

Signed-off-by: Yang Shi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Reviewed-by: Daniel Jordan <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
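For context, the reason a THP can no longer linger in a per-CPU pagevec is that the lru-add path flushes the pagevec as soon as a compound page arrives. Below is a minimal sketch of that behaviour, paraphrasing mm/swap.c as it looked around commit 8f18227; the per-CPU variable name and locking helpers have changed in later kernels, so treat the wrapper details as illustrative rather than a verbatim quote:

/*
 * Illustrative sketch of the lru-add fast path after commit 8f18227
 * ("mm/swap.c: flush lru pvecs on compound page arrival"): a compound
 * page such as a THP is pushed out to the LRU immediately instead of
 * sitting in the per-CPU pagevec while holding an extra page reference,
 * which is what the drains removed by this commit used to flush away.
 */
static void __lru_cache_add(struct page *page)
{
	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

	get_page(page);
	if (!pagevec_add(pvec, page) || PageCompound(page))
		__pagevec_lru_add(pvec);	/* flush now rather than later */
	put_cpu_var(lru_add_pvec);
}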
1 parent 8859025 commit 67e4eb0

1 file changed: 0 additions, 7 deletions

mm/huge_memory.c

Lines changed: 0 additions & 7 deletions
@@ -1378,7 +1378,6 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 		goto skip_mlock;
 	if (!trylock_page(page))
 		goto skip_mlock;
-	lru_add_drain();
 	if (page->mapping && !PageDoubleMap(page))
 		mlock_vma_page(page);
 	unlock_page(page);
@@ -2582,7 +2581,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int count, mapcount, extra_pins, ret;
-	bool mlocked;
 	unsigned long flags;
 	pgoff_t end;
 
@@ -2641,14 +2639,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		goto out_unlock;
 	}
 
-	mlocked = PageMlocked(head);
 	unmap_page(head);
 	VM_BUG_ON_PAGE(compound_mapcount(head), head);
 
-	/* Make sure the page is not on per-CPU pagevec as it takes pin */
-	if (mlocked)
-		lru_add_drain();
-
 	/* prevent PageLRU to go away from under us, and freeze lru stats */
 	spin_lock_irqsave(&pgdata->lru_lock, flags);
 
