@@ -197,7 +197,7 @@ unevictable list for the memory cgroup and node being scanned.
 There may be situations where a page is mapped into a VM_LOCKED VMA, but the
 page is not marked as PG_mlocked. Such pages will make it all the way to
 shrink_active_list() or shrink_page_list() where they will be detected when
-vmscan walks the reverse map in page_referenced() or try_to_unmap(). The page
+vmscan walks the reverse map in folio_referenced() or try_to_unmap(). The page
 is culled to the unevictable list when it is released by the shrinker.

 To "cull" an unevictable page, vmscan simply puts the page back on the LRU list
@@ -267,7 +267,7 @@ the LRU. Such pages can be "noticed" by memory management in several places:
  (4) in the fault path and when a VM_LOCKED stack segment is expanded; or

  (5) as mentioned above, in vmscan:shrink_page_list() when attempting to
-     reclaim a page in a VM_LOCKED VMA by page_referenced() or try_to_unmap().
+     reclaim a page in a VM_LOCKED VMA by folio_referenced() or try_to_unmap().

 mlocked pages become unlocked and rescued from the unevictable list when:

@@ -547,7 +547,7 @@ vmscan's shrink_inactive_list() and shrink_page_list() also divert obviously
 unevictable pages found on the inactive lists to the appropriate memory cgroup
 and node unevictable list.

-rmap's page_referenced_one(), called via vmscan's shrink_active_list() or
+rmap's folio_referenced_one(), called via vmscan's shrink_active_list() or
 shrink_page_list(), and rmap's try_to_unmap_one() called via shrink_page_list(),
 check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_page()
 to correct them. Such pages are culled to the unevictable list when released