@@ -197,7 +197,7 @@ unevictable list for the memory cgroup and node being scanned.
There may be situations where a page is mapped into a VM_LOCKED VMA, but the
page is not marked as PG_mlocked. Such pages will make it all the way to
shrink_active_list() or shrink_page_list() where they will be detected when
-vmscan walks the reverse map in page_referenced() or try_to_unmap(). The page
+vmscan walks the reverse map in folio_referenced() or try_to_unmap(). The page
is culled to the unevictable list when it is released by the shrinker.
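
For context, the check these rmap walkers perform looks roughly like the
sketch below. This is a simplified illustration, not part of this patch:
the real folio_referenced_one() in mm/rmap.c carries more state and
locking, so treat the surrounding names and details as assumptions::

	/*
	 * Illustrative fragment of an rmap-walk callback ("pra" stands in
	 * for the walk's result argument).  The folio turns out to be
	 * mapped into a VM_LOCKED VMA, so it was missed by mlock earlier:
	 * report VM_LOCKED back to vmscan and stop the walk.  The folio
	 * cannot be reclaimed and is culled to the unevictable list when
	 * the shrinker releases it.
	 */
	if (vma->vm_flags & VM_LOCKED) {
		pra->vm_flags |= VM_LOCKED;	/* tell vmscan why the walk stopped */
		return false;			/* stop the rmap walk */
	}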
To "cull" an unevictable page, vmscan simply puts the page back on the LRU list
@@ -267,7 +267,7 @@ the LRU. Such pages can be "noticed" by memory management in several places:
(4) in the fault path and when a VM_LOCKED stack segment is expanded; or
(5) as mentioned above, in vmscan:shrink_page_list() when attempting to
-    reclaim a page in a VM_LOCKED VMA by page_referenced() or try_to_unmap().
+    reclaim a page in a VM_LOCKED VMA by folio_referenced() or try_to_unmap().
mlocked pages become unlocked and rescued from the unevictable list when:
@@ -547,7 +547,7 @@ vmscan's shrink_inactive_list() and shrink_page_list() also divert obviously
unevictable pages found on the inactive lists to the appropriate memory cgroup
and node unevictable list.
-rmap's page_referenced_one(), called via vmscan's shrink_active_list() or
+rmap's folio_referenced_one(), called via vmscan's shrink_active_list() or
shrink_page_list(), and rmap's try_to_unmap_one() called via shrink_page_list(),
check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_page()
to correct them. Such pages are culled to the unevictable list when released
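
The correction applied at that point might look like the following
simplified sketch. Again, this is illustrative rather than part of this
patch: the real try_to_unmap_one() in mm/rmap.c wraps it in pte locking
and THP handling, so treat the details as assumptions::

	/*
	 * Illustrative fragment of the unmap path: the page is found
	 * mapped into a VM_LOCKED VMA, meaning its mlock was missed
	 * earlier.  Restore it with mlock_vma_page() and fail the unmap;
	 * the shrinker then culls the page to the unevictable list when
	 * it is released.
	 */
	if (vma->vm_flags & VM_LOCKED) {
		mlock_vma_page(page);		/* restore the missed mlock */
		page_vma_mapped_walk_done(&pvmw);	/* end the page_vma_mapped walk */
		return false;			/* page is not reclaimable */
	}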