
Commit fc346d0

Charan Teja Kalla authored and akpm00 committed
mm: migrate high-order folios in swap cache correctly
Large folios occupy N consecutive entries in the swap cache instead of using multi-index entries like the page cache.  However, if a large folio is re-added to the LRU list, it can be migrated.  The migration code was not aware of the difference between the swap cache and the page cache and assumed that a single xas_store() would be sufficient.

This leaves potentially many stale pointers to the now-migrated folio in the swap cache, which can lead to almost arbitrary data corruption in the future.  This can also manifest as infinite loops with the RCU read lock held.

[[email protected]: modifications to the changelog & tweaked the fix]
Fixes: 3417013 ("mm/migrate: Add folio_migrate_mapping()")
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Charan Teja Kalla <[email protected]>
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reported-by: Charan Teja Kalla <[email protected]>
Closes: https://lkml.kernel.org/r/[email protected]
Cc: David Hildenbrand <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent 4249f13 commit fc346d0

File tree

1 file changed: +8, -1 lines changed


mm/migrate.c

Lines changed: 8 additions & 1 deletion
@@ -405,6 +405,7 @@ int folio_migrate_mapping(struct address_space *mapping,
 	int dirty;
 	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
 	long nr = folio_nr_pages(folio);
+	long entries, i;
 
 	if (!mapping) {
 		/* Anonymous page without mapping */
@@ -442,8 +443,10 @@ int folio_migrate_mapping(struct address_space *mapping,
 			folio_set_swapcache(newfolio);
 			newfolio->private = folio_get_private(folio);
 		}
+		entries = nr;
 	} else {
 		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
+		entries = 1;
 	}
 
 	/* Move dirty while page refs frozen and newpage not yet exposed */
@@ -453,7 +456,11 @@ int folio_migrate_mapping(struct address_space *mapping,
 		folio_set_dirty(newfolio);
 	}
 
-	xas_store(&xas, newfolio);
+	/* Swap cache still stores N entries instead of a high-order entry */
+	for (i = 0; i < entries; i++) {
+		xas_store(&xas, newfolio);
+		xas_next(&xas);
+	}
 
 	/*
 	 * Drop cache reference from old page by unfreezing

0 commit comments
