Commit 00fa15e

apopple-nvidia authored and Matthew Wilcox (Oracle) committed
filemap: Fix serialization adding transparent huge pages to page cache
Commit 793917d ("mm/readahead: Add large folio readahead") introduced support for using large folios for file-backed pages if the filesystem supports it. page_cache_ra_order() was introduced to allocate and add these large folios to the page cache. However, adding pages to the page cache should be serialized against truncation and hole punching by taking invalidate_lock. Not doing so can lead to data races resulting in stale data getting added to the page cache and marked up-to-date. See commit 730633f ("mm: Protect operations adding pages to page cache with invalidate_lock") for more details.

This issue was found by inspection, but a testcase revealed it was possible to observe in practice on XFS. Fix this by taking invalidate_lock in page_cache_ra_order(), to mirror what is done for the non-THP case in page_cache_ra_unbounded().

Signed-off-by: Alistair Popple <[email protected]>
Fixes: 793917d ("mm/readahead: Add large folio readahead")
Reviewed-by: Jan Kara <[email protected]>
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
1 parent b653db7 commit 00fa15e

File tree

1 file changed: +2 −0 lines changed

mm/readahead.c

Lines changed: 2 additions & 0 deletions

@@ -510,6 +510,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		new_order--;
 	}
 
+	filemap_invalidate_lock_shared(mapping);
 	while (index <= limit) {
 		unsigned int order = new_order;
 
@@ -536,6 +537,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	}
 
 	read_pages(ractl);
+	filemap_invalidate_unlock_shared(mapping);
 
 	/*
 	 * If there were already pages in the page cache, then we may have
