
Commit 1f789a4

Gavin Shan authored and akpm00 committed
mm/readahead: limit page cache size in page_cache_ra_order()
In page_cache_ra_order(), the maximal order of the page cache to be allocated shouldn't be larger than MAX_PAGECACHE_ORDER. Otherwise, it's possible the large page cache can't be supported by xarray when the corresponding xarray entry is split.

For example, HPAGE_PMD_ORDER is 13 on ARM64 when the base page size is 64KB. The PMD-sized page cache can't be supported by xarray.

Link: https://lkml.kernel.org/r/[email protected]
Fixes: 793917d ("mm/readahead: Add large folio readahead")
Signed-off-by: Gavin Shan <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Cc: Darrick J. Wong <[email protected]>
Cc: Don Dutile <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Ryan Roberts <[email protected]>
Cc: William Kucharski <[email protected]>
Cc: Zhenyu Zhang <[email protected]>
Cc: <[email protected]> [5.18+]
Signed-off-by: Andrew Morton <[email protected]>
1 parent 099d906 commit 1f789a4

File tree

1 file changed: +4 -4 lines changed


mm/readahead.c

Lines changed: 4 additions & 4 deletions
@@ -503,11 +503,11 @@ void page_cache_ra_order(struct readahead_control *ractl,
 
 	limit = min(limit, index + ra->size - 1);
 
-	if (new_order < MAX_PAGECACHE_ORDER) {
+	if (new_order < MAX_PAGECACHE_ORDER)
 		new_order += 2;
-		new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
-		new_order = min_t(unsigned int, new_order, ilog2(ra->size));
-	}
+
+	new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
+	new_order = min_t(unsigned int, new_order, ilog2(ra->size));
 
 	/* See comment in page_cache_ra_unbounded() */
 	nofs = memalloc_nofs_save();
