
Commit a220d6b

jankara authored and akpm00 committed
Revert "readahead: properly shorten readahead when falling back to do_page_cache_ra()"
This reverts commit 7c87758.

Anders and Philippe have reported that recent kernels occasionally hang in the readahead code when used with NFS. The problem has been bisected to 7c87758 ("readahead: properly shorten readahead when falling back to do_page_cache_ra()"). The cause of the problem is that ra->size can be shrunk by the read_pages() call, after which we end up calling do_page_cache_ra() with a negative (read: huge positive) number of pages. Let's revert 7c87758 for now, until we find a proper way for the logic in read_pages() and page_cache_ra_order() to coexist. This can lead to reduced readahead throughput due to readahead window confusion, but that is better than outright hangs.

Link: https://lkml.kernel.org/r/[email protected]
Fixes: 7c87758 ("readahead: properly shorten readahead when falling back to do_page_cache_ra()")
Reported-by: Anders Blomdell <[email protected]>
Reported-by: Philippe Troin <[email protected]>
Signed-off-by: Jan Kara <[email protected]>
Tested-by: Philippe Troin <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
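
To illustrate the failure mode described above, here is a minimal userspace sketch (not kernel code) of the unsigned underflow. The values of ra_size, start and index are made up for demonstration; the variables merely mirror the kernel's ra->size, readahead_index(ractl) and the position reached inside page_cache_ra_order(). Because do_page_cache_ra() takes its page count as an unsigned long, a "negative" result wraps around to a huge positive number.

#include <stdio.h>

int main(void)
{
	/* Hypothetical values, chosen only to demonstrate the wrap-around. */
	unsigned long ra_size = 4;   /* ra->size after read_pages() shrank it */
	unsigned long start   = 100; /* readahead_index(ractl) at entry */
	unsigned long index   = 116; /* how far the readahead loop advanced */

	/* The expression removed by this revert: */
	unsigned long nr_to_read = ra_size - (index - start);

	/* On a 64-bit machine this prints 18446744073709551604, i.e. a
	 * "negative" count read as a huge positive number of pages. */
	printf("nr_to_read = %lu\n", nr_to_read);
	return 0;
}
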
1 parent 4a475c0 commit a220d6b

1 file changed: +2 -3 lines changed

mm/readahead.c

Lines changed: 2 additions & 3 deletions
@@ -458,8 +458,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		struct file_ra_state *ra, unsigned int new_order)
 {
 	struct address_space *mapping = ractl->mapping;
-	pgoff_t start = readahead_index(ractl);
-	pgoff_t index = start;
+	pgoff_t index = readahead_index(ractl);
 	unsigned int min_order = mapping_min_folio_order(mapping);
 	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
 	pgoff_t mark = index + ra->size - ra->async_size;
@@ -522,7 +521,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	if (!err)
 		return;
 fallback:
-	do_page_cache_ra(ractl, ra->size - (index - start), ra->async_size);
+	do_page_cache_ra(ractl, ra->size, ra->async_size);
 }

 static unsigned long ractl_max_pages(struct readahead_control *ractl,
