
Commit 90253ac

x-y-z authored and gregkh committed
mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order
commit fa5a061 upstream.

folio split clears PG_has_hwpoisoned, but the flag should be preserved in after-split folios that contain pages with the PG_hwpoison flag when the folio is split to >0 order folios. Scan all pages in a to-be-split folio to determine which after-split folios need the flag.

An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned, which avoids the scan and sets it on all after-split folios, but the resulting false positives have an undesirable negative impact: to rule them out, every caller of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() would need to do the scan itself. That would be a hassle for current and future callers and more costly than doing the scan in the split code. More details are discussed in [1].

This issue can be exposed via:
1. splitting a has_hwpoisoned folio to >0 order from the debugfs interface;
2. truncating part of a has_hwpoisoned folio in truncate_inode_partial_folio().

Later accesses to a hwpoisoned page would then be possible because of the missing has_hwpoisoned folio flag, leading to MCE errors.

Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: c010d47 ("mm: thp: split huge page to any lower order pages")
Signed-off-by: Zi Yan <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Reviewed-by: Lorenzo Stoakes <[email protected]>
Reviewed-by: Lance Yang <[email protected]>
Reviewed-by: Miaohe Lin <[email protected]>
Reviewed-by: Baolin Wang <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
Cc: Pankaj Raghav <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Dev Jain <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Liam Howlett <[email protected]>
Cc: Luis Chamberlain <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Nico Pache <[email protected]>
Cc: Ryan Roberts <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
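The scan-and-preserve logic described above is compact enough to model outside the kernel. The sketch below is a minimal userspace illustration, not kernel code: it treats a folio as an array of pages, splits it into chunks of 1 << new_order pages, and sets a has_hwpoisoned-style flag only on chunks that actually contain a poisoned page, mirroring what page_range_has_hwpoisoned() does in the patch. The names model_page and range_has_hwpoisoned and the chosen orders are hypothetical, used purely for illustration.

/*
 * Userspace model of the fix, not kernel code: a "folio" is an array of
 * pages, splitting to new_order produces chunks of (1 << new_order) pages,
 * and a chunk keeps the has_hwpoisoned flag only if one of its pages is
 * actually poisoned.  All names and orders here are made-up examples.
 */
#include <stdbool.h>
#include <stdio.h>

struct model_page { bool hwpoison; };

/* Mirrors page_range_has_hwpoisoned() from the patch. */
static bool range_has_hwpoisoned(const struct model_page *page, long nr_pages)
{
	for (; nr_pages; page++, nr_pages--)
		if (page->hwpoison)
			return true;
	return false;
}

int main(void)
{
	enum { ORDER = 4, NEW_ORDER = 2 };        /* 16 pages -> 4-page chunks */
	struct model_page folio[1 << ORDER] = { 0 };
	long new_nr = 1 << NEW_ORDER;

	folio[6].hwpoison = true;                 /* one poisoned page */

	/* Scan each after-split chunk, as the patched split path does. */
	for (long i = 0; i < (1 << ORDER); i += new_nr) {
		bool flag = range_has_hwpoisoned(&folio[i], new_nr);
		printf("after-split folio at page %2ld: has_hwpoisoned=%d\n",
		       i, flag);
	}
	return 0;
}

With one poisoned page at offset 6, only the chunk covering pages 4..7 reports has_hwpoisoned=1, matching the behaviour the commit message describes: the flag is preserved exactly where a poisoned page ends up, rather than cleared everywhere or set everywhere.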
1 parent 6393d21 commit 90253ac


mm/huge_memory.c

Lines changed: 23 additions & 2 deletions
@@ -3091,9 +3091,17 @@ static void lru_add_page_tail(struct folio *folio, struct page *tail,
 	}
 }
 
+static bool page_range_has_hwpoisoned(struct page *page, long nr_pages)
+{
+	for (; nr_pages; page++, nr_pages--)
+		if (PageHWPoison(page))
+			return true;
+	return false;
+}
+
 static void __split_huge_page_tail(struct folio *folio, int tail,
 		struct lruvec *lruvec, struct list_head *list,
-		unsigned int new_order)
+		unsigned int new_order, const bool handle_hwpoison)
 {
 	struct page *head = &folio->page;
 	struct page *page_tail = head + tail;
@@ -3170,6 +3178,11 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
 		folio_set_large_rmappable(new_folio);
 	}
 
+	/* Set has_hwpoisoned flag on new_folio if any of its pages is HWPoison */
+	if (handle_hwpoison &&
+	    page_range_has_hwpoisoned(page_tail, 1 << new_order))
+		folio_set_has_hwpoisoned(new_folio);
+
 	/* Finally unfreeze refcount. Additional reference from page cache. */
 	page_ref_unfreeze(page_tail,
 		1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ?
@@ -3194,6 +3207,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		pgoff_t end, unsigned int new_order)
 {
 	struct folio *folio = page_folio(page);
+	/* Scan poisoned pages when split a poisoned folio to large folios */
+	const bool handle_hwpoison = folio_test_has_hwpoisoned(folio) && new_order;
 	struct page *head = &folio->page;
 	struct lruvec *lruvec;
 	struct address_space *swap_cache = NULL;
@@ -3217,8 +3232,14 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 
 	ClearPageHasHWPoisoned(head);
 
+	/* Check first new_nr pages since the loop below skips them */
+	if (handle_hwpoison &&
+	    page_range_has_hwpoisoned(folio_page(folio, 0), new_nr))
+		folio_set_has_hwpoisoned(folio);
+
 	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
-		__split_huge_page_tail(folio, i, lruvec, list, new_order);
+		__split_huge_page_tail(folio, i, lruvec, list, new_order,
+				       handle_hwpoison);
 		/* Some pages can be beyond EOF: drop them from page cache */
 		if (head[i].index >= end) {
 			struct folio *tail = page_folio(head + i);
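One detail worth calling out from the last hunk in __split_huge_page(): the tail loop walks i from nr - new_nr down to new_nr, so the chunk that keeps pages 0..new_nr-1 (the remaining head folio) never passes through __split_huge_page_tail(). That is why the patch adds a separate page_range_has_hwpoisoned(folio_page(folio, 0), new_nr) check before the loop, right after ClearPageHasHWPoisoned(head). The short userspace sketch below just enumerates the loop to make the skipped range visible; the orders are made-up example values, not taken from the commit.

/*
 * Illustrative only: enumerate the tail offsets visited by the split loop
 * "for (i = nr - new_nr; i >= new_nr; i -= new_nr)" to show that the head
 * chunk (pages 0..new_nr-1) is never visited, which is why the patch scans
 * the first new_nr pages separately.  Orders are example values.
 */
#include <stdio.h>

int main(void)
{
	unsigned long order = 4, new_order = 2;   /* 16 pages split to 4-page folios */
	unsigned long nr = 1UL << order, new_nr = 1UL << new_order;

	for (long i = nr - new_nr; i >= (long)new_nr; i -= new_nr)
		printf("__split_huge_page_tail() handles pages %ld..%ld\n",
		       i, i + (long)new_nr - 1);
	printf("pages 0..%lu stay in the original folio and need the extra scan\n",
	       new_nr - 1);
	return 0;
}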
