Commit b6fd410

Matthew Wilcox (Oracle) authored and akpm00 committed
memory-failure: use a folio in me_huge_page()
This function was already explicitly calling compound_head(); unfortunately the compiler can't know that and elide the redundant calls to compound_head() buried in page_mapping(), unlock_page(), etc. Switch to using a folio, which does let us elide these calls.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent f709239 commit b6fd410

File tree

1 file changed: +6 −6 lines changed

mm/memory-failure.c

Lines changed: 6 additions & 6 deletions
@@ -1182,25 +1182,25 @@ static int me_swapcache_clean(struct page_state *ps, struct page *p)
  */
 static int me_huge_page(struct page_state *ps, struct page *p)
 {
+	struct folio *folio = page_folio(p);
 	int res;
-	struct page *hpage = compound_head(p);
 	struct address_space *mapping;
 	bool extra_pins = false;
 
-	mapping = page_mapping(hpage);
+	mapping = folio_mapping(folio);
 	if (mapping) {
-		res = truncate_error_page(hpage, page_to_pfn(p), mapping);
+		res = truncate_error_page(&folio->page, page_to_pfn(p), mapping);
 		/* The page is kept in page cache. */
 		extra_pins = true;
-		unlock_page(hpage);
+		folio_unlock(folio);
 	} else {
-		unlock_page(hpage);
+		folio_unlock(folio);
 		/*
 		 * migration entry prevents later access on error hugepage,
 		 * so we can free and dissolve it into buddy to save healthy
 		 * subpages.
 		 */
-		put_page(hpage);
+		folio_put(folio);
 		if (__page_handle_poison(p) >= 0) {
 			page_ref_inc(p);
 			res = MF_RECOVERED;
