
Commit 1ce6473

ioworker0 authored and akpm00 committed
mm/thp: fix MTE tag mismatch when replacing zero-filled subpages
When both THP and MTE are enabled, splitting a THP and replacing its zero-filled subpages with the shared zeropage can cause MTE tag mismatch faults in userspace.

Remapping zero-filled subpages to the shared zeropage is unsafe, as the zeropage has a fixed tag of zero, which may not match the tag expected by the userspace pointer.

KSM already avoids this problem by using memcmp_pages(), which on arm64 intentionally reports MTE-tagged pages as non-identical to prevent unsafe merging.

As suggested by David[1], this patch adopts the same pattern, replacing the memchr_inv() byte-level check with a call to pages_identical(). This leverages existing architecture-specific logic to determine if a page is truly identical to the shared zeropage.

Having both the THP shrinker and KSM rely on pages_identical() makes the design more future-proof, IMO. Instead of handling quirks in generic code, we just let the architecture decide what makes two pages identical.

[1] https://lore.kernel.org/all/[email protected]

Link: https://lkml.kernel.org/r/[email protected]
Fixes: b1f2020 ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
Signed-off-by: Lance Yang <[email protected]>
Reported-by: Qun-wei Lin <[email protected]>
Closes: https://lore.kernel.org/all/[email protected]
Suggested-by: David Hildenbrand <[email protected]>
Acked-by: Zi Yan <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Acked-by: Usama Arif <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
Cc: Alistair Popple <[email protected]>
Cc: andrew.yang <[email protected]>
Cc: Baolin Wang <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Byungchul Park <[email protected]>
Cc: Charlie Jenkins <[email protected]>
Cc: Chinwen Chang <[email protected]>
Cc: Dev Jain <[email protected]>
Cc: Domenico Cerasuolo <[email protected]>
Cc: Gregory Price <[email protected]>
Cc: "Huang, Ying" <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Joshua Hahn <[email protected]>
Cc: Kairui Song <[email protected]>
Cc: Kalesh Singh <[email protected]>
Cc: Liam Howlett <[email protected]>
Cc: Lorenzo Stoakes <[email protected]>
Cc: Mariano Pache <[email protected]>
Cc: Mathew Brost <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Rakie Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Roman Gushchin <[email protected]>
Cc: Ryan Roberts <[email protected]>
Cc: Samuel Holland <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Cc: Yu Zhao <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent fcc0669 commit 1ce6473

File tree

2 files changed (+4, -19 lines)


mm/huge_memory.c

Lines changed: 3 additions & 12 deletions
@@ -4104,32 +4104,23 @@ static unsigned long deferred_split_count(struct shrinker *shrink,
 static bool thp_underused(struct folio *folio)
 {
        int num_zero_pages = 0, num_filled_pages = 0;
-       void *kaddr;
        int i;
 
        if (khugepaged_max_ptes_none == HPAGE_PMD_NR - 1)
                return false;
 
        for (i = 0; i < folio_nr_pages(folio); i++) {
-               kaddr = kmap_local_folio(folio, i * PAGE_SIZE);
-               if (!memchr_inv(kaddr, 0, PAGE_SIZE)) {
-                       num_zero_pages++;
-                       if (num_zero_pages > khugepaged_max_ptes_none) {
-                               kunmap_local(kaddr);
+               if (pages_identical(folio_page(folio, i), ZERO_PAGE(0))) {
+                       if (++num_zero_pages > khugepaged_max_ptes_none)
                                return true;
-                       }
                } else {
                        /*
                         * Another path for early exit once the number
                         * of non-zero filled pages exceeds threshold.
                         */
-                       num_filled_pages++;
-                       if (num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none) {
-                               kunmap_local(kaddr);
+                       if (++num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none)
                                return false;
-                       }
                }
-               kunmap_local(kaddr);
        }
        return false;
 }

mm/migrate.c

Lines changed: 1 addition & 7 deletions
@@ -300,9 +300,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
                                          unsigned long idx)
 {
        struct page *page = folio_page(folio, idx);
-       bool contains_data;
        pte_t newpte;
-       void *addr;
 
        if (PageCompound(page))
                return false;
@@ -319,11 +317,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
         * this subpage has been non present. If the subpage is only zero-filled
         * then map it to the shared zeropage.
         */
-       addr = kmap_local_page(page);
-       contains_data = memchr_inv(addr, 0, PAGE_SIZE);
-       kunmap_local(addr);
-
-       if (contains_data)
+       if (!pages_identical(page, ZERO_PAGE(0)))
                return false;
 
        newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
