
Commit 2fba133

Ryan Roberts authored and Will Deacon committed
mm/vmalloc: Gracefully unmap huge ptes
Commit f7ee1f1 ("mm/vmalloc: enable mapping of huge pages at pte level in vmap") added support for huge pte mappings by reusing the set_huge_pte_at() API, which is otherwise only used for user mappings. But when unmapping those huge ptes, it continued to call ptep_get_and_clear(), which is a layering violation. To date, the only arch to implement this support is powerpc, and it all happens to work ok for it. But arm64's implementation of ptep_get_and_clear() cannot safely be used to clear a previous set_huge_pte_at(). So let's introduce a new arch opt-in function, arch_vmap_pte_range_unmap_size(), which can provide the size of a (present) pte. Then we can call huge_ptep_get_and_clear() to tear it down properly.

Note that if vunmap_range() is called with a range that starts in the middle of a huge pte-mapped page, we must unmap the entire huge page so that the behaviour is consistent with pmd and pud block mappings. In this case we emit a warning, just as we do for pmd/pud mappings.

Reviewed-by: Anshuman Khandual <[email protected]>
Reviewed-by: Uladzislau Rezki (Sony) <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: Ryan Roberts <[email protected]>
Tested-by: Luiz Capitulino <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
1 parent 61ef8dd commit 2fba133

File tree

2 files changed (+24, -2 lines)


include/linux/vmalloc.h

Lines changed: 8 additions & 0 deletions

@@ -113,6 +113,14 @@ static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr, uns
 }
 #endif
 
+#ifndef arch_vmap_pte_range_unmap_size
+static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
+							   pte_t *ptep)
+{
+	return PAGE_SIZE;
+}
+#endif
+
 #ifndef arch_vmap_pte_supported_shift
 static inline int arch_vmap_pte_supported_shift(unsigned long size)
 {

mm/vmalloc.c

Lines changed: 16 additions & 2 deletions

@@ -350,12 +350,26 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			     pgtbl_mod_mask *mask)
 {
 	pte_t *pte;
+	pte_t ptent;
+	unsigned long size = PAGE_SIZE;
 
 	pte = pte_offset_kernel(pmd, addr);
 	do {
-		pte_t ptent = ptep_get_and_clear(&init_mm, addr, pte);
+#ifdef CONFIG_HUGETLB_PAGE
+		size = arch_vmap_pte_range_unmap_size(addr, pte);
+		if (size != PAGE_SIZE) {
+			if (WARN_ON(!IS_ALIGNED(addr, size))) {
+				addr = ALIGN_DOWN(addr, size);
+				pte = PTR_ALIGN_DOWN(pte, sizeof(*pte) * (size >> PAGE_SHIFT));
+			}
+			ptent = huge_ptep_get_and_clear(&init_mm, addr, pte, size);
+			if (WARN_ON(end - addr < size))
+				size = end - addr;
+		} else
+#endif
+			ptent = ptep_get_and_clear(&init_mm, addr, pte);
 		WARN_ON(!pte_none(ptent) && !pte_present(ptent));
-	} while (pte++, addr += PAGE_SIZE, addr != end);
+	} while (pte += (size >> PAGE_SHIFT), addr += size, addr != end);
 	*mask |= PGTBL_PTE_MODIFIED;
 }
361375
