
Commit abb7962

Authored by Anshuman Khandual, committed by Catalin Marinas
arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
Currently the 'hugetlb_cma=' command line argument does not create a CMA area on ARM64_16K_PAGES and ARM64_64K_PAGES based platforms. Instead, it just ends up with the following warning message, because hugetlb_cma_reserve() never gets called for these huge page sizes:

[   64.255669] hugetlb_cma: the option isn't supported by current arch

This enables CMA area reservation on ARM64_16K_PAGES and ARM64_64K_PAGES configs by defining a unified arm64_hugetlb_cma_reserve() that is wrapped in CONFIG_CMA. The call site for arm64_hugetlb_cma_reserve() is also protected, as <asm/hugetlb.h> is conditionally included and hence cannot contain a stub for the inverse config, i.e. !(CONFIG_HUGETLB_PAGE && CONFIG_CMA).

Signed-off-by: Anshuman Khandual <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
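For context, the reservation described above is requested via the kernel command line. A minimal illustration (the 2G size is arbitrary; 'hugetlb_cma=' accepts any nn[KMG] value per the kernel-parameters documentation):

```shell
# Kernel command line fragment requesting a 2G hugetlb CMA area
# (size is illustrative):
hugetlb_cma=2G

# Before this patch, a 16K- or 64K-page arm64 kernel ignored the request
# and logged:
#   hugetlb_cma: the option isn't supported by current arch
```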
1 parent 0de674a commit abb7962

3 files changed: +42 −2 lines changed

arch/arm64/include/asm/hugetlb.h

Lines changed: 2 additions & 0 deletions

```diff
@@ -49,6 +49,8 @@ extern void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
 			    pte_t *ptep, pte_t pte, unsigned long sz);
 #define set_huge_swap_pte_at set_huge_swap_pte_at
 
+void __init arm64_hugetlb_cma_reserve(void);
+
 #include <asm-generic/hugetlb.h>
 
 #endif /* __ASM_HUGETLB_H */
```

arch/arm64/mm/hugetlbpage.c

Lines changed: 38 additions & 0 deletions

```diff
@@ -19,6 +19,44 @@
 #include <asm/tlbflush.h>
 #include <asm/pgalloc.h>
 
+/*
+ * HugeTLB Support Matrix
+ *
+ * ---------------------------------------------------
+ * | Page Size | CONT PTE |  PMD  | CONT PMD |  PUD  |
+ * ---------------------------------------------------
+ * |     4K    |    64K   |   2M  |    32M   |   1G  |
+ * |    16K    |     2M   |  32M  |     1G   |       |
+ * |    64K    |     2M   | 512M  |    16G   |       |
+ * ---------------------------------------------------
+ */
+
+/*
+ * Reserve CMA areas for the largest supported gigantic
+ * huge page when requested. Any other smaller gigantic
+ * huge pages could still be served from those areas.
+ */
+#ifdef CONFIG_CMA
+void __init arm64_hugetlb_cma_reserve(void)
+{
+	int order;
+
+#ifdef CONFIG_ARM64_4K_PAGES
+	order = PUD_SHIFT - PAGE_SHIFT;
+#else
+	order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT;
+#endif
+	/*
+	 * HugeTLB CMA reservation is required for gigantic
+	 * huge pages which could not be allocated via the
+	 * page allocator. Just warn if there is any change
+	 * breaking this assumption.
+	 */
+	WARN_ON(order <= MAX_ORDER);
+	hugetlb_cma_reserve(order);
+}
+#endif /* CONFIG_CMA */
+
 #ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
 bool arch_hugetlb_migration_supported(struct hstate *h)
 {
```
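The order arithmetic in arm64_hugetlb_cma_reserve() can be sanity-checked outside the kernel. The sketch below mirrors arm64's shift values from pgtable-hwdef.h as they stood at the time of this commit (note CONT_PMD_SHIFT here is the number of contiguous-PMD bits, not an absolute shift); it reproduces the gigantic page sizes from the support matrix above:

```python
# Sketch (not kernel code): verify the gigantic-page order computed by
# arm64_hugetlb_cma_reserve() for each arm64 page-size configuration.
# Shift values mirror arm64's pgtable-hwdef.h circa this commit.

CONFIGS = {
    # granule: (PAGE_SHIFT, PMD_SHIFT, CONT_PMD_SHIFT, PUD_SHIFT)
    "4K":  (12, 21, 4, 30),
    "16K": (14, 25, 5, None),  # no PUD-level huge pages on 16K
    "64K": (16, 29, 5, None),  # no PUD-level huge pages on 64K
}

def gigantic_order(granule):
    """Order of the largest gigantic page, as in arm64_hugetlb_cma_reserve()."""
    page_shift, pmd_shift, cont_pmd_shift, pud_shift = CONFIGS[granule]
    if pud_shift is not None:                     # CONFIG_ARM64_4K_PAGES
        return pud_shift - page_shift
    return cont_pmd_shift + pmd_shift - page_shift  # 16K/64K: CONT PMD

def gigantic_size(granule):
    """Byte size of the largest gigantic page for this granule."""
    page_shift = CONFIGS[granule][0]
    return 1 << (gigantic_order(granule) + page_shift)

for g in CONFIGS:
    print(f"{g}: order={gigantic_order(g)}, size={gigantic_size(g) >> 30} GiB")
# -> 4K: order=18, 1 GiB; 16K: order=16, 1 GiB; 64K: order=18, 16 GiB
```

This matches the matrix: 1G (PUD) on 4K pages, 1G (CONT PMD) on 16K pages, and 16G (CONT PMD) on 64K pages, which is why the CMA area is sized for the largest gigantic page per config.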

arch/arm64/mm/init.c

Lines changed: 2 additions & 2 deletions

```diff
@@ -425,8 +425,8 @@ void __init bootmem_init(void)
 	 * initialize node_online_map that gets used in hugetlb_cma_reserve()
 	 * while allocating required CMA size across online nodes.
 	 */
-#ifdef CONFIG_ARM64_4K_PAGES
-	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
+#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
+	arm64_hugetlb_cma_reserve();
 #endif
 
 	/*
```
