
Commit e13e792

jyescas authored and akpm00 committed
mm: add CONFIG_PAGE_BLOCK_ORDER to select page block order
Problem:

On large page size configurations (16KiB, 64KiB), the CMA alignment
requirement (CMA_MIN_ALIGNMENT_BYTES) increases considerably, and this
causes the CMA reservations to be larger than necessary. This means the
system will have fewer available MIGRATE_UNMOVABLE and
MIGRATE_RECLAIMABLE page blocks, since MIGRATE_CMA can't fall back to
them.

CMA_MIN_ALIGNMENT_BYTES increases because it depends on MAX_PAGE_ORDER,
which depends on ARCH_FORCE_MAX_ORDER. The value of ARCH_FORCE_MAX_ORDER
increases on 16KiB and 64KiB kernels.

For example, on ARM, the CMA alignment requirement when:
- CONFIG_ARCH_FORCE_MAX_ORDER default value is used
- CONFIG_TRANSPARENT_HUGEPAGE is set:

PAGE_SIZE | MAX_PAGE_ORDER | pageblock_order | CMA_MIN_ALIGNMENT_BYTES
-----------------------------------------------------------------------
   4KiB   |      10        |        9        |  4KiB * (2 ^  9) =   2MiB
  16KiB   |      11        |       11        | 16KiB * (2 ^ 11) =  32MiB
  64KiB   |      13        |       13        | 64KiB * (2 ^ 13) = 512MiB

There are some extreme cases for the CMA alignment requirement when:
- CONFIG_ARCH_FORCE_MAX_ORDER maximum value is set
- CONFIG_TRANSPARENT_HUGEPAGE is NOT set
- CONFIG_HUGETLB_PAGE is NOT set

PAGE_SIZE | MAX_PAGE_ORDER | pageblock_order | CMA_MIN_ALIGNMENT_BYTES
------------------------------------------------------------------------
   4KiB   |      15        |       15        |  4KiB * (2 ^ 15) = 128MiB
  16KiB   |      13        |       13        | 16KiB * (2 ^ 13) = 128MiB
  64KiB   |      13        |       13        | 64KiB * (2 ^ 13) = 512MiB

This affects the CMA reservations for drivers. If a driver on a 4KiB
kernel needs 4MiB of CMA memory, on a 16KiB kernel the minimal
reservation has to be 32MiB due to the alignment requirements:

	reserved-memory {
		...
		cma_test_reserve: cma_test_reserve {
			compatible = "shared-dma-pool";
			size = <0x0 0x400000>; /* 4 MiB */
			...
		};
	};

	reserved-memory {
		...
		cma_test_reserve: cma_test_reserve {
			compatible = "shared-dma-pool";
			size = <0x0 0x2000000>; /* 32 MiB */
			...
		};
	};

Solution:

Add a new config CONFIG_PAGE_BLOCK_ORDER that allows setting the page
block order on all architectures. The maximum page block order is given
by ARCH_FORCE_MAX_ORDER.

By default, CONFIG_PAGE_BLOCK_ORDER has the same value as
ARCH_FORCE_MAX_ORDER. This makes sure that current kernel configurations
won't be affected by this change. It is an opt-in change.

This patch allows large page size kernels (16KiB, 64KiB) to have the
same CMA alignment requirements as 4KiB kernels, by setting a lower
pageblock_order.

Tests:

- Verified that HugeTLB pages work when pageblock_order is 1, 7, 10 on
  4KiB and 16KiB kernels.
- Verified that Transparent Huge Pages work when pageblock_order is
  1, 7, 10 on 4KiB and 16KiB kernels.
- Verified that dma-buf heap allocations work when pageblock_order is
  1, 7, 10 on 4KiB and 16KiB kernels.

Benchmarks:

The benchmarks compare 16KiB kernels with pageblock_order 10 and 7. The
reason for pageblock_order 7 is that this value makes the minimum CMA
alignment requirement the same as on 4KiB kernels (2MiB).

- Perform 100K dma-buf heap (/dev/dma_heap/system) allocations of SZ_8M,
  SZ_4M, SZ_2M, SZ_1M, SZ_64, SZ_8, SZ_4. Use simpleperf
  (https://developer.android.com/ndk/guides/simpleperf) to measure the
  number of instructions and page faults on 16KiB kernels. The benchmark
  was executed 10 times.
The averages are below:

           # instructions          |     # page-faults
      order 10     |    order 7    | order 10 | order 7
  --------------------------------------------------------
  13,891,765,770 | 11,425,777,314 |   220    |   217
  14,456,293,487 | 12,660,819,302 |   224    |   219
  13,924,261,018 | 13,243,970,736 |   217    |   221
  13,910,886,504 | 13,845,519,630 |   217    |   221
  14,388,071,190 | 13,498,583,098 |   223    |   224
  13,656,442,167 | 12,915,831,681 |   216    |   218
  13,300,268,343 | 12,930,484,776 |   222    |   218
  13,625,470,223 | 14,234,092,777 |   219    |   218
  13,508,964,965 | 13,432,689,094 |   225    |   219
  13,368,950,667 | 13,683,587,37  |   219    |   225
  -------------------------------------------------------------------
  13,803,137,433 | 13,131,974,268 |   220    |   220    Averages

There were about 4.86% fewer instructions when the order was 7, in
comparison with order 10:

  13,131,974,268 - 13,803,137,433 = -671,163,165 (-4.86%)

The number of page faults for order 7 and order 10 was the same.

These results didn't show any significant regression when
pageblock_order is set to 7 on 16KiB kernels.

- Run Speedometer 3.1 (https://browserbench.org/Speedometer3.1/) 5 times
  on the 16KiB kernels with pageblock_order 7 and 10.

  order 10 | order 7 | order 7 - order 10 | (order 7 - order 10) %
  -------------------------------------------------------------------
    15.8   |  16.4   |        0.6         |        3.80%
    16.4   |  16.2   |       -0.2         |       -1.22%
    16.6   |  16.3   |       -0.3         |       -1.81%
    16.8   |  16.3   |       -0.5         |       -2.98%
    16.6   |  16.8   |        0.2         |        1.20%
  -------------------------------------------------------------------
    16.44  |  16.4   |       -0.04        |       -0.24%   Averages

The results didn't show any significant regression when pageblock_order
is set to 7 on 16KiB kernels.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Juan Yescas <[email protected]>
Acked-by: Zi Yan <[email protected]>
Reviewed-by: Vlastimil Babka <[email protected]>
Cc: Liam R. Howlett <[email protected]>
Cc: Lorenzo Stoakes <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Cc: Minchan Kim <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent 49c6950 commit e13e792

4 files changed: +55 -5 lines changed

include/linux/mmzone.h

Lines changed: 16 additions & 0 deletions

@@ -37,6 +37,22 @@
 #define NR_PAGE_ORDERS (MAX_PAGE_ORDER + 1)
 
+/* Defines the order for the number of pages that have a migrate type. */
+#ifndef CONFIG_PAGE_BLOCK_ORDER
+#define PAGE_BLOCK_ORDER MAX_PAGE_ORDER
+#else
+#define PAGE_BLOCK_ORDER CONFIG_PAGE_BLOCK_ORDER
+#endif /* CONFIG_PAGE_BLOCK_ORDER */
+
+/*
+ * The MAX_PAGE_ORDER, which defines the max order of pages to be allocated
+ * by the buddy allocator, has to be larger or equal to the PAGE_BLOCK_ORDER,
+ * which defines the order for the number of pages that can have a migrate type
+ */
+#if (PAGE_BLOCK_ORDER > MAX_PAGE_ORDER)
+#error MAX_PAGE_ORDER must be >= PAGE_BLOCK_ORDER
+#endif
+
 /*
  * PAGE_ALLOC_COSTLY_ORDER is the order at which allocations are deemed
  * costly to service. That is between allocation orders which should

include/linux/pageblock-flags.h

Lines changed: 4 additions & 4 deletions

@@ -41,18 +41,18 @@ extern unsigned int pageblock_order;
  * Huge pages are a constant size, but don't exceed the maximum allocation
  * granularity.
  */
-#define pageblock_order MIN_T(unsigned int, HUGETLB_PAGE_ORDER, MAX_PAGE_ORDER)
+#define pageblock_order MIN_T(unsigned int, HUGETLB_PAGE_ORDER, PAGE_BLOCK_ORDER)
 
 #endif /* CONFIG_HUGETLB_PAGE_SIZE_VARIABLE */
 
 #elif defined(CONFIG_TRANSPARENT_HUGEPAGE)
 
-#define pageblock_order MIN_T(unsigned int, HPAGE_PMD_ORDER, MAX_PAGE_ORDER)
+#define pageblock_order MIN_T(unsigned int, HPAGE_PMD_ORDER, PAGE_BLOCK_ORDER)
 
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-/* If huge pages are not used, group by MAX_ORDER_NR_PAGES */
-#define pageblock_order MAX_PAGE_ORDER
+/* If huge pages are not used, group by PAGE_BLOCK_ORDER */
+#define pageblock_order PAGE_BLOCK_ORDER
 
 #endif /* CONFIG_HUGETLB_PAGE */

mm/Kconfig

Lines changed: 34 additions & 0 deletions

@@ -993,6 +993,40 @@ config CMA_AREAS
 
 	  If unsure, leave the default value "8" in UMA and "20" in NUMA.
 
+#
+# Select this config option from the architecture Kconfig, if available, to set
+# the max page order for physically contiguous allocations.
+#
+config ARCH_FORCE_MAX_ORDER
+	int
+
+#
+# When ARCH_FORCE_MAX_ORDER is not defined,
+# the default page block order is MAX_PAGE_ORDER (10) as per
+# include/linux/mmzone.h.
+#
+config PAGE_BLOCK_ORDER
+	int "Page Block Order"
+	range 1 10 if ARCH_FORCE_MAX_ORDER = 0
+	default 10 if ARCH_FORCE_MAX_ORDER = 0
+	range 1 ARCH_FORCE_MAX_ORDER if ARCH_FORCE_MAX_ORDER != 0
+	default ARCH_FORCE_MAX_ORDER if ARCH_FORCE_MAX_ORDER != 0
+	help
+	  The page block order refers to the power of two number of pages that
+	  are physically contiguous and can have a migrate type associated to
+	  them. The maximum size of the page block order is limited by
+	  ARCH_FORCE_MAX_ORDER.
+
+	  This config allows overriding the default page block order when the
+	  page block order is required to be smaller than ARCH_FORCE_MAX_ORDER
+	  or MAX_PAGE_ORDER.
+
+	  Reducing pageblock order can negatively impact THP generation
+	  success rate. If your workloads uses THP heavily, please use this
+	  option with caution.
+
+	  Don't change if unsure.
+
 config MEM_SOFT_DIRTY
 	bool "Track memory changes"
 	depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS

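As a hypothetical usage example (the value and target are assumed, not
taken from this patch), a 16KiB-page kernel could lower the pageblock
order via a defconfig fragment, restoring the 2MiB CMA alignment
discussed in the commit message (16KiB * 2^7 = 2MiB); the range clause
above rejects values outside 1..ARCH_FORCE_MAX_ORDER:

```
# Hypothetical defconfig fragment for a 16KiB-page kernel;
# valid only if ARCH_FORCE_MAX_ORDER >= 7 on this architecture.
CONFIG_PAGE_BLOCK_ORDER=7
```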
mm/mm_init.c

Lines changed: 1 addition & 1 deletion

@@ -1509,7 +1509,7 @@ static inline void setup_usemap(struct zone *zone) {}
 /* Initialise the number of pages represented by NR_PAGEBLOCK_BITS */
 void __init set_pageblock_order(void)
 {
-	unsigned int order = MAX_PAGE_ORDER;
+	unsigned int order = PAGE_BLOCK_ORDER;
 
 	/* Check that pageblock_nr_pages has not already been setup */
 	if (pageblock_order)
