
Commit e3253ba

hygoni authored and gregkh committed
mm: introduce and use {pgd,p4d}_populate_kernel()
commit f2d2f95 upstream.

Introduce and use {pgd,p4d}_populate_kernel() in core MM code when populating PGD and P4D entries for the kernel address space. These helpers ensure proper synchronization of page tables when updating the kernel portion of top-level page tables.

Until now, the kernel has relied on each architecture to handle synchronization of top-level page tables in an ad-hoc manner. For example, see commit 9b86152 ("x86-64, mem: Update all PGDs for direct mapping and vmemmap mapping changes").

However, this approach has proven fragile for the following reasons:

1) It is easy to forget to perform the necessary page table synchronization when introducing new changes. For instance, commit 4917f55 ("mm/sparse-vmemmap: improve memory savings for compound devmaps") overlooked the need to synchronize page tables for the vmemmap area.

2) It is also easy to overlook that the vmemmap and direct mapping areas must not be accessed before explicit page table synchronization. For example, commit 8d40091 ("x86/vmemmap: handle unpopulated sub-pmd ranges") caused crashes by accessing the vmemmap area before calling sync_global_pgds().

To address this, as suggested by Dave Hansen, introduce _kernel() variants of the page table population helpers, which invoke architecture-specific hooks to properly synchronize page tables. These are introduced in a new header file, include/linux/pgalloc.h, so they can be called from common code. They reuse existing infrastructure for vmalloc and ioremap. Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK, and the actual synchronization is performed by arch_sync_kernel_mappings().

This change currently targets only x86_64, so only PGD and P4D level helpers are introduced. Currently, these helpers are no-ops since no architecture sets PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.

In theory, PUD and PMD level helpers can be added later if needed by other architectures. For now, 32-bit architectures (x86-32 and arm) only handle PGTBL_PMD_MODIFIED, so p*d_populate_kernel() will never affect them unless we introduce a PMD level helper.
[[email protected]: fix KASAN build error due to p*d_populate_kernel()]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 8d40091 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
Signed-off-by: Harry Yoo <[email protected]>
Suggested-by: Dave Hansen <[email protected]>
Acked-by: Kiryl Shutsemau <[email protected]>
Reviewed-by: Mike Rapoport (Microsoft) <[email protected]>
Reviewed-by: Lorenzo Stoakes <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Alistair Popple <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: "Aneesh Kumar K.V" <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: bibo mao <[email protected]>
Cc: Borislav Betkov <[email protected]>
Cc: Christoph Lameter (Ampere) <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Dev Jain <[email protected]>
Cc: Dmitriy Vyukov <[email protected]>
Cc: Gwan-gyeong Mun <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Liam Howlett <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Qi Zheng <[email protected]>
Cc: Ryan Roberts <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Thomas Gleinxer <[email protected]>
Cc: Thomas Huth <[email protected]>
Cc: "Uladzislau Rezki (Sony)" <[email protected]>
Cc: Vincenzo Frascino <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
[ Adjust context ]
Signed-off-by: Harry Yoo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
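For orientation, an architecture opts in to this synchronization by setting PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK and providing arch_sync_kernel_mappings(). A rough sketch of what such an opt-in could look like on x86_64 follows; it is not part of this commit, and the use of pgtable_l5_enabled() and sync_global_pgds() here is an assumption based on the existing x86 helpers:

/*
 * Hypothetical sketch, not part of this commit: an architecture-side opt-in.
 * With 5-level paging the PGD is the top level; with 4-level paging the p4d
 * is folded into the pgd, so P4D updates are the ones that need syncing.
 */
#define ARCH_PAGE_TABLE_SYNC_MASK \
	(pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)

void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
{
	/* Propagate the top-level update to all kernel page tables. */
	sync_global_pgds(start, end);
}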
1 parent 5682aad commit e3253ba

File tree

5 files changed (+48, -18 lines)


include/linux/pgalloc.h

Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_PGALLOC_H
+#define _LINUX_PGALLOC_H
+
+#include <linux/pgtable.h>
+#include <asm/pgalloc.h>
+
+/*
+ * {pgd,p4d}_populate_kernel() are defined as macros to allow
+ * compile-time optimization based on the configured page table levels.
+ * Without this, linking may fail because callers (e.g., KASAN) may rely
+ * on calls to these functions being optimized away when passing symbols
+ * that exist only for certain page table levels.
+ */
+#define pgd_populate_kernel(addr, pgd, p4d)				\
+	do {								\
+		pgd_populate(&init_mm, pgd, p4d);			\
+		if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)	\
+			arch_sync_kernel_mappings(addr, addr);		\
+	} while (0)
+
+#define p4d_populate_kernel(addr, p4d, pud)				\
+	do {								\
+		p4d_populate(&init_mm, p4d, pud);			\
+		if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)	\
+			arch_sync_kernel_mappings(addr, addr);		\
+	} while (0)
+
+#endif /* _LINUX_PGALLOC_H */
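As a usage note, callers now pass the virtual address being mapped so the helper can trigger synchronization when needed. A minimal, hypothetical sketch of a caller in core MM code follows; populate_kernel_entry() and the memblock_alloc() sizing are illustrative, not from this commit:

/* Hypothetical caller sketch, mirroring the conversions below. */
static void __init populate_kernel_entry(unsigned long addr)
{
	pgd_t *pgd = pgd_offset_k(addr);	/* kernel (init_mm) PGD slot */

	if (pgd_none(*pgd)) {
		p4d_t *p4d = memblock_alloc(PAGE_SIZE, PAGE_SIZE);

		/*
		 * Installs the entry and, if the architecture set
		 * PGTBL_PGD_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK,
		 * calls arch_sync_kernel_mappings(addr, addr).
		 */
		if (p4d)
			pgd_populate_kernel(addr, pgd, p4d);
	}
}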

include/linux/pgtable.h

Lines changed: 7 additions & 6 deletions
@@ -1699,8 +1699,8 @@ static inline int pmd_protnone(pmd_t pmd)
 
 /*
  * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
- * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
- * needs to be called.
+ * and let generic vmalloc, ioremap and page table update code know when
+ * arch_sync_kernel_mappings() needs to be called.
  */
 #ifndef ARCH_PAGE_TABLE_SYNC_MASK
 #define ARCH_PAGE_TABLE_SYNC_MASK 0
@@ -1833,10 +1833,11 @@ static inline bool arch_has_pfn_modify_check(void)
 /*
  * Page Table Modification bits for pgtbl_mod_mask.
  *
- * These are used by the p?d_alloc_track*() set of functions an in the generic
- * vmalloc/ioremap code to track at which page-table levels entries have been
- * modified. Based on that the code can better decide when vmalloc and ioremap
- * mapping changes need to be synchronized to other page-tables in the system.
+ * These are used by the p?d_alloc_track*() and p*d_populate_kernel()
+ * functions in the generic vmalloc, ioremap and page table update code
+ * to track at which page-table levels entries have been modified.
+ * Based on that the code can better decide when page table changes need
+ * to be synchronized to other page-tables in the system.
  */
 #define __PGTBL_PGD_MODIFIED	0
 #define __PGTBL_P4D_MODIFIED	1
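For context, these PGTBL_*_MODIFIED bits are consumed the same way the existing vmalloc/ioremap code consumes pgtbl_mod_mask. A simplified sketch, not part of this diff (map_kernel_range_sketch() is a made-up name):

/* Simplified, hypothetical sketch of how pgtbl_mod_mask is consumed. */
static void map_kernel_range_sketch(unsigned long start, unsigned long end)
{
	pgtbl_mod_mask mask = 0;

	/*
	 * ... walk [start, end), allocating tables as needed; helpers such as
	 * p4d_alloc_track() OR the matching PGTBL_*_MODIFIED bit into 'mask' ...
	 */

	/* Synchronize only if the architecture asked for these levels. */
	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
		arch_sync_kernel_mappings(start, end);
}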

mm/kasan/init.c

Lines changed: 6 additions & 6 deletions
@@ -13,9 +13,9 @@
 #include <linux/mm.h>
 #include <linux/pfn.h>
 #include <linux/slab.h>
+#include <linux/pgalloc.h>
 
 #include <asm/page.h>
-#include <asm/pgalloc.h>
 
 #include "kasan.h"
 
@@ -203,7 +203,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
 			pud_t *pud;
 			pmd_t *pmd;
 
-			p4d_populate(&init_mm, p4d,
+			p4d_populate_kernel(addr, p4d,
 					lm_alias(kasan_early_shadow_pud));
 			pud = pud_offset(p4d, addr);
 			pud_populate(&init_mm, pud,
@@ -224,7 +224,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
 			} else {
 				p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
 				pud_init(p);
-				p4d_populate(&init_mm, p4d, p);
+				p4d_populate_kernel(addr, p4d, p);
 			}
 		}
 		zero_pud_populate(p4d, addr, next);
@@ -263,10 +263,10 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
 			 * puds,pmds, so pgd_populate(), pud_populate()
 			 * is noops.
 			 */
-			pgd_populate(&init_mm, pgd,
+			pgd_populate_kernel(addr, pgd,
 					lm_alias(kasan_early_shadow_p4d));
 			p4d = p4d_offset(pgd, addr);
-			p4d_populate(&init_mm, p4d,
+			p4d_populate_kernel(addr, p4d,
 					lm_alias(kasan_early_shadow_pud));
 			pud = pud_offset(p4d, addr);
 			pud_populate(&init_mm, pud,
@@ -285,7 +285,7 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
 				if (!p)
 					return -ENOMEM;
 			} else {
-				pgd_populate(&init_mm, pgd,
+				pgd_populate_kernel(addr, pgd,
 					early_alloc(PAGE_SIZE, NUMA_NO_NODE));
 			}
 		}

mm/percpu.c

Lines changed: 3 additions & 3 deletions
@@ -3129,7 +3129,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
 #endif /* BUILD_EMBED_FIRST_CHUNK */
 
 #ifdef BUILD_PAGE_FIRST_CHUNK
-#include <asm/pgalloc.h>
+#include <linux/pgalloc.h>
 
 #ifndef P4D_TABLE_SIZE
 #define P4D_TABLE_SIZE		PAGE_SIZE
@@ -3157,15 +3157,15 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
 		p4d = memblock_alloc(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
 		if (!p4d)
 			goto err_alloc;
-		pgd_populate(&init_mm, pgd, p4d);
+		pgd_populate_kernel(addr, pgd, p4d);
 	}
 
 	p4d = p4d_offset(pgd, addr);
 	if (p4d_none(*p4d)) {
 		pud = memblock_alloc(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
 		if (!pud)
 			goto err_alloc;
-		p4d_populate(&init_mm, p4d, pud);
+		p4d_populate_kernel(addr, p4d, pud);
 	}
 
 	pud = pud_offset(p4d, addr);

mm/sparse-vmemmap.c

Lines changed: 3 additions & 3 deletions
@@ -27,9 +27,9 @@
 #include <linux/spinlock.h>
 #include <linux/vmalloc.h>
 #include <linux/sched.h>
+#include <linux/pgalloc.h>
 
 #include <asm/dma.h>
-#include <asm/pgalloc.h>
 
 /*
  * Allocate a block of memory to be used to back the virtual memory map
@@ -230,7 +230,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
 		if (!p)
 			return NULL;
 		pud_init(p);
-		p4d_populate(&init_mm, p4d, p);
+		p4d_populate_kernel(addr, p4d, p);
 	}
 	return p4d;
 }
@@ -242,7 +242,7 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
 		if (!p)
 			return NULL;
-		pgd_populate(&init_mm, pgd, p);
+		pgd_populate_kernel(addr, pgd, p);
 	}
 	return pgd;
 }
