
Commit 0f561fc

hansendc authored and Ingo Molnar committed
x86/pti: Enable global pages for shared areas
The entry/exit text and cpu_entry_area are mapped into userspace and
the kernel, but they are not marked _PAGE_GLOBAL. This creates
unnecessary TLB misses. Add the _PAGE_GLOBAL flag for these areas.

Signed-off-by: Dave Hansen <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Arjan van de Ven <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Woodhouse <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Nadav Amit <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
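Both hunks below gate the new bit on boot_cpu_has(X86_FEATURE_PGE), the kernel's cached view of the CPU's global-page support. Purely as an illustration (not part of the patch), the same capability can be probed from userspace with GCC/Clang's <cpuid.h>; CPUID leaf 1 reports PGE in EDX bit 13:

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID leaf 1: EDX bit 13 is PGE (global-page support). */
	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
		return 1;
	printf("PGE (global pages) supported: %s\n",
	       (edx & (1u << 13)) ? "yes" : "no");
	return 0;
}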
1 parent 639d6aa commit 0f561fc

File tree

2 files changed: +35 −2 lines changed


arch/x86/mm/cpu_entry_area.c

Lines changed: 13 additions & 1 deletion
@@ -27,8 +27,20 @@ EXPORT_SYMBOL(get_cpu_entry_area);
 void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags)
 {
 	unsigned long va = (unsigned long) cea_vaddr;
+	pte_t pte = pfn_pte(pa >> PAGE_SHIFT, flags);
 
-	set_pte_vaddr(va, pfn_pte(pa >> PAGE_SHIFT, flags));
+	/*
+	 * The cpu_entry_area is shared between the user and kernel
+	 * page tables.  All of its ptes can safely be global.
+	 * _PAGE_GLOBAL gets reused to help indicate PROT_NONE for
+	 * non-present PTEs, so be careful not to set it in that
+	 * case to avoid confusion.
+	 */
+	if (boot_cpu_has(X86_FEATURE_PGE) &&
+	    (pgprot_val(flags) & _PAGE_PRESENT))
+		pte = pte_set_flags(pte, _PAGE_GLOBAL);
+
+	set_pte_vaddr(va, pte);
 }
 
 static void __init
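The comment in this hunk refers to a real encoding hazard: in the kernel's pgtable_types.h, _PAGE_BIT_PROTNONE is defined as the same bit as _PAGE_BIT_GLOBAL (bit 8) and is interpreted as PROT_NONE only while _PAGE_PRESENT is clear. A self-contained userspace sketch of the aliasing (the constants mirror the kernel's, but this is an illustration, not kernel code):

#include <stdio.h>
#include <stdint.h>

/* Bit positions mirroring arch/x86/include/asm/pgtable_types.h:
 * _PAGE_BIT_PROTNONE aliases _PAGE_BIT_GLOBAL and is only meaningful
 * while _PAGE_PRESENT is clear. */
#define _PAGE_PRESENT  (1ULL << 0)
#define _PAGE_GLOBAL   (1ULL << 8)
#define _PAGE_PROTNONE _PAGE_GLOBAL	/* same bit, reused */

int main(void)
{
	uint64_t pte = 0;		/* a non-present PTE */

	pte |= _PAGE_GLOBAL;		/* what cea_set_pte() now refuses to do */

	/* With _PAGE_PRESENT clear, bit 8 reads back as PROT_NONE. */
	if (!(pte & _PAGE_PRESENT) && (pte & _PAGE_PROTNONE))
		printf("non-present + bit 8 => looks like PROT_NONE\n");
	return 0;
}

This is why the new check requires both X86_FEATURE_PGE and _PAGE_PRESENT before setting the bit.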

arch/x86/mm/pti.c

Lines changed: 22 additions & 1 deletion
@@ -299,6 +299,27 @@ pti_clone_pmds(unsigned long start, unsigned long end, pmdval_t clear)
 	if (WARN_ON(!target_pmd))
 		return;
 
+	/*
+	 * Only clone present PMDs.  This ensures only setting
+	 * _PAGE_GLOBAL on present PMDs.  This should only be
+	 * called on well-known addresses anyway, so a non-
+	 * present PMD would be a surprise.
+	 */
+	if (WARN_ON(!(pmd_flags(*pmd) & _PAGE_PRESENT)))
+		return;
+
+	/*
+	 * Setting 'target_pmd' below creates a mapping in both
+	 * the user and kernel page tables.  It is effectively
+	 * global, so set it as global in both copies.  Note:
+	 * the X86_FEATURE_PGE check is not _required_ because
+	 * the CPU ignores _PAGE_GLOBAL when PGE is not
+	 * supported.  The check keeps consistency with
+	 * code that only sets this bit when supported.
+	 */
+	if (boot_cpu_has(X86_FEATURE_PGE))
+		*pmd = pmd_set_flags(*pmd, _PAGE_GLOBAL);
+
 	/*
 	 * Copy the PMD.  That is, the kernelmode and usermode
 	 * tables will share the last-level page tables of this
@@ -348,7 +369,7 @@ static void __init pti_clone_entry_text(void)
 {
 	pti_clone_pmds((unsigned long) __entry_text_start,
 		       (unsigned long) __irqentry_text_end,
-		       _PAGE_RW | _PAGE_GLOBAL);
+		       _PAGE_RW);
 }
 
 /*
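For the second hunk: pti_clone_pmds()'s third argument is a mask of flags to clear on the cloned usermode copy, so dropping _PAGE_GLOBAL from that mask is what lets the entry-text mappings stay global on both sides. A self-contained model of the flag plumbing (the helper names mirror the kernel's pmd_set_flags()/pmd_clear_flags(), but this is a plain userspace sketch, not the kernel implementation):

#include <stdio.h>
#include <stdint.h>

typedef uint64_t pmdval_t;

#define _PAGE_PRESENT (1ULL << 0)
#define _PAGE_RW      (1ULL << 1)
#define _PAGE_GLOBAL  (1ULL << 8)

/* Standalone stand-ins for the kernel helpers of the same name. */
static pmdval_t pmd_set_flags(pmdval_t pmd, pmdval_t set)
{
	return pmd | set;
}

static pmdval_t pmd_clear_flags(pmdval_t pmd, pmdval_t clear)
{
	return pmd & ~clear;
}

int main(void)
{
	pmdval_t pmd = _PAGE_PRESENT | _PAGE_RW;

	/* New behaviour: mark the kernel PMD global before cloning. */
	pmd = pmd_set_flags(pmd, _PAGE_GLOBAL);

	/* Clone for the user page tables, clearing only _PAGE_RW.
	 * Before this commit the clear mask also contained
	 * _PAGE_GLOBAL, which stripped the bit from the clone. */
	pmdval_t target_pmd = pmd_clear_flags(pmd, _PAGE_RW);

	printf("cloned PMD global? %s\n",
	       (target_pmd & _PAGE_GLOBAL) ? "yes" : "no");
	return 0;
}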
