
Commit 639d6aa

hansendc authored and Ingo Molnar committed
x86/mm: Do not forbid _PAGE_RW before init for __ro_after_init
__ro_after_init data gets stuck in the .rodata section. That's normally fine because the kernel itself manages the R/W properties. But, if we run __change_page_attr() on an area which is __ro_after_init, the .rodata checks will trigger and force the area to be immediately read-only, even if it is early-ish in boot. This caused problems when trying to clear the _PAGE_GLOBAL bit for these area in the PTI code: it cleared _PAGE_GLOBAL like I asked, but also took it up on itself to clear _PAGE_RW. The kernel then oopses the next time it wrote to a __ro_after_init data structure. To fix this, add the kernel_set_to_readonly check, just like we have for kernel text, just a few lines below in this function. Signed-off-by: Dave Hansen <[email protected]> Acked-by: Kees Cook <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Dan Williams <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Juergen Gross <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Nadav Amit <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
1 parent 430d400 commit 639d6aa

File tree: 1 file changed (+4, -2 lines)

arch/x86/mm/pageattr.c

Lines changed: 4 additions & 2 deletions
@@ -298,9 +298,11 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long address,

	/*
	 * The .rodata section needs to be read-only. Using the pfn
-	 * catches all aliases.
+	 * catches all aliases. This also includes __ro_after_init,
+	 * so do not enforce until kernel_set_to_readonly is true.
	 */
-	if (within(pfn, __pa_symbol(__start_rodata) >> PAGE_SHIFT,
+	if (kernel_set_to_readonly &&
+	    within(pfn, __pa_symbol(__start_rodata) >> PAGE_SHIFT,
		   __pa_symbol(__end_rodata) >> PAGE_SHIFT))
		pgprot_val(forbidden) |= _PAGE_RW;
