
Commit 1f6f66f

x86/mm: Update ptep/pmdp_set_wrprotect() for _PAGE_SAVED_DIRTY
When shadow stack is in use, Write=0,Dirty=1 PTEs are preserved for shadow
stack. Copy-on-write PTEs then have Write=0,SavedDirty=1.

When a PTE goes from Write=1,Dirty=1 to Write=0,SavedDirty=1, it could become
a transient shadow stack PTE in two cases:

1. Some processors can start a write but end up seeing a Write=0 PTE by the
   time they get to the Dirty bit, creating a transient shadow stack PTE.
   However, this will not occur on processors supporting shadow stack, and a
   TLB flush is not necessary.

2. When _PAGE_DIRTY is replaced with _PAGE_SAVED_DIRTY non-atomically, a
   transient shadow stack PTE can be created as a result.

Prevent the second case when doing a write protection and Dirty->SavedDirty
shift at the same time with a CMPXCHG loop. The first case needs no extra
handling because, as noted above, it cannot occur on processors that support
shadow stack.

Note, in the PAE case CMPXCHG will need to operate on 8 bytes, but
try_cmpxchg() will not use CMPXCHG8B, so it cannot operate on a full PAE PTE.
However, the existing logic is not operating on a full 8-byte region either,
and relies on the fact that the Write bit is in the first 4 bytes when doing
the clear_bit(). Since the Dirty, SavedDirty and Write bits are all in the
first 4 bytes, casting to a long will be similar to the existing behavior,
which also casts to a long.

Dave Hansen, Jann Horn, Andy Lutomirski, and Peter Zijlstra provided many
insights into the issue. Jann Horn provided the CMPXCHG solution.

Co-developed-by: Yu-cheng Yu <[email protected]>
Signed-off-by: Yu-cheng Yu <[email protected]>
Signed-off-by: Rick Edgecombe <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Acked-by: Mike Rapoport (IBM) <[email protected]>
Tested-by: Pengfei Xu <[email protected]>
Tested-by: John Allen <[email protected]>
Tested-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/all/20230613001108.3040476-12-rick.p.edgecombe%40intel.com
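For readers who want to see the race and the fix in isolation, below is a
minimal user-space sketch, not the kernel code: it uses C11 atomics, made-up
bit positions, and invented helper names such as fake_wrprotect(). It models
why publishing the Write clear and the Dirty->SavedDirty shift in a single
compare-exchange can never expose a transient Write=0,Dirty=1 value, whereas a
separate clear_bit() followed by a non-atomic Dirty->SavedDirty update could.

/* Illustrative model only -- bit positions and helpers are invented and do
 * not match the real x86 PTE layout or the kernel's pte_wrprotect(). */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_WRITE       (1u << 1)   /* stand-in for the Write bit */
#define FAKE_DIRTY       (1u << 6)   /* stand-in for the Dirty bit */
#define FAKE_SAVED_DIRTY (1u << 10)  /* stand-in for the SavedDirty bit */

/* The transformation being published: clear Write and, if Dirty was set,
 * move it to SavedDirty so the result is never Write=0,Dirty=1. */
static uint64_t fake_wrprotect(uint64_t pte)
{
        if (pte & FAKE_DIRTY)
                pte = (pte & ~(uint64_t)FAKE_DIRTY) | FAKE_SAVED_DIRTY;
        return pte & ~(uint64_t)FAKE_WRITE;
}

/* Race-free update: other observers see either the old value or the fully
 * transformed one, never an intermediate Write=0,Dirty=1 encoding. If the
 * compare-exchange fails (e.g. "hardware" set Dirty concurrently), old_pte
 * is refreshed and the transformation is recomputed. */
static void fake_set_wrprotect(_Atomic uint64_t *ptep)
{
        uint64_t old_pte = atomic_load(ptep);
        uint64_t new_pte;

        do {
                new_pte = fake_wrprotect(old_pte);
        } while (!atomic_compare_exchange_weak(ptep, &old_pte, new_pte));
}

int main(void)
{
        _Atomic uint64_t pte = FAKE_WRITE | FAKE_DIRTY;

        fake_set_wrprotect(&pte);
        printf("pte: %#llx\n", (unsigned long long)atomic_load(&pte));
        return 0;
}

The kernel helpers in the diff below follow the same do/while pattern, using
READ_ONCE() for the initial read and try_cmpxchg() for the update.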
1 parent fca4d41 commit 1f6f66f

1 file changed, 22 insertions(+), 2 deletions(-)

arch/x86/include/asm/pgtable.h

@@ -1190,7 +1190,17 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
                                       unsigned long addr, pte_t *ptep)
 {
-       clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
+       /*
+        * Avoid accidentally creating shadow stack PTEs
+        * (Write=0,Dirty=1). Use cmpxchg() to prevent races with
+        * the hardware setting Dirty=1.
+        */
+       pte_t old_pte, new_pte;
+
+       old_pte = READ_ONCE(*ptep);
+       do {
+               new_pte = pte_wrprotect(old_pte);
+       } while (!try_cmpxchg((long *)&ptep->pte, (long *)&old_pte, *(long *)&new_pte));
 }
 
 #define flush_tlb_fix_spurious_fault(vma, address, ptep) do { } while (0)
@@ -1242,7 +1252,17 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
                                       unsigned long addr, pmd_t *pmdp)
 {
-       clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
+       /*
+        * Avoid accidentally creating shadow stack PTEs
+        * (Write=0,Dirty=1). Use cmpxchg() to prevent races with
+        * the hardware setting Dirty=1.
+        */
+       pmd_t old_pmd, new_pmd;
+
+       old_pmd = READ_ONCE(*pmdp);
+       do {
+               new_pmd = pmd_wrprotect(old_pmd);
+       } while (!try_cmpxchg((long *)pmdp, (long *)&old_pmd, *(long *)&new_pmd));
 }
 
 #ifndef pmdp_establish
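As a footnote to the PAE remark in the commit message: the (long *) casts
above mean that on a 32-bit kernel, where long is 4 bytes, try_cmpxchg()
compares and updates only the low half of an 8-byte PAE entry. Below is a
small hedged sketch, assuming GCC/Clang __atomic builtins, a little-endian
layout, and made-up values, of why that is enough when every bit being
modified lives in those low 4 bytes.

#include <stdint.h>
#include <stdio.h>

/* Invented 8-byte "PAE-style" entry: the high word holds a frame number,
 * the low word holds the flag bits being modified. Illustrative only. */
union fake_pae_pte {
        uint64_t whole;
        uint32_t word[2];       /* word[0] is the low 4 bytes on little-endian */
};

int main(void)
{
        union fake_pae_pte pte = { .whole = ((uint64_t)0x12345 << 32) | 0x67 };
        uint32_t old_low = __atomic_load_n(&pte.word[0], __ATOMIC_RELAXED);
        uint32_t new_low;

        /* Compare-exchange on just the low 4 bytes, mirroring what the
         * (long *) cast does on a 32-bit kernel: the bits of interest are
         * updated atomically and the high half is left untouched. */
        do {
                new_low = old_low & ~(1u << 1);  /* clear an invented Write bit */
        } while (!__atomic_compare_exchange_n(&pte.word[0], &old_low, new_low,
                                              1, __ATOMIC_RELAXED, __ATOMIC_RELAXED));

        printf("entry after low-word cmpxchg: %#llx\n",
               (unsigned long long)pte.whole);
        return 0;
}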
