
Commit dfd5c2f

sean-jc authored and gregkh committed
KVM: x86/mmu: Skip the "try unsync" path iff the old SPTE was a leaf SPTE
commit 2867eb7 upstream.

Apply make_spte()'s optimization to skip trying to unsync shadow pages if and only if the old SPTE was a leaf SPTE, as non-leaf SPTEs in direct MMUs are always writable, i.e. could trigger a false positive and incorrectly lead to KVM creating a SPTE without write-protecting or marking shadow pages unsync.

This bug only affects the TDP MMU, as the shadow MMU only overwrites a shadow-present SPTE when synchronizing SPTEs (and only 4KiB SPTEs can be unsync). Specifically, mmu_set_spte() drops any non-leaf SPTEs *before* calling make_spte(), whereas the TDP MMU can do a direct replacement of a page table with the leaf SPTE.

Opportunistically update the comment to explain why skipping the unsync stuff is safe, as opposed to simply saying "it's someone else's problem".

Cc: [email protected]
Tested-by: Alex Bennée <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Tested-by: Dmitry Osipenko <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
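The fix hinges on one observation: in a direct MMU, non-leaf SPTEs are always writable, so a writability check alone cannot distinguish "old leaf SPTE that was already writable" from "old page-table SPTE being replaced by a leaf". A minimal standalone sketch of the corrected predicate is below; the bit layout and the helper names is_leaf()/can_skip_unsync() are hypothetical, chosen for illustration, and do not match KVM's actual SPTE encoding or helpers (is_last_spte()/is_writable_pte()).

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bit layout for illustration only; the real SPTE
 * encoding lives in arch/x86/kvm/mmu/spte.h. */
#define PT_WRITABLE_MASK (1ULL << 1)
#define SPTE_LEAF_MASK   (1ULL << 7)

static bool is_writable(uint64_t spte) { return spte & PT_WRITABLE_MASK; }
static bool is_leaf(uint64_t spte)     { return spte & SPTE_LEAF_MASK; }

/* Post-fix logic: the expensive "try to unsync shadow pages" path may
 * be skipped only when the old SPTE was a *leaf* AND writable. The
 * pre-fix check tested writability alone, which a writable non-leaf
 * SPTE (always writable in direct MMUs) would satisfy spuriously. */
static bool can_skip_unsync(uint64_t old_spte)
{
    return is_leaf(old_spte) && is_writable(old_spte);
}
```

With this predicate, a TDP MMU replacement of a writable page-table SPTE by a leaf SPTE no longer skips the unsync step, while overwriting an already-writable leaf SPTE still avoids the expensive hash lookup.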
1 parent 1a77ffc commit dfd5c2f

1 file changed: +13 -5 lines

arch/x86/kvm/mmu/spte.c

Lines changed: 13 additions & 5 deletions
@@ -226,12 +226,20 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
 
 		/*
-		 * Optimization: for pte sync, if spte was writable the hash
-		 * lookup is unnecessary (and expensive). Write protection
-		 * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
-		 * Same reasoning can be applied to dirty page accounting.
+		 * When overwriting an existing leaf SPTE, and the old SPTE was
+		 * writable, skip trying to unsync shadow pages as any relevant
+		 * shadow pages must already be unsync, i.e. the hash lookup is
+		 * unnecessary (and expensive).
+		 *
+		 * The same reasoning applies to dirty page/folio accounting;
+		 * KVM will mark the folio dirty using the old SPTE, thus
+		 * there's no need to immediately mark the new SPTE as dirty.
+		 *
+		 * Note, both cases rely on KVM not changing PFNs without first
+		 * zapping the old SPTE, which is guaranteed by both the shadow
+		 * MMU and the TDP MMU.
 		 */
-		if (is_writable_pte(old_spte))
+		if (is_last_spte(old_spte, level) && is_writable_pte(old_spte))
 			goto out;
 
 		/*
