
Commit 13dd097

sean-jc authored and Wen Zhiwei committed
KVM: x86/mmu: Skip the "try unsync" path iff the old SPTE was a leaf SPTE
stable inclusion
from stable-v6.6.64
commit d79f765b2eb8808d1c771f08e1a6000c06bf9f3e
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IBL4B6
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=d79f765b2eb8808d1c771f08e1a6000c06bf9f3e

--------------------------------

commit 2867eb782cf7f64c2ac427596133b6f9c3f64b7a upstream.

Apply make_spte()'s optimization to skip trying to unsync shadow pages if and only if the old SPTE was a leaf SPTE, as non-leaf SPTEs in direct MMUs are always writable, i.e. could trigger a false positive and incorrectly lead to KVM creating a SPTE without write-protecting or marking shadow pages unsync.

This bug only affects the TDP MMU, as the shadow MMU only overwrites a shadow-present SPTE when synchronizing SPTEs (and only 4KiB SPTEs can be unsync). Specifically, mmu_set_spte() drops any non-leaf SPTEs *before* calling make_spte(), whereas the TDP MMU can do a direct replacement of a page table with the leaf SPTE.

Opportunistically update the comment to explain why skipping the unsync stuff is safe, as opposed to simply saying "it's someone else's problem".

Cc: [email protected]
Tested-by: Alex Bennée <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Tested-by: Dmitry Osipenko <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Wen Zhiwei <[email protected]>
1 parent 43cc9d6 commit 13dd097

File tree: 1 file changed (+13, -5 lines)


arch/x86/kvm/mmu/spte.c

Lines changed: 13 additions & 5 deletions
@@ -206,12 +206,20 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
 
 		/*
-		 * Optimization: for pte sync, if spte was writable the hash
-		 * lookup is unnecessary (and expensive). Write protection
-		 * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
-		 * Same reasoning can be applied to dirty page accounting.
+		 * When overwriting an existing leaf SPTE, and the old SPTE was
+		 * writable, skip trying to unsync shadow pages as any relevant
+		 * shadow pages must already be unsync, i.e. the hash lookup is
+		 * unnecessary (and expensive).
+		 *
+		 * The same reasoning applies to dirty page/folio accounting;
+		 * KVM will mark the folio dirty using the old SPTE, thus
+		 * there's no need to immediately mark the new SPTE as dirty.
+		 *
+		 * Note, both cases rely on KVM not changing PFNs without first
+		 * zapping the old SPTE, which is guaranteed by both the shadow
+		 * MMU and the TDP MMU.
 		 */
-		if (is_writable_pte(old_spte))
+		if (is_last_spte(old_spte, level) && is_writable_pte(old_spte))
 			goto out;
 
 		/*
