Commit dfaae84
KVM: x86/mmu: Try "unprotect for retry" iff there are indirect SPs
Try to unprotect shadow pages if and only if indirect_shadow_pages is
non-zero, i.e. iff there is at least one protected shadow page.
Pre-checking indirect_shadow_pages avoids taking mmu_lock for write when
the gfn is write-protected by a third party, i.e. not for KVM shadow
paging, and in the *extremely* unlikely case that a different task has
already unprotected the last shadow page.

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Sean Christopherson <[email protected]>
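The lockless pre-check is a general pattern: a counter that is only ever
written while holding a lock may be read without the lock when both stale
outcomes are harmless. Below is a minimal user-space C sketch of the same
idea, with hypothetical names (nr_protected, try_unprotect) and C11 atomics
standing in for the kernel's READ_ONCE(); it is an illustration of the
pattern, not the KVM code itself.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static atomic_int nr_protected;  /* written only while holding "lock" */

    static bool try_unprotect(void)
    {
            bool done = false;

            /*
             * Racy but benign pre-check: a stale non-zero read costs one
             * wasted lock acquisition, and a stale zero read only skips
             * an optional optimization, as the commit message describes.
             */
            if (!atomic_load_explicit(&nr_protected, memory_order_relaxed))
                    return false;

            pthread_mutex_lock(&lock);
            if (atomic_load_explicit(&nr_protected, memory_order_relaxed)) {
                    atomic_fetch_sub_explicit(&nr_protected, 1,
                                              memory_order_relaxed);
                    done = true;  /* stand-in for the real unprotect work */
            }
            pthread_mutex_unlock(&lock);
            return done;
    }

Note that the count is re-checked under the lock; the unlocked read is
purely an optimization and never substitutes for the locked check.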
1 parent 01dd4d3

File tree

1 file changed: +11 -0 lines changed

arch/x86/kvm/mmu/mmu.c

Lines changed: 11 additions & 0 deletions
@@ -2718,6 +2718,17 @@ bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa)
 	gpa_t gpa = cr2_or_gpa;
 	bool r;
 
+	/*
+	 * Bail early if there aren't any write-protected shadow pages to avoid
+	 * unnecessarily taking mmu_lock, e.g. if the gfn is write-tracked
+	 * by a third party.  Reading indirect_shadow_pages without holding
+	 * mmu_lock is safe, as this is purely an optimization, i.e. a false
+	 * positive is benign, and a false negative will simply result in KVM
+	 * skipping the unprotect+retry path, which is also an optimization.
+	 */
+	if (!READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
+		return false;
+
 	if (!vcpu->arch.mmu->root_role.direct)
 		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
 
