Commit f816029

Lai Jiangshan authored and Paolo Bonzini committed
KVM: X86: Fix missed remote tlb flush in rmap_write_protect()
When kvm->tlbs_dirty > 0, some rmaps might have been deleted without a remote TLB flush after kvm_sync_page(). If @gfn was writable before and its rmaps were deleted in kvm_sync_page(), and if a stale TLB entry is still present in a remote running vCPU, then @gfn is not safely protected.

To fix the problem, kvm_sync_page() now performs the remote flush when needed.

Fixes: a4ee1ca ("KVM: MMU: delay flush all tlbs on sync_page path")
Signed-off-by: Lai Jiangshan <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
1 parent faf6b75 commit f816029

File tree

1 file changed: 2 additions (+), 21 deletions (−)

arch/x86/kvm/mmu/paging_tmpl.h

Lines changed: 2 additions & 21 deletions
@@ -1047,14 +1047,6 @@ static gpa_t FNAME(gva_to_gpa_nested)(struct kvm_vcpu *vcpu, gpa_t vaddr,
  * Using the cached information from sp->gfns is safe because:
  * - The spte has a reference to the struct page, so the pfn for a given gfn
  *   can't change unless all sptes pointing to it are nuked first.
- *
- * Note:
- * We should flush all tlbs if spte is dropped even though guest is
- * responsible for it. Since if we don't, kvm_mmu_notifier_invalidate_page
- * and kvm_mmu_notifier_invalidate_range_start detect the mapping page isn't
- * used by guest then tlbs are not flushed, so guest is allowed to access the
- * freed pages.
- * And we increase kvm->tlbs_dirty to delay tlbs flush in this case.
  */
 static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
@@ -1107,13 +1099,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 			return 0;
 
 		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
-			/*
-			 * Update spte before increasing tlbs_dirty to make
-			 * sure no tlb flush is lost after spte is zapped; see
-			 * the comments in kvm_flush_remote_tlbs().
-			 */
-			smp_wmb();
-			vcpu->kvm->tlbs_dirty++;
+			set_spte_ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
 			continue;
 		}
 
@@ -1128,12 +1114,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 
 		if (gfn != sp->gfns[i]) {
 			drop_spte(vcpu->kvm, &sp->spt[i]);
-			/*
-			 * The same as above where we are doing
-			 * prefetch_invalid_gpte().
-			 */
-			smp_wmb();
-			vcpu->kvm->tlbs_dirty++;
+			set_spte_ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
 			continue;
 		}