Commit ee49a89

sean-jc authored and bonzini committed
KVM: x86: Move SVM's APICv sanity check to common x86
Move SVM's assertion that a vCPU's APICv state is consistent with its VM's state out of svm_vcpu_run() and into x86's common inner run loop. The assertion and underlying logic are not unique to SVM; it's just that SVM has more inhibiting conditions and thus is more likely to run headfirst into any KVM bugs.

Add relevant comments to document exactly why the update path has unusual ordering between the update and the kick, why said ordering is safe, and also the basic rules behind the assertion in the run loop.

Cc: Maxim Levitsky <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
1 parent 9b4eb77 commit ee49a89

File tree: 2 files changed, +20 -2 lines

arch/x86/kvm/svm/svm.c (0 additions, 2 deletions)

@@ -3864,8 +3864,6 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	pre_svm_run(vcpu);
 
-	WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
-
 	sync_lapic_to_cr8(vcpu);
 
 	if (unlikely(svm->asid != svm->vmcb->control.asid)) {

arch/x86/kvm/x86.c (20 additions, 0 deletions)

@@ -9481,6 +9481,18 @@ void __kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit)
 
 	if (!!old != !!new) {
 		trace_kvm_apicv_update_request(activate, bit);
+		/*
+		 * Kick all vCPUs before setting apicv_inhibit_reasons to avoid
+		 * false positives in the sanity check WARN in svm_vcpu_run().
+		 * This task will wait for all vCPUs to ack the kick IRQ before
+		 * updating apicv_inhibit_reasons, and all other vCPUs will
+		 * block on acquiring apicv_update_lock so that vCPUs can't
+		 * redo svm_vcpu_run() without seeing the new inhibit state.
+		 *
+		 * Note, holding apicv_update_lock and taking it in the read
+		 * side (handling the request) also prevents other vCPUs from
+		 * servicing the request with a stale apicv_inhibit_reasons.
+		 */
 		kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE);
 		kvm->arch.apicv_inhibit_reasons = new;
 		if (new) {

@@ -9815,6 +9827,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	}
 
 	for (;;) {
+		/*
+		 * Assert that vCPU vs. VM APICv state is consistent.  An APICv
+		 * update must kick and wait for all vCPUs before toggling the
+		 * per-VM state, and responding vCPUs must wait for the update
+		 * to complete before servicing KVM_REQ_APICV_UPDATE.
+		 */
+		WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
+
 		exit_fastpath = static_call(kvm_x86_run)(vcpu);
 		if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
 			break;
