Commit 35dcb3a

Marc Zyngier authored and oupton committed
KVM: arm64: Make vcpu flag updates non-preemptible
Per-vcpu flags are updated using a non-atomic RMW operation, which means it is possible to get preempted between the read and write operations.

Another interesting thing to note is that preemption also updates flags, as we have some flag manipulation in both the load and put operations. It is thus possible to lose information communicated by either load or put, as the preempted flag update will overwrite the flags when the thread is resumed. This is especially critical if either load or put has stored information which depends on the physical CPU the vcpu runs on.

This results in really elusive bugs, and kudos must be given to Mostafa for the long hours of debugging, and finally spotting the problem.

Fix it by disabling preemption during the RMW operation, which ensures that the state stays consistent. Also upgrade the vcpu_get_flag() path to use READ_ONCE() to make sure the field is always atomically accessed.

Fixes: e87abb7 ("KVM: arm64: Add helpers to manipulate vcpu flags among a set")
Reported-by: Mostafa Saleh <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Oliver Upton <[email protected]>
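Concretely, before this patch __vcpu_set_flag() (visible as context in the diff below) boiled down to a plain, non-atomic read-modify-write, and nothing prevented the scheduler from running the vcpu load/put paths in the middle of it. An illustrative sketch of the pre-patch expansion, where v, flagset, f and m are the macro parameters:

	fset = &v->arch.flagset;
	if (HWEIGHT(m) > 1)		/* read-modify...                */
		*fset &= ~(m);
					/* <-- preempted here: load/put  */
					/*     also write *fset          */
	*fset |= (f);			/* ...write: their update is     */
					/*     silently overwritten      */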
1 parent e816252 commit 35dcb3a


arch/arm64/include/asm/kvm_host.h

Lines changed: 18 additions & 1 deletion
@@ -576,19 +576,34 @@ struct kvm_vcpu_arch {
 	({							\
 		__build_check_flag(v, flagset, f, m);		\
 								\
-		v->arch.flagset & (m);				\
+		READ_ONCE(v->arch.flagset) & (m);		\
 	})
 
+/*
+ * Note that the set/clear accessors must be preempt-safe in order to
+ * avoid nesting them with load/put which also manipulate flags...
+ */
+#ifdef __KVM_NVHE_HYPERVISOR__
+/* the nVHE hypervisor is always non-preemptible */
+#define __vcpu_flags_preempt_disable()
+#define __vcpu_flags_preempt_enable()
+#else
+#define __vcpu_flags_preempt_disable()	preempt_disable()
+#define __vcpu_flags_preempt_enable()	preempt_enable()
+#endif
+
 #define __vcpu_set_flag(v, flagset, f, m)			\
 	do {							\
 		typeof(v->arch.flagset) *fset;			\
 								\
 		__build_check_flag(v, flagset, f, m);		\
 								\
 		fset = &v->arch.flagset;			\
+		__vcpu_flags_preempt_disable();			\
 		if (HWEIGHT(m) > 1)				\
 			*fset &= ~(m);				\
 		*fset |= (f);					\
+		__vcpu_flags_preempt_enable();			\
 	} while (0)
 
 #define __vcpu_clear_flag(v, flagset, f, m)			\
@@ -598,7 +613,9 @@ struct kvm_vcpu_arch {
 		__build_check_flag(v, flagset, f, m);		\
 								\
 		fset = &v->arch.flagset;			\
+		__vcpu_flags_preempt_disable();			\
 		*fset &= ~(m);					\
+		__vcpu_flags_preempt_enable();			\
 	} while (0)
 
 #define vcpu_get_flag(v, ...)	__vcpu_get_flag((v), __VA_ARGS__)
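
After the patch, a hand expansion of __vcpu_set_flag() looks roughly like this (sketch only; the __build_check_flag() compile-time check is elided):

	do {
		typeof(v->arch.flagset) *fset;

		fset = &v->arch.flagset;
		__vcpu_flags_preempt_disable();	/* preempt_disable(), or  */
						/* a no-op at nVHE EL2    */
		if (HWEIGHT(m) > 1)		/* multi-bit field: clear */
			*fset &= ~(m);		/* the old value first    */
		*fset |= (f);			/* RMW completes without  */
						/* being preempted        */
		__vcpu_flags_preempt_enable();
	} while (0)

The read side needs no such bracketing: vcpu_get_flag() performs a single load, so READ_ONCE() is enough to ensure the flag word is fetched exactly once and cannot be torn or re-read by the compiler.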
