
Commit 5f39efc

Fuad Tabba authored and Marc Zyngier committed
KVM: arm64: Handle protected guests at 32 bits
Protected KVM does not support protected AArch32 guests. However, it is
possible for the guest to force itself to run in AArch32, potentially causing
problems. Add an extra check so that if the hypervisor catches the guest
doing that, it can prevent the guest from running again by resetting
vcpu->arch.target and returning ARM_EXCEPTION_IL.

If this were to happen, the VMM can try to fix it by re-initializing the
vcpu with KVM_ARM_VCPU_INIT; however, this is likely not possible for
protected VMs.

Adapted from commit 22f5538 ("KVM: arm64: Handle Asymmetric AArch32 systems")

Signed-off-by: Fuad Tabba <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent 1423afc commit 5f39efc
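As context for the recovery path the message mentions, below is a minimal
userspace sketch (not part of this commit) of how a VMM might re-initialize
an invalidated vcpu. KVM_ARM_PREFERRED_TARGET and KVM_ARM_VCPU_INIT are the
real KVM ioctls involved; the function name, fd parameters, and error
handling are illustrative assumptions.

/*
 * Sketch: recover a vcpu the hypervisor has invalidated. For a
 * non-protected VM, re-running KVM_ARM_VCPU_INIT makes the vcpu
 * runnable again; for a protected VM this is likely not possible,
 * as the commit message notes. vm_fd and vcpu_fd are assumed to be
 * open KVM VM and vcpu file descriptors.
 */
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int reinit_vcpu(int vm_fd, int vcpu_fd)
{
	struct kvm_vcpu_init init;

	/* Ask KVM for the preferred vcpu target (a VM-level ioctl). */
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
		return -1;

	/* Re-run the init sequence on the vcpu, resetting its state. */
	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}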

1 file changed: +34 -0 lines changed

arch/arm64/kvm/hyp/nvhe/switch.c

Lines changed: 34 additions & 0 deletions
@@ -232,6 +232,37 @@ static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm *kvm)
 	return hyp_exit_handlers;
 }
 
+/*
+ * Some guests (e.g., protected VMs) are not allowed to run in AArch32.
+ * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
+ * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
+ * hypervisor spots a guest in such a state, ensure it is handled, and don't
+ * trust the host to spot or fix it. The check below is based on the one in
+ * kvm_arch_vcpu_ioctl_run().
+ *
+ * Returns false if the guest ran in AArch32 when it shouldn't have, and
+ * thus should exit to the host, or true if the guest run loop can continue.
+ */
+static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+
+	if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu)) {
+		/*
+		 * As we have caught the guest red-handed, decide that it isn't
+		 * fit for purpose anymore by making the vcpu invalid. The VMM
+		 * can try to fix it by re-initializing the vcpu with
+		 * KVM_ARM_VCPU_INIT; however, this is likely not possible for
+		 * protected VMs.
+		 */
+		vcpu->arch.target = -1;
+		*exit_code = ARM_EXCEPTION_IL;
+		return false;
+	}
+
+	return true;
+}
+
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
@@ -294,6 +325,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		/* Jump in the fire! */
 		exit_code = __guest_enter(vcpu);
 
+		if (unlikely(!handle_aarch32_guest(vcpu, &exit_code)))
+			break;
+
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
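For comparison, the check in kvm_arch_vcpu_ioctl_run() that the comment above
refers to (added by commit 22f5538, "KVM: arm64: Handle Asymmetric AArch32
systems") takes roughly the following shape in arch/arm64/kvm/arm.c. This is
a paraphrase from memory for illustration, not part of this diff:

/*
 * Host-side counterpart: a guest that exits in AArch32 on a core
 * without 32-bit EL0 support is invalidated in the same way.
 */
if (unlikely(vcpu_mode_is_32bit(vcpu) && !system_supports_32bit_el0())) {
	vcpu->arch.target = -1;
	ret = ARM_EXCEPTION_IL;
}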