
Commit ed49fe5

Will Deacon authored and Marc Zyngier committed
KVM: arm64: Ensure TLBI uses correct VMID after changing context
When the target context passed to enter_vmid_context() matches the
current running context, the function returns early without manipulating
the registers of the stage-2 MMU. This can result in a stale VMID due to
the lack of an ISB instruction in exit_vmid_context() after writing the
VTTBR when ARM64_WORKAROUND_SPECULATIVE_AT is not enabled.

For example, with pKVM enabled:

	// Initially running in host context
	enter_vmid_context(guest);
		-> __load_stage2(guest); isb	// Writes VTCR & VTTBR
	exit_vmid_context(guest);
		-> __load_stage2(host);		// Restores VTCR & VTTBR
	enter_vmid_context(host);
		-> Returns early as we're already in host context
	tlbi vmalls12e1is	// !!! Can use the stale VMID as we
				// haven't performed context
				// synchronisation since restoring
				// VTTBR.VMID

Add an unconditional ISB instruction to exit_vmid_context() after
restoring the VTTBR. This already existed for the
ARM64_WORKAROUND_SPECULATIVE_AT path, so we can simply hoist that onto
the common path.

Cc: Marc Zyngier <[email protected]>
Cc: Oliver Upton <[email protected]>
Cc: Fuad Tabba <[email protected]>
Fixes: 58f3b0f ("KVM: arm64: Support TLB invalidation in guest context")
Signed-off-by: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Marc Zyngier <[email protected]>
1 parent dc0dddb commit ed49fe5

File tree

1 file changed (+3 -3 lines changed)

  • arch/arm64/kvm/hyp/nvhe/tlb.c

arch/arm64/kvm/hyp/nvhe/tlb.c

Lines changed: 3 additions & 3 deletions

@@ -132,10 +132,10 @@ static void exit_vmid_context(struct tlb_inv_context *cxt)
 	else
 		__load_host_stage2();
 
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-		/* Ensure write of the old VMID */
-		isb();
+	/* Ensure write of the old VMID */
+	isb();
 
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		if (!(cxt->sctlr & SCTLR_ELx_M)) {
 			write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
 			isb();
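
For context, here is a simplified sketch of how the tail of exit_vmid_context() reads with this patch applied. It is pieced together only from the hunk above, not from the full kernel source; the "..." placeholders stand in for code that is not shown in this diff, and the overall function shape is an assumption.

	static void exit_vmid_context(struct tlb_inv_context *cxt)
	{
		...				/* reload the previous stage-2 context */
		else
			__load_host_stage2();	/* restores VTCR & VTTBR */

		/*
		 * Ensure write of the old VMID: the ISB now runs
		 * unconditionally, so a later TLBI cannot observe a stale
		 * VTTBR.VMID even if the next enter_vmid_context() call
		 * returns early.
		 */
		isb();

		if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
			if (!(cxt->sctlr & SCTLR_ELx_M)) {
				write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
				isb();
			}
			...			/* remainder of the workaround path */
		}
	}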
