
Commit 909b583

KVM: arm64: Avoid soft lockups due to I-cache maintenance
Gavin reports soft lockups on his Ampere Altra Max machine when backing KVM guests with hugetlb pages. Upon further investigation, it was found that the system is unable to keep up with parallel I-cache invalidations done by KVM's stage-2 fault handler.

This is ultimately an implementation problem. I-cache maintenance instructions are available at EL0, so nothing stops a malicious userspace from hammering a system with CMOs and causing it to fall over. "Fixing" this problem in KVM is nothing more than slapping a bandage over a much deeper problem.

Anyway, the kernel already has a heuristic for limiting TLB invalidations to avoid soft lockups. Reuse that logic to limit I-cache CMOs done by KVM to map executable pages on systems without FEAT_DIC. While at it, restructure __invalidate_icache_guest_page() to improve readability and squeeze our new condition into the existing branching structure.

Link: https://lore.kernel.org/kvmarm/[email protected]/
Reviewed-by: Gavin Shan <[email protected]>
Tested-by: Gavin Shan <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Oliver Upton <[email protected]>
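For context, the reused heuristic is just a byte threshold derived from the I-cache line size. Below is a minimal, hypothetical userspace C sketch of the same computation the new helper performs; it is not the kernel code. The MAX_DVM_OPS value of 512 is an assumption (it matches PTRS_PER_PTE on a 4KiB-page arm64 kernel), and whereas the kernel resolves the CTR_EL0 value at boot via the ALTERNATIVE_CB patching seen in the diff, this sketch reads the register directly, which Linux permits from EL0.

    /*
     * Hypothetical userspace sketch (not the kernel helper): compute the
     * largest range KVM would invalidate by cache line before falling
     * back to a full I-cache flush.
     */
    #include <stdint.h>
    #include <stdio.h>

    /* Assumption: PTRS_PER_PTE on a 4KiB-page arm64 kernel. */
    #define MAX_DVM_OPS 512

    int main(void)
    {
            uint64_t ctr;
            unsigned int iminline;

            /* CTR_EL0.IminLine (bits [3:0]) encodes log2 of the smallest
             * I-cache line in 4-byte words; +2 converts words to bytes. */
            asm volatile("mrs %0, ctr_el0" : "=r"(ctr));
            iminline = (ctr & 0xf) + 2;

            printf("line size: %u bytes, by-line CMO cap: %zu bytes\n",
                   1u << iminline, (size_t)MAX_DVM_OPS << iminline);
            return 0;
    }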
1 parent: ec1c3b9 · commit: 909b583

File tree

1 file changed: +31 −6 lines

arch/arm64/include/asm/kvm_mmu.h

Lines changed: 31 additions & 6 deletions
@@ -224,16 +224,41 @@ static inline void __clean_dcache_guest_page(void *va, size_t size)
 	kvm_flush_dcache_to_poc(va, size);
 }
 
+static inline size_t __invalidate_icache_max_range(void)
+{
+	u8 iminline;
+	u64 ctr;
+
+	asm volatile(ALTERNATIVE_CB("movz %0, #0\n"
+				    "movk %0, #0, lsl #16\n"
+				    "movk %0, #0, lsl #32\n"
+				    "movk %0, #0, lsl #48\n",
+				    ARM64_ALWAYS_SYSTEM,
+				    kvm_compute_final_ctr_el0)
+		     : "=r" (ctr));
+
+	iminline = SYS_FIELD_GET(CTR_EL0, IminLine, ctr) + 2;
+	return MAX_DVM_OPS << iminline;
+}
+
 static inline void __invalidate_icache_guest_page(void *va, size_t size)
 {
-	if (icache_is_aliasing()) {
-		/* any kind of VIPT cache */
+	/*
+	 * VPIPT I-cache maintenance must be done from EL2. See comment in the
+	 * nVHE flavor of __kvm_tlb_flush_vmid_ipa().
+	 */
+	if (icache_is_vpipt() && read_sysreg(CurrentEL) != CurrentEL_EL2)
+		return;
+
+	/*
+	 * Blow the whole I-cache if it is aliasing (i.e. VIPT) or the
+	 * invalidation range exceeds our arbitrary limit on invalidations by
+	 * cache line.
+	 */
+	if (icache_is_aliasing() || size > __invalidate_icache_max_range())
 		icache_inval_all_pou();
-	} else if (read_sysreg(CurrentEL) != CurrentEL_EL1 ||
-		   !icache_is_vpipt()) {
-		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
+	else
 		icache_inval_pou((unsigned long)va, (unsigned long)va + size);
-	}
 }
 
 void kvm_set_way_flush(struct kvm_vcpu *vcpu);
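To make the new limit concrete, here is a worked example under stated assumptions (a 4KiB-page kernel, where MAX_DVM_OPS resolves to PTRS_PER_PTE = 512, and CTR_EL0.IminLine = 4, i.e. 64-byte I-cache lines):

    iminline  = IminLine + 2 = 6          /* IminLine counts 4-byte words */
    max_range = MAX_DVM_OPS << iminline = 512 << 6 = 32768 bytes (32 KiB)

With those numbers, an executable stage-2 fault on a 2MiB hugetlb page (0x200000 bytes > 32768) takes the single icache_inval_all_pou() call rather than the roughly 32768 by-line invalidations that were driving the reported soft lockups.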
