
Commit d43e267

KVM: x86: only do L1TF workaround on affected processors
KVM stores the gfn in MMIO SPTEs as a caching optimization. These are split in two parts, as in "[high 11111 low]", to thwart any attempt to use these bits in an L1TF attack. This works as long as there are 5 free bits between MAXPHYADDR and bit 50 (inclusive), leaving bit 51 free so that the MMIO access triggers a reserved-bit-set page fault.

The bit positions however were computed wrongly for AMD processors that have encryption support. In this case, x86_phys_bits is reduced (for example from 48 to 43, to account for the C bit at position 47 and four bits used internally to store the SEV ASID and other stuff) while x86_cache_bits would remain set to 48, and _all_ bits between the reduced MAXPHYADDR and bit 51 are set. Then low_phys_bits would also cover some of the bits that are set in the shadow_mmio_value, terribly confusing the gfn caching mechanism.

To fix this, avoid splitting gfns as long as the processor does not have the L1TF bug (which includes all AMD processors). When there is no splitting, low_phys_bits can be set to the reduced MAXPHYADDR, removing the overlap. This fixes "npt=0" operation on EPYC processors.

Thanks to Maxim Levitsky for bisecting this bug.

Cc: [email protected]
Fixes: 52918ed ("KVM: SVM: Override default MMIO mask if memory encryption is enabled")
Signed-off-by: Paolo Bonzini <[email protected]>
1 parent c4e0e4a commit d43e267

File tree

1 file changed (+10 −9 lines)


arch/x86/kvm/mmu/mmu.c

Lines changed: 10 additions & 9 deletions
@@ -335,6 +335,8 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask)
 {
 	BUG_ON((u64)(unsigned)access_mask != access_mask);
 	BUG_ON((mmio_mask & mmio_value) != mmio_value);
+	WARN_ON(mmio_value & (shadow_nonpresent_or_rsvd_mask << shadow_nonpresent_or_rsvd_mask_len));
+	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
 	shadow_mmio_value = mmio_value | SPTE_MMIO_MASK;
 	shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK;
 	shadow_mmio_access_mask = access_mask;
@@ -583,16 +585,15 @@ static void kvm_mmu_reset_all_pte_masks(void)
 	 * the most significant bits of legal physical address space.
 	 */
 	shadow_nonpresent_or_rsvd_mask = 0;
-	low_phys_bits = boot_cpu_data.x86_cache_bits;
-	if (boot_cpu_data.x86_cache_bits <
-	    52 - shadow_nonpresent_or_rsvd_mask_len) {
+	low_phys_bits = boot_cpu_data.x86_phys_bits;
+	if (boot_cpu_has_bug(X86_BUG_L1TF) &&
+	    !WARN_ON_ONCE(boot_cpu_data.x86_cache_bits >=
+			  52 - shadow_nonpresent_or_rsvd_mask_len)) {
+		low_phys_bits = boot_cpu_data.x86_cache_bits
+			- shadow_nonpresent_or_rsvd_mask_len;
 		shadow_nonpresent_or_rsvd_mask =
-			rsvd_bits(boot_cpu_data.x86_cache_bits -
-				  shadow_nonpresent_or_rsvd_mask_len,
-				  boot_cpu_data.x86_cache_bits - 1);
-		low_phys_bits -= shadow_nonpresent_or_rsvd_mask_len;
-	} else
-		WARN_ON_ONCE(boot_cpu_has_bug(X86_BUG_L1TF));
+			rsvd_bits(low_phys_bits, boot_cpu_data.x86_cache_bits - 1);
+	}
 
 	shadow_nonpresent_or_rsvd_lower_gfn_mask =
 		GENMASK_ULL(low_phys_bits - 1, PAGE_SHIFT);
