
Commit 13fb592

kvm: x86: disable shattered huge page recovery for PREEMPT_RT.
If a huge page is recovered (and becomes non-executable) while another thread is executing it, the resulting contention on mmu_lock can cause latency spikes. Disabling recovery for PREEMPT_RT kernels fixes this issue.

Signed-off-by: Paolo Bonzini <[email protected]>
1 parent 8c5bd25 · commit 13fb592

File tree: 1 file changed (+5, −0 lines)


arch/x86/kvm/mmu.c

Lines changed: 5 additions & 0 deletions
@@ -51,7 +51,12 @@
 extern bool itlb_multihit_kvm_mitigation;
 
 static int __read_mostly nx_huge_pages = -1;
+#ifdef CONFIG_PREEMPT_RT
+/* Recovery can cause latency spikes, disable it for PREEMPT_RT. */
+static uint __read_mostly nx_huge_pages_recovery_ratio = 0;
+#else
 static uint __read_mostly nx_huge_pages_recovery_ratio = 60;
+#endif
 
 static int set_nx_huge_pages(const char *val, const struct kernel_param *kp);
 static int set_nx_huge_pages_recovery_ratio(const char *val, const struct kernel_param *kp);
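For context, below is a minimal user-space sketch of why a recovery ratio of 0 neutralizes NX huge page recovery. This is not the KVM code from this file: the recovery worker's actual logic lives elsewhere in the KVM MMU, and the names pages_to_zap and splits are purely illustrative. The sketch only assumes, as the parameter's name suggests, that the worker reclaims roughly 1/ratio of the split NX huge pages per recovery period, so a ratio of 0 leaves it with nothing to zap and no reason to contend on mmu_lock.

/*
 * Illustrative sketch only -- not the KVM implementation.  It models how a
 * nx_huge_pages_recovery_ratio of 0 leaves the recovery worker idle, while
 * a ratio of N reclaims roughly 1/N of the split huge pages per period.
 */
#include <stdio.h>

/* Stand-in for the module parameter: 0 on PREEMPT_RT, 60 otherwise. */
static unsigned int nx_huge_pages_recovery_ratio;

/* Hypothetical helper mirroring the "ratio" semantics of the parameter. */
static unsigned long pages_to_zap(unsigned long nx_lpage_splits)
{
	unsigned int ratio = nx_huge_pages_recovery_ratio;

	/* A ratio of 0 means "never reclaim": the worker stays idle. */
	return ratio ? (nx_lpage_splits + ratio - 1) / ratio : 0;
}

int main(void)
{
	unsigned long splits = 1200;	/* pretend this many huge pages were split */

	nx_huge_pages_recovery_ratio = 60;	/* default on !PREEMPT_RT */
	printf("ratio=60: zap %lu pages this period\n", pages_to_zap(splits));

	nx_huge_pages_recovery_ratio = 0;	/* PREEMPT_RT default after this commit */
	printf("ratio=0:  zap %lu pages this period\n", pages_to_zap(splits));
	return 0;
}

The set_nx_huge_pages_recovery_ratio() prototype visible in the diff context suggests the ratio is also a runtime-tunable module parameter, so administrators on non-RT kernels could presumably obtain the same behavior by setting it to 0.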
