
Commit d5aaad6

sean-jc authored and bonzini committed
KVM: x86/mmu: Fix per-cpu counter corruption on 32-bit builds
Take a signed 'long' instead of an 'unsigned long' for the number of pages to add/subtract to the total number of pages used by the MMU. This fixes a zero-extension bug on 32-bit kernels that effectively corrupts the per-cpu counter used by the shrinker.

Per-cpu counters take a signed 64-bit value on both 32-bit and 64-bit kernels, whereas kvm_mod_used_mmu_pages() takes an unsigned long and thus an unsigned 32-bit value on 32-bit kernels. As a result, the value used to adjust the per-cpu counter is zero-extended (unsigned -> signed), not sign-extended (signed -> signed), and so KVM's intended -1 gets morphed to 4294967295 and effectively corrupts the counter.

This was found by a staggering amount of sheer dumb luck when running kvm-unit-tests on a 32-bit KVM build. The shrinker just happened to kick in while running tests and do_shrink_slab() logged an error about trying to free a negative number of objects. The truly lucky part is that the kernel just happened to be a slightly stale build, as the shrinker no longer yells about negative objects as of commit 18bb473 ("mm: vmscan: shrink deferred objects proportional to priority").

  vmscan: shrink_slab: mmu_shrink_scan+0x0/0x210 [kvm] negative objects to delete nr=-858993460

Fixes: bc8a3d8 ("kvm: mmu: Fix overflow on kvm mmu page limit calculation")
Cc: [email protected]
Cc: Ben Gardon <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Message-Id: <[email protected]>
Reviewed-by: Jim Mattson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
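To make the zero-extension described above concrete, here is a minimal user-space C sketch (not kernel code): the uint32_t parameter stands in for a 32-bit kernel's 'unsigned long', and a plain int64_t plus a hypothetical counter_add() helper stand in for the per-cpu counter and percpu_counter_add(), which accumulates a signed 64-bit amount.

	#include <stdint.h>
	#include <stdio.h>

	/* Stand-in for percpu_counter_add(): the amount is a signed 64-bit value. */
	static void counter_add(int64_t *counter, int64_t amount)
	{
		*counter += amount;
	}

	/* Buggy shape: 'nr' is unsigned and 32 bits wide, as on a 32-bit kernel. */
	static void mod_used_pages_buggy(int64_t *counter, uint32_t nr)
	{
		/* -1 arrives as 0xffffffff and is zero-extended to 4294967295. */
		counter_add(counter, nr);
	}

	/* Fixed shape: a signed 'nr' is sign-extended, so -1 stays -1. */
	static void mod_used_pages_fixed(int64_t *counter, int32_t nr)
	{
		counter_add(counter, nr);
	}

	int main(void)
	{
		int64_t buggy = 0, fixed = 0;

		mod_used_pages_buggy(&buggy, -1);	/* counter becomes 4294967295 */
		mod_used_pages_fixed(&fixed, -1);	/* counter becomes -1 */

		printf("buggy: %lld, fixed: %lld\n",
		       (long long)buggy, (long long)fixed);
		return 0;
	}

Run on any platform, this prints "buggy: 4294967295, fixed: -1", mirroring how KVM's intended -1 was morphed before the fix below.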
1 parent 13c2c3c commit d5aaad6

File tree

1 file changed: 1 addition, 1 deletion


arch/x86/kvm/mmu/mmu.c

Lines changed: 1 addition & 1 deletion
@@ -1644,7 +1644,7 @@ static int is_empty_shadow_page(u64 *spt)
  * aggregate version in order to make the slab shrinker
  * faster
  */
-static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, unsigned long nr)
+static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
 {
 	kvm->arch.n_used_mmu_pages += nr;
 	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
