
Commit 66bc627

sean-jc authored and bonzini committed
KVM: x86/mmu: Don't mark "struct page" accessed when zapping SPTEs
Don't mark pages/folios as accessed in the primary MMU when zapping SPTEs, as doing so relies on kvm_pfn_to_refcounted_page(), and generally speaking is unnecessary and wasteful. KVM participates in page aging via mmu_notifiers, so there's no need to push "accessed" updates to the primary MMU.

And if KVM zaps a SPTE in response to an mmu_notifier, marking it accessed _after_ the primary MMU has decided to zap the page is likely to go unnoticed, i.e. odds are good that, if the page is being zapped for reclaim, the page will be swapped out regardless of whether or not KVM marks the page accessed.

Dropping x86's use of kvm_set_pfn_accessed() also paves the way for removing kvm_pfn_to_refcounted_page() and all its users.

Tested-by: Alex Bennée <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Tested-by: Dmitry Osipenko <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
Message-ID: <[email protected]>
1 parent 31fccdd commit 66bc627
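
For context on the "page aging via mmu_notifiers" the message relies on: the primary MMU drives aging by calling into KVM through an mmu_notifier, and KVM answers by testing (and clearing) the Accessed bit in its own SPTEs, so nothing is lost by not pushing "accessed" updates the other way. A minimal sketch of KVM's side of that exchange — is_accessed_spte() and struct kvm_gfn_range are real, but the range walker and clear helper below are hypothetical stand-ins for the actual shadow-MMU/TDP-MMU iterators:

/*
 * Illustrative sketch only: report whether any SPTE in the range was
 * accessed since the last aging pass, clearing the bit as we go.
 */
static bool sketch_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
{
	bool young = false;
	u64 *sptep;

	for_each_spte_in_gfn_range(kvm, range, sptep) {	/* hypothetical */
		if (is_accessed_spte(*sptep)) {
			young = true;
			clear_accessed_bit(sptep);	/* hypothetical */
		}
	}

	/* The result feeds the primary MMU's aging/reclaim decisions. */
	return young;
}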

2 files changed: 0 additions, 20 deletions

arch/x86/kvm/mmu/mmu.c

Lines changed: 0 additions & 17 deletions
@@ -559,10 +559,8 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
  */
 static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 {
-	kvm_pfn_t pfn;
 	u64 old_spte = *sptep;
 	int level = sptep_to_sp(sptep)->role.level;
-	struct page *page;
 
 	if (!is_shadow_present_pte(old_spte) ||
 	    !spte_has_volatile_bits(old_spte))
@@ -574,21 +572,6 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 		return old_spte;
 
 	kvm_update_page_stats(kvm, level, -1);
-
-	pfn = spte_to_pfn(old_spte);
-
-	/*
-	 * KVM doesn't hold a reference to any pages mapped into the guest, and
-	 * instead uses the mmu_notifier to ensure that KVM unmaps any pages
-	 * before they are reclaimed.  Sanity check that, if the pfn is backed
-	 * by a refcounted page, the refcount is elevated.
-	 */
-	page = kvm_pfn_to_refcounted_page(pfn);
-	WARN_ON_ONCE(page && !page_count(page));
-
-	if (is_accessed_spte(old_spte))
-		kvm_set_pfn_accessed(pfn);
-
 	return old_spte;
 }
 
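For readers who haven't seen the helper being dropped: kvm_set_pfn_accessed() boils down to resolving the pfn to a refcounted "struct page" via kvm_pfn_to_refcounted_page() — the very dependency this series is eliminating — and nudging the primary MMU's LRU state. A rough, non-verbatim sketch of that behavior:

/*
 * Rough sketch, not the verbatim common-KVM code: pfns backed by a
 * refcounted page get an LRU hint in the primary MMU; anything else
 * (reserved pfns, ZONE_DEVICE memory, etc.) is silently skipped.
 */
static void sketch_set_pfn_accessed(kvm_pfn_t pfn)
{
	struct page *page = kvm_pfn_to_refcounted_page(pfn);

	if (page)
		mark_page_accessed(page);
}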

arch/x86/kvm/mmu/tdp_mmu.c

Lines changed: 0 additions & 3 deletions
@@ -861,9 +861,6 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 
 		tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 
-		if (is_accessed_spte(iter.old_spte))
-			kvm_set_pfn_accessed(spte_to_pfn(iter.old_spte));
-
 		/*
 		 * Zapping SPTEs in invalid roots doesn't require a TLB flush,
 		 * see kvm_tdp_mmu_zap_invalidated_roots() for details.
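
The ordering argument from the commit message — that marking the page accessed while handling a zap is largely futile — can be pictured as a timeline. This is an illustration of the reclaim scenario described above, not code from the patch:

/*
 *   primary MMU (reclaim)                  KVM (mmu_notifier callback)
 *   ---------------------                  ---------------------------
 *   selects page for eviction
 *   mmu_notifier_invalidate_range_start()
 *                                          zap SPTE
 *                                          kvm_set_pfn_accessed()  <- too late,
 *                                              reclaim already chose this page
 *   page is swapped out regardless
 */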
