Commit 174b6e4

KVM: x86/mmu: Decrease indentation in logic to sync new indirect shadow page
Combine the back-to-back if-statements for synchronizing children when
linking a new indirect shadow page in order to decrease the indentation,
and to make it easier to "see" the logic in its entirety.

No functional change intended.

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Sean Christopherson <[email protected]>
1 parent acf2923 commit 174b6e4
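
For context, the refactoring is the standard collapse of a guarded nested if into a single combined condition. Below is a minimal, self-contained C sketch of the pattern; the names obj, needs_sync, and sync_object() are made up for illustration and are not part of KVM:

	#include <stdbool.h>

	struct obj {
		bool needs_sync;
	};

	/* Stand-in for a sync routine; returning true means "caller should retry". */
	static bool sync_object(struct obj *o)
	{
		o->needs_sync = false;
		return false;
	}

	/* Before: the sync check sits one indentation level deeper. */
	static int link_object_before(struct obj *o)
	{
		if (o) {
			if (o->needs_sync && sync_object(o))
				return 1;	/* retry */
		}
		return 0;
	}

	/* After: one combined condition, same behavior, one less level of indentation. */
	static int link_object_after(struct obj *o)
	{
		if (o && o->needs_sync && sync_object(o))
			return 1;		/* retry */
		return 0;
	}

The commit applies the same transformation to FNAME(fetch) in paging_tmpl.h, folding the sp != ERR_PTR(-EEXIST) guard into the combined condition, as shown in the diff below.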

1 file changed: +19, -21 lines
arch/x86/kvm/mmu/paging_tmpl.h

Lines changed: 19 additions & 21 deletions
@@ -674,27 +674,25 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		sp = kvm_mmu_get_child_sp(vcpu, it.sptep, table_gfn,
 					  false, access);
 
-		if (sp != ERR_PTR(-EEXIST)) {
-			/*
-			 * We must synchronize the pagetable before linking it
-			 * because the guest doesn't need to flush tlb when
-			 * the gpte is changed from non-present to present.
-			 * Otherwise, the guest may use the wrong mapping.
-			 *
-			 * For PG_LEVEL_4K, kvm_mmu_get_page() has already
-			 * synchronized it transiently via kvm_sync_page().
-			 *
-			 * For higher level pagetable, we synchronize it via
-			 * the slower mmu_sync_children().  If it needs to
-			 * break, some progress has been made; return
-			 * RET_PF_RETRY and retry on the next #PF.
-			 * KVM_REQ_MMU_SYNC is not necessary but it
-			 * expedites the process.
-			 */
-			if (sp->unsync_children &&
-			    mmu_sync_children(vcpu, sp, false))
-				return RET_PF_RETRY;
-		}
+		/*
+		 * Synchronize the new page before linking it, as the CPU (KVM)
+		 * is architecturally disallowed from inserting non-present
+		 * entries into the TLB, i.e. the guest isn't required to flush
+		 * the TLB when changing the gPTE from non-present to present.
+		 *
+		 * For PG_LEVEL_4K, kvm_mmu_find_shadow_page() has already
+		 * synchronized the page via kvm_sync_page().
+		 *
+		 * For higher level pages, which cannot be unsync themselves
+		 * but can have unsync children, synchronize via the slower
+		 * mmu_sync_children().  If KVM needs to drop mmu_lock due to
+		 * contention or to reschedule, instruct the caller to retry
+		 * the #PF (mmu_sync_children() ensures forward progress will
+		 * be made).
+		 */
+		if (sp != ERR_PTR(-EEXIST) && sp->unsync_children &&
+		    mmu_sync_children(vcpu, sp, false))
+			return RET_PF_RETRY;
 
 		/*
 		 * Verify that the gpte in the page we've just write
