Commit acf2923

KVM: x86/mmu: Clean up function comments for dirty logging APIs
Rework the function comment for kvm_arch_mmu_enable_log_dirty_pt_masked() into the body of the function, as it has gotten a bit stale, is harder to read without the code context, and is the last source of warnings for W=1 builds in KVM x86 due to using a kernel-doc comment without documenting all parameters.

Opportunistically subsume the function comments for kvm_mmu_write_protect_pt_masked() and kvm_mmu_clear_dirty_pt_masked(), as there is no value in regurgitating similar information at a higher level, and capturing the differences between write-protection and PML-based dirty logging is best done in a common location.

No functional change intended.

Cc: David Matlack <[email protected]>
Reviewed-by: Kai Huang <[email protected]>
Reviewed-by: Pankaj Gupta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Sean Christopherson <[email protected]>
1 parent 47ac09b commit acf2923

File tree

1 file changed: +15 −33 lines

arch/x86/kvm/mmu/mmu.c

Lines changed: 15 additions & 33 deletions
@@ -1307,15 +1307,6 @@ static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	return flush;
 }
 
-/**
- * kvm_mmu_write_protect_pt_masked - write protect selected PT level pages
- * @kvm: kvm instance
- * @slot: slot to protect
- * @gfn_offset: start of the BITS_PER_LONG pages we care about
- * @mask: indicates which pages we should protect
- *
- * Used when we do not need to care about huge page mappings.
- */
 static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 					    struct kvm_memory_slot *slot,
 					    gfn_t gfn_offset, unsigned long mask)
@@ -1339,16 +1330,6 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	}
 }
 
-/**
- * kvm_mmu_clear_dirty_pt_masked - clear MMU D-bit for PT level pages, or write
- *				   protect the page if the D-bit isn't supported.
- * @kvm: kvm instance
- * @slot: slot to clear D-bit
- * @gfn_offset: start of the BITS_PER_LONG pages we care about
- * @mask: indicates which pages we should clear D-bit
- *
- * Used for PML to re-log the dirty GPAs after userspace querying dirty_bitmap.
- */
 static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 					  struct kvm_memory_slot *slot,
 					  gfn_t gfn_offset, unsigned long mask)
@@ -1372,24 +1353,16 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 	}
 }
 
-/**
- * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected
- * PT level pages.
- *
- * It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to
- * enable dirty logging for them.
- *
- * We need to care about huge page mappings: e.g. during dirty logging we may
- * have such mappings.
- */
 void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 					     struct kvm_memory_slot *slot,
 					     gfn_t gfn_offset, unsigned long mask)
 {
 	/*
-	 * Huge pages are NOT write protected when we start dirty logging in
-	 * initially-all-set mode; must write protect them here so that they
-	 * are split to 4K on the first write.
+	 * If the slot was assumed to be "initially all dirty", write-protect
+	 * huge pages to ensure they are split to 4KiB on the first write (KVM
+	 * dirty logs at 4KiB granularity).  If eager page splitting is enabled,
+	 * immediately try to split huge pages, e.g. so that vCPUs don't get
+	 * saddled with the cost of splitting.
 	 *
 	 * The gfn_offset is guaranteed to be aligned to 64, but the base_gfn
 	 * of memslot has no such restriction, so the range can cross two large
@@ -1411,7 +1384,16 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 					      PG_LEVEL_2M);
 	}
 
-	/* Now handle 4K PTEs. */
+	/*
+	 * (Re)Enable dirty logging for all 4KiB SPTEs that map the GFNs in
+	 * mask.  If PML is enabled and the GFN doesn't need to be write-
+	 * protected for other reasons, e.g. shadow paging, clear the Dirty bit.
+	 * Otherwise clear the Writable bit.
+	 *
+	 * Note that kvm_mmu_clear_dirty_pt_masked() is called whenever PML is
+	 * enabled but it chooses between clearing the Dirty bit and Writeable
+	 * bit based on the context.
+	 */
 	if (kvm_x86_ops.cpu_dirty_log_size)
 		kvm_mmu_clear_dirty_pt_masked(kvm, slot, gfn_offset, mask);
 	else
