
Commit d844479

vfio/type1: Use mapping page mask for pfnmaps
jira LE-3557
Rebuild_History Non-Buildable kernel-5.14.0-570.26.1.el9_6
commit-author Alex Williamson <[email protected]>
commit 0fd0684
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-5.14.0-570.26.1.el9_6/0fd06844.failed

vfio-pci supports huge_fault for PCI MMIO BARs and will insert pud and
pmd mappings for well aligned mappings. follow_pfnmap_start() walks the
page table and therefore knows the page mask of the level where the
address is found and returns this through follow_pfnmap_args.addr_mask.
Subsequent pfns from this address until the end of the mapping page are
necessarily consecutive. Use this information to retrieve a range of
pfnmap pfns in a single pass.

With optimal mappings and alignment on systems with 1GB pud and 4KB
page size, this reduces iterations for DMA mapping PCI BARs by a
factor of 256K. In real world testing, the overhead of iterating
pfns for a VM DMA mapping a 32GB PCI BAR is reduced from ~1s to
sub-millisecond overhead.

Reviewed-by: Peter Xu <[email protected]>
Reviewed-by: Mitchell Augustin <[email protected]>
Tested-by: Mitchell Augustin <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alex Williamson <[email protected]>
(cherry picked from commit 0fd0684)
Signed-off-by: Jonathan Maple <[email protected]>

# Conflicts:
#	drivers/vfio/vfio_iommu_type1.c
1 parent a72ae6f commit d844479
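
For context on the batching the message describes, the sketch below (standalone C, not the patch itself; the helper name pfns_in_mapping_page and the constants are illustrative assumptions) shows how a caller can turn the addr_mask reported for a mapping level into the number of consecutive pfns left before the end of that mapping page.

/*
 * Standalone illustration only -- not the patch.  follow_pfnmap_start()
 * reports the page mask of the level where the address was found
 * (PAGE_MASK, PMD_MASK, PUD_MASK, ...).  Every pfn from the faulted
 * address to the end of that mapping page is consecutive, so a caller
 * can account for all of them in one pass instead of one at a time.
 */
#include <stdio.h>

#define PAGE_SHIFT	12			/* assume 4KB base pages */

/* Hypothetical helper: consecutive pfns remaining within one mapping page. */
static unsigned long pfns_in_mapping_page(unsigned long vaddr,
					  unsigned long addr_mask)
{
	unsigned long map_size = ~addr_mask + 1;		/* 4K, 2M, 1G, ... */
	unsigned long map_end  = (vaddr & addr_mask) + map_size;

	return (map_end - vaddr) >> PAGE_SHIFT;
}

int main(void)
{
	unsigned long pmd_mask = ~((1UL << 21) - 1);	/* 2MB pmd mapping */
	unsigned long pud_mask = ~((1UL << 30) - 1);	/* 1GB pud mapping */

	/* Aligned start of a 2MB mapping: 512 consecutive 4KB pfns. */
	printf("%lu\n", pfns_in_mapping_page(0x200000UL, pmd_mask));
	/* Last 4KB page of that mapping: only 1 pfn remains. */
	printf("%lu\n", pfns_in_mapping_page(0x3ff000UL, pmd_mask));
	/* Aligned 1GB pud mapping: 256K consecutive pfns, as in the message. */
	printf("%lu\n", pfns_in_mapping_page(0x40000000UL, pud_mask));
	return 0;
}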

File tree

1 file changed: +83 -0 lines changed

ciq/ciq_backports/kernel-5.14.0-570.26.1.el9_6/0fd06844.failed: 83 additions & 0 deletions
@@ -0,0 +1,83 @@
vfio/type1: Use mapping page mask for pfnmaps

jira LE-3557
Rebuild_History Non-Buildable kernel-5.14.0-570.26.1.el9_6
commit-author Alex Williamson <[email protected]>
commit 0fd06844de5d063cb384384e06a11ec7141a35d5
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-5.14.0-570.26.1.el9_6/0fd06844.failed

vfio-pci supports huge_fault for PCI MMIO BARs and will insert pud and
pmd mappings for well aligned mappings. follow_pfnmap_start() walks the
page table and therefore knows the page mask of the level where the
address is found and returns this through follow_pfnmap_args.addr_mask.
Subsequent pfns from this address until the end of the mapping page are
necessarily consecutive. Use this information to retrieve a range of
pfnmap pfns in a single pass.

With optimal mappings and alignment on systems with 1GB pud and 4KB
page size, this reduces iterations for DMA mapping PCI BARs by a
factor of 256K. In real world testing, the overhead of iterating
pfns for a VM DMA mapping a 32GB PCI BAR is reduced from ~1s to
sub-millisecond overhead.

Reviewed-by: Peter Xu <[email protected]>
Reviewed-by: Mitchell Augustin <[email protected]>
Tested-by: Mitchell Augustin <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alex Williamson <[email protected]>
(cherry picked from commit 0fd06844de5d063cb384384e06a11ec7141a35d5)
Signed-off-by: Jonathan Maple <[email protected]>

# Conflicts:
#	drivers/vfio/vfio_iommu_type1.c
diff --cc drivers/vfio/vfio_iommu_type1.c
index 410214696525,0ac56072af9f..000000000000
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@@ -523,14 -520,12 +523,14 @@@ static void vfio_batch_fini(struct vfio

  static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
  			    unsigned long vaddr, unsigned long *pfn,
- 			    bool write_fault)
+ 			    unsigned long *addr_mask, bool write_fault)
  {
- 	struct follow_pfnmap_args args = { .vma = vma, .address = vaddr };
 +	pte_t *ptep;
 +	pte_t pte;
 +	spinlock_t *ptl;
  	int ret;

- 	ret = follow_pfnmap_start(&args);
 +	ret = follow_pte(vma->vm_mm, vaddr, &ptep, &ptl);
  	if (ret) {
  		bool unlocked = false;

@@@ -549,14 -544,14 +549,23 @@@
  			return ret;
  	}

++<<<<<<< HEAD
 +	pte = ptep_get(ptep);
 +
 +	if (write_fault && !pte_write(pte))
 +		ret = -EFAULT;
 +	else
 +		*pfn = pte_pfn(pte);
++=======
+ 	if (write_fault && !args.writable) {
+ 		ret = -EFAULT;
+ 	} else {
+ 		*pfn = args.pfn;
+ 		*addr_mask = args.addr_mask;
+ 	}
++>>>>>>> 0fd06844de5d (vfio/type1: Use mapping page mask for pfnmaps)

- 	follow_pfnmap_end(&args);
+ 	pte_unmap_unlock(ptep, ptl);
  	return ret;
  }

* Unmerged path drivers/vfio/vfio_iommu_type1.c
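
For anyone resolving the conflict recorded above, here is a minimal sketch of how follow_fault_pfn() reads if the incoming 0fd06844de5d side is taken, i.e. on a tree that already provides the follow_pfnmap_start()/follow_pfnmap_end() API with an addr_mask field. The fault-retry path between the two hunks is collapsed into a comment, and the actual resolution for the kernel-5.14.0-570.26.1.el9_6 tree (which still uses follow_pte()) is left to the backport.

/*
 * Sketch of the incoming (upstream) side of the conflict only, assuming a
 * kernel that provides follow_pfnmap_start()/follow_pfnmap_end() and the
 * addr_mask field added by 0fd06844de5d.  Not the resolved backport.
 */
static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
			    unsigned long vaddr, unsigned long *pfn,
			    unsigned long *addr_mask, bool write_fault)
{
	struct follow_pfnmap_args args = { .vma = vma, .address = vaddr };
	int ret;

	ret = follow_pfnmap_start(&args);
	if (ret) {
		/*
		 * Fault the page in and retry; that path sits between the two
		 * conflicted hunks and is not shown here.
		 */
		return ret;
	}

	if (write_fault && !args.writable) {
		ret = -EFAULT;
	} else {
		*pfn = args.pfn;
		/* New with this patch: also report the mapping level's page mask. */
		*addr_mask = args.addr_mask;
	}

	follow_pfnmap_end(&args);
	return ret;
}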
