
Commit 3b5795c

iommu/dma: fix zeroing of bounce buffer padding used by untrusted devices
jira LE-1907
cve CVE-2024-35814
Rebuild_History Non-Buildable kernel-4.18.0-553.16.1.el8_10
commit-author Michael Kelley <[email protected]>
commit 2650073
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-4.18.0-553.16.1.el8_10/2650073f.failed

iommu_dma_map_page() allocates swiotlb memory as a bounce buffer when an
untrusted device wants to map only part of the memory in a granule. The
goal is to disallow the untrusted device having DMA access to unrelated
kernel data that may be sharing the granule. To meet this goal, the
bounce buffer itself is zeroed, and any additional swiotlb memory up to
alloc_size after the bounce buffer end (i.e., "post-padding") is also
zeroed.

However, as of commit 901c728 ("Reinstate some of "swiotlb: rework
"fix info leak with DMA_FROM_DEVICE"""), swiotlb_tbl_map_single() always
initializes the contents of the bounce buffer to the original memory.
Zeroing the bounce buffer is redundant and probably wrong per the
discussion in that commit. Only the post-padding needs to be zeroed.

Also, when the DMA min_align_mask is non-zero, the allocated bounce
buffer space may not start on a granule boundary. The swiotlb memory
from the granule boundary to the start of the allocated bounce buffer
might belong to some unrelated bounce buffer. So as described in the
"second issue" in [1], it can't be zeroed to protect against untrusted
devices. But as of commit af13356 ("swiotlb: extend buffer
pre-padding to alloc_align_mask if necessary"), swiotlb_tbl_map_single()
allocates pre-padding slots when necessary to meet min_align_mask
requirements, making it possible to zero the pre-padding area as well.

Finally, iommu_dma_map_page() uses the swiotlb for untrusted devices
and also for certain kmalloc() memory. Current code does the zeroing
for both cases, but it is needed only for the untrusted device case.

Fix all of this by updating iommu_dma_map_page() to zero both the
pre-padding and post-padding areas, but not the actual bounce buffer.
Do this only in the case where the bounce buffer is used because
of an untrusted device.

[1] https://lore.kernel.org/all/[email protected]/

Signed-off-by: Michael Kelley <[email protected]>
Reviewed-by: Petr Tesarik <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
(cherry picked from commit 2650073)
Signed-off-by: Jonathan Maple <[email protected]>

# Conflicts:
#	drivers/iommu/dma-iommu.c
1 parent 56cb912 commit 3b5795c
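
For context, the fix described above amounts to zeroing only the swiotlb padding around the bounce buffer, and only for untrusted devices. A minimal sketch of that logic follows (not the backported resolution, which is what failed to cherry-pick; it assumes the kernel helpers dev_is_untrusted() and phys_to_virt(), plus the iova_align_down() helper added by the diff below):

	if (dev_is_untrusted(dev)) {
		size_t start, virt = (size_t)phys_to_virt(phys);

		/* Pre-padding: zero from the granule boundary up to the bounce buffer start. */
		start = iova_align_down(iovad, virt);
		memset((void *)start, 0, virt - start);

		/* Post-padding: zero from the bounce buffer end up to the next granule boundary. */
		start = virt + size;
		memset((void *)start, 0, iova_align(iovad, start) - start);

		/*
		 * The bounce buffer itself is left alone: swiotlb_tbl_map_single()
		 * already initialized it from the original memory.
		 */
	}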

File tree

1 file changed: +104 -0 lines changed

Lines changed: 104 additions & 0 deletions
@@ -0,0 +1,104 @@
iommu/dma: fix zeroing of bounce buffer padding used by untrusted devices

jira LE-1907
cve CVE-2024-35814
Rebuild_History Non-Buildable kernel-4.18.0-553.16.1.el8_10
commit-author Michael Kelley <[email protected]>
commit 2650073f1b5858008c32712f3d9e1e808ce7e967
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-4.18.0-553.16.1.el8_10/2650073f.failed

iommu_dma_map_page() allocates swiotlb memory as a bounce buffer when an
untrusted device wants to map only part of the memory in a granule. The
goal is to disallow the untrusted device having DMA access to unrelated
kernel data that may be sharing the granule. To meet this goal, the
bounce buffer itself is zeroed, and any additional swiotlb memory up to
alloc_size after the bounce buffer end (i.e., "post-padding") is also
zeroed.

However, as of commit 901c7280ca0d ("Reinstate some of "swiotlb: rework
"fix info leak with DMA_FROM_DEVICE"""), swiotlb_tbl_map_single() always
initializes the contents of the bounce buffer to the original memory.
Zeroing the bounce buffer is redundant and probably wrong per the
discussion in that commit. Only the post-padding needs to be zeroed.

Also, when the DMA min_align_mask is non-zero, the allocated bounce
buffer space may not start on a granule boundary. The swiotlb memory
from the granule boundary to the start of the allocated bounce buffer
might belong to some unrelated bounce buffer. So as described in the
"second issue" in [1], it can't be zeroed to protect against untrusted
devices. But as of commit af133562d5af ("swiotlb: extend buffer
pre-padding to alloc_align_mask if necessary"), swiotlb_tbl_map_single()
allocates pre-padding slots when necessary to meet min_align_mask
requirements, making it possible to zero the pre-padding area as well.

Finally, iommu_dma_map_page() uses the swiotlb for untrusted devices
and also for certain kmalloc() memory. Current code does the zeroing
for both cases, but it is needed only for the untrusted device case.

Fix all of this by updating iommu_dma_map_page() to zero both the
pre-padding and post-padding areas, but not the actual bounce buffer.
Do this only in the case where the bounce buffer is used because
of an untrusted device.

[1] https://lore.kernel.org/all/[email protected]/

Signed-off-by: Michael Kelley <[email protected]>
Reviewed-by: Petr Tesarik <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
(cherry picked from commit 2650073f1b5858008c32712f3d9e1e808ce7e967)
Signed-off-by: Jonathan Maple <[email protected]>

# Conflicts:
#	drivers/iommu/dma-iommu.c
diff --cc drivers/iommu/dma-iommu.c
index e531d2c4ba52,c745196bc150..000000000000
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@@ -1015,17 -1152,16 +1015,28 @@@ static dma_addr_t iommu_dma_map_page(st
	 * If both the physical buffer start address and size are
	 * page aligned, we don't need to use a bounce page.
	 */
++<<<<<<< HEAD
+	if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
+		void *padding_start;
+		size_t padding_size, aligned_size;
+
++=======
+	if (dev_use_swiotlb(dev, size, dir) &&
+	    iova_offset(iovad, phys | size)) {
++>>>>>>> 2650073f1b58 (iommu/dma: fix zeroing of bounce buffer padding used by untrusted devices)
		if (!is_swiotlb_active(dev)) {
			dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
			return DMA_MAPPING_ERROR;
		}

++<<<<<<< HEAD
+		aligned_size = iova_align(iovad, size);
+		phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
++=======
+		trace_swiotlb_bounced(dev, phys, size);
+
+		phys = swiotlb_tbl_map_single(dev, phys, size,
++>>>>>>> 2650073f1b58 (iommu/dma: fix zeroing of bounce buffer padding used by untrusted devices)
					      iova_mask(iovad), dir, attrs);

		if (phys == DMA_MAPPING_ERROR)
* Unmerged path drivers/iommu/dma-iommu.c
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 4f41bb5086bf..faa13a06f6c9 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -67,6 +67,11 @@ static inline size_t iova_align(struct iova_domain *iovad, size_t size)
 	return ALIGN(size, iovad->granule);
 }
 
+static inline size_t iova_align_down(struct iova_domain *iovad, size_t size)
+{
+	return ALIGN_DOWN(size, iovad->granule);
+}
+
 static inline dma_addr_t iova_dma_addr(struct iova_domain *iovad, struct iova *iova)
 {
 	return (dma_addr_t)iova->pfn_lo << iova_shift(iovad);
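
The new iova_align_down() helper is the rounding-down counterpart of the existing iova_align(): ALIGN() rounds a size up to the IOVA granule, ALIGN_DOWN() rounds it down, which is what makes the start of the pre-padding computable. An illustrative comparison, assuming a 4 KiB granule (values are examples only):

	/* Assuming iovad->granule == 4096 (illustrative only): */
	iova_align(iovad, 0x1a00);      /* 0x2000: rounds up, marks the end of the post-padding */
	iova_align_down(iovad, 0x1a00); /* 0x1000: rounds down, marks the start of the pre-padding */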
