
Commit 205cb41

swiotlb: fix swiotlb_bounce() to do partial sync's correctly
jira LE-1907
cve CVE-2024-35814
Rebuild_History Non-Buildable kernel-4.18.0-553.16.1.el8_10
commit-author Michael Kelley <[email protected]>
commit e8068f2
Empty-Commit: Cherry-Pick Conflicts during history rebuild. Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-4.18.0-553.16.1.el8_10/e8068f2d.failed

In current code, swiotlb_bounce() may do partial syncs correctly in some circumstances, but may incorrectly fail in other circumstances. The failure cases require both of these to be true:

1) swiotlb_align_offset() returns a non-zero "offset" value
2) the tlb_addr of the partial sync area points into the first "offset" bytes of the _second_ or subsequent swiotlb slot allocated for the mapping

Code added in commit 868c9dd ("swiotlb: add overflow checks to swiotlb_bounce") attempts to WARN on the invalid case where tlb_addr points into the first "offset" bytes of the _first_ allocated slot. But there's no way for swiotlb_bounce() to distinguish the first slot from the second and subsequent slots, so the WARN can be triggered incorrectly when #2 above is true.

Relatedly, current code calculates an adjustment to the orig_addr stored in the swiotlb slot. The adjustment compensates for the difference between the tlb_addr used for the partial sync and the tlb_addr for the full mapping. The adjustment is stored in the local variable tlb_offset. But when #1 and #2 above are true, it's valid for this adjustment to be negative. In that case the arithmetic to adjust orig_addr produces the wrong result because tlb_offset is declared as unsigned.

Fix these problems by removing the over-constraining validations added in 868c9dd. Change the declaration of tlb_offset to be signed instead of unsigned so the adjustment arithmetic works correctly.

Tested with a test-only hack to how swiotlb_tbl_map_single() calls swiotlb_bounce(). Instead of calling swiotlb_bounce() just once for the entire mapped area, do a loop with each iteration doing only a 128-byte partial sync until the entire mapped area is synced. Then with swiotlb=force on the kernel boot line, run a variety of raw disk writes followed by read and verification of all bytes of the written data. The storage device has DMA min_align_mask set, and the writes are done with a variety of original buffer memory address alignments and overall buffer sizes. For many of the combinations, current code triggers the WARN statements, or the data verification fails. With the fixes, no WARNs occur and all verifications pass.

Fixes: 5f89468 ("swiotlb: manipulate orig_addr when tlb_addr has offset")
Fixes: 868c9dd ("swiotlb: add overflow checks to swiotlb_bounce")
Signed-off-by: Michael Kelley <[email protected]>
Dominique Martinet <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
(cherry picked from commit e8068f2)
Signed-off-by: Jonathan Maple <[email protected]>

# Conflicts:
#	kernel/dma/swiotlb.c
1 parent 117c18f commit 205cb41

File tree

1 file changed

+105
-0
lines changed

@@ -0,0 +1,105 @@
swiotlb: fix swiotlb_bounce() to do partial sync's correctly

jira LE-1907
cve CVE-2024-35814
Rebuild_History Non-Buildable kernel-4.18.0-553.16.1.el8_10
commit-author Michael Kelley <[email protected]>
commit e8068f2d756d57a5206fa3180ade365a8c12ed85
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-4.18.0-553.16.1.el8_10/e8068f2d.failed
In current code, swiotlb_bounce() may do partial syncs correctly in
some circumstances, but may incorrectly fail in other circumstances.
The failure cases require both of these to be true:

1) swiotlb_align_offset() returns a non-zero "offset" value
2) the tlb_addr of the partial sync area points into the first
   "offset" bytes of the _second_ or subsequent swiotlb slot allocated
   for the mapping
Code added in commit 868c9ddc182b ("swiotlb: add overflow checks
to swiotlb_bounce") attempts to WARN on the invalid case where
tlb_addr points into the first "offset" bytes of the _first_
allocated slot. But there's no way for swiotlb_bounce() to distinguish
the first slot from the second and subsequent slots, so the WARN
can be triggered incorrectly when #2 above is true.
Relatedly, current code calculates an adjustment to the orig_addr stored
in the swiotlb slot. The adjustment compensates for the difference
between the tlb_addr used for the partial sync and the tlb_addr for the
full mapping. The adjustment is stored in the local variable tlb_offset.
But when #1 and #2 above are true, it's valid for this adjustment to
be negative. In that case the arithmetic to adjust orig_addr produces
the wrong result because tlb_offset is declared as unsigned.
Fix these problems by removing the over-constraining validations added
in 868c9ddc182b. Change the declaration of tlb_offset to be signed
instead of unsigned so the adjustment arithmetic works correctly.
Tested with a test-only hack to how swiotlb_tbl_map_single() calls
swiotlb_bounce(). Instead of calling swiotlb_bounce() just once
for the entire mapped area, do a loop with each iteration doing
only a 128-byte partial sync until the entire mapped area is
synced. Then with swiotlb=force on the kernel boot line, run a
variety of raw disk writes followed by read and verification of
all bytes of the written data. The storage device has DMA
min_align_mask set, and the writes are done with a variety of
original buffer memory address alignments and overall buffer
sizes. For many of the combinations, current code triggers the
WARN statements, or the data verification fails. With the fixes,
no WARNs occur and all verifications pass.
Fixes: 5f89468e2f06 ("swiotlb: manipulate orig_addr when tlb_addr has offset")
Fixes: 868c9ddc182b ("swiotlb: add overflow checks to swiotlb_bounce")
Signed-off-by: Michael Kelley <[email protected]>
Dominique Martinet <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
(cherry picked from commit e8068f2d756d57a5206fa3180ade365a8c12ed85)
Signed-off-by: Jonathan Maple <[email protected]>

# Conflicts:
#	kernel/dma/swiotlb.c
diff --cc kernel/dma/swiotlb.c
index c0e227dcb45e,d57c8837c813..000000000000
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@@ -509,22 -868,18 +509,37 @@@ static void swiotlb_bounce(struct devic
  	if (orig_addr == INVALID_PHYS_ADDR)
  		return;
  
++<<<<<<< HEAD
 +	tlb_offset = tlb_addr & (IO_TLB_SIZE - 1);
 +	orig_addr_offset = swiotlb_align_offset(dev, orig_addr);
 +	if (tlb_offset < orig_addr_offset) {
 +		dev_WARN_ONCE(dev, 1,
 +			"Access before mapping start detected. orig offset %u, requested offset %u.\n",
 +			orig_addr_offset, tlb_offset);
 +		return;
 +	}
 +
 +	tlb_offset -= orig_addr_offset;
 +	if (tlb_offset > alloc_size) {
 +		dev_WARN_ONCE(dev, 1,
 +			"Buffer overflow detected. Allocation size: %zu. Mapping size: %zu+%u.\n",
 +			alloc_size, size, tlb_offset);
 +		return;
 +	}
++=======
+ 	/*
+ 	 * It's valid for tlb_offset to be negative. This can happen when the
+ 	 * "offset" returned by swiotlb_align_offset() is non-zero, and the
+ 	 * tlb_addr is pointing within the first "offset" bytes of the second
+ 	 * or subsequent slots of the allocated swiotlb area. While it's not
+ 	 * valid for tlb_addr to be pointing within the first "offset" bytes
+ 	 * of the first slot, there's no way to check for such an error since
+ 	 * this function can't distinguish the first slot from the second and
+ 	 * subsequent slots.
+ 	 */
+ 	tlb_offset = (tlb_addr & (IO_TLB_SIZE - 1)) -
+ 		     swiotlb_align_offset(dev, 0, orig_addr);
++>>>>>>> e8068f2d756d (swiotlb: fix swiotlb_bounce() to do partial sync's correctly)
  
  	orig_addr += tlb_offset;
  	alloc_size -= tlb_offset;
* Unmerged path kernel/dma/swiotlb.c
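For reference, a hedged sketch of how the conflicted hunk might be resolved: take the upstream side and adapt the swiotlb_align_offset() call to the two-argument signature visible on the HEAD side of the conflict. This is an assumption about the eventual backport, not the applied resolution, and the fragment below is not buildable on its own:

```c
	/* ... inside swiotlb_bounce(), after the INVALID_PHYS_ADDR check ... */

	/*
	 * It's valid for tlb_offset to be negative (see the upstream
	 * comment above); the over-constraining WARN checks are dropped.
	 * Note: kernel-4.18.0-based swiotlb_align_offset() takes only
	 * (dev, orig_addr), unlike upstream's three-argument form.
	 */
	tlb_offset = (tlb_addr & (IO_TLB_SIZE - 1)) -
		     swiotlb_align_offset(dev, orig_addr);

	orig_addr += tlb_offset;
	alloc_size -= tlb_offset;
```

The tlb_offset local must also be redeclared as signed (e.g. int instead of unsigned int) for the negative adjustment to work, per the commit message.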
