
Commit cd252ee

swiotlb: Reinstate page-alignment for mappings >= PAGE_SIZE
jira LE-1907
cve CVE-2024-35814
Rebuild_History Non-Buildable kernel-4.18.0-553.16.1.el8_10
commit-author Will Deacon <[email protected]>
commit 14cebf6
Empty-Commit: Cherry-Pick Conflicts during history rebuild. Will be
included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-4.18.0-553.16.1.el8_10/14cebf68.failed

For swiotlb allocations >= PAGE_SIZE, the slab search historically
adjusted the stride to avoid checking unaligned slots. This had the
side-effect of aligning large mapping requests to PAGE_SIZE, but that
was broken by 0eee5ae ("swiotlb: fix slot alignment checks").

Since this alignment could be relied upon by drivers, reinstate
PAGE_SIZE alignment for swiotlb mappings >= PAGE_SIZE.

Reported-by: Michael Kelley <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Reviewed-by: Petr Tesarik <[email protected]>
Tested-by: Nicolin Chen <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
(cherry picked from commit 14cebf6)
Signed-off-by: Jonathan Maple <[email protected]>

# Conflicts:
#	kernel/dma/swiotlb.c
1 parent 1375c8b commit cd252ee
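For context, the stride trick the message describes can be sketched outside the kernel. The following is a minimal userspace sketch, not the kernel implementation: the constants mirror the kernel defaults (2 KiB IO_TLB_SIZE slots, 4 KiB pages), and stride_for() is a hypothetical name introduced here for illustration.

/*
 * Minimal userspace sketch of the pre-0eee5ae stride arithmetic.
 * Not the kernel implementation; constants mirror kernel defaults.
 */
#include <stdio.h>

#define IO_TLB_SHIFT	11
#define IO_TLB_SIZE	(1UL << IO_TLB_SHIFT)	/* 2 KiB bounce-buffer slots */
#define PAGE_SIZE	4096UL
#define PAGE_MASK	(~(PAGE_SIZE - 1))

/* Hypothetical helper: how many slots the slab search skips per step. */
static unsigned long stride_for(unsigned long alloc_size,
				unsigned long iotlb_align_mask)
{
	/* Allocations >= PAGE_SIZE only consider page-aligned slots. */
	if (alloc_size >= PAGE_SIZE)
		iotlb_align_mask |= ~PAGE_MASK;
	iotlb_align_mask &= ~(IO_TLB_SIZE - 1);

	return (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
}

int main(void)
{
	/* An 8 KiB mapping with no device mask yields stride 2. */
	printf("stride = %lu\n", stride_for(8192, 0));
	return 0;
}

With 2 KiB slots and 4 KiB pages, a stride of 2 visits every other slot, which is exactly the set of page-aligned slots; that skipped-slot side-effect is the alignment guarantee the commit reinstates.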

File tree

1 file changed: +113 -0 lines changed
Lines changed: 113 additions & 0 deletions
@@ -0,0 +1,113 @@
swiotlb: Reinstate page-alignment for mappings >= PAGE_SIZE

jira LE-1907
cve CVE-2024-35814
Rebuild_History Non-Buildable kernel-4.18.0-553.16.1.el8_10
commit-author Will Deacon <[email protected]>
commit 14cebf689a78e8a1c041138af221ef6eac6bc7da
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-4.18.0-553.16.1.el8_10/14cebf68.failed

For swiotlb allocations >= PAGE_SIZE, the slab search historically
adjusted the stride to avoid checking unaligned slots. This had the
side-effect of aligning large mapping requests to PAGE_SIZE, but that
was broken by 0eee5ae10256 ("swiotlb: fix slot alignment checks").

Since this alignment could be relied upon by drivers, reinstate
PAGE_SIZE alignment for swiotlb mappings >= PAGE_SIZE.

Reported-by: Michael Kelley <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Reviewed-by: Petr Tesarik <[email protected]>
Tested-by: Nicolin Chen <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
(cherry picked from commit 14cebf689a78e8a1c041138af221ef6eac6bc7da)
Signed-off-by: Jonathan Maple <[email protected]>

# Conflicts:
#	kernel/dma/swiotlb.c

diff --cc kernel/dma/swiotlb.c
index c0e227dcb45e,86fe172b5958..000000000000
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@@ -611,36 -1012,50 +611,66 @@@ static int swiotlb_do_find_slots(struc
 	unsigned int slot_index;

 	BUG_ON(!nslots);
++<<<<<<< HEAD
+	BUG_ON(area_index >= mem->nareas);
+
+	/*
+	 * For allocations of PAGE_SIZE or larger only look for page aligned
+	 * allocations.
+	 */
+	if (alloc_size >= PAGE_SIZE)
+		iotlb_align_mask |= ~PAGE_MASK;
+	iotlb_align_mask &= ~(IO_TLB_SIZE - 1);
+
+	/*
+	 * For mappings with an alignment requirement don't bother looping to
+	 * unaligned slots once we found an aligned one.
+	 */
+	stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
+
++=======
+	BUG_ON(area_index >= pool->nareas);
+
+	/*
+	 * Historically, swiotlb allocations >= PAGE_SIZE were guaranteed to be
+	 * page-aligned in the absence of any other alignment requirements.
+	 * 'alloc_align_mask' was later introduced to specify the alignment
+	 * explicitly, however this is passed as zero for streaming mappings
+	 * and so we preserve the old behaviour there in case any drivers are
+	 * relying on it.
+	 */
+	if (!alloc_align_mask && !iotlb_align_mask && alloc_size >= PAGE_SIZE)
+		alloc_align_mask = PAGE_SIZE - 1;
+
+	/*
+	 * Ensure that the allocation is at least slot-aligned and update
+	 * 'iotlb_align_mask' to ignore bits that will be preserved when
+	 * offsetting into the allocation.
+	 */
+	alloc_align_mask |= (IO_TLB_SIZE - 1);
+	iotlb_align_mask &= ~alloc_align_mask;
+
+	/*
+	 * For mappings with an alignment requirement don't bother looping to
+	 * unaligned slots once we found an aligned one.
+	 */
+	stride = get_max_slots(max(alloc_align_mask, iotlb_align_mask));
+
++>>>>>>> 14cebf689a78 (swiotlb: Reinstate page-alignment for mappings >= PAGE_SIZE)
 	spin_lock_irqsave(&area->lock, flags);
-	if (unlikely(nslots > pool->area_nslabs - area->used))
+	if (unlikely(nslots > mem->area_nslabs - area->used))
 		goto not_found;

-	slot_base = area_index * pool->area_nslabs;
+	slot_base = area_index * mem->area_nslabs;
 	index = area->index;

-	for (slots_checked = 0; slots_checked < pool->area_nslabs; ) {
-		phys_addr_t tlb_addr;
-
+	for (slots_checked = 0; slots_checked < mem->area_nslabs; ) {
 		slot_index = slot_base + index;
-		tlb_addr = slot_addr(tbl_dma_addr, slot_index);

-		if ((tlb_addr & alloc_align_mask) ||
-		    (orig_addr && (tlb_addr & iotlb_align_mask) !=
-				  (orig_addr & iotlb_align_mask))) {
-			index = wrap_area_index(pool, index + 1);
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, slot_index) &
+		     iotlb_align_mask) != (orig_addr & iotlb_align_mask)) {
+			index = wrap_area_index(mem, index + 1);
 			slots_checked++;
 			continue;
 		}
* Unmerged path kernel/dma/swiotlb.c
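For anyone resolving the unmerged path by hand: the upstream side of the hunk reaches the same guarantee through alloc_align_mask instead of widening iotlb_align_mask directly. The sketch below makes the same assumptions as the one above (userspace, kernel-default constants), and its get_max_slots() is a simplified stand-in for the kernel helper of that name.

#include <stdio.h>

#define IO_TLB_SHIFT	11
#define IO_TLB_SIZE	(1UL << IO_TLB_SHIFT)	/* 2 KiB bounce-buffer slots */
#define PAGE_SIZE	4096UL

/* Simplified stand-in for the kernel's get_max_slots() helper. */
static unsigned long get_max_slots(unsigned long boundary_mask)
{
	return (boundary_mask >> IO_TLB_SHIFT) + 1;
}

static unsigned long stride_upstream(unsigned long alloc_size,
				     unsigned long alloc_align_mask,
				     unsigned long iotlb_align_mask)
{
	/* Preserve historical page alignment for large streaming mappings. */
	if (!alloc_align_mask && !iotlb_align_mask && alloc_size >= PAGE_SIZE)
		alloc_align_mask = PAGE_SIZE - 1;

	/* At least slot-aligned; ignore bits preserved when offsetting. */
	alloc_align_mask |= (IO_TLB_SIZE - 1);
	iotlb_align_mask &= ~alloc_align_mask;

	return get_max_slots(alloc_align_mask > iotlb_align_mask ?
			     alloc_align_mask : iotlb_align_mask);
}

int main(void)
{
	/* 8 KiB streaming mapping, both masks zero: stride 2 again. */
	printf("stride = %lu\n", stride_upstream(8192, 0, 0));
	return 0;
}

Both sketches print stride = 2 for the 8 KiB case, which is why either side of the conflict, once the mem/pool naming is adapted to the 4.18 backport, preserves the PAGE_SIZE guarantee for mappings of a page or more.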
