Commit 0eee5ae

Petr Tesarik authored and Christoph Hellwig committed
swiotlb: fix slot alignment checks
Explicit alignment and page alignment are used only to calculate the stride, not when checking actual slot physical address.

Originally, only page alignment was implemented, and that worked, because the whole SWIOTLB is allocated on a page boundary, so aligning the start index was sufficient to ensure a page-aligned slot.

When commit 1f221a0 ("swiotlb: respect min_align_mask") added support for min_align_mask, the index could be incremented in the search loop, potentially finding an unaligned slot if minimum device alignment is between IO_TLB_SIZE and PAGE_SIZE. The bug could go unnoticed, because the slot size is 2 KiB, and the most common page size is 4 KiB, so there is no alignment value in between.

IIUC the intention has been to find a slot that conforms to all alignment constraints: device minimum alignment, an explicit alignment (given as function parameter) and optionally page alignment (if allocation size is >= PAGE_SIZE). The most restrictive mask can be trivially computed with logical AND. The rest can stay.

Fixes: 1f221a0 ("swiotlb: respect min_align_mask")
Fixes: e81e99b ("swiotlb: Support aligned swiotlb buffers")
Signed-off-by: Petr Tesarik <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
1 parent 39e7d2a commit 0eee5ae

File tree

1 file changed: +10 -6 lines changed


kernel/dma/swiotlb.c

Lines changed: 10 additions & 6 deletions
```diff
@@ -634,22 +634,26 @@ static int swiotlb_do_find_slots(struct device *dev, int area_index,
 	BUG_ON(!nslots);
 	BUG_ON(area_index >= mem->nareas);
 
+	/*
+	 * For allocations of PAGE_SIZE or larger only look for page aligned
+	 * allocations.
+	 */
+	if (alloc_size >= PAGE_SIZE)
+		iotlb_align_mask &= PAGE_MASK;
+	iotlb_align_mask &= alloc_align_mask;
+
 	/*
 	 * For mappings with an alignment requirement don't bother looping to
-	 * unaligned slots once we found an aligned one. For allocations of
-	 * PAGE_SIZE or larger only look for page aligned allocations.
+	 * unaligned slots once we found an aligned one.
 	 */
 	stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
-	if (alloc_size >= PAGE_SIZE)
-		stride = max(stride, stride << (PAGE_SHIFT - IO_TLB_SHIFT));
-	stride = max(stride, (alloc_align_mask >> IO_TLB_SHIFT) + 1);
 
 	spin_lock_irqsave(&area->lock, flags);
 	if (unlikely(nslots > mem->area_nslabs - area->used))
 		goto not_found;
 
 	slot_base = area_index * mem->area_nslabs;
-	index = wrap_area_index(mem, ALIGN(area->index, stride));
+	index = area->index;
 
 	for (slots_checked = 0; slots_checked < mem->area_nslabs; ) {
 		slot_index = slot_base + index;
```
