
Commit 91ef26f

Author: Christoph Hellwig
dma-direct: relax addressability checks in dma_direct_supported
dma_direct_supported tries to find the minimum addressable bitmask based on the end pfn, plus optional per-architecture magic that communicates the size of the ZONE_DMA usable for bounce buffering. But between DMA offsets that can change per device (or sometimes even per region), the fact that ZONE_DMA isn't even guaranteed to cover the lowest addresses, and the lack of proper interfaces to the MM code, this check fails for at least one arm subarchitecture.

As all the legacy DMA implementations have supported 32-bit DMA masks, and 32-bit masks are guaranteed to always work by the API contract (using bounce buffers if needed), we can short-circuit the complicated check and always return true without breaking existing assumptions. Hopefully we can properly clean up the interaction with the arch-defined zones and the bootmem allocator eventually.

Fixes: ad3c7b1 ("arm: use swiotlb for bounce buffering on LPAE configs")
Reported-by: Peter Ujfalusi <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Tested-by: Peter Ujfalusi <[email protected]>
1 parent 8c8c5a4 commit 91ef26f
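For context on the API contract the message leans on, here is a minimal driver-side sketch (the foo_probe() routine is hypothetical, not part of this patch): after this change, dma-direct accepts any mask of 32 bits or wider, falling back to bounce buffers when memory isn't directly addressable.

#include <linux/dma-mapping.h>

/* Hypothetical probe routine, for illustration only. */
static int foo_probe(struct device *dev)
{
	/*
	 * Per the DMA API contract, a 32-bit mask must always be
	 * accepted; dma-direct bounce-buffers (e.g. via swiotlb)
	 * when a buffer isn't directly addressable by the device.
	 */
	return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
}

With dma-direct this call can no longer fail for a 32-bit mask, which is exactly the assumption the patch encodes.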


kernel/dma/direct.c

Lines changed: 11 additions & 13 deletions
@@ -472,28 +472,26 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
 }
 #endif /* CONFIG_MMU */
 
-/*
- * Because 32-bit DMA masks are so common we expect every architecture to be
- * able to satisfy them - either by not supporting more physical memory, or by
- * providing a ZONE_DMA32.  If neither is the case, the architecture needs to
- * use an IOMMU instead of the direct mapping.
- */
 int dma_direct_supported(struct device *dev, u64 mask)
 {
-	u64 min_mask;
-
-	if (IS_ENABLED(CONFIG_ZONE_DMA))
-		min_mask = DMA_BIT_MASK(zone_dma_bits);
-	else
-		min_mask = DMA_BIT_MASK(32);
+	u64 min_mask = (max_pfn - 1) << PAGE_SHIFT;
 
-	min_mask = min_t(u64, min_mask, (max_pfn - 1) << PAGE_SHIFT);
+	/*
+	 * Because 32-bit DMA masks are so common we expect every architecture
+	 * to be able to satisfy them - either by not supporting more physical
+	 * memory, or by providing a ZONE_DMA32.  If neither is the case, the
+	 * architecture needs to use an IOMMU instead of the direct mapping.
+	 */
+	if (mask >= DMA_BIT_MASK(32))
+		return 1;
 
 	/*
 	 * This check needs to be against the actual bit mask value, so
 	 * use __phys_to_dma() here so that the SME encryption mask isn't
 	 * part of the check.
 	 */
+	if (IS_ENABLED(CONFIG_ZONE_DMA))
+		min_mask = min_t(u64, min_mask, DMA_BIT_MASK(zone_dma_bits));
 	return mask >= __phys_to_dma(dev, min_mask);
 }
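As a worked example of the new control flow, the following userspace model mirrors the patched function under assumed values (4 KiB pages, 4 GiB of RAM, a 24-bit ZONE_DMA, and a zero bus offset standing in for __phys_to_dma()); it is a sketch, not kernel code.

#include <stdint.h>
#include <stdio.h>

#define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))
#define PAGE_SHIFT	12			/* assumed: 4 KiB pages */

static const uint64_t max_pfn = 0x100000;	/* assumed: 4 GiB of RAM */
static const unsigned int zone_dma_bits = 24;	/* assumed: ISA-style ZONE_DMA */

/* Mirrors the patched dma_direct_supported(), minus the bus offset. */
static int model_dma_direct_supported(uint64_t mask)
{
	uint64_t min_mask = (max_pfn - 1) << PAGE_SHIFT;

	/* 32-bit and wider masks are now unconditionally supported. */
	if (mask >= DMA_BIT_MASK(32))
		return 1;

	/* Smaller masks are still checked against ZONE_DMA. */
	if (min_mask > DMA_BIT_MASK(zone_dma_bits))
		min_mask = DMA_BIT_MASK(zone_dma_bits);
	return mask >= min_mask;
}

int main(void)
{
	printf("64-bit: %d\n", model_dma_direct_supported(DMA_BIT_MASK(64)));	/* 1 */
	printf("32-bit: %d\n", model_dma_direct_supported(DMA_BIT_MASK(32)));	/* 1 */
	printf("24-bit: %d\n", model_dma_direct_supported(DMA_BIT_MASK(24)));	/* 1 */
	printf("20-bit: %d\n", model_dma_direct_supported(DMA_BIT_MASK(20)));	/* 0 */
	return 0;
}

Under these assumptions every mask of 24 bits or more is accepted, while a 20-bit mask is rejected because it cannot even cover ZONE_DMA; in the kernel proper, __phys_to_dma() additionally folds in any per-device bus offset.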
