
Commit 972bf25

John Garry via iommu authored and Joerg Roedel committed
iommu/iova: Move fast alloc size roundup into alloc_iova_fast()
It really is a property of the IOVA rcache code that we need to alloc a power-of-2 size, so relocate the functionality to resize into alloc_iova_fast(), rather than the callsites.

Signed-off-by: John Garry <[email protected]>
Acked-by: Will Deacon <[email protected]>
Reviewed-by: Xie Yongji <[email protected]>
Acked-by: Jason Wang <[email protected]>
Acked-by: Michael S. Tsirkin <[email protected]>
Acked-by: Robin Murphy <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Joerg Roedel <[email protected]>
1 parent 9abe2ac commit 972bf25
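For illustration, here is a minimal user-space sketch of the rounding rule that this patch centralises in alloc_iova_fast(). It is not kernel code: roundup_pow_of_two() is reimplemented in plain C, and IOVA_RANGE_CACHE_MAX_SIZE is assumed to be 6, its mainline value around this commit.

/* Minimal user-space sketch (not kernel code) of the size rounding that
 * alloc_iova_fast() now applies itself. Assumes IOVA_RANGE_CACHE_MAX_SIZE
 * is 6, its mainline value around this commit.
 */
#include <stdio.h>

#define IOVA_RANGE_CACHE_MAX_SIZE 6	/* log of max cached IOVA range size (in pages) */

/* user-space stand-in for the kernel's roundup_pow_of_two() */
static unsigned long roundup_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* the check that used to live in each caller and now lives in alloc_iova_fast() */
static unsigned long cacheable_size(unsigned long size)
{
	if (size < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
		size = roundup_pow_of_two(size);
	return size;
}

int main(void)
{
	unsigned long sizes[] = { 1, 3, 5, 8, 17, 31, 32, 100 };

	for (int i = 0; i < 8; i++)
		printf("%lu pages -> %lu pages\n", sizes[i], cacheable_size(sizes[i]));
	return 0;
}

With that assumed value, any request smaller than 32 pages is padded to the next power of two so it can safely round-trip through the rcaches, while larger requests are left alone, exactly as the removed caller-side checks did.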

3 files changed, 9 insertions(+), 16 deletions(-)

drivers/iommu/dma-iommu.c

Lines changed: 0 additions & 8 deletions
@@ -442,14 +442,6 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 
 	shift = iova_shift(iovad);
 	iova_len = size >> shift;
-	/*
-	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
-	 * will come back to bite us badly, so we have to waste a bit of space
-	 * rounding up anything cacheable to make sure that can't happen. The
-	 * order of the unadjusted size will still match upon freeing.
-	 */
-	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
-		iova_len = roundup_pow_of_two(iova_len);
 
 	dma_limit = min_not_zero(dma_limit, dev->bus_dma_limit);

drivers/iommu/iova.c

Lines changed: 9 additions & 0 deletions
@@ -497,6 +497,15 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 	unsigned long iova_pfn;
 	struct iova *new_iova;
 
+	/*
+	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
+	 * will come back to bite us badly, so we have to waste a bit of space
+	 * rounding up anything cacheable to make sure that can't happen. The
+	 * order of the unadjusted size will still match upon freeing.
+	 */
+	if (size < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
+		size = roundup_pow_of_two(size);
+
 	iova_pfn = iova_rcache_get(iovad, size, limit_pfn + 1);
 	if (iova_pfn)
 		return iova_pfn;
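The comment's claim that "the order of the unadjusted size will still match upon freeing" can be checked directly. The rcaches index entries by allocation order (order_base_2() of the size in pages), so a caller that frees with its original, unrounded size still hits the bucket its rounded-up allocation came from. A user-space sketch, again with stand-in helpers rather than the kernel ones:

/* User-space sketch (not kernel code): for every size below the cache
 * cutoff, the order of the unadjusted size equals the order of the
 * rounded-up size used at allocation time.
 */
#include <assert.h>
#include <stdio.h>

/* stand-ins for the kernel helpers of the same names */
static unsigned long roundup_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

static unsigned int order_base_2(unsigned long n)
{
	unsigned int order = 0;

	while ((1UL << order) < n)
		order++;
	return order;
}

int main(void)
{
	/* 32 pages assumes IOVA_RANGE_CACHE_MAX_SIZE == 6, as above */
	for (unsigned long size = 1; size < 32; size++)
		assert(order_base_2(size) == order_base_2(roundup_pow_of_two(size)));
	printf("alloc and free orders match for all cacheable sizes\n");
	return 0;
}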

drivers/vdpa/vdpa_user/iova_domain.c

Lines changed: 0 additions & 8 deletions
@@ -292,14 +292,6 @@ vduse_domain_alloc_iova(struct iova_domain *iovad,
 	unsigned long iova_len = iova_align(iovad, size) >> shift;
 	unsigned long iova_pfn;
 
-	/*
-	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
-	 * will come back to bite us badly, so we have to waste a bit of space
-	 * rounding up anything cacheable to make sure that can't happen. The
-	 * order of the unadjusted size will still match upon freeing.
-	 */
-	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
-		iova_len = roundup_pow_of_two(iova_len);
 	iova_pfn = alloc_iova_fast(iovad, iova_len, limit >> shift, true);
 
 	return iova_pfn << shift;
