Commit 7a7c5ba

rmurphy-arm authored and joergroedel committed
iommu: Indicate queued flushes via gather data
Since iommu_iotlb_gather exists to help drivers optimise flushing for a
given unmap request, it is also the logical place to indicate whether the
unmap is strict or not, and thus help them further optimise for whether
to expect a sync or a flush_all subsequently. As part of that, it also
seems fair to make the flush queue code take responsibility for enforcing
the really subtle ordering requirement it brings, so that we don't need
to worry about forgetting that if new drivers want to add flush queue
support, and can consolidate the existing versions.

While we're adding to the kerneldoc, also fill in some info for
@freelist which was overlooked previously.

Signed-off-by: Robin Murphy <[email protected]>
Link: https://lore.kernel.org/r/bf5f8e2ad84e48c712ccbf80fa8c610594c7595f.1628682049.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <[email protected]>
1 parent 8d97124 commit 7a7c5ba

File tree: 3 files changed, +15 -1 lines


drivers/iommu/dma-iommu.c (1 addition, 0 deletions)

@@ -481,6 +481,7 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 	dma_addr -= iova_off;
 	size = iova_align(iovad, size + iova_off);
 	iommu_iotlb_gather_init(&iotlb_gather);
+	iotlb_gather.queued = cookie->fq_domain;

 	unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);
 	WARN_ON(unmapped != size);

drivers/iommu/iova.c (7 additions, 0 deletions)

@@ -637,6 +637,13 @@ void queue_iova(struct iova_domain *iovad,
 	unsigned long flags;
 	unsigned idx;

+	/*
+	 * Order against the IOMMU driver's pagetable update from unmapping
+	 * @pte, to guarantee that iova_domain_flush() observes that if called
+	 * from a different CPU before we release the lock below.
+	 */
+	smp_wmb();
+
 	spin_lock_irqsave(&fq->lock, flags);

 	/*

include/linux/iommu.h (7 additions, 1 deletion)

@@ -161,16 +161,22 @@ enum iommu_dev_features {
  * @start: IOVA representing the start of the range to be flushed
  * @end: IOVA representing the end of the range to be flushed (inclusive)
  * @pgsize: The interval at which to perform the flush
+ * @freelist: Removed pages to free after sync
+ * @queued: Indicates that the flush will be queued
  *
  * This structure is intended to be updated by multiple calls to the
  * ->unmap() function in struct iommu_ops before eventually being passed
- * into ->iotlb_sync().
+ * into ->iotlb_sync(). Drivers can add pages to @freelist to be freed after
+ * ->iotlb_sync() or ->iotlb_flush_all() have cleared all cached references to
+ * them. @queued is set to indicate when ->iotlb_flush_all() will be called
+ * later instead of ->iotlb_sync(), so drivers may optimise accordingly.
  */
 struct iommu_iotlb_gather {
 	unsigned long start;
 	unsigned long end;
 	size_t pgsize;
 	struct page *freelist;
+	bool queued;
 };

 /**
