Commit 39b3b3c

jpbrucker authored and joergroedel committed
iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE
We don't currently support IOMMUs with a page granule larger than the system page size. The IOVA allocator has a BUG_ON() in this case, and VFIO has a WARN_ON(). Removing these obstacles ranges doesn't seem possible without major changes to the DMA API and VFIO. Some callers of iommu_map(), for example, want to map multiple page-aligned regions adjacent to each others for scatter-gather purposes. Even in simple DMA API uses, a call to dma_map_page() would let the endpoint access neighbouring memory. And VFIO users cannot ensure that their virtual address buffer is physically contiguous at the IOMMU granule. Rather than triggering the IOVA BUG_ON() on mismatched page sizes, abort the vdomain finalise() with an error message. We could simply abort the viommu probe(), but an upcoming extension to virtio-iommu will allow setting different page masks for each endpoint. Reported-by: Bharat Bhushan <[email protected]> Signed-off-by: Jean-Philippe Brucker <[email protected]> Reviewed-by: Bharat Bhushan <[email protected]> Reviewed-by: Eric Auger <[email protected]> Reviewed-by: Robin Murphy <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joerg Roedel <[email protected]>
1 parent 7062af3 commit 39b3b3c

File tree

1 file changed: +12 -2 lines changed


drivers/iommu/virtio-iommu.c

Lines changed: 12 additions & 2 deletions
@@ -607,12 +607,22 @@ static struct iommu_domain *viommu_domain_alloc(unsigned type)
 	return &vdomain->domain;
 }
 
-static int viommu_domain_finalise(struct viommu_dev *viommu,
+static int viommu_domain_finalise(struct viommu_endpoint *vdev,
 				  struct iommu_domain *domain)
 {
 	int ret;
+	unsigned long viommu_page_size;
+	struct viommu_dev *viommu = vdev->viommu;
 	struct viommu_domain *vdomain = to_viommu_domain(domain);
 
+	viommu_page_size = 1UL << __ffs(viommu->pgsize_bitmap);
+	if (viommu_page_size > PAGE_SIZE) {
+		dev_err(vdev->dev,
+			"granule 0x%lx larger than system page size 0x%lx\n",
+			viommu_page_size, PAGE_SIZE);
+		return -EINVAL;
+	}
+
 	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
 			      viommu->last_domain, GFP_KERNEL);
 	if (ret < 0)
@@ -659,7 +669,7 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 		 * Properly initialize the domain now that we know which viommu
 		 * owns it.
 		 */
-		ret = viommu_domain_finalise(vdev->viommu, domain);
+		ret = viommu_domain_finalise(vdev, domain);
 	} else if (vdomain->viommu != vdev->viommu) {
 		dev_err(dev, "cannot attach to foreign vIOMMU\n");
 		ret = -EXDEV;
