
Commit 2650073

mhklinux authored and Christoph Hellwig committed
iommu/dma: fix zeroing of bounce buffer padding used by untrusted devices
iommu_dma_map_page() allocates swiotlb memory as a bounce buffer when an untrusted device wants to map only part of the memory in a granule. The goal is to prevent the untrusted device from having DMA access to unrelated kernel data that may be sharing the granule. To meet this goal, the bounce buffer itself is zeroed, and any additional swiotlb memory up to alloc_size after the bounce buffer end (i.e., "post-padding") is also zeroed.

However, as of commit 901c728 ("Reinstate some of "swiotlb: rework "fix info leak with DMA_FROM_DEVICE"""), swiotlb_tbl_map_single() always initializes the contents of the bounce buffer to the original memory. Zeroing the bounce buffer is redundant and probably wrong per the discussion in that commit. Only the post-padding needs to be zeroed.

Also, when the DMA min_align_mask is non-zero, the allocated bounce buffer space may not start on a granule boundary. The swiotlb memory from the granule boundary to the start of the allocated bounce buffer might belong to some unrelated bounce buffer. So as described in the "second issue" in [1], it can't be zeroed to protect against untrusted devices. But as of commit af13356 ("swiotlb: extend buffer pre-padding to alloc_align_mask if necessary"), swiotlb_tbl_map_single() allocates pre-padding slots when necessary to meet min_align_mask requirements, making it possible to zero the pre-padding area as well.

Finally, iommu_dma_map_page() uses the swiotlb for untrusted devices and also for certain kmalloc() memory. Current code does the zeroing for both cases, but it is needed only for the untrusted device case.

Fix all of this by updating iommu_dma_map_page() to zero both the pre-padding and post-padding areas, but not the actual bounce buffer. Do this only in the case where the bounce buffer is used because of an untrusted device.

[1] https://lore.kernel.org/all/[email protected]/

Signed-off-by: Michael Kelley <[email protected]>
Reviewed-by: Petr Tesarik <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
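As an aside, the padding arithmetic the fix performs can be sketched in plain C. Everything below is illustrative, not kernel code: the granule size, addresses, and length are made-up values, and ALIGN()/ALIGN_DOWN() are userspace stand-ins mirroring the kernel macros (iova_align()/iova_align_down() reduce to these over iovad->granule):

#include <stdio.h>
#include <stddef.h>

/* Userspace stand-ins for the kernel's ALIGN()/ALIGN_DOWN() macros
 * (valid for power-of-two alignment). */
#define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

int main(void)
{
	size_t granule = 4096;           /* assumed IOVA granule (page size) */
	size_t virt    = 0x10000 + 512;  /* bounce buffer start; not granule-aligned */
	size_t size    = 1000;           /* length of the mapped data */

	/* Pre-padding: from the granule boundary below the buffer up to its start. */
	size_t pre_start = ALIGN_DOWN(virt, granule);
	size_t pre_len   = virt - pre_start;

	/* Post-padding: from the buffer end up to the next granule boundary. */
	size_t post_start = virt + size;
	size_t post_len   = ALIGN(post_start, granule) - post_start;

	printf("zero pre  [%#zx, +%zu)\n", pre_start, pre_len);
	printf("zero post [%#zx, +%zu)\n", post_start, post_len);
	return 0;
}

The bounce buffer proper, [virt, virt + size), is deliberately left alone: swiotlb_tbl_map_single() has already filled it from the original memory.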
1 parent 327e2c9 commit 2650073

File tree: 2 files changed (+21, -13 lines changed)


drivers/iommu/dma-iommu.c

Lines changed: 16 additions & 13 deletions
@@ -1154,34 +1154,37 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	 */
 	if (dev_use_swiotlb(dev, size, dir) &&
 	    iova_offset(iovad, phys | size)) {
-		void *padding_start;
-		size_t padding_size, aligned_size;
-
 		if (!is_swiotlb_active(dev)) {
 			dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
 			return DMA_MAPPING_ERROR;
 		}
 
 		trace_swiotlb_bounced(dev, phys, size);
 
-		aligned_size = iova_align(iovad, size);
 		phys = swiotlb_tbl_map_single(dev, phys, size,
 					      iova_mask(iovad), dir, attrs);
 
 		if (phys == DMA_MAPPING_ERROR)
 			return DMA_MAPPING_ERROR;
 
-		/* Cleanup the padding area. */
-		padding_start = phys_to_virt(phys);
-		padding_size = aligned_size;
+		/*
+		 * Untrusted devices should not see padding areas with random
+		 * leftover kernel data, so zero the pre- and post-padding.
+		 * swiotlb_tbl_map_single() has initialized the bounce buffer
+		 * proper to the contents of the original memory buffer.
+		 */
+		if (dev_is_untrusted(dev)) {
+			size_t start, virt = (size_t)phys_to_virt(phys);
 
-		if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-		    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) {
-			padding_start += size;
-			padding_size -= size;
-		}
+			/* Pre-padding */
+			start = iova_align_down(iovad, virt);
+			memset((void *)start, 0, virt - start);
 
-		memset(padding_start, 0, padding_size);
+			/* Post-padding */
+			start = virt + size;
+			memset((void *)start, 0,
+			       iova_align(iovad, start) - start);
+		}
 	}
 
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
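Note the design choice here: the old code zeroed the whole iova-aligned region unconditionally, even when the swiotlb was used only for kmalloc() alignment reasons. The new code zeroes nothing for trusted devices, and for untrusted ones touches only the two padding ranges, relying on swiotlb_tbl_map_single() to have filled the buffer proper from the original memory.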

include/linux/iova.h

Lines changed: 5 additions & 0 deletions
@@ -65,6 +65,11 @@ static inline size_t iova_align(struct iova_domain *iovad, size_t size)
 	return ALIGN(size, iovad->granule);
 }
 
+static inline size_t iova_align_down(struct iova_domain *iovad, size_t size)
+{
+	return ALIGN_DOWN(size, iovad->granule);
+}
+
 static inline dma_addr_t iova_dma_addr(struct iova_domain *iovad, struct iova *iova)
 {
 	return (dma_addr_t)iova->pfn_lo << iova_shift(iovad);
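As a quick sanity check of the new helper's semantics, a minimal userspace model can be compiled standalone. The one-field struct iova_domain and the ALIGN()/ALIGN_DOWN() macro definitions below are stand-ins for the kernel's versions; only the granule field matters here:

#include <assert.h>
#include <stddef.h>

/* Hypothetical userspace model: just the granule field of iova_domain. */
struct iova_domain { size_t granule; };

#define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

static inline size_t iova_align(struct iova_domain *iovad, size_t size)
{
	return ALIGN(size, iovad->granule);
}

static inline size_t iova_align_down(struct iova_domain *iovad, size_t size)
{
	return ALIGN_DOWN(size, iovad->granule);
}

int main(void)
{
	struct iova_domain d = { .granule = 4096 };

	assert(iova_align(&d, 4097) == 8192);       /* rounds up */
	assert(iova_align_down(&d, 4097) == 4096);  /* rounds down */
	assert(iova_align(&d, 4096) == 4096);       /* aligned input unchanged */
	assert(iova_align_down(&d, 4096) == 4096);
	return 0;
}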
