Commit 972aa49
Provide a new two step DMA mapping API
Currently the only efficient way to map a complex memory description through
the DMA API is by using the scatterlist APIs. The SG APIs are unique in that
they efficiently combine the two fundamental operations of sizing and
allocating a large IOVA window from the IOMMU and processing all the
per-address swiotlb/flushing/p2p/map details.

This uniqueness has been a long standing pain point as the scatterlist API
is mandatory, but expensive to use. It prevents any kind of optimization or
feature improvement (such as avoiding struct page for P2P) due to the
impossibility of improving the scatterlist.

Several approaches have been explored to expand the DMA API with additional
scatterlist-like structures (BIO, rlist); instead, this series splits up the
DMA API to allow callers to bring their own data structure.

The API is split up into parts:

- Allocate IOVA space: do any pre-allocation required. This is done based
  on the caller supplying some details about how much IOMMU address space
  it would need in the worst case.

- Map and unmap relevant structures to pre-allocated IOVA space: perform
  the actual mapping into the pre-allocated IOVA. This is very similar to
  dma_map_page().

Thanks

Signed-off-by: Leon Romanovsky <[email protected]>
2 parents 4ffb62f + 3ee7d94 commit 972aa49

File tree

10 files changed: +764, -201 lines changed


Documentation/core-api/dma-api.rst

Lines changed: 71 additions & 0 deletions
@@ -530,6 +530,77 @@ routines, e.g.:::
	....
	}

Part Ie - IOVA-based DMA mappings
---------------------------------

These APIs allow a very efficient mapping when using an IOMMU. They are an
optional path that requires extra code and are only recommended for drivers
where DMA mapping performance, or the space usage for storing the DMA
addresses, matters. All the considerations from the previous section apply
here as well.
::

	bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
			phys_addr_t phys, size_t size);

Is used to try to allocate IOVA space for a mapping operation. If it returns
false this API can't be used for the given device and the normal streaming
DMA mapping API should be used. The ``struct dma_iova_state`` is allocated
by the driver and must be kept around until unmap time.
::

	static inline bool dma_use_iova(struct dma_iova_state *state)

Can be used by the driver to check if the IOVA-based API is used after a
call to ``dma_iova_try_alloc()``. This can be useful in the unmap path.
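Put together, a driver's map and unmap paths might look like the following sketch. The surrounding pieces (``struct my_request``, the ``my_map_fallback()``/``my_unmap_fallback()`` helpers, and the ``DMA_TO_DEVICE``/zero-``attrs`` choices) are hypothetical illustrations, not part of this API:

```c
/* Illustrative sketch only: 'struct my_request' and the fallback helpers
 * are hypothetical driver code; only dma_iova_try_alloc(), dma_use_iova()
 * and dma_iova_destroy() come from this API. */
struct my_request {
	struct dma_iova_state state;	/* must live until unmap time */
};

static int my_map(struct device *dev, struct my_request *req,
		  phys_addr_t phys, size_t total_len)
{
	if (!dma_iova_try_alloc(dev, &req->state, phys, total_len)) {
		/* IOVA path unavailable for this device: fall back to the
		 * normal streaming DMA mapping API (e.g. dma_map_page()). */
		return my_map_fallback(dev, req, phys, total_len);
	}
	/* IOVA window reserved; map ranges with dma_iova_link() next. */
	return 0;
}

static void my_unmap(struct device *dev, struct my_request *req, size_t len)
{
	if (dma_use_iova(&req->state))
		dma_iova_destroy(dev, &req->state, len, DMA_TO_DEVICE, 0);
	else
		my_unmap_fallback(dev, req, len);	/* hypothetical */
}
```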
::

	int dma_iova_link(struct device *dev, struct dma_iova_state *state,
			phys_addr_t phys, size_t offset, size_t size,
			enum dma_data_direction dir, unsigned long attrs);

Is used to link ranges to the IOVA previously allocated. The start of all
but the first call to dma_iova_link for a given state must be aligned
to the DMA merge boundary returned by ``dma_get_merge_boundary()``, and
the size of all but the last range must be aligned to the DMA merge boundary
as well.
::

	int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
			size_t offset, size_t size);

Must be called to sync the IOMMU page tables for the IOVA range mapped by
one or more calls to ``dma_iova_link()``.
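As an illustration, a driver might link a list of physical ranges at increasing offsets into the window and then sync once. ``struct my_range`` and the ``DMA_TO_DEVICE``/zero-``attrs`` values are hypothetical; the alignment comment restates the merge-boundary rules above:

```c
struct my_range {		/* hypothetical driver bookkeeping */
	phys_addr_t phys;
	size_t len;
};

static int my_link_all(struct device *dev, struct dma_iova_state *state,
		       const struct my_range *ranges, int nr)
{
	size_t offset = 0;
	int i, ret;

	for (i = 0; i < nr; i++) {
		/* All but the first range must start, and all but the last
		 * must end, on the dma_get_merge_boundary() alignment. */
		ret = dma_iova_link(dev, state, ranges[i].phys, offset,
				    ranges[i].len, DMA_TO_DEVICE, 0);
		if (ret)
			return ret;
		offset += ranges[i].len;
	}

	/* Publish the IOMMU page table updates for the whole mapped range. */
	return dma_iova_sync(dev, state, 0, offset);
}
```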
For drivers that use a one-shot mapping, all ranges can be unmapped and the
IOVA freed by calling:

::

	void dma_iova_destroy(struct device *dev, struct dma_iova_state *state,
			size_t mapped_len, enum dma_data_direction dir,
			unsigned long attrs);
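For example, a one-shot user can tear everything down with a single call. In this sketch ``mapped_len`` stands for the total length linked earlier, and the direction/attrs values are illustrative:

```c
/* Unmap all linked ranges and free the IOVA window in one step.
 * 'mapped_len' is the total length covered by earlier dma_iova_link()
 * calls for this state. */
dma_iova_destroy(dev, &state, mapped_len, DMA_TO_DEVICE, 0);
```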
Alternatively drivers can dynamically manage the IOVA space by unmapping
and mapping individual regions. In that case

::

	void dma_iova_unlink(struct device *dev, struct dma_iova_state *state,
			size_t offset, size_t size, enum dma_data_direction dir,
			unsigned long attrs);

is used to unmap a range previously mapped, and

::

	void dma_iova_free(struct device *dev, struct dma_iova_state *state);

is used to free the IOVA space. All regions must have been unmapped using
``dma_iova_unlink()`` before calling ``dma_iova_free()``.
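A driver managing the space dynamically might, as a hedged sketch, unlink one region, link a replacement at the same offset, and only free the IOVA once everything has been unlinked. ``new_phys``, ``offset`` and ``len`` are illustrative placeholders:

```c
/* Replace one previously linked region at the same IOVA offset. */
dma_iova_unlink(dev, &state, offset, len, DMA_TO_DEVICE, 0);
ret = dma_iova_link(dev, &state, new_phys, offset, len, DMA_TO_DEVICE, 0);
if (!ret)
	ret = dma_iova_sync(dev, &state, offset, len);

/* Later, once every region has been unlinked: */
dma_iova_free(dev, &state);
```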
Part II - Non-coherent DMA allocations
--------------------------------------
