Commit a5b0eb3

Paul Durrant authored and andyhhp committed
x86/mm/p2m: stop checking for IOMMU shared page tables in mmio_order()
Now that the iommu_map() and iommu_unmap() operations take an order parameter and elide flushing, there's no strong reason why modifying MMIO ranges in the p2m should be restricted to a 4k granularity simply because the IOMMU is enabled but shared page tables are not in operation.

Signed-off-by: Paul Durrant <[email protected]>
Reviewed-by: Jan Beulich <[email protected]>
1 parent e8afe11 commit a5b0eb3

1 file changed (+2 -3 lines)


xen/arch/x86/mm/p2m.c

Lines changed: 2 additions & 3 deletions
@@ -2210,13 +2210,12 @@ static unsigned int mmio_order(const struct domain *d,
                                unsigned long start_fn, unsigned long nr)
 {
     /*
-     * Note that the !iommu_use_hap_pt() here has three effects:
-     * - cover iommu_{,un}map_page() not having an "order" input yet,
+     * Note that the !hap_enabled() here has two effects:
      * - exclude shadow mode (which doesn't support large MMIO mappings),
      * - exclude PV guests, should execution reach this code for such.
      * So be careful when altering this.
      */
-    if ( !iommu_use_hap_pt(d) ||
+    if ( !hap_enabled(d) ||
          (start_fn & ((1UL << PAGE_ORDER_2M) - 1)) || !(nr >> PAGE_ORDER_2M) )
         return PAGE_ORDER_4K;
 

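For illustration, the 2M-granularity test that remains after this change can be exercised on its own. Below is a minimal standalone C sketch; the helper name order_for_range is hypothetical, it is not Xen code, and it omits the hap_enabled() check shown in the diff. It only demonstrates how the start-frame alignment and range-length tests select between 4k and 2M orders.

#include <stdio.h>

/* Mirrors Xen's page-order constants: order 0 = 4k pages, order 9 = 2M. */
#define PAGE_ORDER_4K 0
#define PAGE_ORDER_2M 9

/* Hypothetical standalone helper: returns the largest order usable for a
 * range of 'nr' frames starting at frame number 'start_fn'. */
static unsigned int order_for_range(unsigned long start_fn, unsigned long nr)
{
    /* A 2M mapping needs a 2M-aligned start frame and at least 512 frames. */
    if ( (start_fn & ((1UL << PAGE_ORDER_2M) - 1)) || !(nr >> PAGE_ORDER_2M) )
        return PAGE_ORDER_4K;

    return PAGE_ORDER_2M;
}

int main(void)
{
    printf("%u\n", order_for_range(0x200, 512)); /* aligned, long enough: 9 */
    printf("%u\n", order_for_range(0x201, 512)); /* misaligned start: 0 */
    printf("%u\n", order_for_range(0x400, 100)); /* too short for 2M: 0 */
    return 0;
}

In the real mmio_order() the result is further constrained by the HAP checks shown in the diff above.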