
Commit 767c784

Marek Vasut authored, Lorenzo Pieralisi committed
PCI: rcar: Recalculate inbound range alignment for each controller entry
Due to hardware constraints, the size of each inbound range entry populated into the controller cannot be larger than the alignment of the entry's start address. Currently, the alignment for each "dma-ranges" inbound range is calculated only once per range, and the increment used when programming the controller is derived from it only once as well. Thus, a "dma-ranges" entry describing memory at 0x48000000 with size 0x38000000 leads to multiple controller entries, each 0x08000000 long. This is inefficient, especially since adding the size to the start address increases the alignment of the next start address.

Move the alignment calculation into the loop populating the controller entries, so the alignment is recalculated for each controller entry.

Tested-by: Yoshihiro Shimoda <[email protected]>
Signed-off-by: Marek Vasut <[email protected]>
Signed-off-by: Lorenzo Pieralisi <[email protected]>
Reviewed-by: Andrew Murray <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Reviewed-by: Yoshihiro Shimoda <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Lorenzo Pieralisi <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: [email protected]
1 parent 85bff4c · commit 767c784
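To make the splitting described above concrete, here is a minimal standalone sketch, not part of the patch, that replays the entry-size arithmetic for the example range in the commit message (CPU address 0x48000000, size 0x38000000). The entry_size() and map_range() helpers are invented for illustration, __builtin_ctzll() stands in for the kernel's __ffs64(), and the loop is assumed to advance cpu_addr by the programmed size, as the surrounding driver code does.

/*
 * Illustrative sketch only -- not part of the patch.  It replays the
 * entry-size arithmetic from rcar_pcie_inbound_ranges() for the range
 * quoted in the commit message, once with the alignment computed a single
 * time (old behaviour) and once with it recomputed for every entry
 * (new behaviour).  __builtin_ctzll() stands in for the kernel's __ffs64().
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t entry_size(uint64_t cpu_addr, uint64_t range_size)
{
	uint64_t size;

	if (cpu_addr > 0) {
		/* Alignment of the start address: its lowest set bit. */
		uint64_t alignment = 1ULL << __builtin_ctzll(cpu_addr);

		size = range_size < alignment ? range_size : alignment;
	} else {
		size = range_size;
	}

	/* Hardware supports at most a 4 GiB inbound region. */
	return size < (1ULL << 32) ? size : (1ULL << 32);
}

static void map_range(uint64_t cpu_addr, uint64_t range_size, int per_entry)
{
	uint64_t cpu_end = cpu_addr + range_size;
	uint64_t size = entry_size(cpu_addr, range_size);
	int entries = 0;

	while (cpu_addr < cpu_end) {
		if (per_entry)	/* new behaviour: recompute each iteration */
			size = entry_size(cpu_addr, range_size);

		printf("  entry %d: cpu_addr 0x%llx size 0x%llx\n", entries,
		       (unsigned long long)cpu_addr, (unsigned long long)size);
		cpu_addr += size;	/* driver advances by the entry size */
		entries++;
	}
	printf("  -> %d entries\n", entries);
}

int main(void)
{
	puts("alignment computed once (old):");
	map_range(0x48000000ULL, 0x38000000ULL, 0);
	puts("alignment recomputed per entry (new):");
	map_range(0x48000000ULL, 0x38000000ULL, 1);
	return 0;
}

Computed once, the alignment stays at 0x08000000 and the range takes seven entries; recomputed per entry it grows to 0x10000000 and then 0x20000000, so the same range fits in three.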

1 file changed: +19 / -18 lines changed

drivers/pci/controller/pcie-rcar.c

Lines changed: 19 additions & 18 deletions
@@ -1027,29 +1027,30 @@ static int rcar_pcie_inbound_ranges(struct rcar_pcie *pcie,
 	if (restype & IORESOURCE_PREFETCH)
 		flags |= LAM_PREFETCH;
 
-	/*
-	 * If the size of the range is larger than the alignment of the start
-	 * address, we have to use multiple entries to perform the mapping.
-	 */
-	if (cpu_addr > 0) {
-		unsigned long nr_zeros = __ffs64(cpu_addr);
-		u64 alignment = 1ULL << nr_zeros;
-
-		size = min(range->size, alignment);
-	} else {
-		size = range->size;
-	}
-	/* Hardware supports max 4GiB inbound region */
-	size = min(size, 1ULL << 32);
-
-	mask = roundup_pow_of_two(size) - 1;
-	mask &= ~0xf;
-
 	while (cpu_addr < cpu_end) {
 		if (idx >= MAX_NR_INBOUND_MAPS - 1) {
 			dev_err(pcie->dev, "Failed to map inbound regions!\n");
 			return -EINVAL;
 		}
+		/*
+		 * If the size of the range is larger than the alignment of
+		 * the start address, we have to use multiple entries to
+		 * perform the mapping.
+		 */
+		if (cpu_addr > 0) {
+			unsigned long nr_zeros = __ffs64(cpu_addr);
+			u64 alignment = 1ULL << nr_zeros;
+
+			size = min(range->size, alignment);
+		} else {
+			size = range->size;
+		}
+		/* Hardware supports max 4GiB inbound region */
+		size = min(size, 1ULL << 32);
+
+		mask = roundup_pow_of_two(size) - 1;
+		mask &= ~0xf;
+
 		/*
 		 * Set up 64-bit inbound regions as the range parser doesn't
 		 * distinguish between 32 and 64-bit types.
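Fewer, larger windows per "dma-ranges" entry also leave more of the MAX_NR_INBOUND_MAPS slots free, making it less likely that a large range trips the dev_err()/-EINVAL bail-out at the top of the loop.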
