
Commit 0e9cb59

Steven Price authored; Catalin Marinas (ctmarinas) committed
arm64: mm: Avoid TLBI when marking pages as valid
When __change_memory_common() is purely setting the valid bit on a PTE (e.g. via the set_memory_valid() call) there is no need for a TLBI as either the entry isn't changing (the valid bit was already set) or the entry was invalid and so should not have been cached in the TLB.

Reviewed-by: Catalin Marinas <[email protected]>
Reviewed-by: Gavin Shan <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Steven Price <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
1 parent fbf979a commit 0e9cb59
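For context, the caller named in the message, set_memory_valid(), is what supplies the masks the new check inspects. A rough sketch of that caller (paraphrased from arch/arm64/mm/pageattr.c for illustration; not part of this diff):

/*
 * Making pages valid sets only PTE_VALID and clears nothing -- exactly
 * the combination the new check recognises as safe to skip the TLBI.
 * Making pages invalid clears PTE_VALID, so that path still flushes.
 */
int set_memory_valid(unsigned long addr, int numpages, int enable)
{
	if (enable)
		return __change_memory_common(addr, PAGE_SIZE * numpages,
					      __pgprot(PTE_VALID),
					      __pgprot(0));
	else
		return __change_memory_common(addr, PAGE_SIZE * numpages,
					      __pgprot(0),
					      __pgprot(PTE_VALID));
}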

1 file changed (+7 lines, -1 line)


arch/arm64/mm/pageattr.c

Lines changed: 7 additions & 1 deletion
@@ -60,7 +60,13 @@ static int __change_memory_common(unsigned long start, unsigned long size,
 	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
 				  &data);
 
-	flush_tlb_kernel_range(start, start + size);
+	/*
+	 * If the memory is being made valid without changing any other bits
+	 * then a TLBI isn't required as a non-valid entry cannot be cached in
+	 * the TLB.
+	 */
+	if (pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask))
+		flush_tlb_kernel_range(start, start + size);
 	return ret;
 }

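The new condition reads as "flush unless the only change is setting PTE_VALID". A small stand-alone model of that predicate (ordinary user-space C with PTE_VALID reduced to a placeholder constant, purely for illustration) shows which mask combinations skip the flush:

#include <stdio.h>

#define PTE_VALID (1UL << 0)	/* placeholder bit value, for illustration only */

/* Mirrors: pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask) */
static int needs_tlbi(unsigned long set_mask, unsigned long clear_mask)
{
	return set_mask != PTE_VALID || clear_mask != 0;
}

int main(void)
{
	/* Only "set exactly PTE_VALID, clear nothing" skips the flush. */
	printf("set VALID only     -> tlbi=%d\n", needs_tlbi(PTE_VALID, 0));     /* 0 */
	printf("clear VALID        -> tlbi=%d\n", needs_tlbi(0, PTE_VALID));     /* 1 */
	printf("set VALID + others -> tlbi=%d\n", needs_tlbi(PTE_VALID | 2, 0)); /* 1 */
	return 0;
}

Clearing PTE_VALID still flushes because the entry being invalidated may have been valid, and therefore cached in the TLB, up to that point.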