
Commit 3567fa6

ardbiesheuvel authored and ctmarinas committed
arm64: kaslr: Adjust randomization range dynamically
Currently, we base the KASLR randomization range on a rough estimate of
the available space in the upper VA region: the lower 1/4th has the
module region and the upper 1/4th has the fixmap, vmemmap and PCI I/O
ranges, and so we pick a random location in the remaining space in the
middle.

Once we enable support for 5-level paging with 4k pages, this no longer
works: the vmemmap region, being dimensioned to cover a 52-bit linear
region, takes up so much space in the upper VA region (the size of which
is based on a 48-bit VA space for compatibility with non-LVA hardware)
that the region above the vmalloc region takes up more than a quarter of
the available space.

So instead of a heuristic, let's derive the randomization range from the
actual boundaries of the vmalloc region.

Signed-off-by: Ard Biesheuvel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
Acked-by: Mark Rutland <[email protected]>
1 parent d432b8d commit 3567fa6

2 files changed: +8 −5 lines changed


arch/arm64/kernel/image-vars.h

Lines changed: 2 additions & 0 deletions
@@ -36,6 +36,8 @@ PROVIDE(__pi___memcpy = __pi_memcpy);
 PROVIDE(__pi___memmove = __pi_memmove);
 PROVIDE(__pi___memset = __pi_memset);
 
+PROVIDE(__pi_vabits_actual = vabits_actual);
+
 #ifdef CONFIG_KVM
 
 /*

arch/arm64/kernel/pi/kaslr_early.c

Lines changed: 6 additions & 5 deletions
@@ -14,6 +14,7 @@
 
 #include <asm/archrandom.h>
 #include <asm/memory.h>
+#include <asm/pgtable.h>
 
 /* taken from lib/string.c */
 static char *__strstr(const char *s1, const char *s2)
@@ -87,7 +88,7 @@ static u64 get_kaslr_seed(void *fdt)
 
 asmlinkage u64 kaslr_early_init(void *fdt)
 {
-	u64 seed;
+	u64 seed, range;
 
 	if (is_kaslr_disabled_cmdline(fdt))
 		return 0;
@@ -102,9 +103,9 @@ asmlinkage u64 kaslr_early_init(void *fdt)
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
-	 * the lower and upper quarters to avoid colliding with other
-	 * allocations.
+	 * 'middle' half of the VMALLOC area, and stay clear of the lower and
+	 * upper quarters to avoid colliding with other allocations.
 	 */
-	return BIT(VA_BITS_MIN - 3) + (seed & GENMASK(VA_BITS_MIN - 3, 0));
+	range = (VMALLOC_END - KIMAGE_VADDR) / 2;
+	return range / 2 + (((__uint128_t)range * seed) >> 64);
 }
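
For readers less familiar with the trick in the new return statement, the following is a minimal user-space sketch (not kernel code; the window size, seed value, and helper name are illustrative only) of how multiplying a uniform 64-bit seed by the range and keeping the high 64 bits of the 128-bit product maps the seed into [0, range) without a division, and how adding range / 2 then lands the offset in the middle half of the KIMAGE_VADDR..VMALLOC_END window:

/* Build with GCC or Clang on a 64-bit target (uses the __uint128_t extension). */
#include <stdint.h>
#include <stdio.h>

/* High 64 bits of seed * range, i.e. floor(seed * range / 2^64): uniform in [0, range). */
static uint64_t scale_seed(uint64_t seed, uint64_t range)
{
	return (uint64_t)(((__uint128_t)range * seed) >> 64);
}

int main(void)
{
	/* Illustrative numbers only: pretend the vmalloc window is 2^47 bytes. */
	uint64_t window = 1ULL << 47;
	uint64_t range  = window / 2;              /* as in kaslr_early_init() */
	uint64_t seed   = 0x9e3779b97f4a7c15ULL;   /* stand-in for a random seed */
	uint64_t offset = range / 2 + scale_seed(seed, range);

	/* offset always falls in [window/4, 3*window/4): the 'middle half'. */
	printf("offset = %#llx\n", (unsigned long long)offset);
	return 0;
}

Because the range is computed from VMALLOC_END - KIMAGE_VADDR at runtime rather than from BIT()/GENMASK() expressions on VA_BITS_MIN, the placement automatically tracks whatever vmalloc window the configuration ends up with, which is the point of the patch.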
