
Commit f1d4d47

rppt authored and suryasaimadhu committed
x86/setup: Always reserve the first 1M of RAM
There are BIOSes that are known to corrupt the memory under 1M, or more precisely under 640K, because the memory above 640K is anyway reserved for the EGA/VGA frame buffer and BIOS.

To prevent the kernel from using memory that may be clobbered, the beginning of the memory is always reserved. The exact size of the reserved area is determined by the CONFIG_X86_RESERVE_LOW build time option and the "reservelow=" command line option. The reserved range may be from 4K to 640K with a default of 64K.

There are also configurations that reserve the entire 1M range, like machines with SandyBridge graphics devices or systems that enable crash kernel.

In addition to the potentially clobbered memory, an EBDA of unknown size may start as low as 128K, and the memory above that EBDA start is also reserved early.

It would have been possible to reserve the entire range under 1M if not for the real mode trampoline that must reside in that area.

To accommodate placement of the real mode trampoline and keep the memory safe from being clobbered by BIOS, reserve the first 64K of RAM before memory allocations are possible and then, after the real mode trampoline is allocated, reserve the entire range from 0 to 1M.

Update trim_snb_memory() and reserve_real_mode() to avoid redundant reservations of the same memory range. Also make sure the memory under 1M is not getting freed by efi_free_boot_services().

[ bp: Massage commit message and comments. ]

Fixes: a799c2b ("x86/setup: Consolidate early memory reservations")
Signed-off-by: Mike Rapoport <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Tested-by: Hugh Dickins <[email protected]>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=213177
Link: https://lkml.kernel.org/r/[email protected]
1 parent 2b31e8e commit f1d4d47
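To make the two-stage ordering described in the commit message concrete, here is a small stand-alone user-space sketch, not kernel code: the page bitmap, the trampoline size and the helper names are purely illustrative. It models stage 1 (the early 64K reservation) leaving room under 1M for the trampoline, and stage 2 (reserve_real_mode()) then putting the whole first 1M out of reach:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SZ      0x1000UL
#define NR_LOW_PAGES 256UL            /* 1M / 4K */

static bool reserved[NR_LOW_PAGES];   /* toy stand-in for memblock reservations below 1M */

static void reserve_range(unsigned long start, unsigned long size)
{
        for (unsigned long a = start; a < start + size; a += PAGE_SZ)
                reserved[a / PAGE_SZ] = true;
}

/* find a free, page-aligned block under 1M; loosely what memblock_find_in_range() is used for */
static long find_free(unsigned long size)
{
        unsigned long npages = size / PAGE_SZ;

        for (unsigned long p = 0; p + npages <= NR_LOW_PAGES; p++) {
                bool free = true;

                for (unsigned long q = p; q < p + npages; q++)
                        free = free && !reserved[q];
                if (free)
                        return (long)(p * PAGE_SZ);
        }
        return -1;
}

int main(void)
{
        /* stage 1: early_reserve_memory() keeps the BIOS-corruptible first 64K off-limits */
        reserve_range(0, 0x10000);

        /* stage 2: reserve_real_mode() can still find room for the trampoline under 1M ... */
        long tramp = find_free(0x8000);        /* trampoline size is illustrative */
        assert(tramp >= 0x10000);              /* it lands above the early 64K reservation */

        /* ... and only then is the entire first 1M reserved */
        reserve_range(0, NR_LOW_PAGES * PAGE_SZ);

        printf("trampoline at %#lx, first 1M fully reserved\n", (unsigned long)tramp);
        return 0;
}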

3 files changed: +41 −20 lines

arch/x86/kernel/setup.c

Lines changed: 21 additions & 14 deletions
@@ -638,11 +638,11 @@ static void __init trim_snb_memory(void)
          * them from accessing certain memory ranges, namely anything below
          * 1M and in the pages listed in bad_pages[] above.
          *
-         * To avoid these pages being ever accessed by SNB gfx devices
-         * reserve all memory below the 1 MB mark and bad_pages that have
-         * not already been reserved at boot time.
+         * To avoid these pages being ever accessed by SNB gfx devices reserve
+         * bad_pages that have not already been reserved at boot time.
+         * All memory below the 1 MB mark is anyway reserved later during
+         * setup_arch(), so there is no need to reserve it here.
          */
-        memblock_reserve(0, 1<<20);
 
         for (i = 0; i < ARRAY_SIZE(bad_pages); i++) {
                 if (memblock_reserve(bad_pages[i], PAGE_SIZE))
@@ -734,14 +734,14 @@ static void __init early_reserve_memory(void)
          * The first 4Kb of memory is a BIOS owned area, but generally it is
          * not listed as such in the E820 table.
          *
-         * Reserve the first memory page and typically some additional
-         * memory (64KiB by default) since some BIOSes are known to corrupt
-         * low memory. See the Kconfig help text for X86_RESERVE_LOW.
+         * Reserve the first 64K of memory since some BIOSes are known to
+         * corrupt low memory. After the real mode trampoline is allocated the
+         * rest of the memory below 640k is reserved.
          *
          * In addition, make sure page 0 is always reserved because on
          * systems with L1TF its contents can be leaked to user processes.
          */
-        memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
+        memblock_reserve(0, SZ_64K);
 
         early_reserve_initrd();
@@ -752,6 +752,7 @@ static void __init early_reserve_memory(void)
 
         reserve_ibft_region();
         reserve_bios_regions();
+        trim_snb_memory();
 }
 
 /*
@@ -1082,14 +1083,20 @@ void __init setup_arch(char **cmdline_p)
                         (max_pfn_mapped<<PAGE_SHIFT) - 1);
 #endif
 
-        reserve_real_mode();
-
         /*
-         * Reserving memory causing GPU hangs on Sandy Bridge integrated
-         * graphics devices should be done after we allocated memory under
-         * 1M for the real mode trampoline.
+         * Find free memory for the real mode trampoline and place it
+         * there.
+         * If there is not enough free memory under 1M, on EFI-enabled
+         * systems there will be additional attempt to reclaim the memory
+         * for the real mode trampoline at efi_free_boot_services().
+         *
+         * Unconditionally reserve the entire first 1M of RAM because
+         * BIOSes are known to corrupt low memory and several
+         * hundred kilobytes are not worth complex detection what memory gets
+         * clobbered. Moreover, on machines with SandyBridge graphics or in
+         * setups that use crashkernel the entire 1M is reserved anyway.
          */
-        trim_snb_memory();
+        reserve_real_mode();
 
         init_mem_mapping();
 

arch/x86/platform/efi/quirks.c

Lines changed: 12 additions & 0 deletions
@@ -450,6 +450,18 @@ void __init efi_free_boot_services(void)
                         size -= rm_size;
                 }
 
+                /*
+                 * Don't free memory under 1M for two reasons:
+                 * - BIOS might clobber it
+                 * - Crash kernel needs it to be reserved
+                 */
+                if (start + size < SZ_1M)
+                        continue;
+                if (start < SZ_1M) {
+                        size -= (SZ_1M - start);
+                        start = SZ_1M;
+                }
+
                 memblock_free_late(start, size);
         }
 
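The clamping added above can be read in isolation: a boot services region that lies entirely below 1M is skipped (and so never handed back to the page allocator), while a region straddling the 1M boundary is trimmed so that freeing starts at the 1M mark. Below is a stand-alone sketch of that arithmetic in plain user-space C; the helper name is hypothetical and SZ_1M is defined locally rather than taken from the kernel headers:

#include <stdint.h>
#include <stdio.h>

#define SZ_1M 0x100000ULL

/*
 * Mirror of the check added to efi_free_boot_services(): returns 0 when the
 * whole region is below 1M (nothing may be freed), otherwise trims the region
 * so that it starts at 1M.
 */
static int clamp_above_1m(uint64_t *start, uint64_t *size)
{
        if (*start + *size < SZ_1M)
                return 0;
        if (*start < SZ_1M) {
                *size -= SZ_1M - *start;
                *start = SZ_1M;
        }
        return 1;
}

int main(void)
{
        uint64_t start = 0x80000, size = 0x180000;      /* a region from 512K to 2M */

        if (clamp_above_1m(&start, &size))
                printf("free [%#llx, %#llx)\n",
                       (unsigned long long)start,
                       (unsigned long long)(start + size));
        /* prints: free [0x100000, 0x200000), the sub-1M half stays reserved */
        return 0;
}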

arch/x86/realmode/init.c

Lines changed: 8 additions & 6 deletions
@@ -29,14 +29,16 @@ void __init reserve_real_mode(void)
 
         /* Has to be under 1M so we can execute real-mode AP code. */
         mem = memblock_find_in_range(0, 1<<20, size, PAGE_SIZE);
-        if (!mem) {
+        if (!mem)
                 pr_info("No sub-1M memory is available for the trampoline\n");
-                return;
-        }
+        else
+                set_real_mode_mem(mem);
 
-        memblock_reserve(mem, size);
-        set_real_mode_mem(mem);
-        crash_reserve_low_1M();
+        /*
+         * Unconditionally reserve the entire first 1M, see comment in
+         * setup_arch().
+         */
+        memblock_reserve(0, SZ_1M);
 }
 
 static void sme_sev_setup_real_mode(struct trampoline_header *th)
