This repository was archived by the owner on Jul 23, 2020. It is now read-only.

Commit 91b9ce0

suryasaimadhu authored and gregkh committed
x86: Fix early boot crash on gcc-10, third try
commit a9a3ed1 upstream.

... or the odyssey of trying to disable the stack protector for the function which generates the stack canary value.

The whole story started with Sergei reporting a boot crash with a kernel built with gcc-10:

  Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
  CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5-00235-gfffb08b37df9 #139
  Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M-D3H, BIOS F12 11/14/2013
  Call Trace:
   dump_stack
   panic
   ? start_secondary
   __stack_chk_fail
   start_secondary
   secondary_startup_64
  ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary

This happens because gcc-10 tail-call optimizes the last function call in start_secondary() - cpu_startup_entry() - and thus emits a stack canary check which fails because the canary value changes after the boot_init_stack_canary() call.

To fix that, the initial attempt was to mark the one function which generates the stack canary with:

  __attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)

however, using the optimize attribute doesn't work cumulatively, as the attribute does not add to but rather replaces previously supplied optimization options - roughly all -fxxx options. The key one among them being -fno-omit-frame-pointer, this leads to a missing frame pointer - a frame pointer which the kernel needs.

The next attempt to prevent compilers from tail-call optimizing the last function call cpu_startup_entry(), shy of carving out start_secondary() into a separate compilation unit and building it with -fno-stack-protector, was to add an empty asm("").

That solution was short and sweet, and reportedly supported by both compilers, but we didn't get very far this time: future (LTO?) optimization passes could potentially eliminate it, which leads us to the third attempt: having an actual memory barrier there which the compiler cannot ignore or move around etc.

That should hold for a long time, but hey we said that about the other two solutions too so...

Reported-by: Sergei Trofimovich <slyfox@gentoo.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Kalle Valo <kvalo@codeaurora.org>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200314164451.346497-1-slyfox@gentoo.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent f8e370c commit 91b9ce0
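
To make the failure mode concrete, here is a minimal sketch of the pattern described above (illustrative only, not kernel code; the names bring_up_cpu() and next_stage() are made up):

    /*
     * next_stage() stands in for cpu_startup_entry(): an external call that
     * an optimizing compiler may turn into a tail call (a plain jmp). Doing
     * so places the function epilogue - including any stack-protector
     * check - before the jump. In start_secondary() that check then runs
     * against a canary which boot_init_stack_canary() has already changed,
     * so it fails and the kernel panics.
     */
    void next_stage(void);              /* defined elsewhere, never returns */

    void bring_up_cpu(void)
    {
            /* ... the stack canary gets re-seeded here, as in start_secondary() ... */

            next_stage();

            /*
             * A real barrier with a "memory" clobber cannot be dropped or
             * reordered, so code remains after the call and next_stage()
             * can no longer become a tail call. This is the role of
             * prevent_tail_call_optimization() in the patch below.
             */
            asm volatile("mfence" ::: "memory");
    }

Without the trailing barrier, a compiler that instruments this function is free to emit the canary check first and then jmp to next_stage(), which is exactly the gcc-10 behaviour the crash report shows.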

File tree: 5 files changed, 23 additions, 1 deletion

arch/x86/include/asm/stackprotector.h

Lines changed: 6 additions & 1 deletion
@@ -55,8 +55,13 @@
 /*
  * Initialize the stackprotector canary value.
  *
- * NOTE: this must only be called from functions that never return,
+ * NOTE: this must only be called from functions that never return
  * and it must always be inlined.
+ *
+ * In addition, it should be called from a compilation unit for which
+ * stack protector is disabled. Alternatively, the caller should not end
+ * with a function call which gets tail-call optimized as that would
+ * lead to checking a modified canary value.
  */
 static __always_inline void boot_init_stack_canary(void)
 {

arch/x86/kernel/smpboot.c

Lines changed: 8 additions & 0 deletions
@@ -262,6 +262,14 @@ static void notrace start_secondary(void *unused)
 
 	wmb();
 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+
+	/*
+	 * Prevent tail call to cpu_startup_entry() because the stack protector
+	 * guard has been changed a couple of function calls up, in
+	 * boot_init_stack_canary() and must not be checked before tail calling
+	 * another function.
+	 */
+	prevent_tail_call_optimization();
 }
 
 /**

arch/x86/xen/smp_pv.c

Lines changed: 1 addition & 0 deletions
@@ -92,6 +92,7 @@ asmlinkage __visible void cpu_bringup_and_idle(void)
 	cpu_bringup();
 	boot_init_stack_canary();
 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+	prevent_tail_call_optimization();
 }
 
 void xen_smp_intr_free_pv(unsigned int cpu)

include/linux/compiler.h

Lines changed: 6 additions & 0 deletions
@@ -356,4 +356,10 @@ static inline void *offset_to_ptr(const int *off)
 /* &a[0] degrades to a pointer: a different type from an array */
 #define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
 
+/*
+ * This is needed in functions which generate the stack canary, see
+ * arch/x86/kernel/smpboot.c::start_secondary() for an example.
+ */
+#define prevent_tail_call_optimization() mb()
+
 #endif /* __LINUX_COMPILER_H */
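
For reference, a simplified sketch of what the new macro expands to on x86-64 (the actual definition of mb() lives in arch/x86/include/asm/barrier.h and varies per architecture):

    #define mb()	asm volatile("mfence" ::: "memory")

Unlike the empty asm("") tried in the second attempt, this is a real instruction plus a "memory" clobber, which the compiler cannot eliminate or move around, so the call that precedes it can never be converted into a tail call.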

init/main.c

Lines changed: 2 additions & 0 deletions
@@ -782,6 +782,8 @@ asmlinkage __visible void __init start_kernel(void)
 
 	/* Do the rest non-__init'ed, we're now alive */
 	arch_call_rest_init();
+
+	prevent_tail_call_optimization();
 }
 
 /* Call all constructor functions linked into the kernel. */
