Commit f220125

x86/retbleed: Add __x86_return_thunk alignment checks
Add a linker assertion and compute the 0xcc padding dynamically so that
__x86_return_thunk is always cacheline-aligned. Leave the SYM_START()
macro in as the untraining doesn't need ENDBR annotations anyway.

Suggested-by: Andrew Cooper <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Andrew Cooper <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent: 7583e8f

2 files changed: 5 additions, 1 deletion
arch/x86/kernel/vmlinux.lds.S

Lines changed: 4 additions & 0 deletions
@@ -508,4 +508,8 @@ INIT_PER_CPU(irq_stack_backing_store);
            "fixed_percpu_data is not at start of per-cpu area");
 #endif
 
+#ifdef CONFIG_RETHUNK
+. = ASSERT((__x86_return_thunk & 0x3f) == 0, "__x86_return_thunk not cacheline-aligned");
+#endif
+
 #endif /* CONFIG_X86_64 */
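The assertion uses the usual power-of-two alignment trick: an address is 64-byte (cacheline) aligned exactly when its low six bits are clear, so (addr & 0x3f) == 0 is equivalent to addr % 64 == 0. A minimal C sketch of the same check, with hypothetical addresses used purely for illustration:

    #include <assert.h>
    #include <stdint.h>

    /* 64-byte cacheline alignment check, mirroring the linker ASSERT. */
    static int cacheline_aligned(uint64_t addr)
    {
            return (addr & 0x3f) == 0;      /* same as addr % 64 == 0 */
    }

    int main(void)
    {
            /* Hypothetical thunk addresses, for illustration only. */
            assert(cacheline_aligned(0xffffffff82000040ULL));       /* multiple of 64 */
            assert(!cacheline_aligned(0xffffffff8200007fULL));      /* not aligned    */
            return 0;
    }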

arch/x86/lib/retpoline.S

Lines changed: 1 addition & 1 deletion
@@ -143,7 +143,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
  * from re-poisioning the BTB prediction.
  */
         .align 64
-        .skip 63, 0xcc
+        .skip 64 - (__x86_return_thunk - zen_untrain_ret), 0xcc
 SYM_START(zen_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
         ANNOTATE_NOENDBR
         /*
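The arithmetic behind the new .skip: after .align 64 the location counter sits on some 64-byte boundary B. Skipping 64 - d bytes of 0xcc, where d = __x86_return_thunk - zen_untrain_ret, places zen_untrain_ret at B + 64 - d and therefore __x86_return_thunk at B + 64, which is cacheline-aligned no matter what d is; the old .skip 63 hard-coded d = 1. A small C sketch of that arithmetic, with a hypothetical B and d taken as 1 byte to match the old hard-coded padding:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            /* Hypothetical values, for illustration only. */
            uint64_t B = 0xffffffff82000080ULL;     /* location counter after .align 64 */
            uint64_t d = 1;                         /* __x86_return_thunk - zen_untrain_ret */

            uint64_t zen_untrain_ret  = B + (64 - d);       /* after the 0xcc padding */
            uint64_t x86_return_thunk = zen_untrain_ret + d;

            /* Matches the vmlinux.lds.S assertion: cacheline-aligned. */
            assert((x86_return_thunk & 0x3f) == 0);
            return 0;
    }

Because the padding is now computed from the symbols themselves, a future change to the length of the untraining sequence cannot silently misalign __x86_return_thunk, and the new linker assertion would flag it at build time if it did.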
