Commit f2cb4f9

Authored by Peter Zijlstra, committed by Ingo Molnar
x86/kprobe: Add comments to arch_{,un}optimize_kprobes()
Add a few words describing how it is safe to overwrite the 4 bytes after a
kprobe. Specifically, the JMP.d32 required for the optimized kprobe may
overwrite multiple instructions.

Tested-by: Alexei Starovoitov <[email protected]>
Tested-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Parent: 04ae87a

File tree: 1 file changed (+12, −2 lines)

  • arch/x86/kernel/kprobes/opt.c
arch/x86/kernel/kprobes/opt.c

Lines changed: 12 additions & 2 deletions
@@ -414,8 +414,12 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op,
 }
 
 /*
- * Replace breakpoints (int3) with relative jumps.
+ * Replace breakpoints (INT3) with relative jumps (JMP.d32).
  * Caller must call with locking kprobe_mutex and text_mutex.
+ *
+ * The caller will have installed a regular kprobe and after that issued
+ * synchronize_rcu_tasks(); this ensures that the instruction(s) that live in
+ * the 4 bytes after the INT3 are unused and can now be overwritten.
  */
 void arch_optimize_kprobes(struct list_head *oplist)
 {
@@ -441,7 +445,13 @@ void arch_optimize_kprobes(struct list_head *oplist)
 	}
 }
 
-/* Replace a relative jump with a breakpoint (int3). */
+/*
+ * Replace a relative jump (JMP.d32) with a breakpoint (INT3).
+ *
+ * After that, we can restore the 4 bytes after the INT3 to undo what
+ * arch_optimize_kprobes() scribbled. This is safe since those bytes will be
+ * unused once the INT3 lands.
+ */
 void arch_unoptimize_kprobe(struct optimized_kprobe *op)
 {
 	arch_arm_kprobe(&op->kp);
