
Commit f66c044

mhiramat authored and Ingo Molnar committed
kprobes: Set unoptimized flag after unoptimizing code
Set the unoptimized flag after confirming the code is completely unoptimized. Without this fix, when a kprobe hits the intermediate modified instruction (the first byte is replaced by an INT3, but later bytes can still be a jump address operand) while unoptimizing, it can return to the middle byte of the modified code, which causes an invalid instruction exception in the kernel.

Usually, this is a rare case, but if we put a probe on the function call while text patching, it always causes a kernel panic as below:

 # echo p text_poke+5 > kprobe_events
 # echo 1 > events/kprobes/enable
 # echo 0 > events/kprobes/enable

 invalid opcode: 0000 [#1] PREEMPT SMP PTI
 RIP: 0010:text_poke+0x9/0x50
 Call Trace:
  arch_unoptimize_kprobe+0x22/0x28
  arch_unoptimize_kprobes+0x39/0x87
  kprobe_optimizer+0x6e/0x290
  process_one_work+0x2a0/0x610
  worker_thread+0x28/0x3d0
  ? process_one_work+0x610/0x610
  kthread+0x10d/0x130
  ? kthread_park+0x80/0x80
  ret_from_fork+0x3a/0x50

text_poke() is used for patching the code in optprobes.

This can happen even if we blacklist text_poke() and other functions, because there is a small time window during which we show the intermediate code to other CPUs.

[ mingo: Edited the changelog. ]

Tested-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Masami Hiramatsu <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Fixes: 6274de4 ("kprobes: Support delayed unoptimizing")
Link: https://lkml.kernel.org/r/157483422375.25881.13508326028469515760.stgit@devnote2
Signed-off-by: Ingo Molnar <[email protected]>
1 parent 285a54e commit f66c044

File tree

1 file changed

+3
-1
lines changed


kernel/kprobes.c

Lines changed: 3 additions & 1 deletion
@@ -510,6 +510,8 @@ static void do_unoptimize_kprobes(void)
 	arch_unoptimize_kprobes(&unoptimizing_list, &freeing_list);
 	/* Loop free_list for disarming */
 	list_for_each_entry_safe(op, tmp, &freeing_list, list) {
+		/* Switching from detour code to origin */
+		op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
 		/* Disarm probes if marked disabled */
 		if (kprobe_disabled(&op->kp))
 			arch_disarm_kprobe(&op->kp);
@@ -649,6 +651,7 @@ static void force_unoptimize_kprobe(struct optimized_kprobe *op)
 {
 	lockdep_assert_cpus_held();
 	arch_unoptimize_kprobe(op);
+	op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
 	if (kprobe_disabled(&op->kp))
 		arch_disarm_kprobe(&op->kp);
 }
@@ -676,7 +679,6 @@ static void unoptimize_kprobe(struct kprobe *p, bool force)
 		return;
 	}

-	op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
 	if (!list_empty(&op->list)) {
 		/* Dequeue from the optimization queue */
 		list_del_init(&op->list);
