
Commit 3d62ab3

Luo Gengkun authored and rostedt committed
tracing: Fix tracing_marker may trigger page fault during preempt_disable
Both tracing_mark_write and tracing_mark_raw_write call __copy_from_user_inatomic during preempt_disable. But in some case, __copy_from_user_inatomic may trigger page fault, and will call schedule() subtly. And if a task is migrated to other cpu, the following warning will be trigger: if (RB_WARN_ON(cpu_buffer, !local_read(&cpu_buffer->committing))) An example can illustrate this issue: process flow CPU --------------------------------------------------------------------- tracing_mark_raw_write(): cpu:0 ... ring_buffer_lock_reserve(): cpu:0 ... cpu = raw_smp_processor_id() cpu:0 cpu_buffer = buffer->buffers[cpu] cpu:0 ... ... __copy_from_user_inatomic(): cpu:0 ... # page fault do_mem_abort(): cpu:0 ... # Call schedule schedule() cpu:0 ... # the task schedule to cpu1 __buffer_unlock_commit(): cpu:1 ... ring_buffer_unlock_commit(): cpu:1 ... cpu = raw_smp_processor_id() cpu:1 cpu_buffer = buffer->buffers[cpu] cpu:1 As shown above, the process will acquire cpuid twice and the return values are not the same. To fix this problem using copy_from_user_nofault instead of __copy_from_user_inatomic, as the former performs 'access_ok' before copying. Link: https://lore.kernel.org/[email protected] Fixes: 656c7f0 ("tracing: Replace kmap with copy_from_user() in trace_marker writing") Signed-off-by: Luo Gengkun <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
1 parent 81ac633 commit 3d62ab3

File tree

1 file changed: +2 -2 lines changed


kernel/trace/trace.c

Lines changed: 2 additions & 2 deletions
@@ -7209,7 +7209,7 @@ static ssize_t write_marker_to_buffer(struct trace_array *tr, const char __user
 	entry = ring_buffer_event_data(event);
 	entry->ip = ip;
 
-	len = __copy_from_user_inatomic(&entry->buf, ubuf, cnt);
+	len = copy_from_user_nofault(&entry->buf, ubuf, cnt);
 	if (len) {
 		memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
 		cnt = FAULTED_SIZE;
@@ -7306,7 +7306,7 @@ static ssize_t write_raw_marker_to_buffer(struct trace_array *tr,
 
 	entry = ring_buffer_event_data(event);
 
-	len = __copy_from_user_inatomic(&entry->id, ubuf, cnt);
+	len = copy_from_user_nofault(&entry->id, ubuf, cnt);
 	if (len) {
 		entry->id = -1;
 		memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
