Commit e5a971d

ftrace: Use synchronize_rcu_tasks_rude() instead of ftrace_sync()

This commit replaces the schedule_on_each_cpu(ftrace_sync) instances with
synchronize_rcu_tasks_rude().

Suggested-by: Steven Rostedt <[email protected]>
Cc: Ingo Molnar <[email protected]>
[ paulmck: Make Kconfig adjustments noted by kbuild test robot. ]
Signed-off-by: Paul E. McKenney <[email protected]>

1 parent 25246fc commit e5a971d

2 files changed, 4 insertions(+), 14 deletions(-)

kernel/trace/Kconfig (1 addition, 0 deletions)

@@ -158,6 +158,7 @@ config FUNCTION_TRACER
 	select CONTEXT_SWITCH_TRACER
 	select GLOB
 	select TASKS_RCU if PREEMPTION
+	select TASKS_RUDE_RCU
 	help
 	  Enable the kernel to trace every kernel function. This is done
 	  by using a compiler feature to insert a small, 5-byte No-Operation

kernel/trace/ftrace.c (3 additions, 14 deletions)

@@ -160,17 +160,6 @@ static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip,
 	op->saved_func(ip, parent_ip, op, regs);
 }
 
-static void ftrace_sync(struct work_struct *work)
-{
-	/*
-	 * This function is just a stub to implement a hard force
-	 * of synchronize_rcu(). This requires synchronizing
-	 * tasks even in userspace and idle.
-	 *
-	 * Yes, function tracing is rude.
-	 */
-}
-
 static void ftrace_sync_ipi(void *data)
 {
 	/* Probably not needed, but do it anyway */
@@ -256,7 +245,7 @@ static void update_ftrace_function(void)
 	 * Make sure all CPUs see this. Yes this is slow, but static
 	 * tracing is slow and nasty to have enabled.
 	 */
-	schedule_on_each_cpu(ftrace_sync);
+	synchronize_rcu_tasks_rude();
 	/* Now all cpus are using the list ops. */
 	function_trace_op = set_function_trace_op;
 	/* Make sure the function_trace_op is visible on all CPUs */
@@ -2932,7 +2921,7 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
 	 * infrastructure to do the synchronization, thus we must do it
 	 * ourselves.
 	 */
-	schedule_on_each_cpu(ftrace_sync);
+	synchronize_rcu_tasks_rude();
 
 	/*
 	 * When the kernel is preemptive, tasks can be preempted
@@ -5887,7 +5876,7 @@ ftrace_graph_release(struct inode *inode, struct file *file)
 	 * infrastructure to do the synchronization, thus we must do it
 	 * ourselves.
 	 */
-	schedule_on_each_cpu(ftrace_sync);
+	synchronize_rcu_tasks_rude();
 
 	free_ftrace_hash(old_hash);
 }
