
Commit 16b9569

Sebastian Andrzej Siewior authored and Peter Zijlstra committed
perf: Don't disable preemption in perf_pending_task().
perf_pending_task() is invoked in task context and disables preemption because perf_swevent_get_recursion_context() used to access per-CPU variables. The other reason is to create an RCU read section while accessing the perf_event.

The recursion counter is no longer a per-CPU counter, so disabling preemption is no longer required. The RCU section is still needed and must be created explicitly.

Replace the preemption-disable section with an explicit RCU read section.

Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: Marco Elver <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent 0d40a6d commit 16b9569


1 file changed, +5 −6 lines changed


kernel/events/core.c

Lines changed: 5 additions & 6 deletions
@@ -5208,10 +5208,9 @@ static void perf_pending_task_sync(struct perf_event *event)
 	}
 
 	/*
-	 * All accesses related to the event are within the same
-	 * non-preemptible section in perf_pending_task(). The RCU
-	 * grace period before the event is freed will make sure all
-	 * those accesses are complete by then.
+	 * All accesses related to the event are within the same RCU section in
+	 * perf_pending_task(). The RCU grace period before the event is freed
+	 * will make sure all those accesses are complete by then.
 	 */
 	rcuwait_wait_event(&event->pending_work_wait, !event->pending_work, TASK_UNINTERRUPTIBLE);
 }
@@ -6831,7 +6830,7 @@ static void perf_pending_task(struct callback_head *head)
 	 * critical section as the ->pending_work reset. See comment in
 	 * perf_pending_task_sync().
 	 */
-	preempt_disable_notrace();
+	rcu_read_lock();
 	/*
 	 * If we 'fail' here, that's OK, it means recursion is already disabled
 	 * and we won't recurse 'further'.
@@ -6844,10 +6843,10 @@ static void perf_pending_task(struct callback_head *head)
 		local_dec(&event->ctx->nr_pending);
 		rcuwait_wake_up(&event->pending_work_wait);
 	}
+	rcu_read_unlock();
 
 	if (rctx >= 0)
 		perf_swevent_put_recursion_context(rctx);
-	preempt_enable_notrace();
 }
 
 #ifdef CONFIG_GUEST_PERF_EVENTS
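
For orientation, below is a condensed sketch of perf_pending_task() as it reads after this patch. It is assembled from the hunks above; the container_of() setup and the body of the pending-work branch are not visible in the diff and are marked as assumptions, so this shows where the RCU read section now sits rather than the exact upstream function.

/* Condensed sketch of perf_pending_task() after the change, not the verbatim upstream code. */
static void perf_pending_task(struct callback_head *head)
{
	/* Assumption: the event is recovered from the callback head; this line is not in the diff. */
	struct perf_event *event = container_of(head, struct perf_event, pending_task);
	int rctx;

	/*
	 * Accesses to the event belong to the same RCU read-side critical
	 * section as the ->pending_work reset (see the second hunk's context
	 * and the comment in perf_pending_task_sync()).
	 */
	rcu_read_lock();
	/*
	 * If we 'fail' here, that's OK, it means recursion is already disabled
	 * and we won't recurse 'further'.
	 */
	rctx = perf_swevent_get_recursion_context();

	if (event->pending_work) {	/* assumption: this guard is outside the diff context */
		/* ... process and clear the pending work (elided) ... */
		local_dec(&event->ctx->nr_pending);
		rcuwait_wake_up(&event->pending_work_wait);
	}
	rcu_read_unlock();

	if (rctx >= 0)
		perf_swevent_put_recursion_context(rctx);
	/* No preempt_enable_notrace() anymore; the RCU read unlock above takes its place. */
}

The only change is the locking primitive: the task-work handler may now be preempted, while the explicit RCU read section keeps the comment in perf_pending_task_sync() true, i.e. the RCU grace period before the event is freed still covers every access made here.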
