Commit cbe0d8d

rcu-tasks: Wait for trc_read_check_handler() IPIs
Currently, RCU Tasks Trace initializes the trc_n_readers_need_end counter to the value one, increments it before each trc_read_check_handler() IPI, then decrements it within trc_read_check_handler() if the target task was in a quiescent state (or if the target task moved to some other CPU while the IPI was in flight), complaining if the new value was zero. The rationale for complaining is that the initial value of one must be decremented away before zero can be reached, and this decrement has not yet happened.

Except that trc_read_check_handler() is initiated with an asynchronous smp_call_function_single(), which might be significantly delayed. This can result in false-positive complaints about the counter reaching zero.

This commit therefore waits for in-flight IPI handlers to complete before decrementing away the initial value of one from the trc_n_readers_need_end counter.

Signed-off-by: Paul E. McKenney <[email protected]>
1 parent 6880fa6 commit cbe0d8d

File tree: 1 file changed, +14 −0 lines changed


kernel/rcu/tasks.h

Lines changed: 14 additions & 0 deletions
@@ -1150,14 +1150,28 @@ static void check_all_holdout_tasks_trace(struct list_head *hop,
 	}
 }
 
+static void rcu_tasks_trace_empty_fn(void *unused)
+{
+}
+
 /* Wait for grace period to complete and provide ordering. */
 static void rcu_tasks_trace_postgp(struct rcu_tasks *rtp)
 {
+	int cpu;
 	bool firstreport;
 	struct task_struct *g, *t;
 	LIST_HEAD(holdouts);
 	long ret;
 
+	// Wait for any lingering IPI handlers to complete.  Note that
+	// if a CPU has gone offline or transitioned to userspace in the
+	// meantime, all IPI handlers should have been drained beforehand.
+	// Yes, this assumes that CPUs process IPIs in order.  If that ever
+	// changes, there will need to be a recheck and/or timed wait.
+	for_each_online_cpu(cpu)
+		if (smp_load_acquire(per_cpu_ptr(&trc_ipi_to_cpu, cpu)))
+			smp_call_function_single(cpu, rcu_tasks_trace_empty_fn, NULL, 1);
+
 	// Remove the safety count.
 	smp_mb__before_atomic();  // Order vs. earlier atomics
 	atomic_dec(&trc_n_readers_need_end);
