Commit b5ea037

rcu: Clear ->core_needs_qs at GP end or self-reported QS
The rcu_data structure's ->core_needs_qs field does not necessarily get
cleared in a timely fashion after the corresponding CPU's quiescent
state has been reported. From a functional viewpoint no harm is done,
but this can result in excessive invocation of RCU core processing, as
witnessed by the kernel test robot, which saw greatly increased softirq
overhead.

This commit therefore restores the rcu_report_qs_rdp() function's
clearing of this field, but only when running on the corresponding CPU.
Cases where some other CPU reports the quiescent state (for example, on
behalf of an idle CPU) are handled by setting this field appropriately
within the __note_gp_changes() function's end-of-grace-period checks.
This handling is carried out regardless of whether the end of a grace
period actually happened, thus handling the case where a CPU goes
non-idle after a quiescent state is reported on its behalf, but before
the grace period ends. This fix also avoids cross-CPU updates to
->core_needs_qs.

While in the area, this commit changes the __note_gp_changes() need_gp
variable's name to need_qs, because it is a quiescent state that is
needed from the CPU in question.

Fixes: ed93dfc ("rcu: Confine ->core_needs_qs accesses to the corresponding CPU")
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
1 parent bb6d3fb commit b5ea037

1 file changed: +9 / -4 lines

kernel/rcu/tree.c

Lines changed: 9 additions & 4 deletions
@@ -1386,7 +1386,7 @@ static void __maybe_unused rcu_advance_cbs_nowake(struct rcu_node *rnp,
 static bool __note_gp_changes(struct rcu_node *rnp, struct rcu_data *rdp)
 {
 	bool ret = false;
-	bool need_gp;
+	bool need_qs;
 	const bool offloaded = IS_ENABLED(CONFIG_RCU_NOCB_CPU) &&
 			       rcu_segcblist_is_offloaded(&rdp->cblist);
 
@@ -1400,10 +1400,13 @@ static bool __note_gp_changes(struct rcu_node *rnp, struct rcu_data *rdp)
 	    unlikely(READ_ONCE(rdp->gpwrap))) {
 		if (!offloaded)
 			ret = rcu_advance_cbs(rnp, rdp); /* Advance CBs. */
+		rdp->core_needs_qs = false;
 		trace_rcu_grace_period(rcu_state.name, rdp->gp_seq, TPS("cpuend"));
 	} else {
 		if (!offloaded)
 			ret = rcu_accelerate_cbs(rnp, rdp); /* Recent CBs. */
+		if (rdp->core_needs_qs)
+			rdp->core_needs_qs = !!(rnp->qsmask & rdp->grpmask);
 	}
 
 	/* Now handle the beginnings of any new-to-this-CPU grace periods. */
@@ -1415,9 +1418,9 @@ static bool __note_gp_changes(struct rcu_node *rnp, struct rcu_data *rdp)
 		 * go looking for one.
 		 */
 		trace_rcu_grace_period(rcu_state.name, rnp->gp_seq, TPS("cpustart"));
-		need_gp = !!(rnp->qsmask & rdp->grpmask);
-		rdp->cpu_no_qs.b.norm = need_gp;
-		rdp->core_needs_qs = need_gp;
+		need_qs = !!(rnp->qsmask & rdp->grpmask);
+		rdp->cpu_no_qs.b.norm = need_qs;
+		rdp->core_needs_qs = need_qs;
 		zero_cpu_stall_ticks(rdp);
 	}
 	rdp->gp_seq = rnp->gp_seq;  /* Remember new grace-period state. */
@@ -1987,6 +1990,8 @@ rcu_report_qs_rdp(int cpu, struct rcu_data *rdp)
 		return;
 	}
 	mask = rdp->grpmask;
+	if (rdp->cpu == smp_processor_id())
+		rdp->core_needs_qs = false;
 	if ((rnp->qsmask & mask) == 0) {
 		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	} else {
