Commit ed93dfc

rcu: Confine ->core_needs_qs accesses to the corresponding CPU
Commit 671a635 ("rcu: Avoid unnecessary softirq when system is idle") fixed a bug that could result in an indefinite number of unnecessary invocations of the RCU_SOFTIRQ handler at the trailing edge of a scheduler-clock interrupt. However, the fix introduced off-CPU stores to ->core_needs_qs. These writes did not conflict with the on-CPU stores because the CPU's leaf rcu_node structure's ->lock was held across all such stores. However, the loads from ->core_needs_qs were not promoted to READ_ONCE() and, worse yet, the code loading from ->core_needs_qs was written assuming that it was only ever updated by the corresponding CPU. So operation has been robust, but only by luck. This situation is therefore an accident waiting to happen.

This commit therefore takes a different approach. Instead of clearing ->core_needs_qs from the grace-period kthread's force-quiescent-state processing, it modifies the rcu_pending() function to suppress the rcu_sched_clock_irq() function's call to invoke_rcu_core() if there is no grace period in progress. This avoids the infinite needless RCU_SOFTIRQ handlers while still keeping all accesses to ->core_needs_qs local to the corresponding CPU.

Signed-off-by: Paul E. McKenney <[email protected]>
1 parent 516e5ae commit ed93dfc

File tree

1 file changed, 4 insertions(+), 4 deletions(-)

kernel/rcu/tree.c

@@ -1989,7 +1989,6 @@ rcu_report_qs_rdp(int cpu, struct rcu_data *rdp)
 		return;
 	}
 	mask = rdp->grpmask;
-	rdp->core_needs_qs = false;
 	if ((rnp->qsmask & mask) == 0) {
 		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	} else {
@@ -2819,6 +2818,7 @@ EXPORT_SYMBOL_GPL(cond_synchronize_rcu);
  */
 static int rcu_pending(void)
 {
+	bool gp_in_progress;
 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
 	struct rcu_node *rnp = rdp->mynode;
 
@@ -2834,16 +2834,16 @@ static int rcu_pending(void)
 		return 0;
 
 	/* Is the RCU core waiting for a quiescent state from this CPU? */
-	if (rdp->core_needs_qs && !rdp->cpu_no_qs.b.norm)
+	gp_in_progress = rcu_gp_in_progress();
+	if (rdp->core_needs_qs && !rdp->cpu_no_qs.b.norm && gp_in_progress)
 		return 1;
 
 	/* Does this CPU have callbacks ready to invoke? */
 	if (rcu_segcblist_ready_cbs(&rdp->cblist))
 		return 1;
 
 	/* Has RCU gone idle with this CPU needing another grace period? */
-	if (!rcu_gp_in_progress() &&
-	    rcu_segcblist_is_enabled(&rdp->cblist) &&
+	if (!gp_in_progress && rcu_segcblist_is_enabled(&rdp->cblist) &&
 	    (!IS_ENABLED(CONFIG_RCU_NOCB_CPU) ||
 	     !rcu_segcblist_is_offloaded(&rdp->cblist)) &&
 	    !rcu_segcblist_restempty(&rdp->cblist, RCU_NEXT_READY_TAIL))
