
Commit bf0a574

Frederic Weisbecker authored and Neeraj Upadhyay (AMD) committed
rcu/exp: Remove needless CPU up quiescent state report
A CPU coming online checks for an ongoing grace period and reports a quiescent state accordingly if needed. This special treatment, which shortcuts the expedited IPI, originated as an optimization in commit 338b0f7 ("rcu: Better hotplug handling for synchronize_sched_expedited()").

The point is to avoid an IPI while waiting for a CPU to become online or failing to become offline. However this is pointless and even error prone for several reasons:

* If the CPU has been seen offline in the first round scanning offline and idle CPUs, no IPI is even tried and the quiescent state is reported on behalf of the CPU.

* This means that if the IPI fails, the CPU just became offline. So it is unlikely to become online right away, unless the CPU hotplug operation failed and rolled back, which is a rare event that can wait a jiffy for a new IPI to be issued.

* But then the "optimization" applying on a failing CPU hotplug down only applies to !PREEMPT_RCU.

* This force-reports a quiescent state even if ->cpu_no_qs.b.exp is not set. As a result it can race with remote QS reports on the same rdp. Fortunately it happens to be OK, but an accident is waiting to happen.

For all those reasons, remove this optimization, which is not worth keeping around.

Reported-by: Paul E. McKenney <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
Reviewed-by: Paul E. McKenney <[email protected]>
Signed-off-by: Joel Fernandes <[email protected]>
Signed-off-by: Neeraj Upadhyay (AMD) <[email protected]>
1 parent 4b9432e commit bf0a574

File tree

2 files changed: +2 −45 lines


kernel/rcu/tree.c

Lines changed: 0 additions & 2 deletions

@@ -160,7 +160,6 @@ static void rcu_report_qs_rnp(unsigned long mask, struct rcu_node *rnp,
 			      unsigned long gps, unsigned long flags);
 static void invoke_rcu_core(void);
 static void rcu_report_exp_rdp(struct rcu_data *rdp);
-static void sync_sched_exp_online_cleanup(int cpu);
 static void check_cb_ovld_locked(struct rcu_data *rdp, struct rcu_node *rnp);
 static bool rcu_rdp_is_offloaded(struct rcu_data *rdp);
 static bool rcu_rdp_cpu_online(struct rcu_data *rdp);

@@ -4268,7 +4267,6 @@ int rcutree_online_cpu(unsigned int cpu)
 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	if (rcu_scheduler_active == RCU_SCHEDULER_INACTIVE)
 		return 0; /* Too early in boot for scheduler work. */
-	sync_sched_exp_online_cleanup(cpu);

 	// Stop-machine done, so allow nohz_full to disable tick.
 	tick_dep_clear(TICK_DEP_BIT_RCU);

kernel/rcu/tree_exp.h

Lines changed: 2 additions & 43 deletions

@@ -751,12 +751,8 @@ static void rcu_exp_handler(void *unused)
 	struct task_struct *t = current;

 	/*
-	 * First, is there no need for a quiescent state from this CPU,
-	 * or is this CPU already looking for a quiescent state for the
-	 * current grace period? If either is the case, just leave.
-	 * However, this should not happen due to the preemptible
-	 * sync_sched_exp_online_cleanup() implementation being a no-op,
-	 * so warn if this does happen.
+	 * WARN if the CPU is unexpectedly already looking for a
+	 * QS or has already reported one.
 	 */
 	ASSERT_EXCLUSIVE_WRITER_SCOPED(rdp->cpu_no_qs.b.exp);
 	if (WARN_ON_ONCE(!(READ_ONCE(rnp->expmask) & rdp->grpmask) ||

@@ -803,11 +799,6 @@ static void rcu_exp_handler(void *unused)
 		WARN_ON_ONCE(1);
 	}

-/* PREEMPTION=y, so no PREEMPTION=n expedited grace period to clean up after. */
-static void sync_sched_exp_online_cleanup(int cpu)
-{
-}
-
 /*
  * Scan the current list of tasks blocked within RCU read-side critical
  * sections, printing out the tid of each that is blocking the current

@@ -885,38 +876,6 @@ static void rcu_exp_handler(void *unused)
 	rcu_exp_need_qs();
 }

-/* Send IPI for expedited cleanup if needed at end of CPU-hotplug operation. */
-static void sync_sched_exp_online_cleanup(int cpu)
-{
-	unsigned long flags;
-	int my_cpu;
-	struct rcu_data *rdp;
-	int ret;
-	struct rcu_node *rnp;
-
-	rdp = per_cpu_ptr(&rcu_data, cpu);
-	rnp = rdp->mynode;
-	my_cpu = get_cpu();
-	/* Quiescent state either not needed or already requested, leave. */
-	if (!(READ_ONCE(rnp->expmask) & rdp->grpmask) ||
-	    READ_ONCE(rdp->cpu_no_qs.b.exp)) {
-		put_cpu();
-		return;
-	}
-	/* Quiescent state needed on current CPU, so set it up locally. */
-	if (my_cpu == cpu) {
-		local_irq_save(flags);
-		rcu_exp_need_qs();
-		local_irq_restore(flags);
-		put_cpu();
-		return;
-	}
-	/* Quiescent state needed on some other CPU, send IPI. */
-	ret = smp_call_function_single(cpu, rcu_exp_handler, NULL, 0);
-	put_cpu();
-	WARN_ON_ONCE(ret);
-}
-
 /*
  * Because preemptible RCU does not exist, we never have to check for
  * tasks blocked within RCU read-side critical sections that are
