
Commit 8ae0ae6

rcu: Provide rcu_irq_exit_preempt()
Interrupts and exceptions invoke rcu_irq_enter() on entry and need to
invoke rcu_irq_exit() before they either return to the interrupted code
or invoke the scheduler due to preemption.

The general assumption is that RCU idle code has to have preemption
disabled so that a return from interrupt cannot schedule. So the return
from interrupt code invokes rcu_irq_exit() and preempt_schedule_irq().

If there is any imbalance in the rcu_irq/nmi* invocations, or RCU idle
code had preemption enabled, then this goes unnoticed until the CPU goes
idle or some other RCU check is executed.

Provide rcu_irq_exit_preempt() which can be invoked from the
interrupt/exception return code in case that preemption is enabled. It
invokes rcu_irq_exit() and contains a few sanity checks in case that
CONFIG_PROVE_RCU is enabled to catch such issues directly.

Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Paul E. McKenney <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
1 parent 9ea366f commit 8ae0ae6
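For orientation, here is a minimal sketch (not part of this commit) of how an interrupt return path with kernel preemption enabled is expected to use the new helper instead of the plain rcu_irq_exit(). The function irq_exit_to_kernel_mode() is a made-up name for this illustration; rcu_irq_exit_preempt(), rcu_irq_exit(), need_resched(), preempt_count() and preempt_schedule_irq() are existing kernel interfaces.

/*
 * Illustrative sketch only; not code from this commit. Shows where
 * rcu_irq_exit_preempt() sits on a preemption-enabled return path.
 */
static void irq_exit_to_kernel_mode(void)	/* hypothetical caller */
{
	if (IS_ENABLED(CONFIG_PREEMPTION)) {
		/*
		 * Scheduling from this point is possible, so leave the
		 * RCU irq context with the additional CONFIG_PROVE_RCU
		 * sanity checks before the scheduler can be entered.
		 */
		rcu_irq_exit_preempt();
		if (!preempt_count() && need_resched())
			preempt_schedule_irq();
	} else {
		/* No kernel preemption: the plain exit is sufficient. */
		rcu_irq_exit();
	}
}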

File tree

3 files changed: 24 additions, 0 deletions


include/linux/rcutiny.h

Lines changed: 1 addition & 0 deletions
@@ -71,6 +71,7 @@ static inline void rcu_irq_enter(void) { }
 static inline void rcu_irq_exit_irqson(void) { }
 static inline void rcu_irq_enter_irqson(void) { }
 static inline void rcu_irq_exit(void) { }
+static inline void rcu_irq_exit_preempt(void) { }
 static inline void exit_rcu(void) { }
 static inline bool rcu_preempt_need_deferred_qs(struct task_struct *t)
 {

include/linux/rcutree.h

Lines changed: 1 addition & 0 deletions
@@ -47,6 +47,7 @@ void rcu_idle_enter(void);
 void rcu_idle_exit(void);
 void rcu_irq_enter(void);
 void rcu_irq_exit(void);
+void rcu_irq_exit_preempt(void);
 void rcu_irq_enter_irqson(void);
 void rcu_irq_exit_irqson(void);

kernel/rcu/tree.c

Lines changed: 22 additions & 0 deletions
@@ -743,6 +743,28 @@ void noinstr rcu_irq_exit(void)
 	rcu_nmi_exit();
 }
 
+/**
+ * rcu_irq_exit_preempt - Inform RCU that current CPU is exiting irq
+ *			  towards in kernel preemption
+ *
+ * Same as rcu_irq_exit() but has a sanity check that scheduling is safe
+ * from RCU point of view. Invoked from return from interrupt before kernel
+ * preemption.
+ */
+void rcu_irq_exit_preempt(void)
+{
+	lockdep_assert_irqs_disabled();
+	rcu_nmi_exit();
+
+	RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nesting) <= 0,
+			 "RCU dynticks_nesting counter underflow/zero!");
+	RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nmi_nesting) !=
+			 DYNTICK_IRQ_NONIDLE,
+			 "Bad RCU dynticks_nmi_nesting counter\n");
+	RCU_LOCKDEP_WARN(rcu_dynticks_curr_cpu_in_eqs(),
+			 "RCU in extended quiescent state!");
+}
+
 /*
  * Wrapper for rcu_irq_exit() where interrupts are enabled.
  *
