
Commit 1a90bfd

Sebastian Andrzej Siewior authored and KAGA-KOKO committed
smp: Make softirq handling RT safe in flush_smp_call_function_queue()
flush_smp_call_function_queue() invokes do_softirq() which is not available on PREEMPT_RT. flush_smp_call_function_queue() is invoked from the idle task and the migration task with preemption or interrupts disabled.

So RT kernels cannot process soft interrupts in that context, as that has to acquire 'sleeping spinlocks', which is not possible with preemption or interrupts disabled and forbidden from the idle task anyway.

The currently known SMP function call which raises a soft interrupt is in the block layer, but this functionality is not enabled on RT kernels due to latency and performance reasons.

RT could wake up ksoftirqd unconditionally, but this wants to be avoided if there were soft interrupts pending already when this is invoked in the context of the migration task. The migration task might have preempted a threaded interrupt handler which raised a soft interrupt, but did not reach the local_bh_enable() to process it. The "running" ksoftirqd might prevent the handling in the interrupt thread context, which is causing latency issues.

Add a new function which handles this case explicitly for RT and falls back to do_softirq() on !RT kernels. In the RT case this warns when one of the flushed SMP function calls raised a soft interrupt, so this can be investigated.

[ tglx: Moved the RT part out of SMP code ]

Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Link: https://lore.kernel.org/r/[email protected]
1 parent 16bf5a5 commit 1a90bfd

File tree

3 files changed: +26 −1 lines changed


include/linux/interrupt.h

Lines changed: 9 additions & 0 deletions
@@ -607,6 +607,15 @@ struct softirq_action
 asmlinkage void do_softirq(void);
 asmlinkage void __do_softirq(void);
 
+#ifdef CONFIG_PREEMPT_RT
+extern void do_softirq_post_smp_call_flush(unsigned int was_pending);
+#else
+static inline void do_softirq_post_smp_call_flush(unsigned int unused)
+{
+	do_softirq();
+}
+#endif
+
 extern void open_softirq(int nr, void (*action)(struct softirq_action *));
 extern void softirq_init(void);
 extern void __raise_softirq_irqoff(unsigned int nr);

kernel/smp.c

Lines changed: 4 additions & 1 deletion
@@ -696,6 +696,7 @@ static void __flush_smp_call_function_queue(bool warn_cpu_offline)
  */
 void flush_smp_call_function_queue(void)
 {
+	unsigned int was_pending;
 	unsigned long flags;
 
 	if (llist_empty(this_cpu_ptr(&call_single_queue)))
@@ -704,9 +705,11 @@ void flush_smp_call_function_queue(void)
 	cfd_seq_store(this_cpu_ptr(&cfd_seq_local)->idle, CFD_SEQ_NOCPU,
 		      smp_processor_id(), CFD_SEQ_IDLE);
 	local_irq_save(flags);
+	/* Get the already pending soft interrupts for RT enabled kernels */
+	was_pending = local_softirq_pending();
 	__flush_smp_call_function_queue(true);
 	if (local_softirq_pending())
-		do_softirq();
+		do_softirq_post_smp_call_flush(was_pending);
 
 	local_irq_restore(flags);
 }

kernel/softirq.c

Lines changed: 13 additions & 0 deletions
@@ -294,6 +294,19 @@ static inline void invoke_softirq(void)
 		wakeup_softirqd();
 }
 
+/*
+ * flush_smp_call_function_queue() can raise a soft interrupt in a function
+ * call. On RT kernels this is undesired and the only known functionality
+ * in the block layer which does this is disabled on RT. If soft interrupts
+ * get raised which haven't been raised before the flush, warn so it can be
+ * investigated.
+ */
+void do_softirq_post_smp_call_flush(unsigned int was_pending)
+{
+	if (WARN_ON_ONCE(was_pending != local_softirq_pending()))
+		invoke_softirq();
+}
+
 #else /* CONFIG_PREEMPT_RT */
 
 /*
