
Commit d15121b

Paolo Abeni authored and KAGA-KOKO committed
Revert "softirq: Let ksoftirqd do its job"
This reverts the following commits in a single change, to avoid the known
bad intermediate states that reverting them individually would introduce:

  4cd13c2 ("softirq: Let ksoftirqd do its job")
  3c53776 ("Mark HI and TASKLET softirq synchronous")
  1342d80 ("softirq: Don't skip softirq execution when softirq thread is parking")

Due to the first of those commits, when the ksoftirqd threads take charge
of softirq processing, the system can experience high latencies.

In the past a few workarounds have been implemented for specific
side-effects of the initial ksoftirqd enforcement commit:

  commit 1ff6882 ("watchdog: core: make sure the watchdog_worker is not deferred")
  commit 8d5755b ("watchdog: softdog: fire watchdog even if softirqs do not get to run")
  commit 217f697 ("net: busy-poll: allow preemption in sk_busy_loop()")
  commit 3c53776 ("Mark HI and TASKLET softirq synchronous")

But the latency problem still exists in real-life workloads; see the link
below.

The reverted commit intended to solve a live-lock scenario that can now be
addressed with the NAPI threaded mode, introduced with commit 29863d4
("net: implement threaded-able napi poll loop support"), which is by now
quite stable.

While a complete solution that puts softirq processing under proper
resource control would be preferable, that has proven to be a very hard
task. In the short term, remove the main pain point and slightly simplify
the current softirq implementation.

Signed-off-by: Paolo Abeni <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Jason Xing <[email protected]>
Reviewed-by: Jakub Kicinski <[email protected]>
Reviewed-by: Eric Dumazet <[email protected]>
Reviewed-by: Sebastian Andrzej Siewior <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/netdev/[email protected]
Link: https://lore.kernel.org/r/57e66b364f1b6f09c9bc0316742c3b14f4ce83bd.1683526542.git.pabeni@redhat.com
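As the commit message notes, the live-lock scenario the reverted commit targeted is now addressed by threaded NAPI. On kernels that include commit 29863d4, threaded polling can be toggled per network device through sysfs; a minimal sketch, assuming a hypothetical interface name `eth0` (substitute your own device):

```shell
# Move NAPI polling for eth0 into dedicated "napi/eth0-*" kernel threads,
# so packet processing is scheduled like any other task instead of
# monopolizing softirq context.
echo 1 > /sys/class/net/eth0/threaded

# Verify the setting: 1 = threaded NAPI, 0 = classic softirq polling.
cat /sys/class/net/eth0/threaded
```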
1 parent ac9a786 commit d15121b

File tree

1 file changed

+2
-20
lines changed


kernel/softirq.c

Lines changed: 2 additions & 20 deletions
@@ -80,21 +80,6 @@ static void wakeup_softirqd(void)
 		wake_up_process(tsk);
 }
 
-/*
- * If ksoftirqd is scheduled, we do not want to process pending softirqs
- * right now. Let ksoftirqd handle this at its own rate, to get fairness,
- * unless we're doing some of the synchronous softirqs.
- */
-#define SOFTIRQ_NOW_MASK ((1 << HI_SOFTIRQ) | (1 << TASKLET_SOFTIRQ))
-static bool ksoftirqd_running(unsigned long pending)
-{
-	struct task_struct *tsk = __this_cpu_read(ksoftirqd);
-
-	if (pending & SOFTIRQ_NOW_MASK)
-		return false;
-	return tsk && task_is_running(tsk) && !__kthread_should_park(tsk);
-}
-
 #ifdef CONFIG_TRACE_IRQFLAGS
 DEFINE_PER_CPU(int, hardirqs_enabled);
 DEFINE_PER_CPU(int, hardirq_context);
@@ -236,7 +221,7 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
 		goto out;
 
 	pending = local_softirq_pending();
-	if (!pending || ksoftirqd_running(pending))
+	if (!pending)
 		goto out;
 
 	/*
@@ -432,9 +417,6 @@ static inline bool should_wake_ksoftirqd(void)
 
 static inline void invoke_softirq(void)
 {
-	if (ksoftirqd_running(local_softirq_pending()))
-		return;
-
 	if (!force_irqthreads() || !__this_cpu_read(ksoftirqd)) {
 #ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
 		/*
@@ -468,7 +450,7 @@ asmlinkage __visible void do_softirq(void)
 
 	pending = local_softirq_pending();
 
-	if (pending && !ksoftirqd_running(pending))
+	if (pending)
 		do_softirq_own_stack();
 
 	local_irq_restore(flags);
