
Commit 52103be

Peter Zijlstra authored, Ingo Molnar committed
smp: Optimize flush_smp_call_function_queue()
The call_single_queue can contain (two) different kinds of callbacks, synchronous and asynchronous. The current interrupt handler runs them in order, which means that remote CPUs waiting on their synchronous call can be delayed by asynchronous callbacks running first. Rework the interrupt handler to run the synchronous callbacks first.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent 19a1f5e commit 52103be

File tree

1 file changed (+23, -4 lines)

kernel/smp.c

Lines changed: 23 additions & 4 deletions
@@ -209,9 +209,9 @@ void generic_smp_call_function_single_interrupt(void)
  */
 static void flush_smp_call_function_queue(bool warn_cpu_offline)
 {
-	struct llist_head *head;
-	struct llist_node *entry;
 	call_single_data_t *csd, *csd_next;
+	struct llist_node *entry, *prev;
+	struct llist_head *head;
 	static bool warned;
 
 	lockdep_assert_irqs_disabled();
@@ -235,20 +235,39 @@ static void flush_smp_call_function_queue(bool warn_cpu_offline)
 			csd->func);
 	}
 
+	/*
+	 * First; run all SYNC callbacks, people are waiting for us.
+	 */
+	prev = NULL;
 	llist_for_each_entry_safe(csd, csd_next, entry, llist) {
 		smp_call_func_t func = csd->func;
 		void *info = csd->info;
 
 		/* Do we wait until *after* callback? */
 		if (csd->flags & CSD_FLAG_SYNCHRONOUS) {
+			if (prev) {
+				prev->next = &csd_next->llist;
+			} else {
+				entry = &csd_next->llist;
+			}
 			func(info);
 			csd_unlock(csd);
 		} else {
-			csd_unlock(csd);
-			func(info);
+			prev = &csd->llist;
 		}
 	}
 
+	/*
+	 * Second; run all !SYNC callbacks.
+	 */
+	llist_for_each_entry_safe(csd, csd_next, entry, llist) {
+		smp_call_func_t func = csd->func;
+		void *info = csd->info;
+
+		csd_unlock(csd);
+		func(info);
+	}
+
 	/*
 	 * Handle irq works queued remotely by irq_work_queue_on().
 	 * Smp functions above are typically synchronous so they

0 commit comments
