Commit 88ca6a7

ring-buffer: Handle resize in early boot up
With the new command line option that allows trace event triggers to be added at boot, the "snapshot" trigger will allocate the snapshot buffer very early, when interrupts cannot be enabled. Allocating the ring buffer is not the problem; resizing it is, as the resize code does synchronization that cannot be performed at early boot.

To handle this, first change the raw_spin_lock_irq() in rb_insert_pages() to raw_spin_lock_irqsave(), so that unlocking that spin lock will not enable interrupts.

Next, where it calls schedule_work_on(), disable migration and check whether the CPU to update is the current CPU. If so, perform the work directly; otherwise, re-enable migration and call schedule_work_on() for the CPU that is being updated. rb_insert_pages() just needs to run on the CPU that it is updating, and needs neither preemption nor interrupts disabled when called.

Link: https://lore.kernel.org/lkml/Y5J%[email protected]/
Link: https://lore.kernel.org/linux-trace-kernel/[email protected]

Cc: Masami Hiramatsu <[email protected]>
Cc: Andrew Morton <[email protected]>
Fixes: a01fdc8 ("tracing: Add trace_trigger kernel command line option")
Reported-by: Ross Zwisler <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
Tested-by: Ross Zwisler <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
1 parent 608c6ed commit 88ca6a7

File tree: 1 file changed, +25 −7 lines


kernel/trace/ring_buffer.c

Lines changed: 25 additions & 7 deletions
@@ -2062,8 +2062,10 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 {
         struct list_head *pages = &cpu_buffer->new_pages;
         int retries, success;
+        unsigned long flags;
 
-        raw_spin_lock_irq(&cpu_buffer->reader_lock);
+        /* Can be called at early boot up, where interrupts must not been enabled */
+        raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
         /*
          * We are holding the reader lock, so the reader page won't be swapped
          * in the ring buffer. Now we are racing with the writer trying to
@@ -2120,7 +2122,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
          * tracing
          */
         RB_WARN_ON(cpu_buffer, !success);
-        raw_spin_unlock_irq(&cpu_buffer->reader_lock);
+        raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
 
         /* free pages if they weren't inserted */
         if (!success) {
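
Taken together, the two hunks above switch rb_insert_pages() to the save/restore locking idiom: raw_spin_unlock_irq() unconditionally re-enables interrupts on unlock, while raw_spin_unlock_irqrestore() puts back whatever state raw_spin_lock_irqsave() recorded, so interrupts stay off when this runs at early boot. A minimal sketch of the idiom, using a hypothetical lock and function name rather than the ring-buffer code itself:

static DEFINE_RAW_SPINLOCK(example_lock);       /* illustrative, not from the patch */

static void example_critical_section(void)
{
        unsigned long flags;

        /* Record the current interrupt state, then disable interrupts. */
        raw_spin_lock_irqsave(&example_lock, flags);

        /* ... critical section ... */

        /*
         * Restore the recorded state: if interrupts were already off
         * (as at early boot), they stay off. raw_spin_unlock_irq()
         * would have turned them on unconditionally.
         */
        raw_spin_unlock_irqrestore(&example_lock, flags);
}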
@@ -2248,8 +2250,16 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
                                 rb_update_pages(cpu_buffer);
                                 cpu_buffer->nr_pages_to_update = 0;
                         } else {
-                                schedule_work_on(cpu,
-                                                &cpu_buffer->update_pages_work);
+                                /* Run directly if possible. */
+                                migrate_disable();
+                                if (cpu != smp_processor_id()) {
+                                        migrate_enable();
+                                        schedule_work_on(cpu,
+                                                         &cpu_buffer->update_pages_work);
+                                } else {
+                                        update_pages_handler(&cpu_buffer->update_pages_work);
+                                        migrate_enable();
+                                }
                         }
                 }
 
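
The "run directly if possible" branch above is the core of the fix: with migration disabled, smp_processor_id() is stable, and if the task is already on the CPU being resized the work function can be invoked inline instead of going through the workqueue, which is not usable at early boot. The same idiom as a standalone helper; the name and signature are hypothetical, not part of the patch:

/* Hypothetical helper illustrating the idiom used in the hunk above. */
static void run_or_schedule_on(int cpu, struct work_struct *work,
                               work_func_t func)
{
        migrate_disable();      /* pin this task to its current CPU */
        if (cpu == smp_processor_id()) {
                /* Already on the target CPU: invoke the work inline. */
                func(work);
                migrate_enable();
        } else {
                /* Let the target CPU's workqueue run it instead. */
                migrate_enable();
                schedule_work_on(cpu, work);
        }
}

Disabling migration suffices here because, per the commit message, rb_insert_pages() only needs to run on the CPU it is updating; it does not need preemption or interrupts disabled.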
@@ -2298,9 +2308,17 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
                 if (!cpu_online(cpu_id))
                         rb_update_pages(cpu_buffer);
                 else {
-                        schedule_work_on(cpu_id,
-                                        &cpu_buffer->update_pages_work);
-                        wait_for_completion(&cpu_buffer->update_done);
+                        /* Run directly if possible. */
+                        migrate_disable();
+                        if (cpu_id == smp_processor_id()) {
+                                rb_update_pages(cpu_buffer);
+                                migrate_enable();
+                        } else {
+                                migrate_enable();
+                                schedule_work_on(cpu_id,
+                                                 &cpu_buffer->update_pages_work);
+                                wait_for_completion(&cpu_buffer->update_done);
+                        }
                 }
 
                 cpu_buffer->nr_pages_to_update = 0;
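
In the remote case the resize still waits for the deferred work, pairing wait_for_completion(&cpu_buffer->update_done) with a complete() in the work handler; in the same-CPU case rb_update_pages() runs synchronously, so there is nothing to wait for. A sketch of what update_pages_handler() plausibly looks like, assuming it simply runs the page update and signals the completion, as the pairing implies (this body is inferred, not shown in the diff):

static void update_pages_handler(struct work_struct *work)
{
        struct ring_buffer_per_cpu *cpu_buffer =
                container_of(work, struct ring_buffer_per_cpu,
                             update_pages_work);

        rb_update_pages(cpu_buffer);
        complete(&cpu_buffer->update_done);     /* wake the waiting resizer */
}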
