Commit 51d1579

ring-buffer: Protect ring_buffer_reset() from reentrancy

Resetting the entire ring buffer used to simply go through and reset each individual CPU buffer, each of which had its own protection and synchronization. But this was very slow, due to performing a synchronization for each CPU. The code was reshuffled to do one disabling of all CPU buffers, followed by a single RCU synchronization, and then the resetting of each of the CPU buffers. Unfortunately, the mutex that prevented multiple occurrences of resetting the buffer was not moved up to the top-level function, leaving the global reset unprotected against reentrancy. Take the ring buffer mutex around the global reset.

Cc: [email protected]
Fixes: b23d7a5 ("ring-buffer: speed up buffer resets by avoiding synchronize_rcu for each CPU")
Reported-by: "Tzvetomir Stoyanov (VMware)" <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
1 parent 67d4f6e commit 51d1579

1 file changed: 5 additions, 0 deletions

kernel/trace/ring_buffer.c (5 additions, 0 deletions)

@@ -5228,6 +5228,9 @@ void ring_buffer_reset(struct trace_buffer *buffer)
 	struct ring_buffer_per_cpu *cpu_buffer;
 	int cpu;
 
+	/* prevent another thread from changing buffer sizes */
+	mutex_lock(&buffer->mutex);
+
 	for_each_buffer_cpu(buffer, cpu) {
 		cpu_buffer = buffer->buffers[cpu];
 
@@ -5246,6 +5249,8 @@ void ring_buffer_reset(struct trace_buffer *buffer)
 		atomic_dec(&cpu_buffer->record_disabled);
 		atomic_dec(&cpu_buffer->resize_disabled);
 	}
+
+	mutex_unlock(&buffer->mutex);
 }
 EXPORT_SYMBOL_GPL(ring_buffer_reset);
