
Commit 79ef0c0

Merge tag 'trace-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing updates from Steven Rostedt:

 - kprobes: Restructured stack unwinder to show properly on x86 when a stack dump happens from a kretprobe callback.

 - Fix to bootconfig parsing.

 - Have tracefs allow owner and group permissions by default (only denying others). There's been pressure to allow non-root access to tracefs in a controlled fashion, and using groups is probably the safest.

 - Bootconfig memory management updates.

 - Bootconfig clean up to have the tools directory be less dependent on changes in the kernel tree.

 - Allow perf to be traced by the function tracer.

 - Rewrite of the function graph tracer to be a callback from the function tracer instead of having its own trampoline (this change will happen on an arch-by-arch basis, and currently only x86_64 implements it).

 - Allow multiple direct trampolines (bpf hooks to functions) to be batched together in one synchronization.

 - Allow histogram triggers to add variables that can perform calculations against the event's fields.

 - Use the linker to determine architecture callbacks from the ftrace trampoline to allow for proper parameter prototypes and prevent warnings from the compiler.

 - Extend histogram triggers to key off of variables.

 - Have trace recursion use bit magic to determine preempt context over if branches.

 - Have trace recursion disable preemption as all use cases do anyway.

 - Added testing for verification of tracing utilities.

 - Various small clean ups and fixes.

* tag 'trace-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (101 commits)
  tracing/histogram: Fix semicolon.cocci warnings
  tracing/histogram: Fix documentation inline emphasis warning
  tracing: Increase PERF_MAX_TRACE_SIZE to handle Sentinel1 and docker together
  tracing: Show size of requested perf buffer
  bootconfig: Initialize ret in xbc_parse_tree()
  ftrace: do CPU checking after preemption disabled
  ftrace: disable preemption when recursion locked
  tracing/histogram: Document expression arithmetic and constants
  tracing/histogram: Optimize division by a power of 2
  tracing/histogram: Covert expr to const if both operands are constants
  tracing/histogram: Simplify handling of .sym-offset in expressions
  tracing: Fix operator precedence for hist triggers expression
  tracing: Add division and multiplication support for hist triggers
  tracing: Add support for creating hist trigger variables from literal
  selftests/ftrace: Stop tracing while reading the trace file by default
  MAINTAINERS: Update KPROBES and TRACING entries
  test_kprobes: Move it from kernel/ to lib/
  docs, kprobes: Remove invalid URL and add new reference
  samples/kretprobes: Fix return value if register_kretprobe() failed
  lib/bootconfig: Fix the xbc_get_info kerneldoc
  ...
2 parents d54f486 + feea69e commit 79ef0c0

File tree: 135 files changed (+3008 / -1525 lines)


Documentation/trace/histogram.rst

Lines changed: 14 additions & 0 deletions
@@ -1763,6 +1763,20 @@ using the same key and variable from yet another event::
 
   # echo 'hist:key=pid:wakeupswitch_lat=$wakeup_lat+$switchtime_lat ...' >> event3/trigger
 
+Expressions support the use of addition, subtraction, multiplication and
+division operators (+-\*/).
+
+Note that division by zero always returns -1.
+
+Numeric constants can also be used directly in an expression::
+
+  # echo 'hist:keys=next_pid:timestamp_secs=common_timestamp/1000000 ...' >> event/trigger
+
+or assigned to a variable and referenced in a subsequent expression::
+
+  # echo 'hist:keys=next_pid:us_per_sec=1000000 ...' >> event/trigger
+  # echo 'hist:keys=next_pid:timestamp_secs=common_timestamp/$us_per_sec ...' >> event/trigger
+
 2.2.2 Synthetic Events
 ----------------------
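The documented semantics above (division by zero evaluates to -1) together with the shortlog entry "tracing/histogram: Optimize division by a power of 2" can be modelled in a few lines of C. This is only an illustrative user-space sketch of those rules; hist_div() is a made-up name, not the kernel's expression evaluator:

/* Illustrative model of the documented hist-trigger division rules;
 * hist_div() is a hypothetical helper, not a kernel function. */
#include <stdio.h>

static long long hist_div(long long dividend, long long divisor)
{
    if (divisor == 0)
        return -1;                          /* documented: division by zero returns -1 */
    if ((divisor & (divisor - 1)) == 0)     /* power of two: reduce to a shift */
        return dividend >> __builtin_ctzll(divisor);
    return dividend / divisor;
}

int main(void)
{
    printf("%lld\n", hist_div(1730000000LL, 1000000)); /* constant divisor, as in the examples above */
    printf("%lld\n", hist_div(4096, 1024));            /* power of two handled by a shift */
    printf("%lld\n", hist_div(42, 0));                 /* prints -1 */
    return 0;
}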

Documentation/trace/kprobes.rst

Lines changed: 1 addition & 1 deletion
@@ -784,6 +784,6 @@ References
 
 For additional information on Kprobes, refer to the following URLs:
 
-- https://www.ibm.com/developerworks/library/l-kprobes/index.html
+- https://lwn.net/Articles/132196/
 - https://www.kernel.org/doc/ols/2006/ols2006v2-pages-109-124.pdf
 

Documentation/trace/timerlat-tracer.rst

Lines changed: 12 additions & 12 deletions
@@ -3,7 +3,7 @@ Timerlat tracer
 ###############
 
 The timerlat tracer aims to help the preemptive kernel developers to
-find souces of wakeup latencies of real-time threads. Like cyclictest,
+find sources of wakeup latencies of real-time threads. Like cyclictest,
 the tracer sets a periodic timer that wakes up a thread. The thread then
 computes a *wakeup latency* value as the difference between the *current
 time* and the *absolute time* that the timer was set to expire. The main
@@ -50,14 +50,14 @@ The second is the *timer latency* observed by the thread. The ACTIVATION
 ID field serves to relate the *irq* execution to its respective *thread*
 execution.
 
-The *irq*/*thread* splitting is important to clarify at which context
+The *irq*/*thread* splitting is important to clarify in which context
 the unexpected high value is coming from. The *irq* context can be
-delayed by hardware related actions, such as SMIs, NMIs, IRQs
-or by a thread masking interrupts. Once the timer happens, the delay
+delayed by hardware-related actions, such as SMIs, NMIs, IRQs,
+or by thread masking interrupts. Once the timer happens, the delay
 can also be influenced by blocking caused by threads. For example, by
-postponing the scheduler execution via preempt_disable(), by the
-scheduler execution, or by masking interrupts. Threads can
-also be delayed by the interference from other threads and IRQs.
+postponing the scheduler execution via preempt_disable(), scheduler
+execution, or masking interrupts. Threads can also be delayed by the
+interference from other threads and IRQs.
 
 Tracer options
 ---------------------
@@ -68,14 +68,14 @@ directory. The timerlat configs are:
 
  - cpus: CPUs at which a timerlat thread will execute.
  - timerlat_period_us: the period of the timerlat thread.
- - osnoise/stop_tracing_us: stop the system tracing if a
+ - stop_tracing_us: stop the system tracing if a
    timer latency at the *irq* context higher than the configured
    value happens. Writing 0 disables this option.
  - stop_tracing_total_us: stop the system tracing if a
-   timer latency at the *thread* context higher than the configured
+   timer latency at the *thread* context is higher than the configured
    value happens. Writing 0 disables this option.
- - print_stack: save the stack of the IRQ ocurrence, and print
-   it afte the *thread context* event".
+ - print_stack: save the stack of the IRQ occurrence, and print
+   it after the *thread context* event".
 
 timerlat and osnoise
 ----------------------------
@@ -95,7 +95,7 @@ For example::
   timerlat/5-1035 [005] ....... 548.771104: #402268 context thread timer_latency 39960 ns
 
 In this case, the root cause of the timer latency does not point to a
-single cause, but to multiple ones. Firstly, the timer IRQ was delayed
+single cause but to multiple ones. Firstly, the timer IRQ was delayed
 for 13 us, which may point to a long IRQ disabled section (see IRQ
 stacktrace section). Then the timer interrupt that wakes up the timerlat
 thread took 7597 ns, and the qxl:21 device IRQ took 7139 ns. Finally,
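The measurement described in the first hunk (the thread computes its wakeup latency as current time minus the absolute expiry time of a periodic timer) is the same computation cyclictest performs in user space. A minimal user-space sketch of that loop, assuming only standard POSIX timer calls; this is illustrative only and not the in-kernel timerlat thread:

/* User-space sketch of the wakeup-latency measurement timerlat performs:
 * arm an absolute periodic timer, sleep until it fires, then report
 * now - expected_expiry.  Illustrative only, not kernel code. */
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 1000000L      /* 1 ms period, i.e. timerlat_period_us=1000 */

int main(void)
{
    struct timespec next, now;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < 10; i++) {
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);
        long long lat_ns = (long long)(now.tv_sec - next.tv_sec) * 1000000000LL +
                           (now.tv_nsec - next.tv_nsec);
        printf("wakeup latency: %lld ns\n", lat_ns);
    }
    return 0;
}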

MAINTAINERS

Lines changed: 4 additions & 1 deletion
@@ -10482,10 +10482,13 @@ M: Anil S Keshavamurthy <[email protected]>
 M: "David S. Miller" <[email protected]>
 M: Masami Hiramatsu <[email protected]>
 S: Maintained
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
 F: Documentation/trace/kprobes.rst
 F: include/asm-generic/kprobes.h
 F: include/linux/kprobes.h
 F: kernel/kprobes.c
+F: lib/test_kprobes.c
+F: samples/kprobes
 
 KS0108 LCD CONTROLLER DRIVER
 M: Miguel Ojeda <[email protected]>
@@ -19026,7 +19029,7 @@ TRACING
 M: Steven Rostedt <[email protected]>
 M: Ingo Molnar <[email protected]>
 S: Maintained
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
 F: Documentation/trace/ftrace.rst
 F: arch/*/*/*/ftrace.h
 F: arch/*/kernel/ftrace.c

arch/Kconfig

Lines changed: 8 additions & 0 deletions
@@ -191,6 +191,14 @@ config HAVE_OPTPROBES
 config HAVE_KPROBES_ON_FTRACE
 	bool
 
+config ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
+	bool
+	help
+	  Since kretprobes modifies return address on the stack, the
+	  stacktrace may see the kretprobe trampoline address instead
+	  of correct one. If the architecture stacktrace code and
+	  unwinder can adjust such entries, select this configuration.
+
 config HAVE_FUNCTION_ERROR_INJECTION
 	bool

arch/arc/include/asm/kprobes.h

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ struct kprobe_ctlblk {
 };
 
 int kprobe_fault_handler(struct pt_regs *regs, unsigned long cause);
-void kretprobe_trampoline(void);
+void __kretprobe_trampoline(void);
 void trap_is_kprobe(unsigned long address, struct pt_regs *regs);
 #else
 #define trap_is_kprobe(address, regs)

arch/arc/include/asm/ptrace.h

Lines changed: 5 additions & 0 deletions
@@ -149,6 +149,11 @@ static inline long regs_return_value(struct pt_regs *regs)
 	return (long)regs->r0;
 }
 
+static inline void instruction_pointer_set(struct pt_regs *regs,
+					   unsigned long val)
+{
+	instruction_pointer(regs) = val;
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PTRACE_H */

arch/arc/kernel/kprobes.c

Lines changed: 7 additions & 6 deletions
@@ -363,8 +363,9 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
 
 static void __used kretprobe_trampoline_holder(void)
 {
-	__asm__ __volatile__(".global kretprobe_trampoline\n"
-			     "kretprobe_trampoline:\n" "nop\n");
+	__asm__ __volatile__(".global __kretprobe_trampoline\n"
			     "__kretprobe_trampoline:\n"
+			     "nop\n");
 }
 
 void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
@@ -375,13 +376,13 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
 	ri->fp = NULL;
 
 	/* Replace the return addr with trampoline addr */
-	regs->blink = (unsigned long)&kretprobe_trampoline;
+	regs->blink = (unsigned long)&__kretprobe_trampoline;
 }
 
 static int __kprobes trampoline_probe_handler(struct kprobe *p,
 					      struct pt_regs *regs)
 {
-	regs->ret = __kretprobe_trampoline_handler(regs, &kretprobe_trampoline, NULL);
+	regs->ret = __kretprobe_trampoline_handler(regs, NULL);
 
 	/* By returning a non zero value, we are telling the kprobe handler
 	 * that we don't want the post_handler to run
@@ -390,7 +391,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
 }
 
 static struct kprobe trampoline_p = {
-	.addr = (kprobe_opcode_t *) &kretprobe_trampoline,
+	.addr = (kprobe_opcode_t *) &__kretprobe_trampoline,
 	.pre_handler = trampoline_probe_handler
 };
 
@@ -402,7 +403,7 @@ int __init arch_init_kprobes(void)
 
 int __kprobes arch_trampoline_kprobe(struct kprobe *p)
 {
-	if (p->addr == (kprobe_opcode_t *) &kretprobe_trampoline)
+	if (p->addr == (kprobe_opcode_t *) &__kretprobe_trampoline)
 		return 1;
 
 	return 0;
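For context on how this trampoline gets exercised: a kretprobe is installed with register_kretprobe(), and the shortlog entry "samples/kretprobes: Fix return value if register_kretprobe() failed" touches exactly that registration path. A hedged, minimal module sketch (the handler name and probed symbol are illustrative, not taken from this commit):

/* Minimal kretprobe sketch; my_ret_handler and the probed symbol are
 * illustrative.  Mirrors the idea of samples/kretprobes, not its code. */
#include <linux/module.h>
#include <linux/kprobes.h>

static int my_ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
    /* regs_return_value() reads the probed function's return value
     * (regs->r0 on ARC, per the ptrace.h hunk above). */
    pr_info("probed function returned %ld\n", (long)regs_return_value(regs));
    return 0;
}

static struct kretprobe my_kretprobe = {
    .kp.symbol_name = "kernel_clone",   /* illustrative target */
    .handler        = my_ret_handler,
    .maxactive      = 20,
};

static int __init my_kretprobe_init(void)
{
    /* Propagate the real error code on failure, which is what the
     * samples/kretprobes fix in this series is about. */
    return register_kretprobe(&my_kretprobe);
}

static void __exit my_kretprobe_exit(void)
{
    unregister_kretprobe(&my_kretprobe);
}

module_init(my_kretprobe_init);
module_exit(my_kretprobe_exit);
MODULE_LICENSE("GPL");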

arch/arm/Kconfig

Lines changed: 1 addition & 0 deletions
@@ -3,6 +3,7 @@ config ARM
 	bool
 	default y
 	select ARCH_32BIT_OFF_T
+	select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE if HAVE_KRETPROBES && FRAME_POINTER && !ARM_UNWIND
 	select ARCH_HAS_BINFMT_FLAT
 	select ARCH_HAS_DEBUG_VIRTUAL if MMU
 	select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE

arch/arm/include/asm/stacktrace.h

Lines changed: 9 additions & 0 deletions
@@ -3,6 +3,7 @@
 #define __ASM_STACKTRACE_H
 
 #include <asm/ptrace.h>
+#include <linux/llist.h>
 
 struct stackframe {
 	/*
@@ -13,6 +14,10 @@ struct stackframe {
 	unsigned long sp;
 	unsigned long lr;
 	unsigned long pc;
+#ifdef CONFIG_KRETPROBES
+	struct llist_node *kr_cur;
+	struct task_struct *tsk;
+#endif
 };
 
 static __always_inline
@@ -22,6 +27,10 @@ void arm_get_current_stackframe(struct pt_regs *regs, struct stackframe *frame)
 	frame->sp = regs->ARM_sp;
 	frame->lr = regs->ARM_lr;
 	frame->pc = regs->ARM_pc;
+#ifdef CONFIG_KRETPROBES
+	frame->kr_cur = NULL;
+	frame->tsk = current;
+#endif
 }
 
 extern int unwind_frame(struct stackframe *frame);
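The new kr_cur and tsk fields give the frame walker the state it needs to replace a kretprobe trampoline address with the real return address, which is what ARCH_CORRECT_STACKTRACE_ON_KRETPROBE (see the arch/Kconfig hunk above) advertises. A hedged sketch of that fix-up, assuming the is_kretprobe_trampoline() and kretprobe_find_ret_addr() helpers from <linux/kprobes.h> introduced by this series; the actual call site in the ARM unwinder may be arranged differently:

/* Sketch of the per-frame recovery an unwinder does when the saved pc is
 * the kretprobe trampoline.  Assumes the kretprobe helpers added by this
 * series; not the literal ARM implementation. */
#include <linux/kprobes.h>

static void unwind_recover_kretprobe(struct stackframe *frame)
{
#ifdef CONFIG_KRETPROBES
    if (is_kretprobe_trampoline(frame->pc))
        /* kr_cur tracks the walk through the task's kretprobe instances,
         * so nested kretprobes on the same stack resolve in order. */
        frame->pc = kretprobe_find_ret_addr(frame->tsk,
                                            (void *)frame->fp,
                                            &frame->kr_cur);
#endif
}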
