Commit af0f835

bin/bpftrace-tools/dist: Import bpftrace tools v0.12.1
Skipped bashreadline as we link to an external library, whereas the script instruments the internal implementation. Skipped xfsdist as we ship without XFS support in kernel or userspace. Skipped the CMake file as it's not needed.

--HG--
branch : bpftrace-tools

66 files changed, +2966 -0 lines changed
Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
#!/usr/bin/env bpftrace
/*
 * biolatency.bt    Block I/O latency as a histogram.
 *                  For Linux, uses bpftrace, eBPF.
 *
 * This is a bpftrace version of the bcc tool of the same name.
 *
 * Copyright 2018 Netflix, Inc.
 * Licensed under the Apache License, Version 2.0 (the "License")
 *
 * 13-Sep-2018  Brendan Gregg  Created this.
 */

BEGIN
{
    printf("Tracing block device I/O... Hit Ctrl-C to end.\n");
}

kprobe:blk_account_io_start
{
    @start[arg0] = nsecs;
}

kprobe:blk_account_io_done
/@start[arg0]/
{
    @usecs = hist((nsecs - @start[arg0]) / 1000);
    delete(@start[arg0]);
}

END
{
    clear(@start);
}
Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
Demonstrations of biolatency, the Linux BPF/bpftrace version.


This traces block I/O, and shows latency as a power-of-2 histogram. For example:

# ./biolatency.bt
Attaching 3 probes...
Tracing block device I/O... Hit Ctrl-C to end.
^C

@usecs:
[256, 512)             2 |                                                    |
[512, 1K)             10 |@                                                   |
[1K, 2K)             426 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[2K, 4K)             230 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@                        |
[4K, 8K)               9 |@                                                   |
[8K, 16K)            128 |@@@@@@@@@@@@@@@                                     |
[16K, 32K)            68 |@@@@@@@@                                            |
[32K, 64K)             0 |                                                    |
[64K, 128K)            0 |                                                    |
[128K, 256K)          10 |@                                                   |

While tracing, this shows that 426 block I/O had a latency of between 1K and 2K
usecs (1024 and 2048 microseconds), which is between 1 and 2 milliseconds.
There are also two modes visible, one between 1 and 2 milliseconds, and another
between 8 and 16 milliseconds: this sounds like cache hits and cache misses.
There were also 10 I/O with latency 128 to 256 ms: outliers. Other tools and
instrumentation, like biosnoop.bt, can shed more light on those outliers.


There is another version of this tool in bcc: https://github.com/iovisor/bcc
The bcc version provides options to customize the output.
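
As a rough illustration of one such customization (per-device histograms, like
the bcc -D option), here is a minimal sketch that is not part of this import.
It assumes the block:block_rq_issue and block:block_rq_complete tracepoints and
their dev/sector arguments are available on the target kernel:

#!/usr/bin/env bpftrace
/* Hypothetical per-device sketch (not part of this import). */
tracepoint:block:block_rq_issue
{
    /* key each in-flight request on (device, starting sector) */
    @start[args->dev, args->sector] = nsecs;
}

tracepoint:block:block_rq_complete
/@start[args->dev, args->sector]/
{
    /* one latency histogram per device, in microseconds */
    @usecs[args->dev] = hist((nsecs - @start[args->dev, args->sector]) / 1000);
    delete(@start[args->dev, args->sector]);
}

END
{
    clear(@start);
}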
Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
#!/usr/bin/env bpftrace
#include <linux/blkdev.h>
/*
 * biosnoop.bt   Block I/O tracing tool, showing per I/O latency.
 *               For Linux, uses bpftrace, eBPF.
 *
 * TODO: switch to block tracepoints. Add offset and size columns.
 *
 * This is a bpftrace version of the bcc tool of the same name.
 *
 * 15-Nov-2017  Brendan Gregg  Created this.
 */

BEGIN
{
    printf("%-12s %-7s %-16s %-6s %7s\n", "TIME(ms)", "DISK", "COMM", "PID", "LAT(ms)");
}

kprobe:blk_account_io_start
{
    @start[arg0] = nsecs;
    @iopid[arg0] = pid;
    @iocomm[arg0] = comm;
    @disk[arg0] = ((struct request *)arg0)->rq_disk->disk_name;
}

kprobe:blk_account_io_done
/@start[arg0] != 0 && @iopid[arg0] != 0 && @iocomm[arg0] != ""/

{
    $now = nsecs;
    printf("%-12u %-7s %-16s %-6d %7d\n",
        elapsed / 1e6, @disk[arg0], @iocomm[arg0], @iopid[arg0],
        ($now - @start[arg0]) / 1e6);

    delete(@start[arg0]);
    delete(@iopid[arg0]);
    delete(@iocomm[arg0]);
    delete(@disk[arg0]);
}

END
{
    clear(@start);
    clear(@iopid);
    clear(@iocomm);
    clear(@disk);
}
Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
Demonstrations of biosnoop, the Linux BPF/bpftrace version.


This traces block I/O, and shows the issuing process (at least, the process
that was on-CPU at the time of queue insert) and the latency of the I/O:

# ./biosnoop.bt
Attaching 4 probes...
TIME(ms)     DISK    COMM             PID    LAT(ms)
611          nvme0n1 bash             4179        10
611          nvme0n1 cksum            4179         0
627          nvme0n1 cksum            4179        15
641          nvme0n1 cksum            4179        13
644          nvme0n1 cksum            4179         3
658          nvme0n1 cksum            4179        13
673          nvme0n1 cksum            4179        14
686          nvme0n1 cksum            4179        13
701          nvme0n1 cksum            4179        14
710          nvme0n1 cksum            4179         8
717          nvme0n1 cksum            4179         6
728          nvme0n1 cksum            4179        10
735          nvme0n1 cksum            4179         6
751          nvme0n1 cksum            4179        10
758          nvme0n1 cksum            4179        17
783          nvme0n1 cksum            4179        12
796          nvme0n1 cksum            4179        25
802          nvme0n1 cksum            4179        32
[...]

This output shows the cksum process was issuing block I/O, which were
completing with around 12 milliseconds of latency. Each block I/O event is
printed out, with a completion time as the first column, measured from
program start.


An example of some background flushing:

# ./biosnoop.bt
Attaching 4 probes...
TIME(ms)     DISK    COMM             PID    LAT(ms)
2966         nvme0n1 jbd2/nvme0n1-8   615          0
2967         nvme0n1 jbd2/nvme0n1-8   615          0
[...]


There is another version of this tool in bcc: https://github.com/iovisor/bcc
The bcc version provides more fields.
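
The TODO in biosnoop.bt mentions switching to the block tracepoints and adding
a size column. As a hedged sketch of what that could look like (not part of
this import, and assuming the block:block_rq_issue and block:block_rq_complete
tracepoints with dev, sector and bytes arguments), the size can be recorded at
issue time and printed at completion:

#!/usr/bin/env bpftrace
/* Hypothetical tracepoint-based sketch (not part of this import). */
BEGIN
{
    printf("%-12s %-10s %-10s %7s\n", "TIME(ms)", "DEV", "BYTES", "LAT(ms)");
}

tracepoint:block:block_rq_issue
{
    /* remember issue time and size, keyed on (device, starting sector) */
    @start[args->dev, args->sector] = nsecs;
    @bytes[args->dev, args->sector] = args->bytes;
}

tracepoint:block:block_rq_complete
/@start[args->dev, args->sector]/
{
    printf("%-12u %-10d %-10d %7d\n", elapsed / 1000000, args->dev,
        @bytes[args->dev, args->sector],
        (nsecs - @start[args->dev, args->sector]) / 1000000);
    delete(@start[args->dev, args->sector]);
    delete(@bytes[args->dev, args->sector]);
}

END
{
    clear(@start); clear(@bytes);
}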
Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
#!/usr/bin/env bpftrace
/*
 * biostacks - Show disk I/O latency with initialization stacks.
 *
 * See BPF Performance Tools, Chapter 9, for an explanation of this tool.
 *
 * Copyright (c) 2019 Brendan Gregg.
 * Licensed under the Apache License, Version 2.0 (the "License").
 * This was originally created for the BPF Performance Tools book
 * published by Addison Wesley. ISBN-13: 9780136554820
 * When copying or porting, include this comment.
 *
 * 19-Mar-2019  Brendan Gregg  Created this.
 */

BEGIN
{
    printf("Tracing block I/O with init stacks. Hit Ctrl-C to end.\n");
}

kprobe:blk_account_io_start
{
    @reqstack[arg0] = kstack;
    @reqts[arg0] = nsecs;
}

kprobe:blk_start_request,
kprobe:blk_mq_start_request
/@reqts[arg0]/
{
    @usecs[@reqstack[arg0]] = hist(nsecs - @reqts[arg0]);
    delete(@reqstack[arg0]);
    delete(@reqts[arg0]);
}

END
{
    clear(@reqstack); clear(@reqts);
}
Lines changed: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
Demonstrations of biostacks, the Linux bpftrace/eBPF version.


This tool shows block I/O latency as a histogram, with the kernel stack trace
that initiated the I/O. This can help explain disk I/O that is not directly
requested by applications (eg, metadata reads on writes, resilvering, etc).
For example:

# ./biostacks.bt
Attaching 5 probes...
Tracing block I/O with init stacks. Hit Ctrl-C to end.
^C

@usecs[
    blk_account_io_start+1
    blk_mq_make_request+1102
    generic_make_request+292
    submit_bio+115
    _xfs_buf_ioapply+798
    xfs_buf_submit+101
    xlog_bdstrat+43
    xlog_sync+705
    xlog_state_release_iclog+108
    _xfs_log_force+542
    xfs_log_force+44
    xfsaild+428
    kthread+289
    ret_from_fork+53
]:
[64K, 128K)            1 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

[...]

@usecs[
    blk_account_io_start+1
    blk_mq_make_request+707
    generic_make_request+292
    submit_bio+115
    xfs_add_to_ioend+455
    xfs_do_writepage+758
    write_cache_pages+524
    xfs_vm_writepages+190
    do_writepages+75
    __writeback_single_inode+69
    writeback_sb_inodes+481
    __writeback_inodes_wb+103
    wb_writeback+625
    wb_workfn+384
    process_one_work+478
    worker_thread+50
    kthread+289
    ret_from_fork+53
]:
[8K, 16K)            560 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[16K, 32K)           218 |@@@@@@@@@@@@@@@@@@@@                                |
[32K, 64K)            26 |@@                                                  |
[64K, 128K)            2 |                                                    |
[128K, 256K)          53 |@@@@                                                |
[256K, 512K)          60 |@@@@@                                               |

This output shows the most frequent stack was XFS writeback, with latencies
between 8 and 512 microseconds. The other stack included here shows an XFS
log sync.
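
Since the histogram is keyed on the whole initialization stack, it can also be
useful to see which process context queued the I/O. A hedged variation (not
part of this import) is to record comm at queue-insert time and use it as an
extra map key, assuming the same kprobes as the shipped tool:

#!/usr/bin/env bpftrace
/* Hypothetical per-comm variation of biostacks (not part of this import). */
kprobe:blk_account_io_start
{
    @reqcomm[arg0] = comm;      /* process on-CPU at queue insert */
    @reqstack[arg0] = kstack;
    @reqts[arg0] = nsecs;
}

kprobe:blk_start_request,
kprobe:blk_mq_start_request
/@reqts[arg0]/
{
    /* histogram keyed by (comm, init stack); values in nanoseconds */
    @nsecs[@reqcomm[arg0], @reqstack[arg0]] = hist(nsecs - @reqts[arg0]);
    delete(@reqcomm[arg0]);
    delete(@reqstack[arg0]);
    delete(@reqts[arg0]);
}

END
{
    clear(@reqcomm); clear(@reqstack); clear(@reqts);
}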
Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
#!/usr/bin/env bpftrace
/*
 * bitesize    Show disk I/O size as a histogram.
 *             For Linux, uses bpftrace and eBPF.
 *
 * USAGE: bitesize.bt
 *
 * This is a bpftrace version of the bcc tool of the same name.
 *
 * Copyright 2018 Netflix, Inc.
 * Licensed under the Apache License, Version 2.0 (the "License")
 *
 * 07-Sep-2018  Brendan Gregg  Created this.
 */

BEGIN
{
    printf("Tracing block device I/O... Hit Ctrl-C to end.\n");
}

tracepoint:block:block_rq_issue
{
    @[args->comm] = hist(args->bytes);
}

END
{
    printf("\nI/O size (bytes) histograms by process name:");
}
Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
Demonstrations of bitesize, the Linux bpftrace/eBPF version.


This traces disk I/O via the block I/O interface, and prints a summary of I/O
sizes as histograms for each process name. For example:

# ./bitesize.bt
Attaching 3 probes...
Tracing block device I/O... Hit Ctrl-C to end.
^C
I/O size (bytes) histograms by process name:

@[cleanup]:
[4K, 8K)               2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

@[postdrop]:
[4K, 8K)               2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

@[jps]:
[4K, 8K)               1 |@@@@@@@@@@@@@@@@@@@@@@@@@@                          |
[8K, 16K)              0 |                                                    |
[16K, 32K)             0 |                                                    |
[32K, 64K)             2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

@[kworker/2:1H]:
[0]                    3 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[1]                    0 |                                                    |
[2, 4)                 0 |                                                    |
[4, 8)                 0 |                                                    |
[8, 16)                0 |                                                    |
[16, 32)               0 |                                                    |
[32, 64)               0 |                                                    |
[64, 128)              0 |                                                    |
[128, 256)             0 |                                                    |
[256, 512)             0 |                                                    |
[512, 1K)              0 |                                                    |
[1K, 2K)               0 |                                                    |
[2K, 4K)               0 |                                                    |
[4K, 8K)               0 |                                                    |
[8K, 16K)              0 |                                                    |
[16K, 32K)             0 |                                                    |
[32K, 64K)             0 |                                                    |
[64K, 128K)            1 |@@@@@@@@@@@@@@@@@                                   |

@[jbd2/nvme0n1-8]:
[4K, 8K)               3 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[8K, 16K)              0 |                                                    |
[16K, 32K)             0 |                                                    |
[32K, 64K)             2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                  |
[64K, 128K)            1 |@@@@@@@@@@@@@@@@@                                   |

@[dd]:
[16K, 32K)           921 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

The most active process while tracing was "dd", which issued 921 I/O between
16 Kbytes and 32 Kbytes in size.


There is another version of this tool in bcc: https://github.com/iovisor/bcc
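
The same tracepoint also carries a dev argument, so a one-liner (not part of
this import) could key the size histogram by device instead of process name,
assuming bpftrace and the block:block_rq_issue tracepoint are available:

# bpftrace -e 'tracepoint:block:block_rq_issue { @bytes[args->dev] = hist(args->bytes); }'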
