Optimizing eBPF I/O latency accounting when running 37M IOPS, on 384 CPUs | Tanel Poder Consulting #51
Really interesting work here. 😀 Hope you are well. Mat.
In this post I will introduce a much more efficient method for accounting block I/O latencies with eBPF on Linux. In my stress test, the “new biolatency” accounting method had 59x lower CPU and probe latency overhead compared to the current biolatency approach.
So I couldn’t help it and ended up putting 21 NVMe SSDs into one of my homelab servers. Eight of them are PCIe 5.0 and the remaining 13 are PCIe 4.0.
https://tanelpoder.com/posts/optimizing-ebpf-biolatency-accounting/
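For context on what the post optimizes: biolatency-style tools record a timestamp when a block I/O request is issued and, at completion, add the elapsed time to a power-of-two (log2) latency histogram kept in an eBPF map. The sketch below illustrates just that bucketing logic in plain Python, as a teaching aid — it is not the author's eBPF code, and the function names are invented for illustration:

```python
# User-space illustration of the log2 latency histogram that biolatency-style
# eBPF tools maintain. The real tools do this in-kernel: a hash map keyed by
# request holds the issue timestamp, and the completion probe increments a
# histogram slot. Names here (log2_bucket, account) are hypothetical.

def log2_bucket(latency_us: int) -> int:
    """Return the power-of-two histogram slot for a latency in microseconds.
    Slot n covers latencies in [2^(n-1), 2^n - 1] us; 0 us lands in slot 0."""
    return latency_us.bit_length()

def account(histogram: dict, start_ns: int, end_ns: int) -> None:
    """Add one completed I/O to the histogram, bucketing by microseconds."""
    slot = log2_bucket((end_ns - start_ns) // 1000)
    histogram[slot] = histogram.get(slot, 0) + 1

hist = {}
account(hist, 0, 5_000)     # 5 us   -> slot 3 (covers 4-7 us)
account(hist, 0, 6_000)     # 6 us   -> slot 3
account(hist, 0, 130_000)   # 130 us -> slot 8 (covers 128-255 us)
print(hist)                 # {3: 2, 8: 1}
```

The per-request timestamp lookup and the shared histogram updates are exactly where per-event overhead accumulates at tens of millions of IOPS, which is the cost the post's "new biolatency" approach attacks.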