
Commit 0c7b1f1

fixup! Add docs
1 parent c28b0e0 · commit 0c7b1f1

1 file changed: +40 -12 lines


Doc/library/profile.rst

@@ -85,17 +85,44 @@ The Python standard library provides three different profiling implementations:
 What Is Statistical Profiling?
 ==============================
 
-:dfn:`Statistical profiling` works by periodically interrupting a running program to capture its current call stack. Rather than monitoring every function entry and exit like deterministic profilers, it takes snapshots at regular intervals to build a statistical picture of where the program spends its time.
-
-The sampling profiler uses process memory reading (via system calls like `process_vm_readv` on Linux, `vm_read` on macOS, and `ReadProcessMemory` on Windows) to attach to a running Python process and extract stack trace information without requiring any code modification or restart of the target process. This approach provides several key advantages over traditional profiling methods.
-
-The fundamental principle is that if a function appears frequently in the collected stack samples, it is likely consuming significant CPU time. By analyzing thousands of samples, the profiler can accurately estimate the relative time spent in different parts of the program. The statistical nature means that while individual measurements may vary, the aggregate results converge to represent the true performance characteristics of the application.
-
-Since statistical profiling operates externally to the target process, it introduces virtually no overhead to the running program. The profiler process runs separately and reads the target process memory without interrupting its execution. This makes it suitable for profiling production systems where performance impact must be minimized.
-
-The accuracy of statistical profiling improves with the number of samples collected. Short-lived functions may be missed or underrepresented, while long-running functions will be captured proportionally to their execution time. This characteristic makes statistical profiling particularly effective for identifying the most significant performance bottlenecks rather than providing exhaustive coverage of all function calls.
-
-Statistical profiling excels at answering questions like "which functions consume the most CPU time?" and "where should I focus optimization efforts?" rather than "exactly how many times was this function called?" The trade-off between precision and practicality makes it an invaluable tool for performance analysis in real-world applications.
+:dfn:`Statistical profiling` works by periodically interrupting a running
+program to capture its current call stack. Rather than monitoring every
+function entry and exit like deterministic profilers, it takes snapshots at
+regular intervals to build a statistical picture of where the program spends
+its time.
+
+The sampling profiler uses process memory reading (via system calls like
+`process_vm_readv` on Linux, `vm_read` on macOS, and `ReadProcessMemory` on
+Windows) to attach to a running Python process and extract stack trace
+information without requiring any code modification or restart of the target
+process. This approach provides several key advantages over traditional
+profiling methods.
+
+The fundamental principle is that if a function appears frequently in the
+collected stack samples, it is likely consuming significant CPU time. By
+analyzing thousands of samples, the profiler can accurately estimate the
+relative time spent in different parts of the program. The statistical nature
+means that while individual measurements may vary, the aggregate results
+converge to represent the true performance characteristics of the application.
+
+Since statistical profiling operates externally to the target process, it
+introduces virtually no overhead to the running program. The profiler process
+runs separately and reads the target process memory without interrupting its
+execution. This makes it suitable for profiling production systems where
+performance impact must be minimized.
+
+The accuracy of statistical profiling improves with the number of samples
+collected. Short-lived functions may be missed or underrepresented, while
+long-running functions will be captured proportionally to their execution time.
+This characteristic makes statistical profiling particularly effective for
+identifying the most significant performance bottlenecks rather than providing
+exhaustive coverage of all function calls.
+
+Statistical profiling excels at answering questions like "which functions
+consume the most CPU time?" and "where should I focus optimization efforts?"
+rather than "exactly how many times was this function called?" The trade-off
+between precision and practicality makes it an invaluable tool for performance
+analysis in real-world applications.
 
 .. _profile-instant:
 
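As a rough sketch of the sampling idea described in the new paragraphs above,
the following hypothetical snippet samples the current process's own main
thread with ``sys._current_frames()`` and counts how often each function name
appears on the captured stacks. It is not the implementation of the documented
sampler, which reads the memory of a separate process; it only illustrates the
statistical principle::

   # Hypothetical in-process sketch of statistical sampling: periodically
   # capture the main thread's call stack and tally the functions on it.
   # The documented sampler instead reads another process's memory.
   import collections
   import sys
   import threading
   import time
   import traceback

   def sample(main_ident, counts, stop, interval=0.001):
       # Take a stack snapshot of the main thread every `interval` seconds.
       while not stop.is_set():
           frame = sys._current_frames().get(main_ident)
           if frame is not None:
               for entry in traceback.extract_stack(frame):
                   counts[entry.name] += 1
           time.sleep(interval)

   def workload():
       return sum(i * i for i in range(2_000_000))

   counts = collections.Counter()
   stop = threading.Event()
   sampler = threading.Thread(
       target=sample,
       args=(threading.main_thread().ident, counts, stop),
       daemon=True,
   )
   sampler.start()
   workload()                      # the code being "profiled"
   stop.set()
   sampler.join()
   for name, hits in counts.most_common(5):
       print(name, hits)

Functions that accumulate the most samples are, statistically, the ones
consuming the most CPU time, which is the aggregate picture the prose above
describes.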

@@ -206,7 +233,8 @@ Profile with custom interval and duration, save to file::
 
    python -m profile.sample -i 50 -d 30 -o profile.stats 1234
 
-Generate collapsed stacks for flamegraph::
+Generate collapsed stacks to use with tools like `flamegraph.pl
+<https://github.com/brendangregg/FlameGraph>`_::
 
    python -m profile.sample --collapsed 1234
 

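A note on the `flamegraph.pl <https://github.com/brendangregg/FlameGraph>`_
reference added above: the "collapsed" (folded) stack format that tool consumes
is one line per unique stack, with frame names joined by semicolons and
followed by a sample count. The hypothetical sketch below folds a few invented
sample stacks into that shape; it is illustration only, not the output of
``profile.sample``::

   # Hypothetical illustration of the folded-stack format consumed by
   # flamegraph.pl: "outer;inner;innermost <sample count>" per line.
   # The sample stacks below are invented, not real profiler output.
   import collections

   samples = [
       ("<module>", "main", "parse_config"),
       ("<module>", "main", "parse_config"),
       ("<module>", "main", "render"),
   ]

   folded = collections.Counter(";".join(stack) for stack in samples)
   for stack, count in folded.items():
       print(stack, count)
   # <module>;main;parse_config 2
   # <module>;main;render 1

The folded text would then typically be rendered with something like
``flamegraph.pl folded.txt > profile.svg``.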