---
icon: material/chart-timeline
description: Exporting traces for viewing in Perfetto and the Chrome tracing view.
---

# Perfetto traces

<!-- md:version 0.9.131 -->

Nextest supports exporting traces in the [Chrome Trace Event format](https://docs.google.com/document/d/1CvAClvFfyA5R-PhYUmn5OOQtYMH4h6I0nSsKchNAySU/preview?tab=t.0#heading=h.yr4qxyxotyw). Chrome traces can be imported into [Perfetto](https://ui.perfetto.dev/) for visualization and analysis. Traces can also be loaded in Chrome's built-in viewer at `chrome://tracing`, though Perfetto is recommended for its richer UI and SQL query support.

Traces cover the test execution phase only; the build phase is not included.

## Use cases

Perfetto traces can be used to observe the timeline of a test run, and to find bottlenecks or long-pole tests that slow down test execution. For example, traces can identify slow tests holding up test runs, so that they can be [prioritized first](../../configuration/test-priorities.md).

Perfetto also has a built-in query language called [PerfettoSQL](https://perfetto.dev/docs/analysis/perfetto-sql-getting-started). For example queries, see [_Example queries_](#example-queries) below.

## Exporting traces

To export the latest recording as a Chrome trace:

```bash
cargo nextest store export-chrome-trace latest
```

By default, this prints the trace to standard output. To instead write the trace to a file, use the `-o`/`--output` option:

```bash
cargo nextest store export-chrome-trace latest -o trace.json
```

Traces can also be exported from a [portable recording](portable-recordings.md), e.g., a recording generated in CI:

```bash
cargo nextest store export-chrome-trace my-run.zip
```

By default, data is produced in a compact JSON format. For prettified JSON, use `--message-format json-pretty`:

```bash
cargo nextest store export-chrome-trace latest --message-format json-pretty
```

An example of a trace loaded into the Perfetto web UI: [`ui.perfetto.dev`](https://ui.perfetto.dev/?url=https://nexte.st/static/example-chrome-trace.json).

<figure markdown="span">
  [![Viewing a test in the Perfetto UI](../../../static/perfetto-test-view.png){ width="80%" }](../../../static/perfetto-test-view.png)<figcaption markdown="span">Viewing a test in the Perfetto UI ([interactive view](https://ui.perfetto.dev/?url=https://nexte.st/static/example-chrome-trace.json))</figcaption>
</figure>

### Trace dimension mapping

Each test binary is considered a "process", and each global slot number (see [_Slot numbers_](../../configuration/test-groups.md#slot-numbers)) is considered a "thread". Tests are shown as blocks within each slot.

- `pid` is a synthetic, monotonically increasing ID, one per binary ID.
- `tid` is the global slot number (starting from 0) plus 10,000.
- `name` is the name of the test.
- `cat` is typically `"test"` or `"run"`. For [setup scripts](../../configuration/setup-scripts.md) it is `"setup-script"`. For [stress sub-runs](../../features/stress-tests.md) it is `"stress"`.
- `ts` is the timestamp of each event.
- `args` contains metadata for each test, such as whether it passed or failed.
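
The mapping above can be illustrated with a single hand-constructed event. This is a sketch, not real nextest output: all values are illustrative, and it assumes complete (`"ph": "X"`) events, which is how the Chrome format typically encodes slices that have a duration.

```python
import json

# A sketch of one complete ("X") event following the mapping above.
# Values are illustrative, not taken from a real trace.
event = {
    "name": "my-crate::tests::parse_works",  # test name (hypothetical)
    "cat": "test",        # "test", "run", "setup-script", or "stress"
    "ph": "X",            # complete event: has a start (ts) and duration (dur)
    "pid": 1,             # synthetic ID for the test binary
    "tid": 10_000,        # global slot 0, offset by 10,000
    "ts": 2_500,          # microseconds since trace start
    "dur": 120_000,       # duration in microseconds
    "args": {"result": {"status": "passed"}},
}
print(json.dumps({"traceEvents": [event]}, indent=2))
```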

There is also a global "nextest run" process with a bar for the overall run, as well as several time-series (counter) plots:

- `concurrency running_scripts` shows the number of setup scripts running at a given time.
- `concurrency running_tests` shows the number of tests running at a given time.
- `test results passed`, `test results failed`, and `test results flaky` contain the number of tests that passed, failed, and were flaky, respectively.

### Grouping by slot

By default, each test binary is treated as a separate process. To combine all test binaries into a single process grouped by concurrency slot, use `--group-by slot`:

```bash
cargo nextest store export-chrome-trace latest --group-by slot
```

This is useful when you care about how well concurrency slots are packed rather than which binary a test belongs to; for example, to see if slots are sitting idle between tests.
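
Idle time can also be quantified from the exported JSON directly, rather than eyeballing the timeline. A minimal sketch, using synthetic events in place of a real exported trace:

```python
from collections import defaultdict

# Synthetic stand-in for a trace exported with
# `cargo nextest store export-chrome-trace latest --group-by slot -o trace.json`.
trace = {"traceEvents": [
    {"ph": "X", "cat": "test", "tid": 10_000, "ts": 0,   "dur": 100},
    {"ph": "X", "cat": "test", "tid": 10_000, "ts": 150, "dur": 50},  # 50 us gap
    {"ph": "X", "cat": "test", "tid": 10_001, "ts": 0,   "dur": 300},
]}

def slot_idle_us(spans):
    """Total idle time (us) between consecutive (start, end) spans in one slot."""
    spans = sorted(spans)
    return sum(max(0, start - prev_end)
               for (_, prev_end), (start, _) in zip(spans, spans[1:]))

# Group test spans by tid (one tid per concurrency slot).
slots = defaultdict(list)
for ev in trace["traceEvents"]:
    if ev.get("ph") == "X" and ev.get("cat") == "test":
        slots[ev["tid"]].append((ev["ts"], ev["ts"] + ev["dur"]))

for tid, spans in sorted(slots.items()):
    print(f"slot tid={tid}: {len(spans)} tests, idle {slot_idle_us(spans)} us")
```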

## PerfettoSQL queries

Perfetto has a powerful query language, [PerfettoSQL](https://perfetto.dev/docs/analysis/perfetto-sql-getting-started), that can be used to analyze test runs. Queries can be run in the "Query (SQL)" view, or via the omnibox at the top of the page.

Try running the example queries below against the [interactive example](https://ui.perfetto.dev/?url=https://nexte.st/static/example-chrome-trace.json).

### Example queries

Print a list of the top 20 slowest tests:

```sql
INCLUDE PERFETTO MODULE slices.with_context;

SELECT
  name AS test_name,
  process_name AS binary,
  dur / 1e6 AS duration_ms
FROM thread_or_process_slice
WHERE category = 'test'
ORDER BY dur DESC
LIMIT 20;
```
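
The same top-N question can also be answered by scripting against the exported JSON, without loading it into Perfetto. A sketch with synthetic events in place of a real trace (note that `ts`/`dur` in the Chrome JSON are in microseconds, whereas Perfetto's `dur` column is in nanoseconds):

```python
# Synthetic events standing in for a trace exported with
# `cargo nextest store export-chrome-trace latest -o trace.json`.
trace = {"traceEvents": [
    {"ph": "X", "cat": "test", "name": "tests::slow", "ts": 0, "dur": 5_000_000},
    {"ph": "X", "cat": "test", "name": "tests::fast", "ts": 0, "dur": 2_000},
    {"ph": "X", "cat": "run",  "name": "nextest run", "ts": 0, "dur": 6_000_000},
]}

# Keep only complete test events, excluding the overall-run slice.
tests = [ev for ev in trace["traceEvents"]
         if ev.get("ph") == "X" and ev.get("cat") == "test"]

# Chrome-format ts/dur are in microseconds, so dividing by 1e3 yields ms.
for ev in sorted(tests, key=lambda ev: ev["dur"], reverse=True)[:20]:
    print(f"{ev['name']}: {ev['dur'] / 1e3:.1f} ms")
```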

Total test time per binary:

```sql
INCLUDE PERFETTO MODULE slices.with_context;

SELECT
  process_name AS binary,
  COUNT(*) AS test_count,
  SUM(dur) / 1e6 AS total_ms,
  AVG(dur) / 1e6 AS avg_ms,
  MAX(dur) / 1e6 AS max_ms,
  MIN(dur) / 1e6 AS min_ms
FROM thread_or_process_slice
WHERE category = 'test'
GROUP BY process_name
ORDER BY total_ms DESC;
```

Duration distribution histogram:

```sql
SELECT
  CASE
    WHEN dur < 1e8 THEN '< 100ms'
    WHEN dur < 1e9 THEN '100ms - 1s'
    WHEN dur < 5e9 THEN '1s - 5s'
    WHEN dur < 10e9 THEN '5s - 10s'
    WHEN dur < 30e9 THEN '10s - 30s'
    ELSE '> 30s'
  END AS bucket,
  COUNT(*) AS count
FROM slice
WHERE category = 'test'
GROUP BY bucket
ORDER BY MIN(dur);
```

Setup script statuses and durations:

```sql
SELECT
  name AS script_name,
  dur / 1e6 AS duration_ms,
  EXTRACT_ARG(arg_set_id, 'args.result.status') AS status
FROM slice
WHERE category = 'setup-script'
ORDER BY dur DESC;
```

Tests in a non-default test group:

```sql
INCLUDE PERFETTO MODULE slices.with_context;

SELECT
  name AS test_name,
  process_name AS binary,
  EXTRACT_ARG(arg_set_id, 'args.test_group') AS test_group,
  dur / 1e6 AS duration_ms
FROM thread_or_process_slice
WHERE category = 'test'
  AND EXTRACT_ARG(arg_set_id, 'args.test_group') != '@global'
ORDER BY dur DESC;
```

Retried tests:

```sql
SELECT
  s.name AS test_name,
  EXTRACT_ARG(s.arg_set_id, 'args.attempt') AS attempt,
  EXTRACT_ARG(s.arg_set_id, 'args.total_attempts') AS total_attempts,
  EXTRACT_ARG(s.arg_set_id, 'args.result.status') AS status,
  s.dur / 1e6 AS duration_ms
FROM slice s
WHERE s.category = 'test'
  AND EXTRACT_ARG(s.arg_set_id, 'args.total_attempts') > 1
ORDER BY s.name, attempt;
```

Slot utilization (how busy each concurrency slot was):

```sql
SELECT
  t.name AS slot,
  COUNT(*) AS tests_run,
  SUM(s.dur) / 1e6 AS busy_ms
FROM slice s
JOIN thread_track tt ON s.track_id = tt.id
JOIN thread t USING (utid)
WHERE s.category = 'test'
GROUP BY t.name
ORDER BY busy_ms DESC;
```

### Discovering available metadata

Nextest embeds test metadata (binary ID, result status, attempt count, test group, etc.) as args on each trace event. To see all available keys for test events, run:

```sql
SELECT DISTINCT flat_key
FROM args
WHERE arg_set_id IN (
  SELECT arg_set_id FROM slice WHERE category = 'test' LIMIT 1
);
```

These keys can then be accessed with `EXTRACT_ARG(arg_set_id, '<key>')` in queries.
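
The same discovery can be done without loading the trace into Perfetto, by flattening the `args` objects in the exported JSON. A sketch with a synthetic event; the keys shown are the ones referenced by the queries above, not the full set present in a real trace:

```python
# Synthetic test event with illustrative args (not a complete real-world set).
trace = {"traceEvents": [
    {"ph": "X", "cat": "test", "name": "tests::a", "ts": 0, "dur": 10,
     "args": {"test_group": "@global", "result": {"status": "passed"}}},
]}

def flat_keys(obj, prefix="args"):
    """Flatten nested args into dotted keys, mirroring Perfetto's flat_key."""
    for key, value in obj.items():
        dotted = f"{prefix}.{key}"
        if isinstance(value, dict):
            yield from flat_keys(value, dotted)
        else:
            yield dotted

keys = set()
for ev in trace["traceEvents"]:
    if ev.get("cat") == "test":
        keys.update(flat_keys(ev.get("args", {})))
print(sorted(keys))  # → ['args.result.status', 'args.test_group']
```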

## Learn more

- [_What is Perfetto?_](https://perfetto.dev/docs/)
- [Perfetto UI documentation](https://perfetto.dev/docs/visualization/perfetto-ui)
- [_Getting started with PerfettoSQL_](https://perfetto.dev/docs/analysis/perfetto-sql-getting-started)

## Options and arguments

TODO