---
name: trx-analysis
description: Parse and analyze Visual Studio TRX test result files. Use when asked about slow tests, test durations, test frequency, flaky tests, failure analysis, or test execution patterns from TRX files.
---

# TRX Test Results Analysis

Parse `.trx` files (Visual Studio Test Results XML) to answer questions about test performance, frequency, failures, and patterns.

## TRX File Format

TRX files use XML namespace `http://microsoft.com/schemas/VisualStudio/TeamTest/2010`. Key elements:

- `TestRun.Results.UnitTestResult` — individual test executions with `testName`, `duration` (HH:mm:ss.fffffff), `outcome` (Passed/Failed/NotExecuted)
- `TestRun.TestDefinitions.UnitTest` — test metadata including class and method info
- `TestRun.ResultSummary` — aggregate pass/fail/skip counts

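For orientation, a minimal TRX skeleton looks roughly like this (attribute set abridged; names and values are illustrative, real files carry many more attributes such as `executionId` and `testId`):

```xml
<TestRun xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Times start="2024-01-01T10:00:00" finish="2024-01-01T10:05:00" />
  <Results>
    <UnitTestResult testName="MyTests.Suite.FastTest" outcome="Passed"
                    duration="00:00:01.2345678" />
  </Results>
  <TestDefinitions>
    <UnitTest name="MyTests.Suite.FastTest">
      <TestMethod className="MyTests.Suite" name="FastTest" />
    </UnitTest>
  </TestDefinitions>
  <ResultSummary outcome="Completed">
    <Counters total="1" passed="1" failed="0" notExecuted="0" />
  </ResultSummary>
</TestRun>
```
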
## Loading a TRX File

```powershell
[xml]$trx = Get-Content "path/to/file.trx"
$results = $trx.TestRun.Results.UnitTestResult
```

## Common Queries

### Top N slowest tests

```powershell
$results | ForEach-Object {
    [PSCustomObject]@{
        Test = $_.testName
        Seconds = [TimeSpan]::Parse($_.duration).TotalSeconds
        Outcome = $_.outcome
    }
} | Sort-Object Seconds -Descending | Select-Object -First 25 |
    Format-Table @{L='Sec';E={'{0,6:N1}' -f $_.Seconds}}, Outcome, Test -AutoSize
```

### Slowest test from each distinct class (top N)

```powershell
$results | ForEach-Object {
    $parts = $_.testName -split '\.'
    [PSCustomObject]@{
        Test = $_.testName
        ClassName = ($parts[0..($parts.Length-2)] -join '.')
        Seconds = [TimeSpan]::Parse($_.duration).TotalSeconds
    }
} | Sort-Object Seconds -Descending |
    Group-Object ClassName | ForEach-Object { $_.Group | Select-Object -First 1 } |
    Sort-Object Seconds -Descending | Select-Object -First 10 |
    Format-Table @{L='Sec';E={'{0,6:N1}' -f $_.Seconds}}, ClassName, Test -AutoSize
```

### Most-executed tests (parameterization frequency)

Extract the base method name before parameterization and count runs:

```powershell
$results | ForEach-Object {
    $name = $_.testName
    if ($name -match '^(\S+?)[\s(]') { $base = $Matches[1] } else { $base = $name }
    [PSCustomObject]@{ Base = $base; Seconds = [TimeSpan]::Parse($_.duration).TotalSeconds }
} | Group-Object Base | ForEach-Object {
    [PSCustomObject]@{
        Runs = $_.Count
        TotalSec = ($_.Group | Measure-Object Seconds -Sum).Sum
        Test = $_.Name
    }
} | Sort-Object TotalSec -Descending | Select-Object -First 20 |
    Format-Table @{L='Runs';E={$_.Runs}}, @{L='TotalSec';E={'{0,7:N1}' -f $_.TotalSec}}, Test -AutoSize
```

### Failed tests

```powershell
$results | Where-Object { $_.outcome -eq 'Failed' } | ForEach-Object {
    [PSCustomObject]@{
        Test = $_.testName
        Seconds = [TimeSpan]::Parse($_.duration).TotalSeconds
        Error = $_.Output.ErrorInfo.Message
    }
} | Format-Table -Wrap
```

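### Possibly flaky tests

A sketch for flakiness candidates, assuming the runner records retries or repeated iterations in the same file: a test name that appears with both `Passed` and `Failed` outcomes is suspect.

```powershell
# Sketch: names with mixed Passed/Failed outcomes within one TRX file.
# Only meaningful when failures are retried or iterations repeat in the same run.
$results | Group-Object testName | Where-Object {
    $_.Group.outcome -contains 'Passed' -and $_.Group.outcome -contains 'Failed'
} | ForEach-Object {
    [PSCustomObject]@{
        Test = $_.Name
        Runs = $_.Count
        Failures = @($_.Group | Where-Object { $_.outcome -eq 'Failed' }).Count
    }
} | Sort-Object Failures -Descending | Format-Table -AutoSize
```

For flaky detection across many CI runs, apply the same grouping after merging results from multiple TRX files.
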
### Summary statistics

```powershell
$summary = $trx.TestRun.ResultSummary.Counters
[PSCustomObject]@{
    Total = $summary.total
    Passed = $summary.passed
    Failed = $summary.failed
    Skipped = $summary.notExecuted
    Duration = [datetime]$trx.TestRun.Times.finish - [datetime]$trx.TestRun.Times.start
} | Format-List
```

## Cross-File Duplicate Analysis

Compare two TRX files to find tests that appear in both and ran (were not skipped) in both. Useful for identifying redundant CI work across different configurations (e.g., net9.0 x64 vs net48 x86).

### Load and find duplicates that ran in both files

```powershell
[xml]$trx1 = Get-Content "path/to/file1.trx"
[xml]$trx2 = Get-Content "path/to/file2.trx"

$r1 = $trx1.TestRun.Results.UnitTestResult
$r2 = $trx2.TestRun.Results.UnitTestResult

# Build lookup: testName -> (outcome, duration) keeping best outcome per name
function Get-TestLookup($results) {
    $lookup = @{}
    foreach ($r in $results) {
        $name = $r.testName
        $outcome = $r.outcome
        $dur = [TimeSpan]::Parse($r.duration)
        if (-not $lookup.ContainsKey($name) -or ($lookup[$name].Outcome -eq 'NotExecuted' -and $outcome -ne 'NotExecuted')) {
            $lookup[$name] = [PSCustomObject]@{ Outcome = $outcome; Duration = $dur }
        }
    }
    $lookup
}

$t1 = Get-TestLookup $r1
$t2 = Get-TestLookup $r2

$skipped = @('NotExecuted','Pending','Disconnected','Warning','InProgress','Inconclusive')
$common = $t1.Keys | Where-Object { $t2.ContainsKey($_) -and $t1[$_].Outcome -notin $skipped -and $t2[$_].Outcome -notin $skipped }
```

### Separate non-parameterized vs parameterized duplicates

Parameterized tests contain `(` in their name (e.g., `RunAllTests (Row: 0, Runner = net10.0, ...)`). The base method name is everything before the first `(`.

```powershell
$nonParam = $common | Where-Object { $_ -notmatch '\(' }
$param = $common | Where-Object { $_ -match '\(' }
```

### Non-parameterized duplicates ordered by duration

```powershell
$nonParam | ForEach-Object {
    $d1 = $t1[$_].Duration; $d2 = $t2[$_].Duration
    [PSCustomObject]@{
        Test = $_
        File1Sec = $d1.TotalSeconds
        File2Sec = $d2.TotalSeconds
        TotalSec = $d1.TotalSeconds + $d2.TotalSeconds
    }
} | Sort-Object TotalSec -Descending |
    Format-Table @{L='File1';E={'{0,6:N1}' -f $_.File1Sec}},
        @{L='File2';E={'{0,6:N1}' -f $_.File2Sec}},
        @{L='Total';E={'{0,6:N1}' -f $_.TotalSec}}, Test -AutoSize
```

### Parameterized duplicates squashed by base method

Tests with `(Row: ...)` or other parameterization are instances of the same test. Squash them into one row per base method, showing variant count, max single-instance duration, and total duration across all instances in both files.

```powershell
$param | ForEach-Object {
    if ($_ -match '^(.+?)\s*\(') { $base = $Matches[1] } else { $base = $_ }
    $d1 = $t1[$_].Duration; $d2 = $t2[$_].Duration
    [PSCustomObject]@{ Base = $base; D1 = $d1.TotalSeconds; D2 = $d2.TotalSeconds; Max = [Math]::Max($d1.TotalSeconds, $d2.TotalSeconds) }
} | Group-Object Base | ForEach-Object {
    [PSCustomObject]@{
        Test = $_.Name
        Variants = $_.Count
        OneInstance = ($_.Group | Measure-Object Max -Maximum).Maximum
        AllInstances = ($_.Group | Measure-Object { $_.D1 + $_.D2 } -Sum).Sum
    }
} | Sort-Object AllInstances -Descending |
    Format-Table @{L='Variants';E={$_.Variants}},
        @{L='1 Instance';E={'{0,7:N1}s' -f $_.OneInstance}},
        @{L='All Instances';E={'{0,7:N1}s' -f $_.AllInstances}}, Test -AutoSize
```

## Tips

- Parameterized tests appear as separate `UnitTestResult` entries. Use regex `'^(\S+?)[\s(]'` to extract the base method name.
- Skipped entries sometimes carry a zero or missing `duration`; guard with a null/empty check before calling `[TimeSpan]::Parse`.
- Sort by **TotalSec** (runs × avg duration) to find tests that consume the most CI time overall, even if each individual run is fast.
- When comparing files, filter out `NotExecuted` tests — many parameterized tests are skipped in one configuration but not the other, so raw name overlap overstates true duplication.
- TRX files from CI are typically found in `TestResults/` or as pipeline artifacts.
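
To run the queries above against a whole directory of CI results (a sketch; the `TestResults` path and `SourceFile` property name are assumptions), merge all files and tag each row with its source:

```powershell
# Sketch: merge UnitTestResult entries from every TRX file under TestResults/
$results = Get-ChildItem -Path TestResults -Filter *.trx -Recurse | ForEach-Object {
    $file = $_.Name
    ([xml](Get-Content -Raw $_.FullName)).TestRun.Results.UnitTestResult |
        ForEach-Object { $_ | Add-Member NoteProperty SourceFile $file -PassThru }
}
```

The merged `$results` then feeds any pipeline above; group on `SourceFile` when a per-configuration breakdown is needed.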