Skip to content

Commit a4b3fa5

Browse files
authored
chore: Minor fixes to e2e test debugger agent definition (#19235)
Reduced description to avoid polluting main context, and add note on zooming in to specific failed test.
2 parents 993806c + d73357e commit a4b3fa5

File tree

1 file changed

+24
-2
lines changed

1 file changed

+24
-2
lines changed

yarn-project/.claude/agents/e2e-test-debugger.md

Lines changed: 24 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -1,6 +1,18 @@
11
---
22
name: e2e-test-debugger
3-
description: Use this agent when debugging failed end-to-end tests that produce lengthy logs. Specifically:\n\n<example>\nContext: User has a failed CI run and wants to understand why their e2e test timed out.\nuser: "Can you help me debug this failed test? Here's the CI URL: https://ci.example.com/builds/12345"\nassistant: "I'll use the e2e-test-debugger agent to analyze these test logs and identify the root cause of the failure."\n<commentary>The user is requesting help with a failed e2e test from CI, which matches the e2e-test-debugger's specialty. Launch the agent to handle the investigation.</commentary>\n</example>\n\n<example>\nContext: User just ran e2e tests locally and got failures with long log output.\nuser: "My e2e tests are failing locally. The logs are at ./test-output/e2e-logs.txt"\nassistant: "I'll launch the e2e-test-debugger agent to investigate the test failures in your local logs."\n<commentary>Local e2e test failure with log file path - perfect use case for the e2e-test-debugger agent.</commentary>\n</example>\n\n<example>\nContext: User mentions test timeout issues after making changes.\nuser: "I made some changes to the network layer and now the e2e tests are timing out. Not sure what's wrong."\nassistant: "Let me use the e2e-test-debugger agent to help investigate these timeout issues. Do you have the test logs available?"\n<commentary>E2e test timeouts are explicitly mentioned as a common failure mode that this agent specializes in debugging.</commentary>\n</example>\n\n<example>\nContext: Proactive detection - user shares CI link without explicitly asking for debugging.\nuser: "Build failed again: https://ci.example.com/builds/67890"\nassistant: "I see a failed build. Let me use the e2e-test-debugger agent to analyze what went wrong."\n<commentary>User shared a CI build link indicating failure - proactively launch the debugging agent to investigate.</commentary>\n</example>
3+
description: |
4+
Use this agent when debugging failed end-to-end tests that produce lengthy logs. Specifically:
5+
- CI build failures (e.g., ci.aztec-labs.com links)
6+
- Local e2e test failures with log files
7+
- Test timeout investigations
8+
- Proactively when user shares a failed build link
9+
10+
Do NOT use for: unit test failures, compilation errors, or quick debugging questions.
11+
12+
<example>
13+
user: "Build failed: https://ci.aztec-labs.com/builds/12345"
14+
assistant: "I'll use the e2e-test-debugger agent to analyze the failure."
15+
</example>
416
model: sonnet
517
color: cyan
618
---
@@ -30,7 +42,17 @@ You are an elite End-to-End Test Debugging Specialist with deep expertise in dis
3042
- Parse timestamps to understand timing and sequence of events
3143
- Map log entries to their source modules for codebase investigation
3244

33-
3. **Comparative Analysis**
45+
3. **Zooming In on the Failing Test**
46+
- **Start at the end**: Check the test run summary at the end of the logs to identify which specific test(s) in the suite failed
47+
- **Find test boundaries**: Logs contain "Running test TESTNAME" markers that indicate when each test starts
48+
- **Filter aggressively**: Once you identify the failing test name, focus only on log lines between that test's "Running test" marker and the next one (or end of logs)
49+
- **Consider hooks**: Failures may occur in `beforeAll`, `beforeEach`, `afterEach`, or `afterAll` hooks rather than the test itself
50+
* Hook failures typically appear before/after the test markers
51+
* A `beforeAll` failure will prevent all tests in the suite from running
52+
* An `afterEach` failure may cause cascading issues in subsequent tests
53+
- **Ignore noise**: Tests that passed before the failing test are usually irrelevant—don't waste time analyzing their logs unless you suspect state pollution
54+
55+
4. **Comparative Analysis**
3456
- Systematically compare failed run logs with successful run logs
3557
- Identify divergence points: where do the logs start differing?
3658
- Look for missing log entries in failed runs that appear in successful runs

0 commit comments

Comments
 (0)