
Commit 1f76576

chore: rewrite integration tester mode
1 parent f476e27 · commit 1f76576

6 files changed: +926 -113 lines changed
Lines changed: 198 additions & 0 deletions
@@ -0,0 +1,198 @@
<workflow>
<step number="1">
<name>Understand Test Requirements</name>
<instructions>
Use ask_followup_question to determine what type of integration test is needed:

<ask_followup_question>
<question>What type of integration test would you like me to create or work on?</question>
<follow_up>
<suggest>New E2E test for a specific feature or workflow</suggest>
<suggest>Fix or update an existing integration test</suggest>
<suggest>Create test utilities or helpers for common patterns</suggest>
<suggest>Debug failing integration tests</suggest>
</follow_up>
</ask_followup_question>
</instructions>
</step>

<step number="2">
<name>Gather Test Specifications</name>
<instructions>
Based on the test type, gather detailed requirements:

For New E2E Tests:
- What specific user workflow or feature needs testing?
- What are the expected inputs and outputs?
- What edge cases or error scenarios should be covered?
- Are there specific API interactions to validate?
- What events should be monitored during the test?

For Existing Test Issues:
- Which test file is failing or needs updates?
- What specific error messages or failures are occurring?
- What changes in the codebase might have affected the test?

For Test Utilities:
- What common patterns are being repeated across tests?
- What helper functions would improve test maintainability?

Use multiple ask_followup_question calls if needed to gather complete information.
</instructions>
</step>

<step number="3">
<name>Explore Existing Test Patterns</name>
<instructions>
Use codebase_search FIRST to understand existing test patterns and similar functionality:

For New Tests:
- Search for similar test scenarios in apps/vscode-e2e/src/suite/
- Find existing test utilities and helpers
- Identify patterns for the type of functionality being tested

For Test Fixes:
- Search for the failing test file and related code
- Find similar working tests for comparison
- Look for recent changes that might have broken the test

Example searches:
- "file creation test mocha" for file operation tests
- "task completion waitUntilCompleted" for task monitoring patterns
- "api message validation" for API interaction tests

After codebase_search, use:
- read_file on relevant test files to understand structure
- list_code_definition_names on test directories
- search_files for specific test patterns or utilities
</instructions>
</step>

<step number="4">
<name>Analyze Test Environment and Setup</name>
<instructions>
Examine the test environment configuration:

1. Read the test runner configuration:
- apps/vscode-e2e/package.json for test scripts
- apps/vscode-e2e/src/runTest.ts for test setup
- Any test configuration files

2. Understand the test workspace setup:
- How test workspaces are created
- What files are available during tests
- How the extension API is accessed

3. Review existing test utilities:
- Helper functions for common operations
- Event listening patterns
- Assertion utilities
- Cleanup procedures

Document findings including:
- Test environment structure
- Available utilities and helpers
- Common patterns and best practices
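
For orientation, an entry point like the runTest.ts mentioned above is typically built on @vscode/test-electron. The following is a hypothetical minimal sketch, not the actual contents of apps/vscode-e2e/src/runTest.ts; both paths are placeholders.

```typescript
// Hypothetical sketch of a minimal VS Code E2E entry point built on
// @vscode/test-electron; the real runTest.ts and its paths may differ.
import * as path from "path"
import { runTests } from "@vscode/test-electron"

async function main(): Promise<void> {
	try {
		// Folder containing the extension's package.json (placeholder path).
		const extensionDevelopmentPath = path.resolve(__dirname, "../..")
		// Compiled test-suite entry that exports a run() function (placeholder path).
		const extensionTestsPath = path.resolve(__dirname, "./suite/index")

		// Downloads VS Code if necessary, launches it, and runs the suite inside it.
		await runTests({ extensionDevelopmentPath, extensionTestsPath })
	} catch (error) {
		console.error("Failed to run tests:", error)
		process.exit(1)
	}
}

void main()
```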
</instructions>
</step>

<step number="5">
<name>Design Test Structure</name>
<instructions>
Plan the test implementation based on gathered information:

For New Tests:
- Define the test suite structure with suite/test blocks (Mocha TDD style, consistent with step 6)
- Plan setup and teardown procedures
- Identify required test data and fixtures
- Design event listeners and validation points
- Plan for both success and failure scenarios

For Test Fixes:
- Identify the root cause of the failure
- Plan the minimal changes needed to fix the issue
- Consider if the test needs to be updated due to code changes
- Plan for improved error handling or debugging

Create a detailed test plan including (see the skeleton after this list):
- Test file structure and organization
- Required setup and cleanup
- Specific assertions and validations
- Error handling and edge cases
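
As an illustration, such a plan can be captured up front as an empty Mocha TDD skeleton. Everything below is a placeholder: the suite name, hook bodies, and test titles are invented for the example, not prescribed names.

```typescript
// Skeleton only: the planned structure captured as empty Mocha TDD blocks.
import * as assert from "assert"

suite("Feature Under Test (placeholder)", () => {
	suiteSetup(async () => {
		// One-time setup: prepare workspace fixtures shared by every test.
	})

	teardown(async () => {
		// Per-test cleanup so tests stay independent and order-agnostic.
	})

	test("succeeds on the happy path", async () => {
		assert.ok(true) // success scenario: to be implemented
	})

	test("handles a failure scenario", async () => {
		assert.ok(true) // error scenario: to be implemented
	})
})
```

Filling in the bodies then follows the incremental process described in step 6.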
</instructions>
</step>

<step number="6">
<name>Implement Test Code</name>
<instructions>
Implement the test following established patterns:

CRITICAL: Never write a test file with a single write_to_file call.
Always implement tests in parts:

1. Start with the basic test structure (suite, setup, teardown)
2. Add individual test cases one by one
3. Implement helper functions separately
4. Add event listeners and validation logic incrementally

Follow these implementation guidelines (a sketch applying them follows this list):
- Use suite() and test() blocks following Mocha TDD style
- Always use the global api object for extension interactions
- Implement proper async/await patterns with the waitFor utility
- Use the waitUntilCompleted and waitUntilAborted helpers for task monitoring
- Listen to and validate appropriate events (message, taskCompleted, etc.)
- Test both positive flows and error scenarios
- Validate message content using proper type assertions
- Create reusable test utilities when patterns emerge
- Use meaningful test descriptions that explain the scenario
- Always clean up tasks with cancelCurrentTask or clearCurrentTask
- Ensure tests are independent and can run in any order
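
Putting several of these guidelines together, here is a hedged sketch of one test case. The global api object and the waitUntilCompleted and cancelCurrentTask helpers are named in the guidelines above, but their exact shapes are assumed here and declared only so the sketch is self-contained; startNewTask and the event payloads are likewise assumptions, not confirmed signatures.

```typescript
import * as assert from "assert"

// Assumed harness surface: shapes inferred from the guidelines above and
// declared only to keep the sketch self-contained. Real signatures may differ.
declare const api: {
	on(event: "message" | "taskCompleted", listener: (payload: unknown) => void): void
	startNewTask(options: { text: string }): Promise<string>
	cancelCurrentTask(): Promise<void>
}
declare function waitUntilCompleted(options: { api: unknown; taskId: string }): Promise<void>

suite("File Creation (illustrative)", () => {
	test("creates a file when asked to", async function () {
		this.timeout(60_000) // E2E tasks can be slow; allow a generous timeout

		// Collect message events so assertions can inspect them afterwards.
		const messages: unknown[] = []
		api.on("message", (payload) => messages.push(payload))

		const taskId = await api.startNewTask({ text: "Create hello.txt containing 'hello'" })
		try {
			// Block until the task signals completion.
			await waitUntilCompleted({ api, taskId })
			assert.ok(messages.length > 0, "expected at least one message event")
		} finally {
			// Always clean up so later tests start from a blank state.
			await api.cancelCurrentTask()
		}
	})
})
```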
</instructions>
</step>

<step number="7">
<name>Run and Validate Tests</name>
<instructions>
Execute the tests to ensure they work correctly:

ALWAYS use the correct working directory and commands:
- Working directory: apps/vscode-e2e
- Test command: npm run test:run
- For specific tests: TEST_FILE="filename.test" npm run test:run
- Example: cd apps/vscode-e2e && TEST_FILE="apply-diff.test" npm run test:run

Test execution process:
1. Run the specific test file first
2. Check for any failures or errors
3. Analyze test output and logs
4. Debug any issues found
5. Re-run tests after fixes

If tests fail (see the logging sketch after this list):
- Add console.log statements to track execution flow
- Log important events like task IDs, file paths, and AI responses
- Check test output carefully for error messages and stack traces
- Verify file creation in correct workspace directories
- Ensure proper event handling and timeouts
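
As a concrete starting point for the logging suggestions above, a small helper along these lines can be dropped into a failing test. The event names echo step 6 (taskAborted is inferred from the waitUntilAborted helper and may differ), and the api shape is again an assumption declared only to keep the sketch self-contained.

```typescript
// Debugging aid (sketch): subscribe to the events named in step 6 and log each
// payload so the test output shows where execution stalls. The api shape is
// assumed; adjust both it and the event names to the real harness.
declare const api: { on(event: string, listener: (payload: unknown) => void): void }

function logTaskEvents(events: string[] = ["message", "taskCompleted", "taskAborted"]): void {
	for (const name of events) {
		api.on(name, (payload) => {
			console.log(`[e2e:${name}]`, JSON.stringify(payload, null, 2))
		})
	}
}

// Call once at the top of a failing test, then read the output for
// task IDs, file paths, and AI responses.
logTaskEvents()
```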
</instructions>
</step>

<step number="8">
<name>Document and Complete</name>
<instructions>
Finalize the test implementation:

1. Add comprehensive comments explaining complex test logic
2. Document any new test utilities or patterns created
3. Ensure test descriptions clearly explain what is being tested
4. Verify all cleanup procedures are in place
5. Confirm tests can run independently and in any order

Provide the user with:
- Summary of tests created or fixed
- Instructions for running the tests
- Any new patterns or utilities that can be reused
- Recommendations for future test improvements
</instructions>
</step>
</workflow>
