Merged
27 commits
c5def1a
feat(tests): add apply_diff tool tests
daniel-lxs Jun 6, 2025
efcf9d9
feat(tests): add tests for write_to_file tool functionality
daniel-lxs Jun 6, 2025
6df1125
feat(tests): add comprehensive tests for read_file tool functionality
daniel-lxs Jun 6, 2025
e3aef5d
feat(tests): add tests for execute_command tool functionality
daniel-lxs Jun 6, 2025
0b58f3a
feat(integration-tester): add integration testing role with comprehen…
daniel-lxs Jun 6, 2025
3f79ea9
feat(tests): enhance test runner with grep and specific file filtering
daniel-lxs Jun 6, 2025
2495534
feat(tests): add comprehensive tests for search_files tool functionality
daniel-lxs Jun 9, 2025
e1f1079
feat(tests): add comprehensive tests for list_files tool functionality
daniel-lxs Jun 9, 2025
2404516
feat(tests): add tests for insert_content tool functionality
daniel-lxs Jun 9, 2025
620887f
feat(tests): add comprehensive tests for search_and_replace tool func…
daniel-lxs Jun 9, 2025
c616f52
feat(tests): add comprehensive tests for use_mcp_tool functionality
daniel-lxs Jun 9, 2025
5b64bff
feat(tests): increase timeout values for various tool tests to improv…
daniel-lxs Jun 10, 2025
605608f
fix(tests): add non-null assertion for workspaceDir assignment in mul…
daniel-lxs Jun 10, 2025
c9a74e0
feat(tests): enhance read_file tool tests with increased timeouts and…
daniel-lxs Jun 10, 2025
77b2a0d
feat(tests): enhance read_file tool tests to extract and verify tool …
daniel-lxs Jun 10, 2025
9400dba
feat(tests): enhance execute_command tool tests with additional conte…
daniel-lxs Jun 10, 2025
a2f539c
refactor(tests): remove script execution test and related setup for e…
daniel-lxs Jun 10, 2025
1e8121b
fix(tests): increase timeout for task start and completion in apply_d…
daniel-lxs Jun 10, 2025
c2d447d
fix(tests): clarify error handling message in command execution test
daniel-lxs Jun 10, 2025
b5761bc
refactor(tests): remove error handling test and related setup for exe…
daniel-lxs Jun 10, 2025
ba962c5
fix: update openRouterModelId to use anthropic/claude-3.5-sonnet
daniel-lxs Jun 10, 2025
7a3dc24
fix: update openRouterModelId to use openai/gpt-4.1
daniel-lxs Jun 10, 2025
0f53c8e
fix(tests): increase timeouts for apply_diff, execute_command, and se…
daniel-lxs Jun 10, 2025
f476e27
fix(tests): disable terminal shell integration for execute_command to…
daniel-lxs Jun 11, 2025
1f76576
chore: rewrite integration tester mode
daniel-lxs Jun 11, 2025
229a243
Update .roo/rules-integration-tester/1_workflow.xml
daniel-lxs Jun 11, 2025
541a6a6
Merge branch 'main' into wip_integration_testing
cte Jun 11, 2025
198 changes: 198 additions & 0 deletions .roo/rules-integration-tester/1_workflow.xml
@@ -0,0 +1,198 @@
<workflow>
<step number="1">
<name>Understand Test Requirements</name>
<instructions>
Use ask_followup_question to determine what type of integration test is needed:

<ask_followup_question>
<question>What type of integration test would you like me to create or work on?</question>
<follow_up>
<suggest>New E2E test for a specific feature or workflow</suggest>
<suggest>Fix or update an existing integration test</suggest>
<suggest>Create test utilities or helpers for common patterns</suggest>
<suggest>Debug failing integration tests</suggest>
</follow_up>
</ask_followup_question>
</instructions>
</step>

<step number="2">
<name>Gather Test Specifications</name>
<instructions>
Based on the test type, gather detailed requirements:

For New E2E Tests:
- What specific user workflow or feature needs testing?
- What are the expected inputs and outputs?
- What edge cases or error scenarios should be covered?
- Are there specific API interactions to validate?
- What events should be monitored during the test?

For Existing Test Issues:
- Which test file is failing or needs updates?
- What specific error messages or failures are occurring?
- What changes in the codebase might have affected the test?

For Test Utilities:
- What common patterns are being repeated across tests?
- What helper functions would improve test maintainability?

Use multiple ask_followup_question calls if needed to gather complete information.
</instructions>
</step>

<step number="3">
<name>Explore Existing Test Patterns</name>
<instructions>
Use codebase_search FIRST to understand existing test patterns and similar functionality:

For New Tests:
- Search for similar test scenarios in apps/vscode-e2e/src/suite/
- Find existing test utilities and helpers
- Identify patterns for the type of functionality being tested

For Test Fixes:
- Search for the failing test file and related code
- Find similar working tests for comparison
- Look for recent changes that might have broken the test

Example searches:
- "file creation test mocha" for file operation tests
- "task completion waitUntilCompleted" for task monitoring patterns
- "api message validation" for API interaction tests

After codebase_search, use:
- read_file on relevant test files to understand structure
- list_code_definition_names on test directories
- search_files for specific test patterns or utilities
</instructions>
</step>

<step number="4">
<name>Analyze Test Environment and Setup</name>
<instructions>
Examine the test environment configuration:

1. Read the test runner configuration:
- apps/vscode-e2e/package.json for test scripts
- apps/vscode-e2e/src/runTest.ts for test setup
- Any test configuration files

2. Understand the test workspace setup:
- How test workspaces are created
- What files are available during tests
- How the extension API is accessed

3. Review existing test utilities:
- Helper functions for common operations
- Event listening patterns
- Assertion utilities
- Cleanup procedures

Document findings including:
- Test environment structure
- Available utilities and helpers
- Common patterns and best practices
</instructions>
</step>

<step number="5">
<name>Design Test Structure</name>
<instructions>
Plan the test implementation based on gathered information:

For New Tests:
- Define test suite structure with suite/test blocks (Mocha TDD style, consistent with step 6)
- Plan setup and teardown procedures
- Identify required test data and fixtures
- Design event listeners and validation points
- Plan for both success and failure scenarios

For Test Fixes:
- Identify the root cause of the failure
- Plan the minimal changes needed to fix the issue
- Consider if the test needs to be updated due to code changes
- Plan for improved error handling or debugging

Create a detailed test plan (a sketch follows this list) including:
- Test file structure and organization
- Required setup and cleanup
- Specific assertions and validations
- Error handling and edge cases
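
For illustration, a new suite can be outlined as a skeleton before any logic is written; the suite title, test names, and fixtures below are placeholders, not taken from the existing suites:

// Placeholder outline in Mocha TDD style; all names are hypothetical.
suite("write_to_file tool", () => {
	suiteSetup(async () => {
		// create the workspace fixtures every test relies on
	})

	suiteTeardown(async () => {
		// remove fixtures and make sure no task is left running
	})

	test("creates a new file with the requested content", async () => {
		// success path: assert on file contents and emitted events
	})

	test("surfaces an error when the target directory does not exist", async () => {
		// failure path: assert the task reports the error instead of hanging
	})
})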
</instructions>
</step>

<step number="6">
<name>Implement Test Code</name>
<instructions>
Implement the test following established patterns:

CRITICAL: Never write a test file with a single write_to_file call.
Always implement tests in parts:

1. Start with the basic test structure (suite, setup, teardown)
2. Add individual test cases one by one
3. Implement helper functions separately
4. Add event listeners and validation logic incrementally

Follow these implementation guidelines (a sketch follows this list):
- Use suite() and test() blocks following Mocha TDD style
- Always use the global api object for extension interactions
- Implement proper async/await patterns with the waitFor utility
- Use the waitUntilCompleted and waitUntilAborted helpers for task monitoring
- Listen to and validate appropriate events (message, taskCompleted, etc.)
- Test both positive flows and error scenarios
- Validate message content using proper type assertions
- Create reusable test utilities when patterns emerge
- Use meaningful test descriptions that explain the scenario
- Always clean up tasks with cancelCurrentTask or clearCurrentTask
- Ensure tests are independent and can run in any order
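
A minimal sketch of a single test case is shown below. The api surface and helper names (startNewTask, waitUntilCompleted, cancelCurrentTask, the "message" event) follow the guidelines above, but their exact signatures are assumptions; confirm them against the existing suites in apps/vscode-e2e/src/suite/ before reusing:

// Sketch only; helper signatures and event payload shapes are assumptions.
import * as assert from "assert"

// Inside a suite(...) block:
test("writes hello.txt through the write_to_file tool", async function () {
	this.timeout(120_000)

	const messages: any[] = []
	api.on("message", (event: any) => messages.push(event))

	const taskId = await api.startNewTask({
		text: "Create a file named hello.txt containing the word hello",
	})

	await waitUntilCompleted({ api, taskId, timeout: 90_000 })

	assert.ok(
		messages.some((m) => JSON.stringify(m).includes("hello.txt")),
		"expected the conversation to mention hello.txt",
	)

	// Always clean up, per the guidelines above
	await api.cancelCurrentTask()
})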
</instructions>
</step>

<step number="7">
<name>Run and Validate Tests</name>
<instructions>
Execute the tests to ensure they work correctly:

ALWAYS use the correct working directory and commands:
- Working directory: apps/vscode-e2e
- Test command: npm run test:run
- For specific tests: TEST_FILE="filename.test" npm run test:run
- Example: cd apps/vscode-e2e && TEST_FILE="apply-diff.test" npm run test:run

Test execution process:
1. Run the specific test file first
2. Check for any failures or errors
3. Analyze test output and logs
4. Debug any issues found
5. Re-run tests after fixes

If tests fail (see the logging sketch after this list):
- Add console.log statements to track execution flow
- Log important events like task IDs, file paths, and AI responses
- Check test output carefully for error messages and stack traces
- Verify file creation in correct workspace directories
- Ensure proper event handling and timeouts
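
For example, temporary logging along these lines can expose the execution flow; the event names follow step 6, but the payload shapes are assumptions and may differ from the extension's api typings:

// Temporary debug logging; remove once the failure is understood.
api.on("message", (event: any) => {
	console.log("[e2e][message]", JSON.stringify(event).slice(0, 300))
})
api.on("taskCompleted", (taskId: string) => {
	console.log("[e2e][taskCompleted]", taskId)
})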
</instructions>
</step>

<step number="8">
<name>Document and Complete</name>
<instructions>
Finalize the test implementation:

1. Add comprehensive comments explaining complex test logic
2. Document any new test utilities or patterns created
3. Ensure test descriptions clearly explain what is being tested
4. Verify all cleanup procedures are in place
5. Confirm tests can run independently and in any order

Provide the user with:
- Summary of tests created or fixed
- Instructions for running the tests
- Any new patterns or utilities that can be reused
- Recommendations for future test improvements
</instructions>
</step>
</workflow>