Home
Maciej Zieniuk edited this page Sep 8, 2025
A productivity tool to automatically generate and execute ephemeral testing scenarios based on GitHub PR changes, assisted by AI.
Improves dev teams' productivity:
- by eliminating the need to write testing notes and scenarios from scratch
- by eliminating almost all manual testing during the development and testing phases
- without replacing any existing tooling or E2E automation
- without replacing the dev's responsibility to review and correct the test scenarios / QA notes
Improves QA teams' productivity:
- by providing consistency between known QA test scenarios and devs' testing notes
- by eliminating most of the manual testing during the testing phases
Problems this project solves, partially or completely:
- Developers have to do the same amount of testing as QAs do later, and often multiple times because of new changes in the PR.
- QAs have to manually figure out and map which existing test scenarios apply to the dev's written test notes, which can be run automatically with regression testing, and which have to be executed manually.
- Developers often do not test everything: writing test notes is usually the very last step in the development process (just before handing off to QAs), so it's easy to miss a test scenario.
- It's easy to miss a test scenario or a platform to test on when writing test notes by hand. This is especially true for backwards-compatibility scenarios that require a certain starting state (such as already being logged in).
Download the latest release: https://github.com/mzieniukbw/tap/releases
A CLI tool (tap) that leverages Anthropic Claude AI to:
- Generates test scenarios based on a knowledge base built from the GitHub PR, the linked Jira issue, linked Confluence pages, and product knowledge (such as help documentation).
tap generate-tests <url-link-to-PR-number>
This creates an output directory with all the gathered context and the generated test scenarios.
- (Optional) Human-assisted review and correction of the generated test scenarios with the Claude CLI. Opens the Claude CLI, where the test scenarios can be refined by prompting:
./<output-dir>/claude-refine.sh
- Executes the test scenarios on the current desktop OS. This takes over your computer to execute them.
tap execute-scenarios --file ./test-pr-{PR-number}-{commit-sha}/generated-scenarios.json
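The steps above can be sketched end to end. The PR URL and the resulting directory name below are examples, not real values; the `tap` invocations are shown as comments, with a small pre-flight check so execution only starts if generation actually produced a scenarios file:

```shell
# 1. Generate scenarios from a PR (creates the output directory):
#    tap generate-tests https://github.com/owner/repo/pull/123

# 2. (Optional) Refine the generated scenarios interactively:
#    ./test-pr-123-abc1234/claude-refine.sh

# 3. Execute the scenarios, but only if generation produced them:
SCENARIOS="./test-pr-123-abc1234/generated-scenarios.json"
if [ -f "$SCENARIOS" ]; then
  tap execute-scenarios --file "$SCENARIOS"
else
  echo "nothing to execute: $SCENARIOS not found" >&2
fi
```

The guard matters because `execute-scenarios` takes over the desktop; you want it to run only against a reviewed scenarios file.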
- Requires the Claude CLI, but not a Claude API key, since most devs don't have the latter.
- Requires a GitHub API key to access PR details
- Requires an Atlassian API key to access Jira and Confluence
- (Optional, recommended) Provide your application setup and/or assertion instructions.
- (Optional) Provide an Anthropic API key, required for the execute-scenarios command.
- (Optional, recommended) Provide an Onyx AI API key to gather a more accurate product knowledge base.
- (Optional) Installs open-interpreter, required for the execute-scenarios command.
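A minimal pre-flight sketch for the required tooling, assuming the Claude CLI is installed as a `claude` binary on your PATH (binary names here are assumptions; adjust to your installation):

```shell
# Check that the required command-line tools are reachable on PATH.
# `claude` and `tap` are assumed binary names, not confirmed by this page.
for tool in claude tap; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing" >&2
  fi
done
```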
Generate accurate test scenarios for a PR:
- Pulls relevant context from the GitHub PR
- Pulls relevant context from the Jira ticket linked in the GitHub PR
- Pulls relevant context from the Confluence documentation linked in the Jira ticket
- (Optional) Pulls relevant product knowledge from Onyx AI
- Generates test scenarios from all the gathered knowledge using Anthropic Claude AI
- Writes the test scenarios and context into file(s) in the output directory (defaults to test-pr-{PR-number}-{commit-sha})
- (Optional) Claude CLI human-assisted review through the claude-refine.sh shell script found in the output directory
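The default output directory name can be derived from the PR number and the head commit SHA; the values below are examples only:

```shell
# Derive the default output directory name, test-pr-{PR-number}-{commit-sha}.
PR_NUMBER=123          # example PR number
COMMIT_SHA=abc1234     # example (short) head commit SHA
OUTPUT_DIR="test-pr-${PR_NUMBER}-${COMMIT_SHA}"
echo "$OUTPUT_DIR"     # → test-pr-123-abc1234
```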
Execute test scenarios on the computer - takes over screen, keyboard, and mouse:
- Executes the test scenarios one by one (from generated-scenarios.json found in the output directory)
- Uses the open-interpreter CLI with Claude AI to take over the screen, keyboard, and mouse, and to take screenshots, in order to execute each test scenario
- Writes a prompt and result into the output directory for each test execution (see interpreter-prompts and interpreter-results in the output directory)
- Writes a summary of all test executions in a QA report (see the qa-report.md file in the output directory)
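Putting the file names mentioned on this page together, the output directory looks roughly like this (layout inferred from the names above; exact contents may differ):

```
test-pr-{PR-number}-{commit-sha}/
├── generated-scenarios.json   # test scenarios to review and execute
├── claude-refine.sh           # opens Claude CLI for human-assisted refinement
├── interpreter-prompts/       # prompt written for each test execution
├── interpreter-results/       # result written for each test execution
├── qa-report.md               # summary of all test executions
└── ...                        # gathered context files (names not specified here)
```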