An AI-powered tool for evaluating QA practices in software repositories. Well suited for assessing automation test repositories, reviewing QA candidate submissions, and analyzing code quality.
This tool provides comprehensive evaluation of QA repositories across four key categories:
- Test Automation (30%) - Framework usage, test organization, coverage
- Technical Skills (25%) - API/UI testing, design patterns, assertions
- Quality Process (25%) - Testing strategy, documentation, collaboration
- CI Pipeline (20%) - Automation integration, deployment practices
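As a sketch of how those weights combine, the four 0-10 category scores can be folded into the 0-100 overall score with a weighted sum (the function and key names here are illustrative, not the tool's actual API):

```python
# Illustrative only: shows how the four category weights stated above
# combine into a 0-100 overall score; not the tool's implementation.

WEIGHTS = {
    "test_automation": 0.30,
    "technical_skills": 0.25,
    "quality_process": 0.25,
    "ci_pipeline": 0.20,
}

def overall_score(category_scores: dict) -> float:
    """Weighted sum of 0-10 category scores, scaled to 0-100."""
    total = sum(category_scores[name] * weight for name, weight in WEIGHTS.items())
    return round(total * 10, 1)

print(overall_score({
    "test_automation": 8.2,
    "technical_skills": 8.0,
    "quality_process": 7.5,
    "ci_pipeline": 6.0,
}))
```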
```bash
# Clone the repository
git clone <repository-url>
cd qa-repo-eval-tool

# Install dependencies
uv sync

# Set your OpenAI API key
export OPENAI_API_KEY='your-api-key-here'

# Check environment setup
python main.py check

# Evaluate a single repository
python main.py evaluate "https://github.com/user/automation-repo"

# Batch evaluation from file
echo "https://github.com/user/repo1" > repos.txt
echo "https://github.com/user/repo2" >> repos.txt
python main.py batch repos.txt
```
Alternative: you can also use `python -m src.qa_repo_eval_tool.cli <command>` if you prefer the module syntax.
- Comprehensive QA assessment with detailed scoring
- Visual progress tracking and rich console output
- Strengths and improvement recommendations
- Pass/Conditional Pass/Fail verdicts
- Evaluate multiple repositories from input file
- Generate JSON, CSV, and summary reports
- Batch analytics and success rates
- Continue-on-error for resilient processing
- JSON - Complete structured data for analysis
- CSV - Spreadsheet-compatible format
- Summary - Human-readable statistics and insights
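The JSON report lends itself to further analysis. The snippet below is a hypothetical post-processing sketch; the report's exact schema, including the `verdict` field name and list-of-results layout, is an assumption rather than documented behavior:

```python
import json

# Hypothetical sketch: the real JSON schema is not shown in this README,
# so the "verdict" field and the list-of-results layout are assumptions.
sample_report = json.loads("""
[
  {"repo": "https://github.com/user/repo1", "score": 78, "verdict": "PASS"},
  {"repo": "https://github.com/user/repo2", "score": 55, "verdict": "CONDITIONAL_PASS"},
  {"repo": "https://github.com/user/repo3", "score": 42, "verdict": "FAIL"}
]
""")

def pass_rate(results: list) -> float:
    """Fraction of evaluated repositories with a PASS verdict."""
    passed = sum(1 for r in results if r["verdict"] == "PASS")
    return passed / len(results)

print(f"Pass rate: {pass_rate(sample_report):.0%}")
```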
- Framework detection (Selenium, Cypress, pytest, etc.)
- BDD implementation assessment (Cucumber, Gherkin)
- Page Object Model evaluation
- CI/CD pipeline analysis
Perfect for evaluating automation test submissions such as:
- Selenium WebDriver projects
- Cypress E2E tests
- API automation suites
- BDD/Cucumber implementations
- Repository health checks
- Best practices compliance
- Framework usage assessment
- Test coverage analysis
- Batch assessment of team repositories
- Standardization compliance
- Training needs identification
```bash
python main.py check
```

Validates the API key and dependencies.
```bash
python main.py evaluate "https://github.com/user/repo" [OPTIONS]
```

Options:

```
--shallow/--full    Use shallow clone (faster) [default: shallow]
--keep-clone        Keep the cloned repository for inspection
--verbose/--quiet   Progress detail level [default: verbose]
```
```bash
python main.py batch INPUT_FILE [OPTIONS]
```

Options:

```
--output-dir PATH   Output directory [default: qa_reports]
--shallow/--full    Clone type for all repositories
--continue/--stop   Continue if a repository fails [default: continue]
--verbose/--quiet   Progress detail level
```

```bash
python main.py version
```
```
QA Evaluation Results: https://github.com/user/automation-repo

Overall QA Score: 78/100
QA Level: Advanced
Final Verdict: PASS

Primary Language: java
Test Files: 15/28
Test Frameworks: selenium, cucumber, testng

Category Scores:
  Test Automation   ████████░░ 8.2/10
  CI Pipeline       ██████░░░░ 6.0/10
  Quality Process   ███████░░░ 7.5/10
  Technical Skills  ████████░░ 8.0/10

Strengths:
  • Excellent BDD implementation with Cucumber
  • Strong Page Object Model structure
  • Good test coverage for core flows

Areas for Improvement:
  • Missing CI/CD pipeline configuration
  • Limited error handling in tests
  • Could improve test data management
```
```
src/qa_repo_eval_tool/
├── cli.py                  # Command-line interface
├── metrics.py              # Core evaluation orchestration
├── qa_analysis.py          # AI-powered analysis engine
├── reporter.py             # Report generation (JSON/CSV/text)
├── git_utils.py            # Repository operations
├── types.py                # Data models and metrics
├── metrics_calculator.py   # Scoring and verdict logic
└── utils/
    └── prompts.py          # AI evaluation prompts
```
```
OPENAI_API_KEY=your-openai-api-key
```
```
# QA repositories for evaluation
https://github.com/user/selenium-tests
https://github.com/user/api-automation
https://github.com/user/cypress-e2e

# Comments start with #
# Empty lines are ignored
```
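The input-file rules above (one URL per line, `#` starts a comment, blank lines are skipped) can be sketched as follows; this is illustrative, not the tool's actual parser:

```python
# Sketch of the input-file rules: one URL per line, "#" starts a
# comment, blank lines are ignored. Illustrative only.

def parse_repo_list(text: str) -> list[str]:
    """Return the repository URLs from a batch input file's contents."""
    repos = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            repos.append(line)
    return repos

sample = """\
# QA repositories for evaluation
https://github.com/user/selenium-tests

https://github.com/user/api-automation
"""
print(parse_repo_list(sample))
```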
- Test coverage and organization
- Framework usage quality
- Assertion patterns
- Test data management
- Test design patterns (Page Object, Builder, etc.)
- API testing implementation
- UI testing practices
- Performance and security considerations
- Testing strategy evidence
- Documentation quality
- Collaboration indicators
- Process maturity
- Pipeline configuration
- Test integration
- Deployment automation
- Environment management
- Expert (85-100): Advanced QA practices, production-ready
- Advanced (70-84): Strong QA skills, minor improvements needed
- Intermediate (50-69): Good foundation, development areas identified
- Beginner (0-49): Basic skills, significant improvement needed
- PASS (≥70): Ready for QA roles
- CONDITIONAL_PASS (50-69): Potential with development
- FAIL (<50): Needs significant improvement
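The bands above map a numeric score to a level and verdict roughly as follows; this is a sketch of the stated thresholds, not the tool's actual `metrics_calculator` logic:

```python
# Mirrors the score bands listed above; a sketch, not the tool's code.

def qa_level(score: float) -> str:
    """Map a 0-100 overall score to the QA level band."""
    if score >= 85:
        return "Expert"
    if score >= 70:
        return "Advanced"
    if score >= 50:
        return "Intermediate"
    return "Beginner"

def verdict(score: float) -> str:
    """Map a 0-100 overall score to the final verdict."""
    if score >= 70:
        return "PASS"
    if score >= 50:
        return "CONDITIONAL_PASS"
    return "FAIL"

print(qa_level(78), verdict(78))  # → Advanced PASS
```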
- Fork the repository
- Create a feature branch
- Implement your changes
- Add tests if applicable
- Submit a pull request