A command-line tool that uses OpenAI's o4-mini model (configurable via `--model`) to detect AAA (Arrange-Act-Assert) pattern issues in unit tests.
- 🔍 Smart Detection: Identifies 7 types of AAA pattern issues using AI
- 📦 Batch Processing: Process multiple test cases and generate CSV reports
- 💰 Cost Tracking: Track token usage and API costs with detailed breakdowns (enabled by default)
- 🚀 Easy to Use: Simple command-line interface with cross-platform support
- 📊 Detailed Reports: Provides analysis results and improvement suggestions
With uv (Recommended):

```bash
# Install uv if needed
pip install uv

# Clone and run (auto-manages dependencies)
git clone https://github.com/your-username/AAA-Issue-Scanner.git
cd AAA-Issue-Scanner

# Set API key
export OPENAI_API_KEY='your-openai-api-key'
```

Traditional method:

```bash
git clone https://github.com/your-username/AAA-Issue-Scanner.git
cd AAA-Issue-Scanner
pip install -e .
export OPENAI_API_KEY='your-openai-api-key'
```

Single file analysis:
```bash
# With uv (cost tracking enabled by default)
uv run python -m aaa_issue_scanner single test.json --verbose

# Traditional (cost tracking enabled by default)
python -m aaa_issue_scanner single test.json --verbose

# Disable cost tracking if desired
python -m aaa_issue_scanner single test.json --no-cost
```

Batch processing:
```bash
# With uv (cost tracking enabled by default)
uv run python -m aaa_issue_scanner batch project_folder

# Traditional (cost tracking enabled by default)
python -m aaa_issue_scanner batch project_folder

# Disable cost tracking if desired
python -m aaa_issue_scanner batch project_folder --no-cost
```

| Method | Command | Pros |
|---|---|---|
| uv run | `uv run python -m aaa_issue_scanner` | ✅ Auto-dependency management ✅ No installation needed |
| uvx | `uvx aaa-issue-scanner` | ✅ Global access ✅ Always latest version |
| pip install | `pip install -e .` | ✅ Traditional workflow ✅ System integration |
Get your OpenAI API key from OpenAI Platform.
Environment Variable:

```bash
# Unix/Linux/macOS
export OPENAI_API_KEY='your-key'

# Windows (Command Prompt)
set OPENAI_API_KEY=your-key

# Windows (PowerShell)
$env:OPENAI_API_KEY='your-key'
```

Or use the CLI parameter:

```bash
--api-key 'your-key'
```

The tool accepts JSON files with the following structure:
```json
{
  "parsedStatementsSequence": ["statement sequences"],
  "productionFunctionImplementations": ["production code"],
  "testCaseSourceCode": "test code string",
  "testClassName": "TestClass",
  "testCaseName": "testMethod",
  "projectName": "project-name",
  "beforeMethods": ["@Before methods"],
  "beforeAllMethods": ["@BeforeAll methods"],
  "afterMethods": ["@After methods"],
  "afterAllMethods": ["@AfterAll methods"]
}
```

Project Structure:
```
your_project/
├── AAA/                # Required folder name
│   ├── test1.json      # Test files
│   ├── test2.json
│   └── test3.json
└── other_files...
```
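Before a batch run, the required JSON fields can be sanity-checked up front. The sketch below is not part of the tool; it is a standalone pre-flight check that walks `AAA/*.json` and reports any missing fields:

```python
import json
from pathlib import Path

# Fields the input format above defines
REQUIRED_FIELDS = {
    "parsedStatementsSequence", "productionFunctionImplementations",
    "testCaseSourceCode", "testClassName", "testCaseName", "projectName",
    "beforeMethods", "beforeAllMethods", "afterMethods", "afterAllMethods",
}

def missing_fields(path: Path) -> set:
    """Return the required fields absent from one test-case JSON file."""
    data = json.loads(path.read_text(encoding="utf-8"))
    return REQUIRED_FIELDS - data.keys()

def check_project(root: Path) -> None:
    """Print a pass/fail line for every test-case file in <root>/AAA."""
    aaa = root / "AAA"  # the scanner expects this exact folder name
    for f in sorted(aaa.glob("*.json")):
        missing = missing_fields(f)
        print(f, "OK" if not missing else f"missing: {sorted(missing)}")
```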
Enhanced Features:
- ✅ Multi-processing: Concurrent processing with configurable workers
- ✅ Smart Caching: Automatically enabled to avoid reprocessing identical test cases
- ✅ Smart Resume: Automatically continues from where you left off (default behavior)
- ✅ Rate Limiting: Respect API limits with configurable requests per minute
- ✅ Real-time Progress: Live updates with detailed statistics
Concurrent Processing:
- Auto-Detection: Automatically uses multi-threading for batches with 2+ files
- Configurable Workers: Use `--max-workers` to control concurrency (default: 5)
- Thread-Safe: Rate limiting, cost tracking, and CSV writing are all thread-safe
- Smart Fallback: Falls back to single-threading for small batches or when `--max-workers 1` is specified
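The tool's internals aren't shown here, but a thread-safe requests-per-minute limiter shared by worker threads could be sketched as a sliding window. The class name and design below are illustrative, not the tool's actual implementation:

```python
import threading
import time
from collections import deque

class RateLimiter:
    """Thread-safe sliding-window limiter: at most `rpm` acquisitions per 60 s."""

    def __init__(self, rpm: int = 60):
        self.rpm = rpm
        self._lock = threading.Lock()
        self._calls = deque()  # monotonic timestamps of recent acquisitions

    def acquire(self) -> None:
        """Block until a slot is free, then record the acquisition."""
        while True:
            with self._lock:
                now = time.monotonic()
                # Drop timestamps that fell out of the 60-second window
                while self._calls and now - self._calls[0] >= 60:
                    self._calls.popleft()
                if len(self._calls) < self.rpm:
                    self._calls.append(now)
                    return
                wait = 60 - (now - self._calls[0])
            time.sleep(wait)  # sleep outside the lock so other threads proceed
```

Each worker thread would call `limiter.acquire()` immediately before its API request, so the combined rate of all threads stays under the configured limit.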
Usage Examples:
```bash
# Default: 5 concurrent workers
python -m aaa_issue_scanner batch project_folder --verbose

# Use 8 workers for faster processing
python -m aaa_issue_scanner batch project_folder --max-workers 8 --verbose

# Force single-threaded processing
python -m aaa_issue_scanner batch project_folder --max-workers 1 --verbose
```

Performance Benefits:
- 🚀 Faster Processing: Up to N times faster with N workers (limited by API rate limits)
- ⏱ Rate Limit Aware: Respects OpenAI API rate limits even with multiple threads
- 💾 Thread-Safe Caching: Multiple threads can safely share cache without conflicts
- 📊 Accurate Tracking: Cost and progress tracking works correctly in multi-threaded mode
Configuration Options:
```bash
# Basic usage (caching and resume enabled by default)
python -m aaa_issue_scanner batch project_folder --verbose

# Advanced usage with custom settings
python -m aaa_issue_scanner batch project_folder \
  --max-workers 8 \
  --requests-per-minute 100 \
  --cache-dir .my_cache \
  --verbose

# Disable caching if needed (rare case)
python -m aaa_issue_scanner batch project_folder --no-cache --verbose

# Force restart from beginning (ignores previous progress)
python -m aaa_issue_scanner batch project_folder --restart --verbose
```

Smart Caching (Default Enabled):
- 🧠 Content-based hashing: Only identical test cases are cached
- 💾 Persistent cache: Survives between runs and projects
- ⚡ Instant results: Cached cases return immediately (no API call)
- 💰 Cost savings: Avoid redundant API charges for duplicate content
- 📁 Custom location: Use `--cache-dir` for a custom cache folder
- ❌ Override: Use `--no-cache` only when you want fresh analysis for everything
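The exact hashing scheme isn't documented here, but content-based caching generally means deriving the cache key from the test-case content plus the model name, so identical inputs always map to the same cache entry regardless of file name or project. A minimal sketch (the function name is hypothetical):

```python
import hashlib
import json

def cache_key(test_case: dict, model: str) -> str:
    """Derive a stable cache key from test-case content and model name.

    Serializing with sorted keys makes the key independent of field order,
    so two files with identical content always hash to the same entry.
    """
    canonical = json.dumps(test_case, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(f"{model}:{canonical}".encode("utf-8")).hexdigest()
```

Including the model in the key matters: the same test case analyzed with `--model gpt-4.1` should not reuse a result produced by o4-mini.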
Simple Workflow (All Defaults Work Great):
```bash
# Run once - caching and resume automatically enabled
python -m aaa_issue_scanner batch my_project --verbose

# If interrupted, just re-run the same command
python -m aaa_issue_scanner batch my_project --verbose
# ✅ Automatically resumes from where it left off
# ✅ Uses cache for any duplicate test cases
# ✅ No additional flags needed!
```

Output: CSV file with columns: `project`, `class_name`, `test_case_name`, `issue_type`, `sequence`, `focal_method`, `reasoning`
The tool provides comprehensive cost tracking for OpenAI API usage:
| Model | Input ($/M tokens) | Cached Input ($/M tokens) | Output ($/M tokens) |
|---|---|---|---|
| o4-mini | $1.10 | $0.275 | $4.40 |
| gpt-4.1 | $2.00 | $0.50 | $8.00 |
| gpt-4.1-mini | $0.40 | $0.10 | $1.60 |
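Per-call cost follows directly from this table. The sketch below reproduces the arithmetic used in the reports shown in this section, where `input_tokens` counts only non-cached input tokens (cached tokens are billed separately at the lower rate):

```python
# $ per 1M tokens, from the pricing table above
PRICING = {
    "o4-mini":      {"input": 1.10, "cached": 0.275, "output": 4.40},
    "gpt-4.1":      {"input": 2.00, "cached": 0.50,  "output": 8.00},
    "gpt-4.1-mini": {"input": 0.40, "cached": 0.10,  "output": 1.60},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int,
                  cached_tokens: int = 0) -> float:
    """Estimate one call's cost in dollars.

    `input_tokens` excludes cached tokens; cached input is billed at the
    reduced rate, which is where the reported "cache savings" come from.
    """
    p = PRICING[model]
    return (input_tokens * p["input"]
            + cached_tokens * p["cached"]
            + output_tokens * p["output"]) / 1_000_000
```

For the single-file example in this section (1,127 input / 764 output tokens on o4-mini) this yields $0.004601, matching the tool's reported total.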
Single file with cost tracking (default behavior):
```bash
python -m aaa_issue_scanner single test.json
```

Output includes:

```
💰 Cost Information:
   Input tokens: 1,127
   Output tokens: 764
   Total tokens: 1,891
   Total cost: $0.004601
```
Batch processing with cost tracking (default behavior):
```bash
python -m aaa_issue_scanner batch project_folder
```

Shows per-file costs and a final summary:

```
💰 Cost Summary:
   Total API calls: 5
   Total tokens: 8,450
   - Input tokens: 5,635
   - Cached tokens: 1,024
   - Output tokens: 2,815
   Cost breakdown:
   - Input cost: $0.006199
   - Cached input cost: $0.000282
   - Output cost: $0.012386
   - Total cost: $0.018867
   - Cache savings: $0.000845
```
Disable cost tracking if needed:
```bash
python -m aaa_issue_scanner single test.json --no-cost
python -m aaa_issue_scanner batch project_folder --no-cost
```

- Smart Caching: Identical test cases are cached, avoiding duplicate API calls
- Cache Savings: Shows how much money was saved through caching
- Real-time Tracking: See costs accumulate during batch processing
- Model Flexibility: Easily switch between models to optimize cost vs. quality
The tool automatically maintains detailed project logs for tracking analysis history:
Each project generates a `<project-name>-log.json` file in the `AAA` folder with:
```json
{
  "projectName": "commons-cli",
  "tasks": [
    {
      "taskName": "AAA-Pattern-Analysis",
      "model": "o4-mini",
      "timestamp": "2025-06-02T02:31:49.621123",
      "totalTestCases": 2,
      "processedTestCases": 2,
      "failedTestCases": 0,
      "cacheHits": 2,
      "apiCalls": 0,
      "tokenUsage": {
        "totalTokens": 0,
        "inputTokens": 0,
        "cachedTokens": 0,
        "outputTokens": 0,
        "avgTokensPerCall": 0
      },
      "costInfo": {
        "totalCost": 0.0,
        "inputCost": 0.0,
        "cachedInputCost": 0.0,
        "outputCost": 0.0,
        "cacheSavings": 0.0
      },
      "status": "COMPLETED"
    }
  ],
  "lastUpdated": "2025-06-02T02:31:49.621123"
}
```

- Path: `<project-root>/AAA/<project-name>-log.json`
- Example: `my-project/AAA/my-project-log.json`
- Automatic: Creates or updates existing log files in the AAA folder
- Multi-Project: Each project maintains its own separate log file
- Multi-Task Support: Preserves other task entries (e.g., ParseTestCaseToLlmContext)
- Complete Metrics: Records test counts, success/failure rates, and cost details
- Model Tracking: Shows which AI model was used for analysis
- Status Tracking: `COMPLETED`, `COMPLETED_WITH_ERRORS`, or `IN_PROGRESS`
- Historical Data: Maintains analysis history across multiple runs
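Multi-task support implies that an updater must replace only its own entry while leaving entries from other tasks intact. A sketch of such an upsert (the function name is hypothetical, not the tool's actual code):

```python
import json
from datetime import datetime
from pathlib import Path

def upsert_task(log_path: Path, project: str, task: dict) -> None:
    """Insert or replace a task entry by taskName, preserving unrelated tasks."""
    if log_path.exists():
        log = json.loads(log_path.read_text(encoding="utf-8"))
    else:
        log = {"projectName": project, "tasks": []}
    # Keep entries from other tasks (e.g. ParseTestCaseToLlmContext) untouched
    log["tasks"] = [t for t in log["tasks"] if t["taskName"] != task["taskName"]]
    log["tasks"].append(task)
    log["lastUpdated"] = datetime.now().isoformat()
    log_path.write_text(json.dumps(log, indent=2), encoding="utf-8")
```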
Smart Interruption Handling:
- ✅ Incremental Logging: Updates project log every 3 processed files (not just at the end)
- ✅ Cost Preservation: Token usage and costs are saved even if the process is interrupted
- ✅ Resume with Cost Accumulation: When resuming, previous costs are loaded and accumulated
- ✅ No Cost Loss: Your API spending is tracked accurately across interruptions
How it Works:
```bash
# Start processing
python -m aaa_issue_scanner batch project_folder --verbose

# If interrupted (Ctrl+C, system crash, etc.), costs are already saved

# Resume processing - costs from previous session are automatically loaded
python -m aaa_issue_scanner batch project_folder --verbose
# Shows: "💾 Loaded previous session: X API calls, $Y.ZZ cost"
```

Progress Tracking:

- 📄 Progress Files: `.aaa_progress.json` tracks which files are completed
- 📋 Project Logs: `<project-name>-log.json` preserves cost and token information
- 🔄 Smart Resume: Skips processed files and accumulates previous costs
- 💾 Frequent Saves: Progress and costs updated every 3 files (prevents data loss)
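Resuming amounts to filtering out files already recorded as completed. A sketch, assuming a `{"completed": [...]}` layout for `.aaa_progress.json` (the real file format may differ):

```python
import json
from pathlib import Path

def pending_files(aaa_dir: Path, progress_file: str = ".aaa_progress.json") -> list:
    """Return the test-case files not yet marked completed in the progress file."""
    progress_path = aaa_dir / progress_file
    done = set()
    if progress_path.exists():
        done = set(json.loads(progress_path.read_text()).get("completed", []))
    # Skip already-processed files and the progress file itself
    return [f for f in sorted(aaa_dir.glob("*.json"))
            if f.name not in done and f.name != progress_file]
```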
| Issue Type | Description |
|---|---|
| Good AAA | Proper AAA pattern |
| Multiple AAA | Multiple complete AAA sequences |
| Missing Assert | Test without assertions |
| Assert Pre-condition | Assertions before actions |
| Obscure Assert | Complex assertion logic |
| Arrange & Quit | Conditional early returns |
| Multiple Acts | Multiple sequential actions |
| Suppressed Exception | Exception suppression |
```
python -m aaa_issue_scanner single [OPTIONS] JSON_FILE

Options:
  --api-key TEXT                        OpenAI API key
  --model TEXT                          Model to use [default: o4-mini]
  --reasoning-effort [low|medium|high]  Reasoning level [default: medium]
  -o, --output PATH                     Output file
  --no-cost                             Disable cost and token usage information
  -v, --verbose                         Verbose mode
```

```
python -m aaa_issue_scanner batch [OPTIONS] PROJECT_ROOT

Options:
  --api-key TEXT                        OpenAI API key
  --model TEXT                          Model to use [default: o4-mini]
  --reasoning-effort [low|medium|high]  Reasoning level [default: medium]
  --max-workers INTEGER                 Maximum concurrent workers [default: 5]
  --no-cache                            Disable caching (caching enabled by default)
  --restart                             Restart from beginning (ignore previous progress)
  --cache-dir PATH                      Custom cache directory [default: .aaa_cache]
  --requests-per-minute INTEGER         Rate limit for API requests [default: 60]
  --no-cost                             Disable cost and token usage information
  -v, --verbose                         Verbose mode
```

- Path Support: Automatically handles Windows path separators and long paths
- File Names: Automatically sanitizes CSV filenames for Windows compatibility
- Excel Compatibility: CSV files use UTF-8 with BOM for proper Excel display
- Cross-Platform Paths: Uses `pathlib.Path` for full Windows/Unix path compatibility
- Commands: Use `py` instead of `python` if needed
- PowerShell: Run `Set-ExecutionPolicy RemoteSigned -Scope CurrentUser` if needed
- File Permissions: Ensure CSV output files aren't open in Excel when running batch processing
- Python 3.9 or higher
- Works with virtual environments and conda
- Automatically handles different line endings (LF, CRLF)
Common Issues:
| Problem | Solution |
|---|---|
| `python: command not found` | Use `python3` or `py` |
| Module not found | Run `pip install -e .` in project directory |
| Invalid API key | Check key format (starts with `sk-` or `sk-proj-`) |
| AAA folder not found | Ensure `AAA/` folder exists in project root |
| Permission denied (Windows) | Close Excel/CSV files, run as Administrator |
| Path too long (Windows) | Enable long path support in Windows settings |
| CSV garbled text (Windows) | Tool uses UTF-8 with BOM automatically |
Get Help:
```bash
# Check installation
python -m aaa_issue_scanner --help

# Test with example
python -m aaa_issue_scanner single example_test.json --verbose
```

Project Structure:

```
src/aaa_issue_scanner/
├── cli.py              # Command-line interface
├── analyzer.py         # AAA analyzer
├── formatter.py        # Data formatter
└── batch_processor.py  # Batch processing
```
Run Tests:
```bash
# Single file test
uv run python -m aaa_issue_scanner single example_test.json

# Batch test
uv run python -m aaa_issue_scanner batch test_project --verbose
```

MIT License - see LICENSE file for details.

- Fork the project
- Create feature branch (`git checkout -b feature/AmazingFeature`)
- Commit changes (`git commit -m 'Add AmazingFeature'`)
- Push to branch (`git push origin feature/AmazingFeature`)
- Open Pull Request
Get your OpenAI API key: OpenAI Platform
Report issues: GitHub Issues