Releases: anurag03/testcato
v1.2.5
testcato Release Notes
Date: Thursday, September 25, 2025
🚀 New Feature: Multi-AI Provider and Agent Support
We are excited to announce that testcato now supports multiple AI providers and agents,
bringing enhanced flexibility and customization for automated test result debugging.
Highlights
- Flexible AI Integration: Easily configure and switch between AI providers such as OpenAI GPT, Azure OpenAI, Anthropic, and more as they become available.
- Multiple Agent Support: Define multiple AI agents in your configuration file, each with distinct API keys, models, and endpoints, enabling tailored AI assistance per project or environment.
- Modular Architecture: Our design via llm_provider.py allows seamless addition of new AI providers and agents without modifying core logic.
- Improved Debugging: Leverage the best AI tools suited to your needs to get faster, more accurate insights into test failures.
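As an illustration of what a multi-agent configuration might look like, here is a hedged sketch. The field names and layout below are assumptions for illustration, not the documented schema; consult the repository documentation for the exact keys.

```yaml
# Hypothetical testcato_config.yaml layout -- key names are illustrative.
default_agent: openai-main

agents:
  openai-main:
    provider: openai
    model: gpt-4o
    api_key: ${OPENAI_API_KEY}
  azure-ci:
    provider: azure_openai
    model: gpt-4
    endpoint: https://my-resource.openai.azure.com
    api_key: ${AZURE_OPENAI_API_KEY}
```

Keeping credentials in environment variables (as sketched above) avoids committing secrets to version control.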
How to Use
- Update your testcato_config.yaml to include multiple AI agents and specify your preferred provider.
- Select or switch agents dynamically based on your workflow requirements.
- Extend support by implementing new agents in llm_provider.py following the provided interface.
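To make the extension point concrete, here is a minimal sketch of what implementing a new agent could look like. The base-class name, method signature, and registry below are assumptions for illustration; the actual interface is the one defined in llm_provider.py.

```python
# Hypothetical provider interface -- the real one lives in llm_provider.py.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Assumed shape of a provider: one method that explains a failure."""

    @abstractmethod
    def debug(self, traceback_text: str) -> str:
        """Return an AI-generated explanation for a failed test."""


class EchoProvider(LLMProvider):
    """Toy provider that wraps the traceback, purely for demonstration."""

    def debug(self, traceback_text: str) -> str:
        return f"Analysis of failure: {traceback_text}"


# New providers could then be registered and selected by name from the config.
PROVIDERS = {"echo": EchoProvider}


def get_provider(name: str) -> LLMProvider:
    return PROVIDERS[name]()


result = get_provider("echo").debug("AssertionError: expected 2, got 3")
```

A registry keyed by provider name keeps core logic untouched when a new agent is added, which is the design goal the release describes.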
Benefits
- Greater choice and control over AI-powered debugging.
- Ability to leverage the latest AI innovations from diverse providers.
- Enhanced reliability and customization for your CI/CD pipelines.
For detailed configuration and adding new AI providers, please refer to the updated documentation
in the repository.
Thank you for using testcato! We continue to improve your testing and debugging experience with
cutting-edge AI technology.
v1.2.3
Release Notes Highlights
- GPT Debugging Support: Automated test result debugging using GPT (OpenAI) models. Failed test results are analyzed and explained by the AI agent.
- Configurable AI Agent: Easily set up your GPT agent in testcato_config.yaml. Future support for other models is planned.
- Output Format Update: All test results and debug outputs are now stored in JSONL format (not XML).
- Human-Readable Reports: HTML debug reports are generated for easy review of AI explanations.
- Improved CLI Output: Important messages (debug results, HTML report location) are highlighted in green for visibility.
- Pre-commit Hooks: Added pre-commit configuration for black and flake8 (with E501 ignored) to enforce code style and linting.
- Codebase Cleanup: Fixed flake8 and black issues, removed unused imports, and improved comment styles.
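For reference, a typical pre-commit configuration matching the setup described above might look like the following. The revision pins are placeholders, not the ones used in the testcato repository.

```yaml
# Illustrative .pre-commit-config.yaml; rev values are placeholders.
repos:
  - repo: https://github.com/psf/black
    rev: 24.8.0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/flake8
    rev: 7.1.1
    hooks:
      - id: flake8
        args: ["--extend-ignore=E501"]
```

Passing `--extend-ignore=E501` tells flake8 to skip line-length errors, deferring line formatting to black.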
How to Use
- Configure your GPT agent in testcato_config.yaml.
- Run your tests with the --testcato option.
- Review JSONL and HTML debug reports in the testcato_result folder.
- Use pre-commit hooks to maintain code quality.
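Because the results are JSONL, each line of an output file is one standalone JSON object, so reports can be read line by line with the standard library. The field names in this sketch ("test", "status") are illustrative; the actual keys are defined by testcato's output format.

```python
import json

# Illustrative JSONL content; real files live in the testcato_result folder.
sample_jsonl = (
    '{"test": "test_login", "status": "failed"}\n'
    '{"test": "test_logout", "status": "passed"}\n'
)

# Parse one JSON object per non-empty line.
records = [json.loads(line) for line in sample_jsonl.splitlines() if line.strip()]
failed = [r["test"] for r in records if r["status"] == "failed"]
```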
v1.2.2
Release Notes: GPT Integration for Test Result Debugging
New Feature: AI-Powered Test Debugging with GPT
- Automated Debugging: Testcato now integrates with OpenAI GPT models to provide automated analysis and debugging of failed test results.
- Configurable Agent: Easily configure your GPT agent and API credentials in testcato_config.yaml.
- Seamless Workflow: After running your tests, failed test results are sent to GPT for first-level debugging. The AI agent analyzes tracebacks and provides actionable suggestions.
- Human-Readable Reports: Debug results are saved in both JSONL and HTML formats for easy review and sharing.
- CLI Highlighting: Errors and agent responses are clearly highlighted in the CLI for better visibility.
- Extensible Design: The integration is designed to support future AI agents and models.
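As a rough sketch of how a failed traceback might be sent to a GPT model, the snippet below builds a chat request with the official `openai` client. The prompt wording and model name are assumptions; testcato's actual request is constructed inside its own integration code.

```python
def build_debug_messages(traceback_text: str) -> list[dict]:
    """Assemble a chat-completion prompt for a failed test (illustrative wording)."""
    return [
        {"role": "system",
         "content": "You are a test-failure debugging assistant."},
        {"role": "user",
         "content": f"Explain this failing test and suggest a fix:\n{traceback_text}"},
    ]


def debug_with_gpt(traceback_text: str) -> str:
    """Send the traceback to a GPT model; needs the openai package and an API key."""
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; configure per your agent
        messages=build_debug_messages(traceback_text),
    )
    return resp.choices[0].message.content


messages = build_debug_messages("AssertionError: expected 200, got 500")
```

Separating prompt construction from the API call keeps the request testable without network access, which also eases swapping in other providers later.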
How to Use:
- Set up your GPT API credentials in testcato_config.yaml.
- Run your tests as usual.
- Review AI-generated debug suggestions in the output files and HTML report.