Releases: anurag03/testcato

v1.2.5

27 Sep 19:16
741e661

What's Changed

Full Changelog: v1.2.4...v1.2.5

v1.2.4

25 Sep 18:28
68a7043

testcato Release Notes

Date: Thursday, September 25, 2025

🚀 New Feature: Multi-AI Provider and Agent Support

We are excited to announce that testcato now supports multiple AI providers and agents,
bringing enhanced flexibility and customization for automated test result debugging.

Highlights

  • Flexible AI Integration: Easily configure and switch between AI providers such as OpenAI GPT,
    Azure OpenAI, Anthropic, and more as they become available.

  • Multiple Agent Support: Define multiple AI agents in your configuration file, each with
    distinct API keys, models, and endpoints, enabling tailored AI assistance per project or environment.

  • Modular Architecture: The provider layer in llm_provider.py lets new AI providers
    and agents be added seamlessly, without changes to the core logic.

  • Improved Debugging: Leverage the best AI tools suited for your needs to get faster, more accurate
    insights into test failures.
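The plug-in design described above could look roughly like the sketch below. The class and method names here (`LLMProvider`, `debug_failure`, `PROVIDERS`) are illustrative assumptions, not the actual interface in llm_provider.py.

```python
# Hypothetical sketch of a pluggable provider interface -- names are
# illustrative; the real llm_provider.py API may differ.
from abc import ABC, abstractmethod
from typing import Optional


class LLMProvider(ABC):
    """Base class that each AI provider implements."""

    def __init__(self, api_key: str, model: str, endpoint: Optional[str] = None):
        self.api_key = api_key
        self.model = model
        self.endpoint = endpoint

    @abstractmethod
    def debug_failure(self, traceback_text: str) -> str:
        """Return an AI-generated explanation for a failed test."""


class EchoProvider(LLMProvider):
    """Stand-in provider used here only to show the plug-in shape."""

    def debug_failure(self, traceback_text: str) -> str:
        first_line = traceback_text.splitlines()[0]
        return f"[{self.model}] analysis of: {first_line}"


# Adding a new provider means registering one class; core logic is untouched.
PROVIDERS = {"echo": EchoProvider}
```

With this shape, switching agents is just a lookup in the registry plus the per-agent credentials from the config file.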

How to Use

  • Update your testcato_config.yaml to include multiple AI agents and specify your preferred provider.

  • Select or switch agents dynamically based on your workflow requirements.

  • Extend support by implementing new agents in llm_provider.py following the provided interface.
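A multi-agent testcato_config.yaml might look like the sketch below; the key names shown are assumptions for illustration, so check the repository documentation for the actual schema.

```yaml
# Hypothetical configuration sketch -- key names are assumptions,
# not the documented testcato_config.yaml schema.
agents:
  openai-default:
    provider: openai
    model: gpt-4o
    api_key: ${OPENAI_API_KEY}
  azure-prod:
    provider: azure_openai
    model: gpt-4
    endpoint: https://example.openai.azure.com
    api_key: ${AZURE_OPENAI_KEY}
default_agent: openai-default
```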

Benefits

  • Greater choice and control over AI-powered debugging.

  • Ability to leverage the latest AI innovations from diverse providers.

  • Enhanced reliability and customization for your CI/CD pipelines.


For detailed configuration and adding new AI providers, please refer to the updated documentation
in the repository.

Thank you for using testcato! We continue to improve your testing and debugging experience with
cutting-edge AI technology.

v1.2.3

23 Sep 16:47
af2ee79

Release Notes Highlights

  • GPT Debugging Support: Automated test result debugging using GPT (OpenAI) models. Failed test results are analyzed and explained by the AI agent.
  • Configurable AI Agent: Easily set up your GPT agent in testcato_config.yaml. Future support for other models is planned.
  • Output Format Update: All test results and debug outputs are now stored in JSONL format (not XML).
  • Human-Readable Reports: HTML debug reports are generated for easy review of AI explanations.
  • Improved CLI Output: Important messages (debug results, HTML report location) are highlighted in green for visibility.
  • Pre-commit Hooks: Added pre-commit configuration for black and flake8 (with E501 ignored) to enforce code style and linting.
  • Codebase Cleanup: Fixed flake8 and black issues, removed unused imports, and improved comment styles.
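Based on the hooks listed above (black, and flake8 with E501 ignored), the added pre-commit configuration is presumably along these lines; the pinned revisions here are placeholders, not the repository's actual pins.

```yaml
# Sketch of a .pre-commit-config.yaml with black and flake8 (E501 ignored);
# the rev values are placeholders, not the repo's actual pins.
repos:
  - repo: https://github.com/psf/black
    rev: 24.8.0
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/flake8
    rev: 7.1.1
    hooks:
      - id: flake8
        args: ["--extend-ignore=E501"]
```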

How to Use

  1. Configure your GPT agent in testcato_config.yaml.
  2. Run your tests with the --testcato option.
  3. Review JSONL and HTML debug reports in the testcato_result folder.
  4. Use pre-commit hooks to maintain code quality.
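Since debug output is stored as JSONL (step 3), each line is one standalone JSON object, so a report can be read line by line. A minimal reader might look like this; the field names (`test`, `status`, `explanation`) are hypothetical, as the actual record schema isn't documented here.

```python
import json
from io import StringIO

# Hypothetical JSONL debug output -- field names are assumptions,
# not testcato's documented schema.
sample = StringIO(
    '{"test": "test_login", "status": "failed", "explanation": "Mock not patched"}\n'
    '{"test": "test_cart", "status": "failed", "explanation": "Off-by-one in total"}\n'
)

# JSONL: parse one JSON object per non-empty line.
records = [json.loads(line) for line in sample if line.strip()]
for rec in records:
    print(f"{rec['test']}: {rec['explanation']}")
```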

v1.2.2

23 Sep 13:36
bc62368

Release Notes: GPT Integration for Test Result Debugging
New Feature: AI-Powered Test Debugging with GPT

  • Automated Debugging: Testcato now integrates with OpenAI GPT models to provide automated analysis and debugging of failed test results.
  • Configurable Agent: Easily configure your GPT agent and API credentials in testcato_config.yaml.
  • Seamless Workflow: After running your tests, failed test results are sent to GPT for first-level debugging. The AI agent analyzes tracebacks and provides actionable suggestions.
  • Human-Readable Reports: Debug results are saved in both JSONL and HTML formats for easy review and sharing.
  • CLI Highlighting: Errors and agent responses are clearly highlighted in the CLI for better visibility.
  • Extensible Design: The integration is designed to support future AI agents and models.

How to Use:

  1. Set up your GPT API credentials in testcato_config.yaml.
  2. Run your tests as usual.
  3. Review AI-generated debug suggestions in the output files and HTML report.

v1.2.1

22 Sep 17:59
9e36220

Merge pull request #23 from anurag03/ansinha/ai_agent_debug

Ansinha/ai agent debug

v1.2.0

22 Sep 16:37
9cf673a

Merge pull request #22 from anurag03/ansinha/entrypoint

Updating entrypoint

v1.1.9

22 Sep 16:22
101a120

Merge pull request #21 from anurag03/ansinha/ai_conf

Adding config details and updated README

v1.1.8

21 Sep 17:52
b582501

Merge pull request #19 from anurag03/ansinha/python-fix5

Version

v1.1.7

21 Sep 17:48
f58d11c

Merge pull request #18 from anurag03/ansinha/python-fix4

Version

v1.1.6

21 Sep 17:42
7240c8b

Merge pull request #17 from anurag03/ansinha/python-fix3

Version