Releases: smkrv/ha-text-ai
v2.4.0
What's New in 2.4.0
🔧 Architecture & Code Quality
- Coordinator refactoring: Extracted `HistoryManager` and `MetricsManager` into separate modules via composition pattern
- Provider registry: Centralized provider dispatch through `providers.py` with `check_path`, `auth_header`, `auth_prefix`
- Utility extraction: Shared helpers moved to `utils.py` (`normalize_name`, etc.)
- Removed dead code: Eliminated unused `is_anthropic` flag and the `CONF_IS_ANTHROPIC`, `DEFAULT_TIMEOUT`, `API_TIMEOUT` constants
- Modern Python: Added `from __future__ import annotations` to all modules
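The composition refactor above can be illustrated with a minimal sketch: the coordinator owns manager objects and delegates to them rather than inheriting their behavior. Class and method names here are illustrative, not the integration's actual API.

```python
# Hypothetical sketch of the composition pattern: the coordinator holds
# extracted manager objects and delegates, instead of owning all logic.

class HistoryManager:
    """Tracks conversation entries (illustrative)."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def add(self, question: str, response: str) -> None:
        self.entries.append({"question": question, "response": response})


class MetricsManager:
    """Tracks request counters (illustrative)."""

    def __init__(self) -> None:
        self.total_requests = 0

    def record_request(self) -> None:
        self.total_requests += 1


class Coordinator:
    """Owns managers via composition rather than inheritance."""

    def __init__(self) -> None:
        self.history = HistoryManager()
        self.metrics = MetricsManager()

    def process(self, question: str, response: str) -> None:
        self.metrics.record_request()
        self.history.add(question, response)
```

Composition keeps each module independently testable and keeps the coordinator class small.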
🐛 Bug Fixes
- HA 2024.11+ compatibility: Removed deprecated `OptionsFlow.__init__`, switched `FlowResult` → `ConfigFlowResult`
- HA 2024.8+ compatibility: Pass `config_entry` parameter to `DataUpdateCoordinator.__init__`
- Gemini timeout fix: Added `asyncio.timeout` specifically for Gemini (uses the sync SDK via `asyncio.to_thread`, not aiohttp)
- Removed dual timeout stacking: Eliminated redundant `asyncio.timeout` wrapper in `_send_to_api`; timeout is now handled per provider
- Shallow copy mutation: Fixed `async_get_history` with `include_metadata=True` mutating the original conversation history
- Config flow dedup: Added `async_set_unique_id()` + `_abort_if_unique_id_configured()` for config entry deduplication
- Instance lookup: `get_coordinator_by_instance` now matches by `normalized_name` too
- Merged duplicate methods: Combined `async_ask_question`/`async_process_question` into a single method
- History dir creation: Added defensive `os.makedirs` in the write test to prevent errno 2 on fresh installs
- Sensor attributes > 16 KB: Reduced attribute sizes to fit within the HA Recorder limit (16384 bytes). Conversation history preview: 3 entries × 256 chars; last response/question: 2048 chars; system prompt: 512 chars. Full data remains available via the `get_history` service
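The attribute-size fix above can be sketched as simple per-field truncation that keeps the serialized attributes under the Recorder limit. This is a minimal illustration using the caps listed in the release note; the function name and attribute keys are assumptions, not the integration's actual code.

```python
# Minimal sketch: cap each sensor attribute so the serialized dict stays
# under Home Assistant's Recorder limit of 16384 bytes.
import json

MAX_RESPONSE_CHARS = 2048        # last response/question cap
MAX_PROMPT_CHARS = 512           # system prompt cap
HISTORY_PREVIEW_ENTRIES = 3      # history preview entry count
HISTORY_PREVIEW_CHARS = 256      # chars per preview field
RECORDER_LIMIT_BYTES = 16384


def build_attributes(last_response: str, system_prompt: str,
                     history: list[dict]) -> dict:
    attrs = {
        "last_response": last_response[:MAX_RESPONSE_CHARS],
        "system_prompt": system_prompt[:MAX_PROMPT_CHARS],
        "history_preview": [
            {"question": h["question"][:HISTORY_PREVIEW_CHARS],
             "response": h["response"][:HISTORY_PREVIEW_CHARS]}
            for h in history[-HISTORY_PREVIEW_ENTRIES:]
        ],
    }
    # Sanity check: serialized attributes must fit the Recorder limit.
    assert len(json.dumps(attrs).encode("utf-8")) <= RECORDER_LIMIT_BYTES
    return attrs
```

Full, untruncated data stays accessible through the `get_history` service rather than through sensor attributes.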
🔒 Security
- Credential sanitization: Added regex patterns to strip Bearer tokens, sk- keys, x-api-key headers, and Gemini AIza patterns from error messages
- Disk exhaustion prevention: Per-entry 32KB storage cap + archive cleanup (max 3 archive files)
- SSRF validation: Provider endpoint validation via registry
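The credential-sanitization approach described above can be sketched with a small set of regex substitutions. The exact patterns the integration uses may differ; these are illustrative approximations of the token formats named in the release note.

```python
# Illustrative sketch: strip credential-like substrings from error
# messages before they reach logs or the UI.
import re

_SECRET_PATTERNS = [
    re.compile(r"Bearer\s+[A-Za-z0-9._\-]+"),   # Authorization bearer tokens
    re.compile(r"sk-[A-Za-z0-9]{8,}"),          # OpenAI-style secret keys
    re.compile(r"(?i)x-api-key:\s*\S+"),        # x-api-key headers
    re.compile(r"AIza[0-9A-Za-z_\-]{20,}"),     # Google API key prefix
]


def sanitize(message: str) -> str:
    """Replace credential-like substrings with a redaction marker."""
    for pattern in _SECRET_PATTERNS:
        message = pattern.sub("[REDACTED]", message)
    return message
```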
📝 Documentation
- Updated README: Current model names (Claude 4.6, GPT-5, Gemini 3.1, DeepSeek-V3)
- Removed outdated YAML config: Integration is `config_entry_only`
- Fixed sensor attributes: Removed the deleted API status attribute, updated the conversation history display count
📜 License
- Changed license from CC BY-NC-SA 4.0 to PolyForm Noncommercial 1.0.0
🌐 Translations
- Added `api_key_required` error key across all 8 languages
- Removed dead translation keys (`invalid_characters`, `queued`)
- Added missing sensor state attribute translations
v2.3.0 Pre-release
What's New in 2.3.0
🚀 New Features
- Structured JSON Output: New `structured_output` and `json_schema` parameters for the `ask_question` service
- Google Gemini Support: Full integration with the Google Gemini API via the google-genai library
- Enhanced Service Response: The `ask_question` service now returns structured response data including tokens used, model info, and timestamps
- Edit Integration Settings: You can now change the provider, API key, endpoint, and model for existing integrations without recreating them
  - Two-step options flow: first select the provider, then configure connection settings
  - Automatic default endpoint and model when switching providers
  - Integration auto-reloads after saving changes
🐛 Bug Fixes
- Fixed config flow error: Resolved '500 Internal Server Error' when editing existing integrations (HA 2024.1+ compatibility)
- Fixed OptionsFlowHandler initialization for modern Home Assistant versions
📦 Dependencies
- Added `google-genai>=1.16.0` for Gemini support
⚠️ Pre-release Notice
This is a pre-release version for testing. Please report any issues on GitHub.
v2.2.0
Release v2.2.0: Configurable API Timeout
🚀 New Feature
Configurable API Timeout (#8)
- Custom Timeout Setting: Added a new `api_timeout` configuration option allowing users to set the API request timeout from 5 to 600 seconds (default: 30 seconds)
- Local LLM Support: Enables use of slower local LLM instances such as Ollama that need more than 30 seconds to generate responses
- Full UI Integration: Timeout can be configured during initial setup and modified via Options Flow without reconfiguration
🔧 Technical Details
Configuration Options
| Parameter | Type | Range | Default |
|---|---|---|---|
| `api_timeout` | integer | 5-600 seconds | 30 seconds |
Files Modified
- `const.py`: Added timeout constants (`CONF_API_TIMEOUT`, `DEFAULT_API_TIMEOUT`, `MIN_API_TIMEOUT`, `MAX_API_TIMEOUT`)
- `config_flow.py`: Added timeout field to the provider form and options flow
- `api_client.py`: Parameterized timeout in the API client
- `coordinator.py`: Use the configurable timeout in message processing
- `__init__.py`: Read and pass the timeout from configuration
Breaking Changes
- None - this release maintains full backward compatibility
- Existing configurations will use the default 30-second timeout
🌐 Internationalization
Updated Translations
- 8 Language Support: Added translations for new timeout setting in English, Russian, German, Spanish, Italian, Hindi, Serbian, and Chinese
📋 Usage Example
```yaml
# In Home Assistant configuration or via UI
api_timeout: 120  # 2 minutes for slower local models
```

🙏 Community
Thanks to @Skyview79 for the feature request (#8)!
Full Changelog: v2.1.9...v2.2.0
v2.1.9
Release v2.1.9: Response Variables Support and Production Audit
🚀 Major Features
Response Variables Support
- Direct AI Response Access: Eliminates the 255-character sensor limitation by returning response data directly from service calls
- Enhanced Service Architecture: Improved async processing with metadata-rich responses including tokens used, model information, and timestamps
- Backward Compatibility: Existing sensor-based workflows continue to work seamlessly
Enhanced Service Framework
- Metadata-Rich Responses: Services now return comprehensive data including response text, token usage, model used, processing time, and success indicators
- Improved Error Handling: Better exception handling with detailed error messages and proper error propagation
🔒 Security & Performance
Security Enhancements
- API Key Protection: Enhanced API key handling with improved sanitization in logs
- Secure Logging: Removed sensitive data from debug logs while maintaining useful debugging information
- Input Validation: Enhanced parameter validation with type checking
Performance Optimizations
- Context Managers: Added proper resource management with async context managers
- Atomic File Operations: Implemented atomic file writes with backup and corruption handling
- Memory Management: Improved memory usage monitoring and cleanup
- Async Improvements: Better async/await patterns with proper semaphore usage
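The atomic file writes mentioned above typically follow a standard pattern: write to a temporary file in the same directory, flush and fsync it, then `os.replace()` it into place so readers never observe a partially written file. This is a minimal sketch of that pattern, not the integration's actual code.

```python
# Minimal atomic-write sketch: temp file + os.replace() in one directory.
import json
import os
import tempfile


def atomic_write_json(path: str, data: dict) -> None:
    """Write JSON to `path` so readers never see a partial file."""
    dirname = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)  # clean up the temp file on failure
        raise
```

Writing the temp file in the target directory matters: `os.replace()` is only atomic within a single filesystem.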
🌐 Internationalization
Updated Translations
- 8 Language Support: Enhanced localization for English, Russian, German, Spanish, Italian, Hindi, Serbian, and Chinese
- Response Variables Documentation: Updated all language files with new functionality descriptions
- Consistent Terminology: Standardized technical terms across all translations
✅ Compliance & Quality
Home Assistant Compliance
- Hassfest Validation: Fixed all hassfest validation errors in services.yaml
- Service Schema Compliance: Added required target configurations for all services
- Standards Adherence: Ensured full compliance with Home Assistant integration standards
Production Readiness
- Comprehensive Code Audit: Full security and performance review of all components
- Quality Improvements: Enhanced code quality with better error handling, logging, and resource management
- Testing: Improved reliability through better exception handling and edge case coverage
📋 Technical Details
Breaking Changes
- None - this release maintains full backward compatibility
New Service Response Format
Services now return structured data:
```yaml
response_text: "AI response content"
tokens_used: 150
model_used: "gpt-4"
processing_time: 2.3
success: true
timestamp: "2025-01-09T17:30:00Z"
```

The enhanced `get_history` service now supports advanced filtering and sorting options:
```yaml
service: ha_text_ai.get_history
data:
  instance: sensor.ha_text_ai_my_assistant
  limit: 10
  filter_model: "gemini-2.0-flash"
  start_date: "2025-01-01T00:00:00Z"
  include_metadata: true
  sort_order: "newest"
```

Migration Guide
- Existing Users: No action required - existing automations continue to work
- New Features: Use response variables in automations to access AI responses directly
- Documentation: Updated README with comprehensive examples and migration guide
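To show what "use response variables in automations" looks like in practice, here is a hedged example using Home Assistant's `response_variable` mechanism. The entity IDs, notify service, and prompt are illustrative placeholders, not names shipped by the integration.

```yaml
# Illustrative automation: capture the service response directly
automation:
  - alias: "Morning AI summary"
    trigger:
      - platform: time
        at: "07:00:00"
    action:
      - service: ha_text_ai.ask_question
        data:
          instance: sensor.ha_text_ai_my_assistant
          question: "Summarize today's weather in one sentence."
        response_variable: ai_result
      - service: notify.mobile_app_my_phone
        data:
          message: "{{ ai_result.response_text }}"
```

Because the response is read from the service call itself, the 255-character sensor state limit never applies.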
🔧 Developer Notes
API Changes
- Enhanced service return values with metadata
- Improved error response structure
- Better async resource management
Dependencies
- Updated minimum requirements for better security and performance
- All dependencies verified for compatibility
Full Changelog: v2.1.8...v2.1.9
v2.1.8
Release v2.1.8: Response Variables Support and Production Audit
🚀 Major Features
Response Variables Support
- Direct AI Response Access: Eliminates the 255-character sensor limitation by returning response data directly from service calls
- Enhanced Service Architecture: Improved async processing with metadata-rich responses including tokens used, model information, and timestamps
- Backward Compatibility: Existing sensor-based workflows continue to work seamlessly
Enhanced Service Framework
- Metadata-Rich Responses: Services now return comprehensive data including response text, token usage, model used, processing time, and success indicators
- Improved Error Handling: Better exception handling with detailed error messages and proper error propagation
🔒 Security & Performance
Security Enhancements
- API Key Protection: Enhanced API key handling with improved sanitization in logs
- Secure Logging: Removed sensitive data from debug logs while maintaining useful debugging information
- Input Validation: Enhanced parameter validation with type checking
Performance Optimizations
- Context Managers: Added proper resource management with async context managers
- Atomic File Operations: Implemented atomic file writes with backup and corruption handling
- Memory Management: Improved memory usage monitoring and cleanup
- Async Improvements: Better async/await patterns with proper semaphore usage
🌐 Internationalization
Updated Translations
- 8 Language Support: Enhanced localization for English, Russian, German, Spanish, Italian, Hindi, Serbian, and Chinese
- Response Variables Documentation: Updated all language files with new functionality descriptions
- Consistent Terminology: Standardized technical terms across all translations
✅ Compliance & Quality
Home Assistant Compliance
- Hassfest Validation: Fixed all hassfest validation errors in services.yaml
- Service Schema Compliance: Added required target configurations for all services
- Standards Adherence: Ensured full compliance with Home Assistant integration standards
Production Readiness
- Comprehensive Code Audit: Full security and performance review of all components
- Quality Improvements: Enhanced code quality with better error handling, logging, and resource management
- Testing: Improved reliability through better exception handling and edge case coverage
📋 Technical Details
Breaking Changes
- None - this release maintains full backward compatibility
New Service Response Format
Services now return structured data:
```yaml
response_text: "AI response content"
tokens_used: 150
model_used: "gpt-4"
processing_time: 2.3
success: true
timestamp: "2025-01-09T17:30:00Z"
```

Migration Guide
- Existing Users: No action required - existing automations continue to work
- New Features: Use response variables in automations to access AI responses directly
- Documentation: Updated README with comprehensive examples and migration guide
🔧 Developer Notes
API Changes
- Enhanced service return values with metadata
- Improved error response structure
- Better async resource management
Dependencies
- Updated minimum requirements for better security and performance
- All dependencies verified for compatibility
Full Changelog: v2.1.7...v2.1.8
v2.1.7
v2.1.7 Release Notes:
This release was accidentally deleted and has now been restored. Apologies for any inconvenience.
New Features
- Added Google Gemini support
Google Gemini Integration Update
- Fixed issue #6 by upgrading to google-genai 1.16.0
- Improved endpoint configuration with abstract API and custom endpoint option
- Refactored internal Gemini integration classes for better stability
- Added Google Gemini support, thanks to DJAMIRSAM for testing and bug reports
Bug Fixes
- Google Gemini Integration: Fixed compatibility issues with Google Gemini API (#6)
For more details on the integration, check out the discussion on the Home Assistant Community forum
🚀 Google Gemini Integration
- Support for Google's Gemini
- Enhanced natural language processing for Home Assistant automations
- Optimized for smart home control and information queries
🔧 Technical Implementation
- Seamless integration with existing HA Text AI conversation flows
Getting Started with Gemini
To configure the new Gemini provider:
- Update directly to v2.1.7 or through HACS
- Configure your Google Gemini API key in the integration settings
- Select Gemini as your preferred model in HA Text AI configuration
What's Changed
- Add Gemini API provider support to HA Text AI integration
Full Changelog: v2.1.6...v2.1.7
v2.1.6
v2.1.6 Release Notes:
New Features
- Added Google Gemini support, thanks to @Azzedde
- Enhanced AI capabilities through Google's latest language model
Bug Fixes
- Google Gemini Integration: Fixed compatibility issues with Google Gemini API by correcting field name format and improving message handling to meet API requirements. This should resolve error responses when using Gemini models. (#6)
For more details on the integration, check out the discussion on the Home Assistant Community forum
🚀 Google Gemini Integration
- Support for Google's Gemini
- Enhanced natural language processing for Home Assistant automations
- Optimized for smart home control and information queries
🔧 Technical Implementation
- Seamless integration with existing HA Text AI conversation flows
Getting Started with Gemini
To configure the new Gemini provider:
- Update directly to v2.1.6 or through HACS
- Configure your Google Gemini API key in the integration settings
- Select Gemini as your preferred model in HA Text AI configuration
What's Changed
- Add Gemini API provider support to HA Text AI integration
Full Changelog: v2.1.5...v2.1.6
v2.1.5
v2.1.5 Release Notes:
New Features
- Added Google Gemini support, thanks to @Azzedde
- Enhanced AI capabilities through Google's latest language model
Bug Fixes
- Google Gemini Integration: Fixed compatibility issues with Google Gemini API by correcting field name format from camelCase to snake_case and improving message handling to meet API requirements. This resolves error responses when using Gemini models. (#6)
For more details on the integration, check out the discussion on the Home Assistant Community forum
🚀 Google Gemini Integration
- Support for Google's Gemini
- Enhanced natural language processing for Home Assistant automations
- Optimized for smart home control and information queries
🔧 Technical Implementation
- Seamless integration with existing HA Text AI conversation flows
Getting Started with Gemini
To configure the new Gemini provider:
- Update directly to v2.1.5 or through HACS
- Configure your Google Gemini API key in the integration settings
- Select Gemini as your preferred model in HA Text AI configuration
What's Changed
- Add Gemini API provider support to HA Text AI integration
Full Changelog: v2.1.4...v2.1.5
v2.1.4
v2.1.4 Release Notes:
New Features
- Added Google Gemini support, thanks to @Azzedde
- Enhanced AI capabilities through Google's latest language model
Bug Fixes
- Maintains full functional compatibility with v2.1.2
Previous Version Features (v2.1.2)
- Resolved UI-level token limit calculation bug
- Maintained full functional compatibility with v2.1.1
For more details on the integration, check out the discussion on the Home Assistant Community forum
🚀 Google Gemini Integration
- Support for Google's Gemini
- Enhanced natural language processing for Home Assistant automations
- Optimized for smart home control and information queries
🔧 Technical Implementation
- Seamless integration with existing HA Text AI conversation flows
Getting Started with Gemini
To configure the new Gemini provider:
- Update directly to v2.1.4 or through HACS
- Configure your Google Gemini API key in the integration settings
- Select Gemini as your preferred model in HA Text AI configuration
What's Changed
New Contributors
Full Changelog: v2.1.2...v2.1.4
v2.1.2
v2.1.2 Release Notes:
Bug Fixes
- Resolved UI-level token limit calculation bug
- Maintains full functional compatibility with v2.1.1
Previous Version Features (v2.1.1)
For more details on the integration, check out the discussion on the Home Assistant Community forum
🔄 Major Architectural Changes
- Complete refactoring of token handling mechanism
- Elimination of custom token calculation approach
- Direct `max_tokens` parameter passing to LLM APIs
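The "direct passing" change above can be sketched simply: instead of computing a custom token budget, the configured limit is forwarded verbatim in the request body. The payload shape follows the widely used OpenAI-compatible chat completions format; the function name is illustrative.

```python
# Sketch: forward the configured max_tokens unchanged to the LLM API.
def build_payload(model: str, messages: list[dict], max_tokens: int) -> dict:
    return {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,  # no client-side recalculation
    }
```

Letting the provider enforce the limit avoids per-model tokenizer assumptions and works uniformly across providers and large-context models.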
🎯 Key Technical Improvements
- Enhanced cross-provider compatibility
- Expanded support for large-context language models
- Robust and predictable token limit management
- Significant codebase simplification
- DeepSeek provider full integration
Provider Updates
DeepSeek — NEW Integration
DeepSeek is a cutting-edge AI provider specializing in advanced language models optimized for both conversational and reasoning tasks. This integration brings:
- High-performance model inference
- Cost-effective API endpoints
- Enterprise-grade reliability
- Flexible deployment options
Full Changelog: v2.1.1...v2.1.2