Releases: FlorinPeter/am-todos
v2.1.4
fix: Increase proxy mode timeout from 30s to 4 minutes for AI operations
- Server proxy timeout: 30s → 4 minutes (configurable via PROXY_AI_TIMEOUT_MS)
- Frontend AI timeout: 2 minutes → 4 minutes for all AI operations
- Handles 3+ minute AI responses in proxy mode without timing out
- Added environment variable PROXY_AI_TIMEOUT_MS for deployment flexibility
- Enhanced timeout error messages with duration details
- Updated tests to match new timeout values

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
v2.1.3
🚀 Agentic Markdown Todos v2.1.3
🐛 Critical Bug Fix: Proxy Mode Timeout Issue
Problem Resolved
Fixed a critical timeout issue where proxy mode AI requests were timing out at 30 seconds while AI processing could take up to 3+ minutes, causing request failures in local AI proxy configurations.
Key Fixes
- Server Proxy Timeout: Increased from 30 seconds → 4 minutes (configurable via PROXY_AI_TIMEOUT_MS)
- Frontend AI Timeout: Increased from 2 minutes → 4 minutes for all AI operations
- Environment Variable Support: Added PROXY_AI_TIMEOUT_MS for deployment flexibility
- Enhanced Error Messages: Timeout errors now include duration details for better debugging
Technical Details
```shell
# Configure custom timeout (optional)
export PROXY_AI_TIMEOUT_MS=300000 # 5 minutes
```

Before: Browser → Server → Proxy (30s timeout) ❌ AI Response (3+ minutes)
After: Browser → Server → Proxy (4min timeout) ✅ AI Response (3+ minutes)
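The configurable timeout described above can be sketched as a small helper (an illustrative sketch, not the shipped server code; `resolveAiTimeout` is a hypothetical name):

```javascript
const DEFAULT_AI_TIMEOUT_MS = 4 * 60 * 1000; // 4 minutes, the new default

// Resolve the proxy AI timeout from the environment, falling back to the
// default when PROXY_AI_TIMEOUT_MS is unset or not a positive number.
function resolveAiTimeout(env = process.env) {
  const parsed = Number(env.PROXY_AI_TIMEOUT_MS);
  return Number.isFinite(parsed) && parsed > 0 ? parsed : DEFAULT_AI_TIMEOUT_MS;
}
```

With `PROXY_AI_TIMEOUT_MS=300000` this yields a 5-minute window, matching the export shown above; invalid values fall back to the 4-minute default rather than failing.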
Impact
- ✅ Local AI Proxy: Now handles long-running AI operations without timeout
- ✅ Large Model Support: Compatible with slower local AI models
- ✅ Enterprise Deployments: Configurable timeouts for different environments
- ✅ Backward Compatibility: No breaking changes for existing users
🏠 Major Feature: Local AI Proxy Implementation
Revolutionary AI Independence
This release includes the complete Local AI Proxy as a first-class AI provider, enabling full data sovereignty and AI processing independence:
- 🔐 Complete Data Control: Process ALL AI requests locally without external service dependency
- 💰 Cost Elimination: Avoid external API costs with local AI infrastructure
- 🏢 Enterprise Compliance: Meet strict data governance requirements
- 🛠️ Multi-Platform Support: Compatible with LMStudio, Ollama, and custom OpenAI endpoints
Key Technical Features
- Provider-Based Architecture: Simple, reliable AI request routing
- Real-Time Connection Monitoring: Live status validation with 30-second health checks
- WebSocket Infrastructure: Efficient proxy communication system
- User-Specific Authentication: UUID + localToken security model
- Enhanced Setup UI: Tabbed configuration wizard with comprehensive guidance
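The UUID + localToken model can be illustrated with a minimal check (a hypothetical sketch; function and field names are assumptions, not the actual implementation):

```javascript
// Registry of connected proxies, keyed by the user's proxy UUID.
const proxyRegistry = new Map();

// A request is only routed to a proxy when the UUID is known AND the
// per-user localToken matches the registered credential.
function isAuthorizedProxyRequest(uuid, localToken, registry = proxyRegistry) {
  const entry = registry.get(uuid);
  return entry !== undefined && entry.localToken === localToken;
}
```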
Docker Integration
```shell
# LMStudio Configuration
docker run -d --name am_todos_proxy \
  -v ./proxy-config:/config \
  -e LOCAL_PROXY_MODE=true \
  -e MAIN_SERVER_TOKEN=<your-token> \
  -e MAIN_SERVER_URL=wss://your-app.run.app/proxy-ws \
  -e LOCAL_AI_ENDPOINT=http://host.docker.internal:1234 \
  -e LOCAL_AI_MODEL=mistralai/mistral-small-3.2 \
  ghcr.io/florinpeter/am-todos:v2.1.3
```

📊 Quality Assurance
Comprehensive Testing
- 1,017 Tests Passing: Complete test suite validation
- 79.4% Code Coverage: Extensive coverage across all components
- Timeout Fix Validation: All fetchWithTimeout tests updated and passing
- Multi-Platform Testing: Node.js 20.x and 22.x validation
Production Ready
- Enhanced Error Handling: Comprehensive timeout error states with recovery guidance
- Performance Optimized: Proper timeout handling without request race conditions
- Mobile Responsive: Complete mobile-first design with touch-friendly interface
🌟 Business Impact
For Organizations
- Data Sovereignty: Complete control over AI processing and data
- Compliance Ready: Meet strict enterprise security requirements
- Cost Control: Eliminate external AI API expenses
- Infrastructure Independence: Use existing local AI investments
- Configurable Timeouts: Adapt to different deployment scenarios
For Users
- Seamless Experience: Local proxy works identically to cloud providers
- Real-Time Feedback: Live connection status and troubleshooting guidance
- Flexible Setup: Support for popular local AI platforms
- Enhanced Security: User-specific proxy routing with credential validation
- Reliable AI Processing: No more timeout failures on complex AI tasks
🔄 Migration Notes
Automatic Updates
- Timeout Improvements: Automatic benefit from increased timeout values
- Settings Migration: Automatic localStorage updates
- Test Validation: All existing functionality preserved and improved
New Users
- Select AI Provider: Choose "Local Proxy" alongside Gemini and OpenRouter
- Deploy Proxy: Use provided Docker commands for your AI platform
- Configure Credentials: Copy UUID and localToken from proxy settings
- Test Connection: Validate setup with real-time status monitoring
- Start Using: All AI features work seamlessly with local processing
🏆 Complete Success: This release fixes critical timeout issues while maintaining all the revolutionary Local AI Proxy features, providing unprecedented control over AI processing with enterprise-grade reliability and security.
📈 Next Steps: Enhanced monitoring, multi-proxy support, and advanced local AI model management coming in future releases.
v2.1.2
🚀 Agentic Markdown Todos v2.1.2
🏠 Major Feature: Local AI Proxy Implementation
Revolutionary AI Independence
This release introduces Local AI Proxy as a first-class AI provider, enabling complete data sovereignty and AI processing independence:
- 🔐 Complete Data Control: Process ALL AI requests locally without external service dependency
- 💰 Cost Elimination: Avoid external API costs with local AI infrastructure
- 🏢 Enterprise Compliance: Meet strict data governance requirements
- 🛠️ Multi-Platform Support: Compatible with LMStudio, Ollama, and custom OpenAI endpoints
Key Technical Features
- Provider-Based Architecture: Simple, reliable AI request routing
- Real-Time Connection Monitoring: Live status validation with 30-second health checks
- WebSocket Infrastructure: Efficient proxy communication system
- User-Specific Authentication: UUID + localToken security model
- Enhanced Setup UI: Tabbed configuration wizard with comprehensive guidance
Docker Integration
# LMStudio Configuration
docker run -d --name am_todos_proxy \
-v ./proxy-config:/config \
-e LOCAL_PROXY_MODE=true \
-e MAIN_SERVER_TOKEN=<your-token> \
-e MAIN_SERVER_URL=wss://your-app.run.app/proxy-ws \
-e LOCAL_AI_ENDPOINT=http://host.docker.internal:1234 \
-e LOCAL_AI_MODEL=mistralai/mistral-small-3.2 \
ghcr.io/florinpeter/am-todos:latest🐛 Critical Bug Fixes
Local Proxy Configuration Loop Fix
- Fixed: Infinite loop during "Proxy UUID" entry that prevented "Local Token" input
- Root Cause: Cascading useEffect dependencies creating dependency chain loops
- Solution: Enhanced validation guards and proper dependency management
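The guard pattern behind this kind of fix can be reduced to a few lines (illustrative only; the real component uses React useEffect dependencies, which this plain-JavaScript reduction only approximates):

```javascript
// Only produce a new state object when the value actually changed.
// Returning the same reference breaks the update → effect → update chain,
// because an unchanged dependency no longer re-triggers the effect.
function applyIfChanged(state, key, value) {
  if (state[key] === value) return state; // no-op: the loop is broken here
  return { ...state, [key]: value };
}
```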
Component Naming Consistency
- Updated: GitSettings.tsx → GeneralSettings.tsx for semantic accuracy
- Fixed: TypeScript import conflicts between component and type names
- Improved: All references updated across 13+ files for consistency
🏗️ Architecture Improvements
Simplified & Efficient Design
- 99% Reduction: Eliminated complex content analysis overhead
- Net Code Reduction: -89 lines across 17 files with enhanced functionality
- Modular Structure: New server architecture with dedicated route modules
- Enhanced Security: Multi-layer authentication and validation system
New Server Structure
```
server/
├── config/      # Application and proxy configuration
├── middleware/  # Authentication and security middleware
├── routes/      # Modular API routes (ai, git, proxy, search)
├── services/    # Local proxy and proxy manager services
├── utils/       # Authentication utilities
└── websocket/   # WebSocket communication handlers
```
📊 Quality Assurance
Comprehensive Testing
- 1,017 Tests Passing: Complete test suite validation
- 79.4% Code Coverage: Extensive coverage across all components
- CI/CD Integration: Automated testing with GitHub Actions
- Multi-Platform Testing: Node.js 20.x and 22.x validation
Production Ready
- Zero Breaking Changes: Full backward compatibility maintained
- Enhanced Error Handling: Comprehensive error states with recovery guidance
- Performance Optimized: Faster routing decisions without content analysis
- Mobile Responsive: Complete mobile-first design with touch-friendly interface
🌟 Business Impact
For Organizations
- Data Sovereignty: Complete control over AI processing and data
- Compliance Ready: Meet strict enterprise security requirements
- Cost Control: Eliminate external AI API expenses
- Infrastructure Independence: Use existing local AI investments
For Users
- Seamless Experience: Local proxy works identically to cloud providers
- Real-Time Feedback: Live connection status and troubleshooting guidance
- Flexible Setup: Support for popular local AI platforms
- Enhanced Security: User-specific proxy routing with credential validation
📚 Documentation
New Documentation
- Local AI Proxy Guide: Complete implementation documentation
- Enhanced Development Setup: Improved restart scripts with test tokens
- Architecture Overview: Detailed technical implementation guide
- Security Model: Multi-layer authentication and authorization documentation
🔄 Migration Notes
Automatic Updates
- Seamless Upgrade: No breaking changes for existing users
- Settings Migration: Automatic localStorage updates
- Frontmatter Simplification: Reduced to tags-only structure
- Test Validation: All existing functionality preserved
New Users
- Select AI Provider: Choose "Local Proxy" alongside Gemini and OpenRouter
- Deploy Proxy: Use provided Docker commands for your AI platform
- Configure Credentials: Copy UUID and localToken from proxy settings
- Test Connection: Validate setup with real-time status monitoring
- Start Using: All AI features work seamlessly with local processing
🏆 Complete Success: This release transforms AM-Todos into a truly data-sovereign AI-powered task management system, providing unprecedented control over AI processing while maintaining seamless user experience and enterprise-grade security.
📈 Next Steps: Enhanced monitoring, multi-proxy support, and advanced local AI model management coming in future releases.
v2.1.1
🚀 Agentic Markdown Todos v2.1.1
🏠 Major Feature: Local AI Proxy Implementation
Revolutionary AI Independence
This release introduces Local AI Proxy as a first-class AI provider, enabling complete data sovereignty and AI processing independence:
- 🔐 Complete Data Control: Process ALL AI requests locally without external service dependency
- 💰 Cost Elimination: Avoid external API costs with local AI infrastructure
- 🏢 Enterprise Compliance: Meet strict data governance requirements
- 🛠️ Multi-Platform Support: Compatible with LMStudio, Ollama, and custom OpenAI endpoints
Key Technical Features
- Provider-Based Architecture: Simple, reliable AI request routing
- Real-Time Connection Monitoring: Live status validation with 30-second health checks
- WebSocket Infrastructure: Efficient proxy communication system
- User-Specific Authentication: UUID + localToken security model
- Enhanced Setup UI: Tabbed configuration wizard with comprehensive guidance
Docker Integration
# LMStudio Configuration
docker run -d --name am_todos_proxy \
-v ./proxy-config:/config \
-e LOCAL_PROXY_MODE=true \
-e MAIN_SERVER_TOKEN=<your-token> \
-e MAIN_SERVER_URL=wss://your-app.run.app/proxy-ws \
-e LOCAL_AI_ENDPOINT=http://host.docker.internal:1234 \
-e LOCAL_AI_MODEL=mistralai/mistral-small-3.2 \
ghcr.io/florinpeter/am-todos:latest🐛 Critical Bug Fixes
Local Proxy Configuration Loop Fix
- Fixed: Infinite loop during "Proxy UUID" entry that prevented "Local Token" input
- Root Cause: Cascading useEffect dependencies creating dependency chain loops
- Solution: Enhanced validation guards and proper dependency management
Component Naming Consistency
- Updated: GitSettings.tsx → GeneralSettings.tsx for semantic accuracy
- Fixed: TypeScript import conflicts between component and type names
- Improved: All references updated across 13+ files for consistency
🏗️ Architecture Improvements
Simplified & Efficient Design
- 99% Reduction: Eliminated complex content analysis overhead
- Net Code Reduction: -89 lines across 17 files with enhanced functionality
- Modular Structure: New server architecture with dedicated route modules
- Enhanced Security: Multi-layer authentication and validation system
New Server Structure
```
server/
├── config/      # Application and proxy configuration
├── middleware/  # Authentication and security middleware
├── routes/      # Modular API routes (ai, git, proxy, search)
├── services/    # Local proxy and proxy manager services
├── utils/       # Authentication utilities
└── websocket/   # WebSocket communication handlers
```
📊 Quality Assurance
Comprehensive Testing
- 1,017 Tests Passing: Complete test suite validation
- 79.4% Code Coverage: Extensive coverage across all components
- CI/CD Integration: Automated testing with GitHub Actions
- Multi-Platform Testing: Node.js 20.x and 22.x validation
Production Ready
- Zero Breaking Changes: Full backward compatibility maintained
- Enhanced Error Handling: Comprehensive error states with recovery guidance
- Performance Optimized: Faster routing decisions without content analysis
- Mobile Responsive: Complete mobile-first design with touch-friendly interface
🌟 Business Impact
For Organizations
- Data Sovereignty: Complete control over AI processing and data
- Compliance Ready: Meet strict enterprise security requirements
- Cost Control: Eliminate external AI API expenses
- Infrastructure Independence: Use existing local AI investments
For Users
- Seamless Experience: Local proxy works identically to cloud providers
- Real-Time Feedback: Live connection status and troubleshooting guidance
- Flexible Setup: Support for popular local AI platforms
- Enhanced Security: User-specific proxy routing with credential validation
📚 Documentation
New Documentation
- Local AI Proxy Guide: Complete implementation documentation
- Enhanced Development Setup: Improved restart scripts with test tokens
- Architecture Overview: Detailed technical implementation guide
- Security Model: Multi-layer authentication and authorization documentation
🔄 Migration Notes
Automatic Updates
- Seamless Upgrade: No breaking changes for existing users
- Settings Migration: Automatic localStorage updates
- Frontmatter Simplification: Reduced to tags-only structure
- Test Validation: All existing functionality preserved
New Users
- Select AI Provider: Choose "Local Proxy" alongside Gemini and OpenRouter
- Deploy Proxy: Use provided Docker commands for your AI platform
- Configure Credentials: Copy UUID and localToken from proxy settings
- Test Connection: Validate setup with real-time status monitoring
- Start Using: All AI features work seamlessly with local processing
🏆 Complete Success: This release transforms AM-Todos into a truly data-sovereign AI-powered task management system, providing unprecedented control over AI processing while maintaining seamless user experience and enterprise-grade security.
📈 Next Steps: Enhanced monitoring, multi-proxy support, and advanced local AI model management coming in future releases.
v2.0.0
🚀 Agentic Markdown Todos v2.0.0
🎯 Major Features
✨ Enhanced Todo Creation with Templates
- Template System: 6 predefined templates for different task types (General, Project Planning, Bug Investigation, Feature Development, Research, Personal Goals)
- Structured AI Output: All AI actions now use consistent JSON format for reliable parsing
- Enhanced Input Interface: Title + optional description + template selection for better AI generation
- Backward Compatibility: Maintains compatibility with existing todo creation workflow
🏗️ Advanced UI & Performance Improvements
- Perfect Checkbox Alignment: Fixed checkbox positioning with CSS transforms for mobile and desktop
- Enhanced Markdown Rendering: GitHub-style display with improved visual consistency
- Filename-Based Metadata: 99%+ reduction in API requests through smart filename encoding
- Consolidated Test Suite: Reduced from 109 → 33 test files (69.7% reduction) while achieving 81% coverage
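The filename-based metadata idea can be illustrated with a hypothetical encoding (the scheme below is invented for illustration; the app's real encoding may differ). Because priority and creation date are parsed straight from the filename, listing todos needs no per-file fetch:

```javascript
// Hypothetical scheme: P<priority>--<ISO date>--<slug>.md
// e.g. "P1--2025-01-05--fix-login-flow.md"
function parseTodoFilename(name) {
  const m = /^P([1-5])--(\d{4}-\d{2}-\d{2})--(.+)\.md$/.exec(name);
  if (m === null) return null; // unrecognized names fall back to fetching the file
  return { priority: Number(m[1]), createdDate: m[2], slug: m[3] };
}
```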
🛠️ Development Experience
- PostToolUse Hook System: Automated code quality enforcement with TypeScript and ESLint checks
- Comprehensive Testing: 1079 tests with anti-cluttering guidelines and coverage analysis
- Developer Tools: Enhanced scripts and hooks for quality assurance
📋 Complete Feature Set
Core Functionality
- ✅ AI-Powered Task Generation with multi-provider support (Google Gemini, OpenRouter)
- ✅ Interactive Markdown Editor with real-time Git sync
- ✅ Multi-Folder Support for project organization
- ✅ Intelligent Search with real-time results and keyboard shortcuts
- ✅ Configuration Sharing with QR codes for cross-device setup
- ✅ Mobile-Responsive Design with touch-friendly interface
Technical Excellence
- ✅ Type Safety: Comprehensive TypeScript implementation
- ✅ Performance Optimized: Smart caching and API request reduction
- ✅ Security Focused: Fine-grained GitHub/GitLab permissions
- ✅ Extensible Architecture: Plugin-ready template system
🔧 Technical Improvements
Enhanced AI Integration
- Multi-Provider Support: Choose between Google Gemini or OpenRouter (400+ models)
- Structured Output: Consistent JSON responses with fallback mechanisms
- Template-Guided Generation: Specialized AI prompts for different task categories
- Improved Reliability: Better error handling and response parsing
UI/UX Enhancements
- Perfect Alignment: Checkbox positioning fixed with a translateY(4px) transform
- Template Selection: Intuitive dropdown with category descriptions
- Progressive Disclosure: Optional description field with toggle
- Visual Consistency: GitHub-style markdown rendering
Performance & Reliability
- Filename Metadata: Instant todo listing without individual file fetches
- Smart Caching: Optimized data loading and storage
- Error Recovery: Comprehensive error handling with user-friendly messages
- Test Coverage: 81.08% coverage with focused, high-quality tests
📈 Metrics
- Test Files: 109 → 33 (69.7% reduction, anti-cluttering success)
- Test Coverage: 81.08% across 1079 tests
- API Performance: 99%+ reduction in listing operations
- TypeScript: Zero compilation errors with strict checking
- Code Quality: All ESLint rules satisfied
🆕 What's New
- Enhanced Todo Creation: Template-guided AI generation with structured output
- Perfect Mobile Experience: Fixed checkbox alignment and responsive design
- Advanced Development Tools: PostToolUse hooks and quality automation
- Improved Performance: Filename-based metadata and smart caching
- Better Testing: Consolidated test suite with comprehensive coverage
🔄 Migration Guide
No migration required! This release is fully backward compatible:
- Existing todos continue to work without changes
- Old creation workflow remains functional
- All existing features and functionality preserved
- New template features available immediately
📚 Documentation
🎉 Ready for Production: All features tested, documented, and production-ready!
💫 You Own Your Data: Tasks remain as .md files in your Git repository, editable with any tool.
v1.9.1
🎉 Release v1.9.1: Enhanced Git History with Priority Restoration Fixes
This release delivers comprehensive improvements to the Git History feature, combining critical bug fixes for priority restoration with rich metadata display capabilities.
🐛 Critical Bug Fixes
Priority Restoration Issues Resolved ✅
- Fixed priority loss during Git History restoration - Priority information now properly preserved when restoring from commit history
- Fixed priority loss during Checkpoint restoration - AI chat checkpoint restoration now maintains task priority
- Fixed priority loss during Draft restoration - Auto-saved drafts now preserve priority changes
- Enhanced frontmatter handling - All restoration mechanisms now properly handle YAML frontmatter
✨ New Features
Enhanced Git History with Rich Metadata Display
- Priority Badges (P1-P5) with intuitive color coding in commit list:
- 🔴 P1: Critical (Red)
- 🟠 P2: High (Orange)
- 🟡 P3: Medium (Yellow)
- 🔵 P4: Low (Blue)
- ⚫ P5: Very Low (Gray)
- Archive Indicators (📁 ARCHIVED) for completed tasks
- Smart Date Display showing creation date when different from commit date
- Loading Indicators with smooth animations for better UX
Enhanced Preview Panel
- Metadata Section displaying priority, archive status, and task title
- Mobile-Responsive Design with consistent experience across devices
- Rich Context for making informed restoration decisions
⚡ Performance Improvements
Intelligent Caching System
- Frontmatter Caching with 5-minute expiry to avoid redundant parsing
- Upfront Loading of first 5 commits for immediate metadata display
- Memoized Helper Functions to prevent unnecessary re-renders
- Smart Cache Cleanup to prevent memory bloat (auto-cleanup after 100 entries)
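The caching behaviour described above can be sketched as a small class (an illustrative sketch, not the actual implementation; the injectable clock exists only to make expiry testable):

```javascript
const TTL_MS = 5 * 60 * 1000; // 5-minute expiry
const MAX_ENTRIES = 100;      // auto-cleanup threshold

// Hypothetical frontmatter cache keyed by commit SHA + content hash.
class FrontmatterCache {
  constructor(now = Date.now) {
    this.entries = new Map();
    this.now = now;
  }
  get(key) {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (this.now() - entry.storedAt > TTL_MS) {
      this.entries.delete(key); // expired: force a re-parse
      return undefined;
    }
    return entry.value;
  }
  set(key, value) {
    if (this.entries.size >= MAX_ENTRIES) {
      // Maps iterate in insertion order, so this evicts the oldest entry.
      this.entries.delete(this.entries.keys().next().value);
    }
    this.entries.set(key, { value, storedAt: this.now() });
  }
}
```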
🧪 Testing Enhancements
Comprehensive Test Coverage
- 19 total tests across priority restoration and metadata display
- Integration tests for complete priority workflows
- Performance tests for caching and optimization
- Mobile responsive tests for UI consistency
- Error handling tests for edge cases and malformed data
📱 User Experience Improvements
Before vs After
Before:
- Git History showed only basic commit information
- Priority information lost during restoration operations
- Users had to preview each commit to understand context
- No indication of task priority or archive status
After:
- At-a-glance context with priority and archive status in commit list
- Priority preservation across all restoration mechanisms
- Rich metadata in preview panel with task details
- Performance optimized with smart caching and loading strategies
🔧 Technical Implementation
Architecture Enhancements
- Extended GitCommit interface with optional frontmatter and loading state tracking
- Implemented getCachedFrontmatter() with content hash + commit SHA caching
- Added batch processing with loadFrontmatterForCommits() for efficient API usage
- Created memoized UI components for optimal rendering performance
New Components
- getPriorityBadge(): Color-coded priority indicators
- getArchiveBadge(): Archive status indicators
- formatCreatedDate(): Smart date formatting
- renderCommitMetadata(): Unified metadata rendering
🎯 Impact
This release transforms the Git History view from a basic commit browser into a powerful restoration tool that provides users with all the context they need to make informed decisions, while ensuring that priority information is never lost during any restoration operation.
📦 What's Included
- ✅ Priority restoration bug fixes
- ✅ Enhanced Git History with metadata display
- ✅ Performance optimizations with intelligent caching
- ✅ Mobile-responsive design improvements
- ✅ Comprehensive test coverage (19 tests)
- ✅ Full backward compatibility
🔗 Related
- Pull Request: #48 Enhanced Git History with metadata display and performance optimization
- Previous Release: v1.9.0 CodeMirror Editor
Full Changelog: v1.9.0...v1.9.1
🤖 Generated with Claude Code
v1.9.0
🎉 Agentic Markdown Todos v1.9.0
🚀 Major New Feature: Advanced Markdown Editor
This release introduces a completely redesigned markdown editing experience powered by CodeMirror 6, replacing the basic textarea with a professional code editor.
✨ New Editor Features
🔤 CodeMirror 6 Integration
- Syntax Highlighting: Full GitHub-flavored markdown syntax highlighting
- Dark Theme: Custom dark theme matching the app's design (gray-900 background)
- Auto-Completion: Intelligent markdown completion and auto-list continuation
- Enhanced Selection: Proper text selection with blue highlighting
- Responsive Design: Optimized for both desktop and mobile usage
😊 Slack-Style Emoji Support
- Shortcode Completion: Type :rocket: → 🚀, :+1: → 👍, :heart: → ❤️
- Popular Emojis: Instant access to commonly used emojis with autocomplete
- Smart Search: Emoji search with fallback handling for special characters
- Real-time Preview: See emoji suggestions as you type with character preview
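Shortcode expansion of this kind reduces to a small lookup (a toy subset for illustration; the release actually uses the node-emoji package for the full set):

```javascript
// A few popular shortcodes; node-emoji covers the full emoji set.
const SHORTCODES = { ':rocket:': '🚀', ':+1:': '👍', ':heart:': '❤️' };

// Replace known :shortcode: tokens, leaving unknown ones untouched.
function emojify(text) {
  return text.replace(/:[+\w-]+:/g, (m) => SHORTCODES[m] ?? m);
}
```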
🛠 Technical Improvements
- Better Performance: Optimized rendering and interaction handling
- Accessibility: Improved keyboard navigation and screen reader support
- Mobile Optimization: Touch-friendly interface with responsive design
- Error Handling: Robust error handling for edge cases and special characters
🔧 Under the Hood
Dependencies Added
- @codemirror/view: ^6.34.3 - Core editor view functionality
- @codemirror/state: ^6.4.1 - Editor state management
- @codemirror/lang-markdown: ^6.3.1 - Markdown language support
- @codemirror/autocomplete: ^6.18.3 - Autocompletion system
- @uiw/react-codemirror: ^4.23.8 - React integration wrapper
- node-emoji: ^2.1.3 - Emoji search and conversion
Testing Infrastructure
- 100% Test Coverage: Comprehensive test suite for all editor functionality
- JSDOM Compatibility: Custom DOM Range API mocking for CodeMirror in test environment
- CI/CD Integration: All tests passing on Node.js 20.x and 22.x
- 67+ New Tests: Covering editor functionality, emoji completion, and integration
Code Quality
- TypeScript Support: Full type safety for all new components
- Component Architecture: Clean separation between editor and markdown viewer
- Extension System: Modular emoji extension for future expandability
- Error Boundaries: Comprehensive error handling and fallback mechanisms
🎯 User Experience Enhancements
Improved Editing Workflow
- Visual Feedback: Real-time unsaved changes indicator
- Smart Auto-Lists: Automatic todo list continuation (- [ ] when pressing Enter)
- Keyboard Shortcuts: Standard editor shortcuts for copy/paste/select
- Undo/Redo: Built-in undo/redo functionality
Mobile & Accessibility
- Touch Support: Optimized touch interaction for mobile devices
- Screen Reader: Proper ARIA labels and semantic markup
- High Contrast: Improved color contrast ratios for better visibility
- Keyboard Navigation: Full keyboard accessibility support
🔄 Migration & Compatibility
- Seamless Upgrade: Existing markdown content works without changes
- Backward Compatible: All existing features and functionality preserved
- No Data Loss: Safe migration from textarea to CodeMirror editor
- Settings Preserved: All user preferences and configurations maintained
🐛 Bug Fixes & Improvements
- Text Selection: Fixed selection highlighting issues in edit mode
- Focus Management: Improved focus handling during mode switches
- Draft Persistence: Enhanced draft saving and restoration
- Error Recovery: Better error handling for edge cases
📈 Performance Improvements
- Faster Rendering: Optimized editor initialization and updates
- Memory Efficiency: Reduced memory footprint with efficient DOM handling
- Network Optimization: Bundled dependencies for faster loading
- Responsive UI: Smoother interactions and transitions
🛠 Development & Testing
This release includes a comprehensive testing infrastructure with 1,096+ passing tests across:
- Unit tests for all new components
- Integration tests for editor functionality
- End-to-end testing for user workflows
- Cross-platform compatibility testing (Node.js 20.x & 22.x)
🙏 Acknowledgments
Special thanks to the CodeMirror team for creating such an excellent editor framework, and to the open-source community for the emoji and testing libraries that make this release possible.
Full Changelog: v1.7.2...v1.9.0
v1.8.2
🚨 AM-Todos v1.8.2: Critical Hotfix - Cloud Run Deployment Fix
🐛 Critical Issue Fixed
This hotfix resolves a critical deployment failure introduced in v1.8.1 that prevented the application from starting in Cloud Run production environments.
⚡ What Was Broken
v1.8.1 Cloud Run deployments failed with:
```
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable
```
🔍 Root Cause Analysis
Through Cloud Run logs investigation, we identified two separate issues:
1. Missing Module Error
```
Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/app/server/utils/redosProtection.js' imported from /app/server/server.js
```
Cause: Dockerfile was only copying individual server files but missing the utils/ subdirectory
2. Server Startup Condition Failure
Cause: Path resolution inconsistency between import.meta.url and process.argv[1] in containerized environments
✅ Fixes Applied
📦 Fix 1: Dockerfile - Missing Utils Directory
```dockerfile
# Added missing utils directory copy
COPY --chown=nextjs:nodejs server/utils/ ./server/utils/
```

🔧 Fix 2: Server Startup - Robust Path Resolution
```javascript
// Before (broken in containers)
if (import.meta.url === `file://${process.argv[1]}`) {

// After (works everywhere)
import { pathToFileURL } from 'url';
if (import.meta.url === pathToFileURL(path.resolve(process.argv[1])).href) {
```

🧪 Verification
- ✅ Local development: Server starts correctly
- ✅ Container build: All required modules included
- ✅ Path resolution: Works in both development and production
- ✅ Cloud Run ready: Proper port binding and health checks
📊 Impact
- Severity: P1 Critical - Blocked all v1.8.1 production deployments
- Affected versions: v1.8.1 only
- Resolution: Complete - deployments now work correctly
🚀 Deployment Ready
This hotfix ensures:
- ✅ Container starts successfully - no missing modules
- ✅ Server listens on PORT - robust startup condition
- ✅ Health checks pass - proper application initialization
- ✅ Full functionality - all features work as expected
🔗 Technical Details
- PR: #45 - Cloud Run deployment failure fix
- Files changed: Dockerfile, server/server.js
- Commits: 2 targeted fixes for both root causes
- Backward compatibility: Maintained for development environments
📋 Upgrade Instructions
Deploy the new v1.8.2 image directly - no configuration changes needed:
```shell
export SOURCE_IMAGE="ghcr.io/florinpeter/am-todos:v1.8.2"
./hack/deploy-all.sh
```

Full Changelog: v1.8.1...v1.8.2
v1.8.1
🚀 AM-Todos v1.8.1: Structured AI Responses for Commit Messages
🎯 What's New
Robust AI Response Parsing for Commit Messages
- Multi-layer parsing strategy handles any AI response format
- Markdown code block extraction for responses like "Sure, here's a conventional commit message:" followed by `fix: Update "Claude Code Setup" todo list item`
- JSON extraction from ```json code blocks
- Smart cleanup removes AI prefixes automatically
- Enhanced logging shows exactly how responses are processed
Structured Response Benefits
- Consistent API patterns across all AI functions (generateInitialPlan, generateCommitMessage, processChatMessage)
- Richer user feedback during commit operations with meaningful descriptions
- Better debugging capabilities with comprehensive logging
- Future-proof adapts to AI model changes automatically
- Better debugging capabilities with comprehensive logging
- Future-proof adapts to AI model changes automatically
🔧 Technical Improvements
6-Step Parsing Strategy
- Direct JSON parsing - {"message": "...", "description": "..."}
- JSON from markdown - Extract from ```json blocks
- Commit from code blocks - Extract commit messages from ``` blocks
- Pattern matching - Find conventional commit patterns in text
- General matching - Flexible commit-like pattern detection
- Smart cleanup - Remove AI prefixes and clean response
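A simplified reduction of this fallback chain (illustrative only; steps are collapsed and the function name and regexes are assumptions, not the shipped code):

```javascript
// Try progressively looser strategies until one yields a commit message.
function parseCommitMessage(raw) {
  // 1. Direct JSON: {"message": "...", "description": "..."}
  try {
    const obj = JSON.parse(raw);
    if (obj && typeof obj.message === 'string') return obj.message;
  } catch { /* not JSON, fall through */ }

  // 2. JSON inside a ```json code block
  const fenced = /```json\s*([\s\S]*?)```/.exec(raw);
  if (fenced) {
    try {
      const obj = JSON.parse(fenced[1]);
      if (obj && typeof obj.message === 'string') return obj.message;
    } catch { /* malformed block, fall through */ }
  }

  // 3./4. Conventional-commit pattern anywhere in the text
  const conv = /^(feat|fix|docs|chore|refactor|test)(\([^)]*\))?: .+$/m.exec(raw);
  if (conv) return conv[0];

  // 5./6. Smart cleanup: strip a leading AI prefix and trim
  return raw.replace(/^(sure,?|here(?:'s| is)[^:\n]*:)\s*/i, '').trim();
}
```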
Enhanced Logging & Monitoring
- Raw response logging (first 200 chars) for debugging
- Parsing method tracking shows which strategy succeeded
- Provider/model information for pattern analysis
- Backward compatibility maintained for existing functionality
Comprehensive Testing
- 8 new test cases covering all response formats
- Real-world scenarios tested including exact user cases
- Backward compatibility verified with plain text responses
- Integration tests for both Gemini and OpenRouter providers
📊 Statistics
- 9/9 generateCommitMessage tests passing
- 60/60 feature validation tests passing
- Zero breaking changes - full backward compatibility
- 290+ lines of new robust parsing logic
🎨 User Experience Improvements
Always Works
No more failed commit message generation - the system gracefully handles:
- Structured JSON responses
- Markdown with code blocks
- Plain text with AI prefixes
- Malformed or unexpected responses
Better Feedback
Users now see meaningful descriptions like:
- "Generated conventional commit message for new todo creation"
- "Extracted commit message from markdown response"
- "Used cleaned AI response as commit message"
🔗 Related Issues
- Fixes markdown response handling for AI commit messages
- Resolves parsing failures with AI prefix text
- Improves debugging capabilities for AI integrations
- Enhances consistency across AI service functions
Full Changelog: v1.8.0...v1.8.1
v1.8.0
🚀 Major Features
🤖 Structured AI Responses
- Meaningful Feedback: AI now provides detailed descriptions instead of generic "Task updated successfully" messages
- JSON Mode: Both Gemini and OpenRouter providers return structured responses with content + description
- Better UX: Users see exactly what AI did (e.g., "Added authentication step to task 3")
💬 Context-Aware AI Chat
- Natural Follow-ups: Use pronouns like "it", "that", "the last one" in conversations
- Iterative Refinement: Chain requests like "make it shorter" → "now make it formal"
- Conversation Memory: AI receives last 6 messages for optimal context understanding
- Stateless Backend: Chat history managed in frontend for privacy and performance
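The stateless context window can be sketched in a couple of lines (illustrative; the actual message shape in the app may differ):

```javascript
const CONTEXT_WINDOW = 6; // the release sends the last 6 messages

// The frontend owns the full history; only the most recent slice is sent
// with each request, which keeps the backend stateless.
function buildChatContext(history, newMessage) {
  return [...history, newMessage].slice(-CONTEXT_WINDOW);
}
```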
🧪 Testing & Quality Improvements
Performance
- 24% Faster Tests: New fast test configuration for development
- CI Stability: Fixed test timeouts and hanging issues
- 138 New Tests: Comprehensive coverage for new features
Coverage
- Improved Metrics: Better coverage for modified components
- Clean Reporting: Excluded TypeScript interface files from coverage
- Error Handling: Comprehensive tests for edge cases and error scenarios
🛠️ Technical Enhancements
Type Safety
- Structured Interfaces: TypeScript interfaces for AI responses
- Backward Compatibility: Graceful fallback to plain text responses
- Error Handling: Robust error handling with meaningful user feedback
Development Experience
- Fast Test Config: 24% performance improvement for development testing
- Clean Coverage: Focused coverage metrics excluding interface files
- Stable CI: Resolved hanging and timeout issues
🔧 Bug Fixes
- Fixed test timeouts in CI environment
- Resolved package dependency synchronization issues
- Improved error handling for network and API failures
- Enhanced markdown rendering edge cases
📋 Complete Changelog
- feat: implement structured AI responses for better user feedback
- feat: implement context-aware AI chat with conversation history
- test: improve coverage for files modified in this PR
- chore: exclude TypeScript interface files from coverage reporting
- fix: update AIChat test for new context-aware signature
- chore: clean up documentation and exclude fast config from coverage
- revert: restore original vitest.config.mjs to remove verbose output
- fix: update package-lock.json for happy-dom dependency
- feat: add fast test command with 24% performance improvement
- fix: update test expectations for structured AI responses
🎯 Migration Notes
- Automatic: All changes are backward compatible
- No Action Required: Existing configurations continue to work
- Enhanced UX: Users immediately benefit from improved AI feedback
This release significantly enhances the AI interaction experience while maintaining full backward compatibility and improving the development workflow with better testing and CI stability.