- Add short-term memory support with ConversationMessage schema
- Implement FileContent schema for base64 and signed URL file handling
- Create agentic file processing workflow using UPEE pattern with retry logic
- Add intelligent streaming completion detection to prevent response cutoffs
- Fix schema compatibility between FileContent and FileContext formats
- Implement debug API endpoints for troubleshooting file reception issues
- Add comprehensive logging with structured request tracking
- Create test utilities for file processing and response validation
- Enhance OpenAI provider with smart completion detection and safety limits
- Add database models, event bus, job scheduler, and plugin architecture

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
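A minimal sketch of what the two schemas named above might look like. The field names here are assumptions, and the project presumably defines these as Pydantic models; plain dataclasses keep the sketch self-contained:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConversationMessage:
    """One turn of short-term conversation memory."""
    role: str      # e.g. "user" or "assistant"
    content: str

@dataclass
class FileContent:
    """A file attached to a request, carried either inline or by reference."""
    filename: str
    mime_type: str
    base64_data: Optional[str] = None  # inline base64 payload...
    signed_url: Optional[str] = None   # ...or a pre-signed download URL

    def is_inline(self) -> bool:
        # Inline payload takes precedence when both fields are set
        return self.base64_data is not None
```

The two optional fields model the "base64 and signed URL" handling: exactly one is expected to be populated per file.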
- Update debug API to support multipart form data for better file upload testing
- Cap max_tokens to stay within OpenAI API limits (16384 tokens)
- Remove overly aggressive streaming completion detection
- Add detailed execution parameter logging for debugging
- Improve file processing agent with better error handling
- Add comprehensive core agent specification documentation
- Enhance plan phase with better token estimation and model selection
- Update all LLM providers with consistent token limit handling
- Clean up temporary documentation files
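The max_tokens cap can be as simple as a clamp applied before the request is sent. The 16384 figure comes from the commit message itself; the helper name is hypothetical:

```python
OPENAI_COMPLETION_LIMIT = 16384  # cap cited in the commit message

def cap_max_tokens(requested, limit=OPENAI_COMPLETION_LIMIT):
    """Clamp a caller-supplied max_tokens to the provider's hard limit.

    A missing value falls back to the limit itself, so requests never
    ask for more tokens than the API will accept.
    """
    if requested is None:
        return limit
    return max(1, min(requested, limit))
```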
- Add detailed installation and setup steps with environment configuration
- Include quick test commands for immediate validation
- Clarify minimum requirements and optional dependencies
- Add debug tools endpoint information
- Update feature status to reflect production-ready core functionality
- Remove database dependencies from requirements (not needed for core features)
- Update environment variables table with proper requirements
- Fix missing newline at end of file
- Update multi-provider LLM status to reflect current implementation (OpenAI, Anthropic)
- Maintain clean documentation formatting
- Remove database connection checks and initialization from main.py
- Remove database-dependent routers (workers, bridge, plugins, jobs)
- Keep only core functionality: chat, health, and debug endpoints
- Simplify application lifespan to only include essential components
- Remove database test endpoint
- Core UPEE functionality now works without any database setup

This allows the application to start and run with just LLM API keys, making setup much simpler for core chat functionality.
- Update start script to check only core dependencies (fastapi, uvicorn, pydantic, httpx, openai)
- Add comprehensive test script to verify no-database functionality
- Confirm all SQLAlchemy references are in unused files not imported by main app
- Core application (chat, health, debug) works completely without database

Test results: ✅ All 3 tests passed
- Import test: Core components import successfully
- App creation test: FastAPI app created with core routes
- Server startup test: Server starts and health check passes

The application now runs entirely without database dependencies while maintaining full UPEE functionality, multi-LLM support, and file processing.
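A start-script dependency check like the one described above can use `importlib.util.find_spec`, which tests importability without actually importing anything. The function name and the exact check are assumptions:

```python
import importlib.util

# Core dependencies listed in the commit message
CORE_DEPS = ["fastapi", "uvicorn", "pydantic", "httpx", "openai"]

def missing_dependencies(deps=CORE_DEPS):
    """Return the names in `deps` that are not importable in this environment."""
    return [name for name in deps if importlib.util.find_spec(name) is None]
```

A start script would then refuse to launch (or print install instructions) when the returned list is non-empty.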
* Fix dependency resolution issues in requirements.txt
  - Relaxed version constraints to avoid 'resolution-too-deep' error
  - Improved Python 3.13 compatibility
  - Backed up original requirements to requirements.txt.original
  - Core packages now install successfully
* Add GitHub Actions workflow for automated testing
  - Created .github/workflows/test.yml with comprehensive test pipeline
  - Added python test_no_db.py execution as requested
  - Included pytest, flake8 linting, and mypy type checking
  - Support for Python 3.11, 3.12, 3.13 matrix testing
  - Added pip caching for faster builds
  - Triggers on push/PR to main/develop branches
* Add fix-requirements branch to GitHub Actions triggers
  - Added fix-requirements to push trigger branches
  - This will allow testing the current branch
* Configure GitHub Actions to run on all branches
  - Removed branch restrictions for push and pull_request triggers
  - Now fires on any push to any branch and any PR
  - Simplifies workflow management and ensures all code is tested
* Simplify GitHub Actions workflow
  - Run only on pull_request (not on every push)
  - Use single Python version (3.13) instead of matrix
  - Reduces unnecessary test runs and CI overhead
  - Still runs python test_no_db.py as core test
* Rename GitHub Actions workflow to test_no_db
  - Changed workflow name from 'Tests' to 'test_no_db'
  - Better reflects the core test being executed
* Fix GitHub Actions dependency resolution issues
  - Pin boto3, aioboto3, and jmespath versions to avoid resolution conflicts
  - Add multi-stage pip install strategy with fallback options
  - Use legacy resolver as backup for complex dependency graphs
  - Add timeout and no-deps options for edge cases
* fire action
* Fix GitHub Actions for fork PRs
  - Add pull_request_target trigger for fork PRs
  - Add debug information to troubleshoot workflow execution
  - Add push trigger for branch testing
  - Create separate fork-specific workflow
  - Add proper checkout configuration for PR head
  - Set timeout to prevent hanging jobs
* remove fork-pr-test
* Simplify GitHub Actions triggers to PR events only
  - Remove redundant push and pull_request_target triggers to prevent duplicate workflow runs and eliminate potential security risks from pull_request_target
* Consolidate flake8 configuration for better maintainability
  - Create .flake8 configuration file with all linting options
  - Simplify GitHub Actions workflow to use single flake8 command
  - Centralize linting rules for consistency across development and CI
Implements a comprehensive agent discovery and communication framework that enables the PAF Core Agent to leverage external specialized agents for enhanced processing.

Key features:
- Agent discovery via 'pixell list' command with periodic refresh
- Intelligent agent selection based on context and complexity
- Multi-protocol support (HTTP/REST, gRPC, WebSocket)
- Integration with UPEE engine phases (Understand and Plan)
- Health monitoring and usage statistics tracking
- Graceful fallback to local processing when agents are unavailable

New components:
- app/agents/: Core agent management module
  - models.py: Data models for agent communication
  - manager.py: Agent orchestration and decision logic
  - discovery.py: Agent discovery and registry management
  - client.py: Multi-protocol communication client
- app/api/agents.py: REST API endpoints for agent management
- app/api/dependencies.py: Singleton UPEE engine management

Modified components:
- app/core/upee_engine.py: Integrated AgentManager into UPEE processing
- app/core/understand.py: Added agent enhancement support
- app/core/plan.py: Added agent enhancement support
- app/main.py: Included agents router

This enables the PAF Core Agent to distribute complex or domain-specific tasks to specialized agents while maintaining local processing capabilities.
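A sketch of how discovery output might be turned into a registry and how a fallback-aware selection could work. The JSON shape assumed for the 'pixell list' output and the field names are illustrative assumptions, not the tool's documented format:

```python
import json

def parse_agent_listing(raw: str) -> dict:
    """Turn discovery output (assumed: a JSON array of agent objects)
    into a name-keyed registry."""
    return {agent["name"]: agent for agent in json.loads(raw)}

def select_agent(registry: dict, capability: str):
    """Pick a healthy agent advertising `capability`.

    Returns None when no suitable agent exists, signalling graceful
    fallback to local processing.
    """
    for agent in registry.values():
        if capability in agent.get("capabilities", []) and agent.get("healthy", True):
            return agent
    return None
```

A periodic refresh would simply re-run the discovery command on a timer and replace the registry, so health status stays reasonably current.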
- Add language detection using LLM in understanding phase
- Pass language information through UPEE phases properly
- Add language instructions to system prompt for non-English languages
- Fix bug where Korean requests were getting English responses
- Handle JSON parsing with markdown code blocks in LLM responses
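The markdown-code-block issue in the last bullet is common when asking an LLM for JSON: models often wrap their answer in ```json fences. A tolerant parser might look like this (a hypothetical helper, not the project's actual code):

```python
import json
import re

def parse_llm_json(text: str):
    """Parse JSON from an LLM reply, tolerating ```json ... ``` fences."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text.strip()
    return json.loads(payload)
```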
This commit fixes the issue where conversation history was not being preserved between messages in the chat. The core agent now properly maintains context across multi-turn conversations.

Key changes:
- Added LLMMessage class to represent individual conversation messages
- Updated LLMRequest to support messages array alongside legacy prompt field
- Modified execute phase to build messages array from conversation history
- Updated all LLM providers (OpenAI, Claude, Bedrock) to handle messages format
- Added comprehensive tests for conversation memory preservation
- Enhanced CSV file processing with better table formatting support
- Added language-aware response generation
- Added table generation detection and formatting

The system now passes the full conversation history to LLM providers, allowing the assistant to maintain context and reference previous messages in the conversation.
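The history-to-messages step described above can be sketched as follows. The role/content dict shape matches the common chat-completions convention; the function name and the exact history representation are assumptions:

```python
def build_messages(history, new_user_message, system_prompt=None):
    """Assemble a provider-ready messages array from stored history plus
    the latest user turn, so the model sees the full conversation."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    for turn in history:  # each turn: {"role": ..., "content": ...}
        messages.append({"role": turn["role"], "content": turn["content"]})
    messages.append({"role": "user", "content": new_user_message})
    return messages
```

Passing this array instead of a single flattened prompt is what lets the assistant reference earlier turns.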