Advanced AI Multi-Agent System with Persistent Memory, Real-time Chat, and Specialized Agent Coordination
Omni Multi-Agent is a cutting-edge AI system that orchestrates multiple specialized AI agents through an intelligent routing system. Built with enterprise-grade architecture, it provides seamless integration of conversational AI, image generation, web search, and document processing capabilities with persistent session management.
- Intelligent Router: Automatically routes requests to the most suitable agent
- Specialized Agents: Research, Math, Planning, Image Generation, and more
- Agent Coordination: Seamless handoffs and collaborative problem-solving
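The routing idea above can be sketched in plain Python. This is an illustrative, hypothetical sketch only: the agent names and trigger keywords are assumptions, and the actual system routes requests through a LangGraph-orchestrated router rather than keyword matching.

```python
# Hypothetical keyword-based router sketch; the real system uses an
# LLM-driven LangGraph router. Agent names and keywords are illustrative.

ROUTES = {
    "image": ["draw", "image", "logo", "picture"],
    "math": ["solve", "calculate", "derivative", "equation"],
    "research": ["search", "find", "look up", "news"],
}

def route(message: str) -> str:
    """Return the name of the agent best suited to handle the message."""
    text = message.lower()
    for agent, keywords in ROUTES.items():
        if any(kw in text for kw in keywords):
            return agent
    return "chat"  # fall back to the general conversational agent
```

For example, `route("Draw a cute cartoon cat")` resolves to the image agent, while anything unmatched falls through to general chat.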
- Session Management: Maintain conversation context across sessions
- Message History: Full conversation persistence with SQLAlchemy
- Context Retrieval: Smart context loading for enhanced responses
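A minimal sketch of the persistence idea, using the stdlib `sqlite3` module for brevity; the project itself uses SQLAlchemy's async ORM, and the table and column names here are assumptions.

```python
import sqlite3

# Illustrative per-session message persistence. The real backend uses
# SQLAlchemy's async ORM; schema names here are assumptions.

def init_db(conn: sqlite3.Connection) -> None:
    conn.execute(
        """CREATE TABLE IF NOT EXISTS messages (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               session_id TEXT NOT NULL,
               role TEXT NOT NULL,      -- 'user' or 'assistant'
               content TEXT NOT NULL
           )"""
    )

def save_message(conn, session_id: str, role: str, content: str) -> None:
    conn.execute(
        "INSERT INTO messages (session_id, role, content) VALUES (?, ?, ?)",
        (session_id, role, content),
    )

def load_context(conn, session_id: str, limit: int = 10):
    """Return the most recent messages for a session, oldest first."""
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE session_id = ? "
        "ORDER BY id DESC LIMIT ?",
        (session_id, limit),
    ).fetchall()
    return list(reversed(rows))
```

The `limit` parameter mirrors the "smart context loading" idea: only the most recent turns of a session are fed back to the model as context.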
- Image Generation: High-quality image creation with Stable Diffusion XL
- Document Processing: PDF analysis and content extraction
- Web Search: Real-time web information retrieval
- Speech Integration: Text-to-speech and speech-to-text support
- Backend: FastAPI with async/await support
- Frontend: React with modern UI components
- Database: SQLAlchemy with async ORM
- Vector DB: Qdrant for semantic search
- AI Framework: LangGraph + LangChain for agent orchestration
# Clone the repository
git clone https://github.com/tantran24/Omni-Multi-Agent.git
cd Omni-Multi-Agent
# Start all services with Docker Compose
docker-compose up -d
# Access the application
# Frontend: http://localhost
# Backend API: http://localhost:8000
# API Documentation: http://localhost:8000/docs
- Python 3.11+
- Node.js 18+
- Ollama
- Git
# Navigate to backend directory
cd backend
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Initialize database
python init_db.py
# Start the server
uvicorn main:app --reload --host 0.0.0.0 --port 8000
# Navigate to frontend directory
cd frontend
# Install dependencies
npm install
# Start development server
npm run dev
# Install Ollama (if not already installed)
curl -fsSL https://ollama.ai/install.sh | sh
# Pull required models
ollama pull llama2
ollama pull gemma:2b
- Start a Conversation: Open the app and type your message
- Session Management: Your conversations are automatically saved
- Switch Sessions: Use the session manager to navigate between chats
- Persistent History: All messages are preserved across browser sessions
Generate an image of a futuristic city at sunset
Create a logo for a tech startup
Draw a cute cartoon cat wearing a spacesuit
- Upload PDF files for analysis and Q&A
- Extract key information from documents
- Summarize long documents
Search for the latest news about artificial intelligence
Find information about climate change solutions
Look up the current stock price of Tesla
Solve the equation: 2x + 5 = 15
Calculate the derivative of x^2 + 3x + 2
What is the area of a circle with radius 5?
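For reference, the answers the Math agent should produce for two of the prompts above can be computed directly. This is an illustrative stdlib sketch, not the agent's actual implementation (the agent delegates to an LLM).

```python
import math

# Direct computations for the example Math agent prompts.

def solve_linear(a: float, b: float, c: float) -> float:
    """Solve a*x + b = c for x."""
    return (c - b) / a

def circle_area(radius: float) -> float:
    """Area of a circle with the given radius."""
    return math.pi * radius ** 2

print(solve_linear(2, 5, 15))    # 2x + 5 = 15  ->  x = 5.0
print(round(circle_area(5), 2))  # radius 5     ->  78.54
```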
Omni-Multi-Agent/
├── backend/               # FastAPI backend
│   ├── services/          # Core business logic
│   ├── utils/             # Utilities and agents
│   ├── config/            # Configuration files
│   ├── database/          # Database models and migrations
│   └── main.py            # Application entry point
├── frontend/              # React frontend
│   ├── src/
│   │   ├── components/    # React components
│   │   ├── services/      # API services
│   │   └── utils/         # Frontend utilities
│   └── public/            # Static assets
├── docker-compose.yml     # Docker orchestration
└── README.md              # This file
The backend provides comprehensive API documentation:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
- OpenAPI Schema: http://localhost:8000/openapi.json
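A hedged client sketch using only the stdlib. The endpoint path (`/api/chat`) and payload shape are assumptions for illustration; check the Swagger UI at `/docs` for the actual routes and schemas.

```python
import json
import urllib.request

# Hypothetical client sketch; the route and payload are assumptions --
# consult http://localhost:8000/docs for the real API.

def build_chat_request(message: str, session_id: str) -> urllib.request.Request:
    payload = json.dumps({"message": message, "session_id": session_id})
    return urllib.request.Request(
        "http://localhost:8000/api/chat",  # assumed route
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Requires the backend to be running on port 8000.
    req = build_chat_request("Hello!", "demo-session")
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```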
- General conversation handling
- Context-aware responses
- Memory integration
- Text-to-image generation
- Style and quality optimization
- Multiple format support
- Document Q&A
- Semantic search
- Context extraction
- Web search integration
- Information synthesis
- Fact verification
- Task breakdown
- Project planning
- Goal-oriented assistance
# Backend Configuration
OLLAMA_BASE_URL=http://localhost:11434
HUGGINGFACE_API_KEY=your_hf_api_key
DATABASE_URL=sqlite:///./database/app.db
# Frontend Configuration
VITE_API_URL=http://localhost:8000
The system is highly configurable through:
- backend/config/config.py - Backend settings
- frontend/.env - Frontend environment variables
- backend/config/prompts.py - Agent prompts and behaviors
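Backend settings can be read from the environment with fallbacks matching the example values above. A minimal sketch (the real settings live in backend/config/config.py; defaults here are assumptions taken from the example):

```python
import os

# Read configuration from the environment, falling back to the
# example values from the README. Defaults are illustrative.
OLLAMA_BASE_URL = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///./database/app.db")
HUGGINGFACE_API_KEY = os.environ.get("HUGGINGFACE_API_KEY", "")  # set via .env
```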
- CORS protection configured
- Input validation and sanitization
- Secure file upload handling
- Environment-based configuration
- Docker security best practices
- Documentation
- Discussions
- Issue Tracker
We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the MIT License - see the LICENSE file for details.
- LangChain Team for the amazing agent framework
- FastAPI for the high-performance web framework
- React Community for the frontend ecosystem
- Ollama for local LLM infrastructure
- Hugging Face for AI model hosting
Made with ❤️ by the Omni Multi-Agent Team
⭐ Star this project if you find it helpful!