╭─────────────────────────────────────────────────────────────╮
│ │
│ ███╗ ██╗███████╗██╗ ██╗██╗ ██╗███████╗ █████╗ ██╗ │
│ ████╗ ██║██╔════╝╚██╗██╔╝██║ ██║██╔════╝ ██╔══██╗██║ │
│ ██╔██╗ ██║█████╗ ╚███╔╝ ██║ ██║███████╗ ███████║██║ │
│ ██║╚██╗██║██╔══╝ ██╔██╗ ██║ ██║╚════██║ ██╔══██║██║ │
│ ██║ ╚████║███████╗██╔╝ ██╗╚██████╔╝███████║ ██║ ██║██║ │
│ ╚═╝ ╚═══╝╚══════╝╚═╝ ╚═╝ ╚═════╝ ╚══════╝ ╚═╝ ╚═╝╚═╝ │
│ │
│ ◉ ◉ ◉ 🧠 Neural Network Intelligence ◉ ◉ ◉ │
│ ╲ ╱ ╲ ╱ │
│ ◉ 🔗 Connected AI Ecosystem ◉ │
│ ╱ ╲ ╱ ╲ │
│ ◉ ◉ ◉ ⚡ Lightning Fast Responses ◉ ◉ ◉ │
│ │
╰─────────────────────────────────────────────────────────────╯
🚀 Production-ready AI chat platform with multi-provider support, advanced RAG, LoRA fine-tuning, and enterprise security
NexusAI is a production-ready AI platform featuring multi-provider LLM support, advanced RAG with vector search, LoRA fine-tuning capabilities, comprehensive AI safety guardrails, and a modular architecture designed for enterprise deployment.
Latest Updates:
- ✅ Multi-Provider LLM Support - Groq, OpenAI, Anthropic, Ollama integration
- ✅ Advanced RAG System - Vector search, document processing, knowledge base management
- ✅ LoRA Fine-Tuning - Custom model adaptation with hyperparameter optimization
- ✅ AI Safety Guardrails - Content filtering, PII detection, prompt injection prevention
- ✅ Enterprise Database - SQLite with full user management and analytics
- ✅ PWA Support - Installable web app with offline capabilities
- ✅ Docker Production - Full containerization with monitoring stack
- ✅ Modular Architecture - Scalable backend/frontend separation
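The RAG system above splits uploaded documents into overlapping chunks before indexing them for vector search. As a rough illustration only (not NexusAI's actual code), a chunker matching the RAG_CHUNK_SIZE / RAG_CHUNK_OVERLAP defaults from the configuration section might look like:

```python
# Illustrative document-chunking step: fixed-size windows with overlap,
# using the RAG_CHUNK_SIZE=1000 / RAG_CHUNK_OVERLAP=200 defaults.
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # each chunk starts `step` chars after the last
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

The overlap keeps sentences that straddle a chunk boundary retrievable from both neighboring chunks.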
# 🎯 One-command setup
git clone <repository-url>
cd nexusai && ./run-local.sh

That's it! 🎉 NexusAI handles the rest automatically.
# 📦 Clone repository
git clone <repository-url>
cd nexusai
# 🔑 Configure API key
cp .env.example .env
echo "GROQ_API_KEY=<your-api-key>" >> .env
# 🚀 Launch application
./run-local.sh
🤖 Multi-Provider AI Integration
🧠 Advanced RAG System
🔧 LoRA Fine-Tuning Platform
🛡️ Enterprise Security
📊 Production Infrastructure
🎨 Modern User Experience
| Command | Purpose |
|---|---|
| ./run-local.sh | ✅ Complete development environment |
| ./run-frontend.sh | ✅ Static file server |
| cd backend && python app.py | ✅ API development |
- 💎 Glass Morphism UI: modern translucent design with blur effects
- 📱 PWA Installation: install as a native app on any device
- 🌓 Theme System: dark/light modes with custom themes
- 📐 Responsive Design: optimized for mobile, tablet, and desktop
- ⚡ Real-time Updates: live model switching & status indicators
- 🚀 One-Command Setup: ./run-local.sh
- 🔄 Hot Reload: instant development feedback
- 🧩 Modular Architecture: clean separation of concerns
- 📚 Comprehensive Docs: detailed guides & API reference
Get running in 2 minutes:
git clone <repository-url>
cd nexusai
./run-local.sh

📊 Requirements:
✅ Includes:

Complete feature set:
git clone <repository-url>
cd nexusai
# Configure .env with all API keys
./run-local.sh

📊 Requirements:
✅ Includes:

Enterprise deployment:
git clone <repository-url>
cd nexusai
# Configure production .env
docker-compose up --build

📊 Requirements:
✅ Includes:
| Feature Category | Quick Start | Full Development | Production |
|---|---|---|---|
| 🤖 AI Providers | | | |
| Groq Integration | ✅ | ✅ | ✅ |
| OpenAI Support | ❌ | ✅ | ✅ |
| Anthropic Support | ❌ | ✅ | ✅ |
| Ollama Local | ❌ | ✅ | ✅ |
| 🧠 RAG System | | | |
| Document Upload | ✅ | ✅ | ✅ |
| Vector Search | ✅ Simplified | ✅ Advanced | ✅ Enterprise |
| Knowledge Base | ✅ | ✅ | ✅ |
| 🔧 LoRA System | | | |
| Model Fine-tuning | ✅ Basic | ✅ Advanced | ✅ Enterprise |
| Hyperparameter Optimization | ❌ | ✅ | ✅ |
| Training Analytics | ❌ | ✅ | ✅ |
| 🛡️ Security | | | |
| AI Guardrails | ✅ | ✅ | ✅ |
| Content Filtering | ✅ | ✅ | ✅ |
| PII Detection | ✅ | ✅ | ✅ |
| Audit Logging | ❌ | ✅ | ✅ |
| 📊 Infrastructure | | | |
| SQLite Database | ✅ | ✅ | ✅ |
| PostgreSQL | ❌ | ❌ | ✅ |
| Redis Caching | ❌ | ❌ | ✅ |
| Monitoring Stack | ❌ | ❌ | ✅ |
| 🎨 Interface | | | |
| PWA Support | ✅ | ✅ | ✅ |
| Mobile Responsive | ✅ | ✅ | ✅ |
| Real-time Updates | ✅ | ✅ | ✅ |
| ⚙️ Deployment | | | |
| Local Development | ✅ | ✅ | ✅ |
| Docker Support | ✅ | ✅ | ✅ |
| Kubernetes Ready | ❌ | ❌ | ✅ |
| Auto-scaling | ❌ | ❌ | ✅ |
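The PII Detection rows above refer to the guardrails layer. As a hedged sketch of what regex-based PII redaction can look like (the patterns and labels here are illustrative assumptions, not NexusAI's implementation):

```python
import re

# Illustrative PII redaction in the spirit of the guardrails layer.
# These regexes are simplistic examples, not production-grade detectors.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    # Replace each match with its category label, e.g. "[EMAIL]".
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Real deployments typically combine patterns like these with NER models and context checks to cut false positives.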
# ===== REQUIRED CONFIGURATION =====
GROQ_API_KEY=<your-groq-api-key> # Primary AI provider (required)
SECRET_KEY=<your-secret-key> # Flask session security
# ===== MULTI-PROVIDER SUPPORT =====
OPENAI_API_KEY=<your-openai-api-key> # Optional: GPT models
ANTHROPIC_API_KEY=<your-anthropic-api-key> # Optional: Claude models
OLLAMA_BASE_URL=http://localhost:11434 # Optional: Local Ollama
# ===== APPLICATION SETTINGS =====
FLASK_DEBUG=True # Development mode
PORT=5002 # Server port
FLASK_ENV=development # Environment
# ===== AI/ML CONFIGURATION =====
TRANSFORMERS_CACHE=./backend/data/models_cache # Model cache directory
HF_HOME=./backend/data/models_cache # Hugging Face cache
MAX_TOKENS_DEFAULT=512 # Default response length
TEMPERATURE_DEFAULT=0.7 # Default creativity level
# ===== RAG SYSTEM SETTINGS =====
RAG_CHUNK_SIZE=1000 # Document chunk size
RAG_CHUNK_OVERLAP=200 # Chunk overlap
RAG_MAX_RESULTS=10 # Max search results
VECTOR_DB_PATH=./backend/data/vector_db # Vector database path
# ===== LORA TRAINING SETTINGS =====
LORA_RANK_DEFAULT=16 # Default LoRA rank
LORA_ALPHA_DEFAULT=32 # Default LoRA alpha
LORA_DROPOUT_DEFAULT=0.1 # Default dropout rate
TRAINING_DATA_PATH=./backend/lora_data # Training data directory
# ===== SECURITY & SAFETY =====
ENABLE_GUARDRAILS=True # AI safety guardrails
ENABLE_PII_DETECTION=True # PII detection
ENABLE_CONTENT_FILTER=True # Content filtering
MAX_UPLOAD_SIZE=50MB # File upload limit
# ===== DATABASE CONFIGURATION =====
DATABASE_URL=sqlite:///nexusai.db # SQLite (default)
# DATABASE_URL=postgresql://user:pass@host:port/db # PostgreSQL (production)
# ===== PRODUCTION SETTINGS =====
SENTRY_DSN=<your-sentry-dsn> # Error monitoring
RATE_LIMIT_PER_MINUTE=100 # API rate limiting
ENABLE_ANALYTICS=True # Usage analytics
LOG_LEVEL=INFO # Logging level
# ===== REDIS CONFIGURATION (Production) =====
REDIS_URL=redis://localhost:6379/0 # Redis cache
ENABLE_CACHING=False # Enable Redis caching
# ===== MONITORING (Production) =====
PROMETHEUS_ENABLED=False # Prometheus metrics
GRAFANA_ENABLED=False # Grafana dashboards
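The file above follows the usual KEY=value dotenv format with inline # comments. The backend presumably loads it with python-dotenv; the hand-rolled parser below is only meant to illustrate the format, not the project's actual loading code.

```python
import os

# Minimal .env parser sketch: KEY=value lines, '#' starts a comment.
def load_env_file(path: str = ".env") -> dict:
    settings = {}
    if not os.path.exists(path):
        return settings
    with open(path) as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()  # drop comments
            if "=" in line:
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip()
    return settings
```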
🚀 Groq (Primary - Required)
🤖 OpenAI (Optional)
🧠 Anthropic (Optional)
🏠 Ollama (Optional - Local)
🔧 Setup Priority:
# 🚀 Quick start with Docker
docker-compose up --build
# 🔍 View logs
docker-compose logs -f
# 🛑 Stop services
docker-compose down

🌐 Access: http://localhost:5002
# 🚀 Production deployment
docker-compose -f docker-compose.prod.yml up -d
# 📊 Health check
docker-compose ps
# 📈 Scale services
docker-compose up --scale app=3

🌐 Access: https://your-domain.com
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ 🌐 Nginx │ │ 🤖 NexusAI │ │ 🗄️ Database │
│ Reverse Proxy │◄──►│ Application │◄──►│ PostgreSQL │
│ Load Balancer │ │ Flask + ML │ │ + Redis Cache │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ 📊 Monitoring │ │ 🔍 Logging │ │ 🛡️ Security │
│ Prometheus │ │ Centralized │ │ SSL + Auth │
│ + Grafana │ │ ELK Stack │ │ Rate Limiting │
└─────────────────┘ └─────────────────┘ └─────────────────┘
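The Security box in the diagram includes rate limiting, configured via RATE_LIMIT_PER_MINUTE. In production this is typically enforced at the Nginx or Redis layer shown above; the sliding-window limiter below is only an in-process illustration of the idea.

```python
import time
from collections import deque
from typing import Optional

# Illustrative sliding-window limiter for RATE_LIMIT_PER_MINUTE.
class RateLimiter:
    def __init__(self, limit: int = 100, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.hits = {}  # client_id -> deque of request timestamps

    def allow(self, client_id: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(client_id, deque())
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```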
📋 Installation Guide · 🏗️ Project Structure · ⚙️ Configuration Guide
🤖 RAG/LoRA Guide · 🔌 API Documentation · 🛡️ Security Guide
| Category | Document | Description |
|---|---|---|
| 🚀 Getting Started | | |
| Setup Guide | INSTALLATION_GUIDE.md | Complete setup instructions |
| Multi-Provider Setup | MULTI_PROVIDER_SETUP.md | Configure all AI providers |
| 🏗️ Architecture | | |
| Code Documentation | CODE_DOCUMENTATION.md | Developer reference & API docs |
| Frontend Modularization | FRONTEND_MODULARIZATION_SUMMARY.md | UI architecture guide |
| 🧠 AI Features | | |
| Knowledge Base | ENHANCED_KNOWLEDGE_BASE.md | Advanced RAG system guide |
| LoRA Fine-tuning | ENHANCED_LORA_SYSTEM.md | Model customization guide |
| 🛡️ Security | | |
| AI Guardrails | AI_GUARDRAILS_DOCUMENTATION.md | Safety & security features |
| Security Best Practices | SECURITY.md | Production security guide |
| ⚙️ Development | | |
| Pre-commit Setup | PRE_COMMIT_SETUP.md | Development workflow |
POST /api/chat
# Multi-provider chat completion
# Supports Groq, OpenAI, Anthropic, Ollama
GET /api/models
# List available models from all providers
GET /api/providers
# Get provider status and capabilities
POST /api/models/compare
# Compare responses across multiple models
POST /api/models/recommend
# Get AI-recommended model for input

POST /api/rag/upload
# Upload documents to knowledge base
POST /api/rag/search
# Search knowledge base with vector similarity
GET /api/rag/summary
# Get knowledge base statistics
GET /api/rag/analyze
# Analyze knowledge base performance
DELETE /api/rag/documents/{id}
# Remove documents from knowledge base

GET /api/lora/adapters
# List all LoRA adapters
POST /api/lora/create
# Create new LoRA adapter
POST /api/lora/train
# Start training process
GET /api/lora/analyze
# Performance analysis
POST /api/lora/optimize
# Hyperparameter optimization
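A hypothetical client call against POST /api/chat using only the standard library. The payload and response fields used here (message, provider, max_tokens) are assumptions, not a documented schema; check the API documentation for the actual contract.

```python
import json
import urllib.request

# Hypothetical chat-endpoint client; field names are assumptions.
def build_chat_payload(message: str, provider: str = "groq",
                       max_tokens: int = 512) -> dict:
    return {"message": message, "provider": provider, "max_tokens": max_tokens}

def chat(message: str, base_url: str = "http://localhost:5002") -> dict:
    req = urllib.request.Request(
        f"{base_url}/api/chat",
        data=json.dumps(build_chat_payload(message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```

With the server from ./run-local.sh running, chat("Hello") should return the decoded JSON response body.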
POST /api/users
# Create or update user profile
GET /api/users/{user_id}
# Get user profile
PUT /api/users/{user_id}
# Update user profile
GET /api/users/{user_id}/analytics
# Get user analytics and usage stats

GET /api/conversations
# List user conversations
POST /api/conversations
# Save conversation
GET /api/conversations/{id}
# Get specific conversation
DELETE /api/conversations/{id}
# Delete conversation
POST /api/templates
# Create message template

POST /api/search
# Global search across all content
GET /api/search/history/{user_id}
# Get search history
GET /api/analytics
# System-wide analytics
POST /api/analytics/log
# Log custom analytics event
GET /api/export/{user_id}
# Export all user data

GET /api/guardrails/status
# AI guardrails status
GET /api/status
# System health check
GET /api/features
# Available features status
GET /api/system/stats
# System statistics
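Since GET /api/status is the health-check endpoint, a deployment script might poll it until the app is ready. A small sketch; only the 200 status code is assumed, not the response body shape.

```python
import time
import urllib.request

# Readiness poll against GET /api/status (endpoint from the list above).
def wait_until_healthy(base_url: str = "http://localhost:5002",
                       attempts: int = 10, delay: float = 1.0) -> bool:
    url = f"{base_url}/api/status"
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # connection refused / timeout: server not up yet
        time.sleep(delay)
    return False
```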
# Run comprehensive test suite
cd backend
python test_features.py
# Test specific components
python test_lora_system.py
python test_modular.py

✅ Database operations
# Test all backend features
cd backend
python -c "import app; print('✅ App imports successfully')"
# Test database system
python -c "from database import initialize_database; initialize_database()"
# Test RAG system
python -c "from models.rag_system import get_rag_system"

✅ Flask application startup
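The one-liner import checks above can be generalized into a small dependency smoke test. The module names below are common guesses for this stack; adjust them to the project's actual requirements.txt.

```python
import importlib.util

# Report which of the expected modules are importable in this environment.
def check_deps(modules=("flask", "dotenv", "requests", "groq")) -> dict:
    return {name: importlib.util.find_spec(name) is not None
            for name in modules}

if __name__ == "__main__":
    for name, ok in check_deps().items():
        print(("✅" if ok else "❌"), name)
```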
# Test frontend independently
./run-frontend.sh
# Test PWA functionality
# Open browser dev tools > Application > Service Workers
# Test responsive design
# Resize browser window or use device emulation

✅ PWA installation
Old Structure:
nexusai/
├── app.py
├── rag_system.py
├── static/
└── index.html
New Structure:
nexusai/
├── backend/
│ ├── app.py
│ └── models/
└── frontend/
├── index.html
└── static/
✅ Automatic Migration

🔄 Updated Commands

# Old way
python app.py

# New way
./run-local.sh

📋 Benefits
# 1. Fork & Clone
git clone https://github.com/yourusername/nexusai
cd nexusai
# 2. Create Feature Branch
git checkout -b feature/amazing-feature
# 3. Make Changes
# Backend: backend/
# Frontend: frontend/
# Docs: docs/
# 4. Test Changes
./test-structure.sh
cd backend && python -m pytest
# 5. Submit PR
git push origin feature/amazing-feature
🎯 Areas We Need Help
✅ Code Standards
🏆 Recognition
Get running immediately
💻 Software:
💾 Hardware:
🔑 Required:
⏱️ Setup Time: 2 minutes

Complete feature set
💻 Software:
💾 Hardware:
🔑 Optional:
⏱️ Setup Time: 5 minutes

Enterprise deployment
💻 Software:
💾 Hardware:
🔑 Required:
⏱️ Setup Time: 15 minutes

Scalable cloud hosting
☁️ Platforms:
💾 Resources:
🔑 Required:
⏱️ Setup Time: 3 minutes
| Platform | Supported Environments | Status |
|---|---|---|
| 🐧 Linux | Ubuntu 20.04+, CentOS 8+, Debian 11+ | ✅ Fully Supported |
| 🍎 macOS | macOS 11+, Intel & Apple Silicon (Homebrew recommended) | ✅ Fully Supported |
| 🪟 Windows | Windows 10+, WSL2 recommended, PowerShell/CMD | ✅ Fully Supported |
| 🐳 Docker | Any Docker host, Linux containers, Kubernetes ready | ✅ Recommended |
| ☁️ Cloud | Heroku, Railway, AWS, GCP, Azure, serverless ready | ✅ Production Ready |
| Browser | Support |
|---|---|
| Chrome 90+ | ✅ Full PWA Support |
| Firefox 88+ | ✅ Full Support |
| Safari 14+ | ✅ Full Support |
| Edge 90+ | ✅ Full Support |
| Mobile Safari | ✅ PWA Install |
| Mobile Chrome | ✅ PWA Install |
🐍 Core Framework
🤖 AI/ML Stack
🛡️ Security & Monitoring
📊 Data Processing
🌐 Core Web Technologies
🎨 UI/UX Framework
📱 Progressive Web App
⚡ Performance & Optimization
- 🔄 MVC Pattern: clean separation of Model, View, Controller
- 🧩 Modular Design: loosely coupled, highly cohesive modules
- 🔌 RESTful API: standard HTTP methods & status codes
- 📱 PWA Architecture: app-like experience with web technologies

Quick solutions for common issues:
🐍 Python Import Errors

# Verify Python version
python --version # Should be 3.8+

# Check virtual environment
source venv/bin/activate
pip list | grep Flask

# Test core imports
cd backend && python -c "import app; print('✅ Success')"

🔑 API Key Configuration

# Check .env file exists
ls -la .env

# Verify API key format
cat .env | grep GROQ_API_KEY
# Should show: GROQ_API_KEY=gsk_...

# Test API key validity
curl -H "Authorization: Bearer $GROQ_API_KEY" \
  https://api.groq.com/openai/v1/models

🔌 Port & Network Issues

# Check if port is in use
lsof -i :5002

# Use a different port
echo "PORT=5003" >> .env
./run-local.sh

# Check firewall settings
# Ensure port 5002 is open for local development

📦 Dependency Problems

# Clean install
rm -rf venv
python -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt

# Install minimal dependencies only
pip install Flask Werkzeug python-dotenv requests flask-cors groq
⚡ Development Speed

# Frontend-only development
./run-frontend.sh # Faster UI iteration

# Skip ML dependencies for UI work
pip install Flask Werkzeug python-dotenv requests flask-cors groq

# Use hot reload
export FLASK_DEBUG=True
./run-local.sh

🧠 AI Model Performance

# Cache models locally
export TRANSFORMERS_CACHE=./backend/data/models_cache
export HF_HOME=./backend/data/models_cache

# Use faster models for development
# Prefer llama-3.1-8b-instant over larger models

# Optimize token limits
# Set MAX_TOKENS_DEFAULT=256 for faster responses

🐳 Docker Optimization

# Use .dockerignore (printf, not echo, so the \n escapes expand)
printf "venv/\n*.pyc\n__pycache__/\n.git/\n" > .dockerignore

# Multi-stage builds
docker build --target production .

# Optimize layer caching
# Put the requirements.txt COPY before the code COPY

💾 Database Performance

# SQLite optimization
echo "PRAGMA journal_mode=WAL;" | sqlite3 nexusai.db

# Regular cleanup
python -c "from database import get_database; db = get_database(); print('DB size:', db.get_database_stats())"

# Backup before major changes
cp nexusai.db nexusai.db.backup
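The WAL pragma above can also be applied from Python rather than the sqlite3 CLI:

```python
import sqlite3

# Switch a SQLite database to write-ahead logging for better
# concurrent read/write performance.
def enable_wal(db_path: str = "nexusai.db") -> str:
    conn = sqlite3.connect(db_path)
    try:
        mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
    finally:
        conn.close()
    return mode  # "wal" on success
```

Run it once after creating the database; WAL mode persists in the database file.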
| Issue Category | Diagnostic Command | Solution Guide |
|---|---|---|
| 🚀 Setup & Installation | python backend/test_features.py | Installation Guide |
| 🤖 AI Provider Issues | curl -H "Authorization: Bearer $API_KEY" https://api.groq.com/openai/v1/models | Multi-Provider Setup |
| 🧠 RAG System Problems | python -c "from models.rag_system import get_rag_system; print('RAG OK')" | Knowledge Base Guide |
| 🔧 LoRA Training Issues | python -c "from models.lora_system import get_lora_system; print('LoRA OK')" | LoRA System Guide |
| 🛡️ Security & Guardrails | curl http://localhost:5002/api/guardrails/status | Security Documentation |
| 🐳 Docker Deployment | docker-compose logs -f app | Code Documentation |
| 📱 PWA & Frontend | Browser DevTools > Application > Service Workers | Frontend Guide |
| 🐛 Bug Reports | Create detailed issue with logs | GitHub Issues |
| 💡 Feature Requests | Start community discussion | GitHub Discussions |
|
📚 Documentation Comprehensive guides in docs/ folder
|
🧪 Self-Diagnosis Run python backend/test_features.py
|
🐛 Bug Reports GitHub Issues with detailed logs |
💬 Community GitHub Discussions for questions |
|
- ⚡ Groq: 500+ tokens/second
- 🤖 OpenAI: GPT-4 & GPT-3.5
- 🧠 Anthropic: Claude 3.5 Sonnet
- 🌐 Flask Ecosystem: lightweight & flexible
- 🎨 Modern Web Standards: Service Workers

🐍 Python Ecosystem
🌐 Web Technologies
🔧 Development Tools
- 🤖 AI Research Community: for advancing open AI models and safety research
- 🔬 Hugging Face: for democratizing AI and providing model infrastructure
- 🌍 Open Source Contributors: for building the tools and libraries we depend on
- 👥 Developer Community: for feedback, testing, and continuous improvement
MIT License - see LICENSE file for details
🚀 Ready to deploy enterprise-grade AI?
1️⃣ Quick Deploy: git clone && ./run-local.sh (2 minutes to running)
2️⃣ Configure Providers: add OpenAI and Anthropic keys (multi-provider power)
3️⃣ Upload Knowledge: add documents to the RAG system (custom knowledge base)
4️⃣ Fine-tune Models: create custom LoRA adapters (personalized AI)