anuu1989/nexus-ai

    ╭─────────────────────────────────────────────────────────────╮
    │                                                             │
    │  ███╗   ██╗███████╗██╗  ██╗██╗   ██╗███████╗  █████╗ ██╗   │
    │  ████╗  ██║██╔════╝╚██╗██╔╝██║   ██║██╔════╝ ██╔══██╗██║   │
    │  ██╔██╗ ██║█████╗   ╚███╔╝ ██║   ██║███████╗ ███████║██║   │
    │  ██║╚██╗██║██╔══╝   ██╔██╗ ██║   ██║╚════██║ ██╔══██║██║   │
    │  ██║ ╚████║███████╗██╔╝ ██╗╚██████╔╝███████║ ██║  ██║██║   │
    │  ╚═╝  ╚═══╝╚══════╝╚═╝  ╚═╝ ╚═════╝ ╚══════╝ ╚═╝  ╚═╝╚═╝   │
    │                                                             │
    │     ◉ ◉ ◉     🧠 Neural Network Intelligence     ◉ ◉ ◉     │
    │      ╲ ╱                                           ╲ ╱      │
    │       ◉          🔗 Connected AI Ecosystem          ◉       │
    │      ╱ ╲                                           ╱ ╲      │
    │     ◉ ◉ ◉     ⚡ Lightning Fast Responses      ◉ ◉ ◉     │
    │                                                             │
    ╰─────────────────────────────────────────────────────────────╯

🧠 NexusAI - Enterprise-Grade AI Platform

🚀 Production-ready AI chat platform with multi-provider support, advanced RAG, LoRA fine-tuning, and enterprise security



🌟 Enterprise AI Platform Built for Scale

NexusAI is a production-ready AI platform featuring multi-provider LLM support, advanced RAG with vector search, LoRA fine-tuning capabilities, comprehensive AI safety guardrails, and a modular architecture designed for enterprise deployment.

🎯 Current Version: 2.0 - Production Ready

Latest Updates:

  • Multi-Provider LLM Support - Groq, OpenAI, Anthropic, Ollama integration
  • Advanced RAG System - Vector search, document processing, knowledge base management
  • LoRA Fine-Tuning - Custom model adaptation with hyperparameter optimization
  • AI Safety Guardrails - Content filtering, PII detection, prompt injection prevention
  • Enterprise Database - SQLite with full user management and analytics
  • PWA Support - Installable web app with offline capabilities
  • Docker Production - Full containerization with monitoring stack
  • Modular Architecture - Scalable backend/frontend separation

🚀 Quick Start Guide

🔥 Lightning Setup

# 🎯 One-command setup
git clone <repository-url>
cd nexusai && ./run-local.sh

That's it! 🎉 NexusAI handles the rest automatically.

⚙️ Manual Setup

# 📦 Clone repository
git clone <repository-url>
cd nexusai

# 🔑 Configure API key
cp .env.example .env
echo "GROQ_API_KEY=<your-api-key>" >> .env

# 🚀 Launch application
./run-local.sh

🌐 Access Your AI Assistant

Open http://localhost:5002 in your browser to use NexusAI


🏗️ Architecture Overview

🏢 nexusai/                           # Enterprise AI Platform
├── 🔧 backend/                       # Python Flask Backend (Production Ready)
│   ├── 🚀 app.py                    # Main Flask application (1800+ lines)
│   ├── 🚀 main.py                   # Modular entry point
│   ├── 🗄️ database.py              # SQLite database system
│   ├── 🧠 models/                   # AI/ML Systems
│   │   ├── 📚 rag_system.py        # Advanced RAG with vector search
│   │   ├── 🔧 lora_system.py       # LoRA fine-tuning system
│   │   ├── 🤖 llm_providers.py     # Multi-provider LLM manager
│   │   └── 🛡️ simple_rag_system.py # Lightweight RAG fallback
│   ├── 🔌 api/                      # Modular API Routes
│   │   ├── chat_routes.py           # Chat & conversation endpoints
│   │   ├── rag_routes.py            # Knowledge base endpoints
│   │   └── lora_routes.py           # Model tuning endpoints
│   ├── 🛠️ modules/                  # Core Modules
│   │   ├── core/                    # Application core
│   │   ├── auth/                    # Authentication system
│   │   └── analytics/               # Usage analytics
│   ├── 🔧 utils/                    # Utility Functions
│   │   ├── helpers.py               # Common utilities
│   │   └── validators.py            # Input validation
│   ├── 📊 data/                     # Data Storage
│   │   ├── 🗄️ nexusai.db          # Main SQLite database
│   │   ├── 📁 rag_data/            # Knowledge base documents
│   │   ├── 🎯 lora_data/           # Training datasets
│   │   └── 📤 uploads/             # File uploads
│   └── 🧪 test_*.py                # Comprehensive test suite
├── 🎨 frontend/                     # Modern Progressive Web App
│   ├── 🌐 index.html               # Main application interface
│   ├── 📱 public/                  # PWA Configuration
│   │   ├── manifest.json            # App manifest
│   │   └── sw.js                    # Service worker
│   ├── 💎 static/                  # Static Assets
│   │   ├── 🎨 css/                 # Responsive stylesheets
│   │   │   ├── clean-ui.css        # Modern UI components
│   │   │   ├── nexusai-theme.css   # Theme system
│   │   │   └── new-features.css    # Feature-specific styles
│   │   └── ⚡ js/                  # Interactive JavaScript
│   │       ├── components/          # UI components
│   │       ├── services/            # API services
│   │       ├── modules/             # Feature modules
│   │       └── utils/               # Utility functions
│   └── 🧩 js/                      # Modular JavaScript Architecture
│       ├── app.js                   # Main application
│       ├── nexusai-modular.js      # Modular system
│       └── components/              # Reusable components
├── 🐳 Docker & Deployment          # Production Infrastructure
│   ├── Dockerfile                   # Multi-stage production build
│   ├── docker-compose.yml          # Full stack with monitoring
│   ├── nginx/                       # Reverse proxy configuration
│   └── monitoring/                  # Prometheus & Grafana setup
├── 📚 docs/                        # Comprehensive Documentation
│   ├── INSTALLATION_GUIDE.md       # Setup instructions
│   ├── CODE_DOCUMENTATION.md       # Developer reference
│   ├── AI_GUARDRAILS_DOCUMENTATION.md # Security guide
│   ├── ENHANCED_KNOWLEDGE_BASE.md  # RAG system guide
│   ├── ENHANCED_LORA_SYSTEM.md     # LoRA tuning guide
│   └── MULTI_PROVIDER_SETUP.md     # LLM provider setup
├── 🚀 Scripts & Automation         # Development & Deployment
│   ├── run-local.sh                # Local development server
│   ├── run-frontend.sh             # Frontend-only development
│   ├── start.sh                    # Production startup
│   └── manage-production.sh        # Production management
├── ⚙️ Configuration                # Environment & Settings
│   ├── requirements.txt            # Python dependencies (flexible)
│   ├── .env.example               # Environment template
│   ├── app.json                   # Heroku deployment
│   ├── Procfile                   # Process configuration
│   └── railway.toml               # Railway deployment
└── 🧪 Quality Assurance           # Testing & Validation
    ├── backend/test_*.py          # Backend test suite
    ├── frontend/test-*.js         # Frontend tests
    └── scripts/                   # Automation scripts

🎯 Enterprise Features & Benefits

🤖 Multi-Provider AI Integration

  • Groq (Ultra-fast inference)
  • OpenAI (GPT-4, GPT-3.5)
  • Anthropic (Claude models)
  • Ollama (Local deployment)
  • Automatic failover & load balancing
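
Failover across providers works by trying each backend in priority order and falling through on errors. A minimal sketch (the provider names and call signatures here are illustrative stand-ins, not the actual `llm_providers.py` API):

```python
# Minimal provider-failover sketch.  The real manager lives in
# backend/models/llm_providers.py and its API may differ; the names here
# are illustrative stand-ins.

class ProviderError(Exception):
    """Raised when a provider cannot serve a request (outage, rate limit, ...)."""

def complete_with_failover(prompt, providers):
    """Try each (name, call) pair in priority order; return (name, reply) of the first success."""
    errors = {}
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderError as exc:
            errors[name] = str(exc)  # record the failure and fall through to the next provider
    raise RuntimeError(f"all providers failed: {errors}")
```

Load balancing extends the same idea: instead of a fixed priority order, the starting provider can be chosen round-robin or by current latency.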

🧠 Advanced RAG System

  • Vector database integration
  • Document chunking & embedding
  • Semantic search capabilities
  • Knowledge base management
  • Real-time document processing
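
Document chunking with overlap, as controlled by the `RAG_CHUNK_SIZE` and `RAG_CHUNK_OVERLAP` settings, can be sketched as a sliding window (the real `rag_system.py` may split on sentence or token boundaries instead):

```python
# Sliding-window chunker.  Each chunk shares `overlap` characters with its
# predecessor so that facts straddling a boundary still land in one chunk.

def chunk_text(text, chunk_size=1000, overlap=200):
    """Split text into fixed-size chunks with the given character overlap."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]
```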

🔧 LoRA Fine-Tuning Platform

  • Custom model adaptation
  • Hyperparameter optimization
  • Training progress monitoring
  • Performance analytics
  • Dataset management tools

🛡️ Enterprise Security

  • AI safety guardrails
  • Content filtering system
  • PII detection & protection
  • Prompt injection prevention
  • Comprehensive audit logging

📊 Production Infrastructure

  • SQLite database with full schema
  • User management & authentication
  • Conversation persistence
  • Analytics & monitoring
  • Docker containerization
  • Nginx reverse proxy setup

🎨 Modern User Experience

  • Progressive Web App (PWA)
  • Glass morphism design
  • Responsive mobile interface
  • Real-time model switching
  • Conversation management
  • Template system

🛠️ Development Modes

🚀 Full Stack

./run-local.sh

🌐 http://localhost:5002

✅ Complete development environment
✅ Backend + Frontend integrated
✅ Hot reload enabled
✅ Debug mode active

🎨 Frontend Only

./run-frontend.sh

🌐 http://localhost:8000

✅ Static file server
✅ UI/UX development
✅ No backend dependencies
✅ Fast iteration cycles

🔧 Backend Only

cd backend
python app.py

🌐 http://localhost:5002

✅ API development
✅ Database operations
✅ ML model testing
✅ Direct Flask access

Core Platform Features

🤖 Multi-Provider AI Engine

┌─────────────────────────────────────┐
│  🚀 Groq Integration (Primary)      │
├─────────────────────────────────────┤
│  ⚡ Llama 3.1/3.2 models           │
│  🔥 Mixtral & Gemma support        │
│  🖼️ Vision model capabilities       │
│  📊 Real-time model switching       │
│  🎯 Intelligent model selection     │
└─────────────────────────────────────┘
┌─────────────────────────────────────┐
│  🤖 OpenAI & Anthropic Support     │
├─────────────────────────────────────┤
│  🧠 GPT-4 & GPT-3.5 integration    │
│  🎭 Claude model support           │
│  🔄 Automatic failover system      │
│  ⚖️ Load balancing across providers │
│  📈 Usage analytics per provider    │
└─────────────────────────────────────┘

🧠 Advanced RAG System

┌─────────────────────────────────────┐
│  📚 Enterprise Knowledge Base       │
├─────────────────────────────────────┤
│  📄 Multi-format document support   │
│  🔍 Vector similarity search        │
│  🧩 Intelligent text chunking       │
│  📊 Relevance scoring & ranking     │
│  💾 Persistent document storage     │
│  🔧 Real-time knowledge updates     │
└─────────────────────────────────────┘
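
At the core of vector search is a similarity score between the query embedding and each document embedding. A pure-Python cosine-similarity sketch; a production system would use FAISS or a vector database for this at scale:

```python
import math

# Cosine similarity: the angle-based score behind vector search.
# Pure-Python sketch for illustration only.

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, doc_vecs, k=3):
    """Rank document vectors by similarity to the query; return the k best doc ids."""
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]
```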

🔧 LoRA Fine-Tuning Platform

┌─────────────────────────────────────┐
│  🎯 Custom Model Adaptation         │
├─────────────────────────────────────┤
│  🔬 Low-rank adaptation training    │
│  ⚙️ Hyperparameter optimization     │
│  📈 Training progress monitoring    │
│  💾 Dataset management tools        │
│  📊 Performance analytics           │
│  🎛️ A/B testing capabilities        │
└─────────────────────────────────────┘
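
The LoRA idea itself fits in a few lines: rather than updating a full weight matrix W, training learns two small matrices A and B and applies W' = W + (alpha / r) * B @ A, so only (in + out) * r parameters are trained instead of in * out. A plain-Python sketch of the update (illustrative math, not the platform's training code):

```python
# Low-rank adaptation in miniature: the adapted weight is the frozen base
# weight plus a scaled product of two small trainable matrices.

def matmul(X, Y):
    """Naive matrix product for nested lists."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def apply_lora(W, A, B, alpha=32, r=16):
    """Return W + (alpha / r) * (B @ A), the LoRA-adapted weight matrix.

    A has shape (r, in_dim) and B has shape (out_dim, r), so the update
    touches far fewer trainable parameters than W itself.
    """
    scale = alpha / r
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]
```

The `alpha` and `r` arguments mirror the `LORA_ALPHA_DEFAULT` and `LORA_RANK_DEFAULT` settings in the configuration guide.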

🛡️ AI Safety & Security

┌─────────────────────────────────────┐
│  🛡️ Multi-Layer Protection         │
├─────────────────────────────────────┤
│  🔒 Content safety filtering        │
│  🕵️ PII detection & redaction       │
│  🚫 Prompt injection prevention     │
│  📊 Real-time threat monitoring     │
│  📋 Compliance audit trails         │
│  ⚠️ Automated alert system          │
└─────────────────────────────────────┘
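
A guardrail layer's PII redaction step can be sketched with a few regular expressions. The patterns below are deliberately minimal examples; a production detector covers many more formats (names, addresses, IBANs, and so on):

```python
import re

# Minimal PII-redaction pass of the kind a guardrail layer runs before text
# reaches a model or a log.  Example patterns only, not an exhaustive detector.

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text):
    """Replace each detected PII span with a [LABEL] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```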

📊 Enterprise Database

┌─────────────────────────────────────┐
│  🗄️ Production Data Management      │
├─────────────────────────────────────┤
│  👥 User profiles & authentication  │
│  💬 Conversation persistence        │
│  📄 Document storage & indexing     │
│  📈 Analytics & usage tracking      │
│  🔍 Global search capabilities      │
│  📤 Data export & backup tools      │
└─────────────────────────────────────┘

🎨 Progressive Web Application

💎 Glass Morphism UI
Modern translucent design with blur effects
📱 PWA Installation
Install as native app on any device
🌓 Theme System
Dark/light modes with custom themes
📐 Responsive Design
Optimized for mobile, tablet, desktop
⚡ Real-time Updates
Live model switching & status indicators

🔧 Developer Experience

🚀 One-Command Setup
./run-local.sh
🔄 Hot Reload
Instant development feedback
🧩 Modular Architecture
Clean separation of concerns
📚 Comprehensive Docs
Detailed guides & API reference

📦 Deployment Options

Choose the deployment method that fits your needs

🏃‍♂️ Quick Start

Get running in 2 minutes

git clone <repository-url>
cd nexusai
./run-local.sh

📊 Requirements:

  • 💾 2GB RAM minimum
  • 💿 1GB Storage
  • 🔑 Groq API key (free)
  • ⚡ Basic chat + RAG + LoRA

✅ Includes:

  • Multi-provider LLM support
  • Basic RAG capabilities
  • LoRA fine-tuning
  • AI safety guardrails
  • SQLite database
  • PWA interface

🧠 Full Development

Complete feature set

git clone <repository-url>
cd nexusai
# Configure .env with all API keys
./run-local.sh

📊 Requirements:

  • 💾 8GB RAM (16GB recommended)
  • 💿 10GB Storage
  • 🔑 Multiple API keys
  • 🚀 All features enabled

✅ Includes:

  • All AI providers (Groq, OpenAI, Anthropic)
  • Advanced RAG with vector search
  • Full LoRA training capabilities
  • Enterprise security features
  • Analytics & monitoring
  • Complete documentation

🏭 Production

Enterprise deployment

git clone <repository-url>
cd nexusai
# Configure production .env
docker-compose up --build

📊 Requirements:

  • 💾 4GB+ RAM per container
  • 💿 20GB+ Storage
  • 🔑 Production API keys
  • 🛡️ SSL certificates

✅ Includes:

  • Docker containerization
  • Nginx reverse proxy
  • PostgreSQL database
  • Redis caching
  • Prometheus monitoring
  • Grafana dashboards
  • Auto-scaling support

🎯 Feature Comparison Matrix

🤖 AI Providers

  • Groq Integration (all tiers)
  • OpenAI Support (⚠️ Optional)
  • Anthropic Support (⚠️ Optional)
  • Ollama Local (⚠️ Optional)

🧠 RAG System

  • Document Upload
  • Vector Search: ✅ Simplified (Quick Start), ✅ Advanced (Full Development), ✅ Enterprise (Production)
  • Knowledge Base

🔧 LoRA System

  • Model Fine-tuning: ✅ Basic (Quick Start), ✅ Advanced (Full Development), ✅ Enterprise (Production)
  • Hyperparameter Optimization
  • Training Analytics

🛡️ Security

  • AI Guardrails
  • Content Filtering
  • PII Detection
  • Audit Logging

📊 Infrastructure

  • SQLite Database
  • PostgreSQL (Production)
  • Redis Caching (Production)
  • Monitoring Stack (Production)

🎨 Interface

  • PWA Support
  • Mobile Responsive
  • Real-time Updates

⚙️ Deployment

  • Local Development
  • Docker Support
  • Kubernetes Ready (Production)
  • Auto-scaling (Production)

🔧 Configuration Guide

🔑 Environment Configuration

# ===== REQUIRED CONFIGURATION =====
GROQ_API_KEY=<your-groq-api-key>              # Primary AI provider (required)
SECRET_KEY=<your-secret-key>                   # Flask session security

# ===== MULTI-PROVIDER SUPPORT =====
OPENAI_API_KEY=<your-openai-api-key>          # Optional: GPT models
ANTHROPIC_API_KEY=<your-anthropic-api-key>    # Optional: Claude models
OLLAMA_BASE_URL=http://localhost:11434        # Optional: Local Ollama

# ===== APPLICATION SETTINGS =====
FLASK_DEBUG=True                               # Development mode
PORT=5002                                      # Server port
FLASK_ENV=development                          # Environment

# ===== AI/ML CONFIGURATION =====
TRANSFORMERS_CACHE=./backend/data/models_cache # Model cache directory
HF_HOME=./backend/data/models_cache            # Hugging Face cache
MAX_TOKENS_DEFAULT=512                         # Default response length
TEMPERATURE_DEFAULT=0.7                        # Default creativity level

# ===== RAG SYSTEM SETTINGS =====
RAG_CHUNK_SIZE=1000                           # Document chunk size
RAG_CHUNK_OVERLAP=200                         # Chunk overlap
RAG_MAX_RESULTS=10                            # Max search results
VECTOR_DB_PATH=./backend/data/vector_db       # Vector database path

# ===== LORA TRAINING SETTINGS =====
LORA_RANK_DEFAULT=16                          # Default LoRA rank
LORA_ALPHA_DEFAULT=32                         # Default LoRA alpha
LORA_DROPOUT_DEFAULT=0.1                      # Default dropout rate
TRAINING_DATA_PATH=./backend/lora_data        # Training data directory

# ===== SECURITY & SAFETY =====
ENABLE_GUARDRAILS=True                        # AI safety guardrails
ENABLE_PII_DETECTION=True                     # PII detection
ENABLE_CONTENT_FILTER=True                    # Content filtering
MAX_UPLOAD_SIZE=50MB                          # File upload limit

# ===== DATABASE CONFIGURATION =====
DATABASE_URL=sqlite:///nexusai.db             # SQLite (default)
# DATABASE_URL=postgresql://user:pass@host:port/db  # PostgreSQL (production)

# ===== PRODUCTION SETTINGS =====
SENTRY_DSN=<your-sentry-dsn>                 # Error monitoring
RATE_LIMIT_PER_MINUTE=100                     # API rate limiting
ENABLE_ANALYTICS=True                         # Usage analytics
LOG_LEVEL=INFO                                # Logging level

# ===== REDIS CONFIGURATION (Production) =====
REDIS_URL=redis://localhost:6379/0           # Redis cache
ENABLE_CACHING=False                          # Enable Redis caching

# ===== MONITORING (Production) =====
PROMETHEUS_ENABLED=False                      # Prometheus metrics
GRAFANA_ENABLED=False                         # Grafana dashboards
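
For reference, "configure .env" just means turning KEY=VALUE lines into environment variables; a Flask app typically does this with the python-dotenv package. A minimal stdlib-only parser showing the mechanics (no quoting, export, or multiline support):

```python
import os

# What loading a .env file amounts to: each KEY=VALUE line becomes an
# environment variable.  Tiny illustrative parser; real apps should use
# the python-dotenv package instead.

def load_env(path=".env", environ=os.environ):
    """Parse KEY=VALUE lines from `path` into `environ`; return what was loaded."""
    loaded = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
    environ.update(loaded)
    return loaded
```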

🔗 AI Provider Setup Guide

🚀 Groq (Primary - Required)

  • Get API Key: console.groq.com
  • Models Available: Llama 3.1/3.2, Mixtral 8x7B, Gemma 2
  • Features: Ultra-fast inference, vision models, free tier
  • Speed: 500+ tokens/second ⚡
  • Cost: Free tier available, very affordable

🤖 OpenAI (Optional)

  • Get API Key: platform.openai.com
  • Models Available: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
  • Features: Advanced reasoning, function calling, vision
  • Best For: Complex reasoning, creative tasks
  • Cost: Pay-per-use, higher cost but high quality

🧠 Anthropic (Optional)

  • Get API Key: console.anthropic.com
  • Models Available: Claude 3.5 Sonnet, Claude 3 Haiku
  • Features: Built-in safety, long context, analysis
  • Best For: Safe AI, document analysis, coding
  • Cost: Competitive pricing, excellent safety

🏠 Ollama (Optional - Local)

  • Setup: ollama.ai - Install locally
  • Models Available: Llama 2/3, Mistral, CodeLlama, many others
  • Features: Complete privacy, no API costs, offline
  • Best For: Privacy-sensitive use cases, offline deployment
  • Cost: Free (uses your hardware)

🔧 Setup Priority:

  1. Start with Groq (required, free, fast)
  2. Add OpenAI for advanced reasoning
  3. Add Anthropic for safety-critical applications
  4. Add Ollama for privacy/offline needs

🐳 Docker Deployment

Containerized deployment for any environment

🔧 Development Mode

# 🚀 Quick start with Docker
docker-compose up --build

# 🔍 View logs
docker-compose logs -f

# 🛑 Stop services
docker-compose down

🌐 Access: http://localhost:5002
📊 Monitoring: http://localhost:3000
🔍 Metrics: http://localhost:9090

🏭 Production Mode

# 🚀 Production deployment
docker-compose -f docker-compose.prod.yml up -d

# 📊 Health check
docker-compose ps

# 📈 Scale services
docker-compose up --scale app=3

🌐 Access: https://your-domain.com
🛡️ SSL: Automatic certificates
📊 Monitoring: Full observability stack

🏗️ Container Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   🌐 Nginx      │    │   🤖 NexusAI    │    │   🗄️ Database   │
│   Reverse Proxy │◄──►│   Application   │◄──►│   PostgreSQL    │
│   Load Balancer │    │   Flask + ML    │    │   + Redis Cache │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  📊 Monitoring  │    │  🔍 Logging     │    │  🛡️ Security    │
│  Prometheus     │    │  Centralized    │    │  SSL + Auth     │
│  + Grafana      │    │  ELK Stack      │    │  Rate Limiting  │
└─────────────────┘    └─────────────────┘    └─────────────────┘

📖 Documentation Hub

Comprehensive guides for every aspect of NexusAI

🚀 Getting Started

📋 Installation Guide
Step-by-step setup instructions

🏗️ Project Structure
Architecture deep dive

⚙️ Configuration Guide
Customization options

🧠 Advanced Features

🤖 RAG/LoRA Guide
AI/ML capabilities

🔌 API Documentation
Backend API reference

🛡️ Security Guide
Safety & compliance

📚 Documentation Quick Reference

| Category | Document | Description |
| --- | --- | --- |
| 🚀 Getting Started | Setup Guide (INSTALLATION_GUIDE.md) | Complete setup instructions |
| 🚀 Getting Started | Multi-Provider Setup (MULTI_PROVIDER_SETUP.md) | Configure all AI providers |
| 🏗️ Architecture | Code Documentation (CODE_DOCUMENTATION.md) | Developer reference & API docs |
| 🏗️ Architecture | Frontend Modularization (FRONTEND_MODULARIZATION_SUMMARY.md) | UI architecture guide |
| 🧠 AI Features | Knowledge Base (ENHANCED_KNOWLEDGE_BASE.md) | Advanced RAG system guide |
| 🧠 AI Features | LoRA Fine-tuning (ENHANCED_LORA_SYSTEM.md) | Model customization guide |
| 🛡️ Security | AI Guardrails (AI_GUARDRAILS_DOCUMENTATION.md) | Safety & security features |
| 🛡️ Security | Security Best Practices (SECURITY.md) | Production security guide |
| ⚙️ Development | Pre-commit Setup (PRE_COMMIT_SETUP.md) | Development workflow |

🔌 API Reference

Comprehensive REST API for all platform features

🤖 AI & Chat Endpoints

POST /api/chat
# Multi-provider chat completion
# Supports Groq, OpenAI, Anthropic, Ollama

GET /api/models
# List available models from all providers

GET /api/providers
# Get provider status and capabilities

POST /api/models/compare
# Compare responses across multiple models

POST /api/models/recommend
# Get AI-recommended model for input
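
Calling the chat endpoint from Python might look like the following. The request fields (`message`, `model`, `provider`) are assumptions for illustration; check `backend/api/chat_routes.py` for the actual schema:

```python
import json
import urllib.request

# Sketch of a POST /api/chat call using only the stdlib.  The payload field
# names are assumptions; consult the backend's chat_routes.py for the real ones.

def build_chat_request(message, model="llama-3.1-8b-instant", provider="groq",
                       base_url="http://localhost:5002"):
    """Build (but do not send) a JSON POST request to the chat endpoint."""
    payload = json.dumps({"message": message, "model": model,
                          "provider": provider}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/chat", data=payload,
        headers={"Content-Type": "application/json"}, method="POST")

# To send it (with the server running):
#   with urllib.request.urlopen(build_chat_request("Hello!")) as resp:
#       print(json.load(resp))
```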

🧠 RAG System Endpoints

POST /api/rag/upload
# Upload documents to knowledge base

POST /api/rag/search
# Search knowledge base with vector similarity

GET /api/rag/summary
# Get knowledge base statistics

GET /api/rag/analyze
# Analyze knowledge base performance

DELETE /api/rag/documents/{id}
# Remove documents from knowledge base

🔧 LoRA Fine-tuning Endpoints

GET /api/lora/adapters
# List all LoRA adapters

POST /api/lora/create
# Create new LoRA adapter

POST /api/lora/train
# Start training process

GET /api/lora/analyze
# Performance analysis

POST /api/lora/optimize
# Hyperparameter optimization

👥 User Management Endpoints

POST /api/users
# Create or update user profile

GET /api/users/{user_id}
# Get user profile

PUT /api/users/{user_id}
# Update user profile

GET /api/users/{user_id}/analytics
# Get user analytics and usage stats

💬 Conversation Management

GET /api/conversations
# List user conversations

POST /api/conversations
# Save conversation

GET /api/conversations/{id}
# Get specific conversation

DELETE /api/conversations/{id}
# Delete conversation

POST /api/templates
# Create message template

🔍 Search & Analytics

POST /api/search
# Global search across all content

GET /api/search/history/{user_id}
# Get search history

GET /api/analytics
# System-wide analytics

POST /api/analytics/log
# Log custom analytics event

GET /api/export/{user_id}
# Export all user data

🛡️ Security & Monitoring

GET /api/guardrails/status
# AI guardrails status

GET /api/status
# System health check

GET /api/features
# Available features status

GET /api/system/stats
# System statistics

🧪 Testing & Quality Assurance

🔍 Automated Testing

# Run comprehensive test suite
cd backend
python test_features.py

# Test specific components
python test_lora_system.py
python test_modular.py

✅ Database operations
✅ API endpoint validation
✅ ML model integration
✅ Security guardrails
✅ Multi-provider LLM support

🐍 Backend Validation

# Test all backend features
cd backend
python -c "import app; print('✅ App imports successfully')"

# Test database system
python -c "from database import initialize_database; initialize_database()"

# Test RAG system
python -c "from models.rag_system import get_rag_system"

✅ Flask application startup
✅ Database schema creation
✅ RAG system initialization
✅ LoRA system validation
✅ API route registration

🎨 Frontend Testing

# Test frontend independently
./run-frontend.sh

# Test PWA functionality
# Open browser dev tools > Application > Service Workers

# Test responsive design
# Resize browser window or use device emulation

✅ PWA installation
✅ Service worker registration
✅ Responsive design validation
✅ Cross-browser compatibility
✅ Real-time UI updates

🔄 Migration Guide

Seamless upgrade from legacy structure

📦 What Changed

Old Structure:
nexusai/
├── app.py
├── rag_system.py
├── static/
└── index.html

New Structure:
nexusai/
├── backend/
│   ├── app.py
│   └── models/
└── frontend/
    ├── index.html
    └── static/

Migration Steps

✅ Automatic Migration

  • Files moved to organized folders
  • Import paths updated automatically
  • All functionality preserved

🔄 Updated Commands

# Old way
python app.py

# New way
./run-local.sh

📋 Benefits

  • Better organization
  • Team-friendly structure
  • Production-ready architecture

🤝 Contributing to NexusAI

Join our community of AI enthusiasts and developers

🚀 Getting Started

# 1. Fork & Clone
git clone https://github.com/yourusername/nexusai
cd nexusai

# 2. Create Feature Branch
git checkout -b feature/amazing-feature

# 3. Make Changes
# Backend: backend/
# Frontend: frontend/
# Docs: docs/

# 4. Test Changes
./test-structure.sh
cd backend && python -m pytest

# 5. Submit PR
git push origin feature/amazing-feature

📋 Contribution Guidelines

🎯 Areas We Need Help

  • 🤖 New AI model integrations
  • 🎨 UI/UX improvements
  • 📚 Documentation enhancements
  • 🧪 Test coverage expansion
  • 🌍 Internationalization

✅ Code Standards

  • Follow existing code style
  • Add tests for new features
  • Update documentation
  • Use meaningful commit messages

🏆 Recognition

  • Contributors listed in README
  • Special badges for major contributions
  • Early access to new features

📋 System Requirements & Compatibility

🏃‍♂️ Quick Start

Get running immediately

💻 Software:

  • Python 3.8+ (3.11 recommended)
  • pip package manager
  • Git

💾 Hardware:

  • 2GB RAM minimum
  • 1GB free storage
  • Any modern OS

🔑 Required:

  • Groq API key (free)
  • Internet connection

⏱️ Setup Time: 2 minutes

🧠 Full Development

Complete feature set

💻 Software:

  • Python 3.8+ with venv
  • Node.js (for frontend tools)
  • Git & curl

💾 Hardware:

  • 8GB RAM (16GB recommended)
  • 10GB free storage
  • Multi-core CPU recommended

🔑 Optional:

  • OpenAI API key
  • Anthropic API key
  • Ollama installation

⏱️ Setup Time: 5 minutes

🏭 Production

Enterprise deployment

💻 Software:

  • Docker & Docker Compose
  • Nginx or load balancer
  • PostgreSQL (optional)
  • Redis (optional)

💾 Hardware:

  • 4GB+ RAM per container
  • 20GB+ storage
  • SSD recommended
  • Load balancer capable

🔑 Required:

  • Production API keys
  • SSL certificates
  • Monitoring setup

⏱️ Setup Time: 15 minutes

☁️ Cloud Deployment

Scalable cloud hosting

☁️ Platforms:

  • Heroku (1-click deploy)
  • Railway (Git-based)
  • Render (auto-deploy)
  • AWS/GCP/Azure

💾 Resources:

  • 512MB+ RAM (Heroku)
  • 1GB+ storage
  • Auto-scaling capable

🔑 Required:

  • Cloud platform account
  • Environment variables
  • Domain (optional)

⏱️ Setup Time: 3 minutes

🖥️ Operating System Compatibility

  • 🐧 Linux: Ubuntu 20.04+, CentOS 8+, Debian 11+ (✅ Fully Supported)
  • 🍎 macOS: macOS 11+, Intel & Apple Silicon, Homebrew recommended (✅ Fully Supported)
  • 🪟 Windows: Windows 10+, WSL2 recommended, PowerShell/CMD (✅ Fully Supported)
  • 🐳 Docker: Any Docker host, Linux containers, Kubernetes ready (✅ Recommended)
  • ☁️ Cloud: Heroku, Railway, AWS, GCP, Azure, serverless ready (✅ Production Ready)

🌐 Browser Compatibility

  • Chrome 90+: ✅ Full PWA support
  • Firefox 88+: ✅ Full support
  • Safari 14+: ✅ Full support
  • Edge 90+: ✅ Full support
  • Mobile Safari: ✅ PWA install
  • Mobile Chrome: ✅ PWA install

🛠️ Technology Stack

Modern, production-ready technologies powering NexusAI

🔧 Backend Technologies

🐍 Core Framework

  • Flask 2.3+ - Lightweight, scalable web framework
  • Python 3.8+ - Modern Python with type hints
  • SQLite/PostgreSQL - Flexible database options
  • Redis - High-performance caching (production)

🤖 AI/ML Stack

  • Groq SDK - Ultra-fast LLM inference
  • OpenAI SDK - GPT model integration
  • Anthropic SDK - Claude model support
  • Transformers - Hugging Face model library
  • FAISS - Vector similarity search
  • PyTorch - Deep learning framework (optional)

🛡️ Security & Monitoring

  • Flask-CORS - Cross-origin resource sharing
  • Python-dotenv - Environment management
  • Werkzeug - WSGI utilities and security
  • Prometheus - Metrics collection (production)
  • Sentry - Error tracking (production)

📊 Data Processing

  • Pandas - Data manipulation (optional)
  • NumPy - Numerical computing (optional)
  • Scikit-learn - Machine learning utilities (optional)

🎨 Frontend Technologies

🌐 Core Web Technologies

  • HTML5 - Semantic markup with modern features
  • CSS3 - Advanced styling with Grid & Flexbox
  • JavaScript ES6+ - Modern JavaScript features
  • Web APIs - Service Workers, IndexedDB, Notifications

🎨 UI/UX Framework

  • Glass Morphism Design - Modern translucent UI
  • CSS Grid & Flexbox - Responsive layout system
  • Font Awesome 6 - Comprehensive icon library
  • Google Fonts (Inter) - Modern typography
  • CSS Custom Properties - Dynamic theming

📱 Progressive Web App

  • Service Workers - Offline functionality & caching
  • Web App Manifest - Native app-like experience
  • Push Notifications - Real-time updates
  • IndexedDB - Client-side data storage
  • Responsive Design - Mobile-first approach

⚡ Performance & Optimization

  • Modular JavaScript - Component-based architecture
  • Lazy Loading - Optimized resource loading
  • CSS Minification - Reduced bundle sizes
  • Image Optimization - WebP & SVG support

🏗️ Architecture Patterns

🔄 MVC Pattern
Clean separation of Model, View, Controller
🧩 Modular Design
Loosely coupled, highly cohesive modules
🔌 RESTful API
Standard HTTP methods & status codes
📱 PWA Architecture
App-like experience with web technologies

🆘 Troubleshooting Guide

Quick solutions for common issues

🔧 Common Setup Issues

🐍 Python Import Errors

# Verify Python version
python --version  # Should be 3.8+

# Check virtual environment
source venv/bin/activate
pip list | grep Flask

# Test core imports
cd backend && python -c "import app; print('✅ Success')"

🔑 API Key Configuration

# Check .env file exists
ls -la .env

# Verify API key format
cat .env | grep GROQ_API_KEY
# Should show: GROQ_API_KEY=gsk_...

# Test API key validity
curl -H "Authorization: Bearer $GROQ_API_KEY" \
  https://api.groq.com/openai/v1/models

🔌 Port & Network Issues

# Check if port is in use
lsof -i :5002

# Use different port
echo "PORT=5003" >> .env
./run-local.sh

# Check firewall settings
# Ensure port 5002 is open for local development

📦 Dependency Problems

# Clean install
rm -rf venv
python -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt

# Install minimal dependencies only
pip install Flask Werkzeug python-dotenv requests flask-cors groq

🚀 Performance Optimization

⚡ Development Speed

# Frontend-only development
./run-frontend.sh  # Faster UI iteration

# Skip ML dependencies for UI work
pip install Flask Werkzeug python-dotenv requests flask-cors groq

# Use hot reload
export FLASK_DEBUG=True
./run-local.sh

🧠 AI Model Performance

# Cache models locally
export TRANSFORMERS_CACHE=./backend/data/models_cache
export HF_HOME=./backend/data/models_cache

# Use faster models for development
# Prefer llama-3.1-8b-instant over larger models

# Optimize token limits
# Set MAX_TOKENS_DEFAULT=256 for faster responses

🐳 Docker Optimization

# Use .dockerignore
printf "venv/\n*.pyc\n__pycache__/\n.git/\n" > .dockerignore

# Multi-stage builds
docker build --target production .

# Optimize layer caching
# Put requirements.txt COPY before code COPY

💾 Database Performance

# SQLite optimization
echo "PRAGMA journal_mode=WAL;" | sqlite3 nexusai.db

# Regular cleanup
python -c "from database import get_database; db = get_database(); print('DB size:', db.get_database_stats())"

# Backup before major changes
cp nexusai.db nexusai.db.backup
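
The same WAL switch can be applied from Python with the stdlib `sqlite3` module, along with a quick size check:

```python
import sqlite3

# Enable write-ahead logging and report the database size, using only the
# stdlib sqlite3 module.  Defaults assume the local nexusai.db file.

def enable_wal(db_path="nexusai.db"):
    """Switch the database to WAL mode; return (journal_mode, size_in_bytes)."""
    conn = sqlite3.connect(db_path)
    try:
        mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
        size = (conn.execute("PRAGMA page_count;").fetchone()[0]
                * conn.execute("PRAGMA page_size;").fetchone()[0])
    finally:
        conn.close()
    return mode, size
```

WAL mode lets readers proceed while a write is in progress, which helps a chat app that logs conversations while serving searches.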

🔍 Advanced Troubleshooting

| Issue Category | Diagnostic Command | Solution Guide |
| --- | --- | --- |
| 🚀 Setup & Installation | python backend/test_features.py | Installation Guide |
| 🤖 AI Provider Issues | curl -H "Authorization: Bearer $API_KEY" https://api.groq.com/openai/v1/models | Multi-Provider Setup |
| 🧠 RAG System Problems | python -c "from models.rag_system import get_rag_system; print('RAG OK')" | Knowledge Base Guide |
| 🔧 LoRA Training Issues | python -c "from models.lora_system import get_lora_system; print('LoRA OK')" | LoRA System Guide |
| 🛡️ Security & Guardrails | curl http://localhost:5002/api/guardrails/status | Security Documentation |
| 🐳 Docker Deployment | docker-compose logs -f app | Code Documentation |
| 📱 PWA & Frontend | Browser DevTools > Application > Service Workers | Frontend Guide |
| 🐛 Bug Reports | Create a detailed issue with logs | GitHub Issues |
| 💡 Feature Requests | Start a community discussion | GitHub Discussions |

📞 Getting Help

📚 Documentation
Comprehensive guides in docs/ folder
🧪 Self-Diagnosis
Run python backend/test_features.py
🐛 Bug Reports
GitHub Issues with detailed logs
💬 Community
GitHub Discussions for questions

🏆 Technology Partners & Acknowledgments

Built with best-in-class technologies and community support

⚡ Groq
Ultra-Fast AI Inference

500+ tokens/second
Llama 3.1/3.2 models
Developer-friendly API
Affordable pricing

🤖 OpenAI
Advanced AI Models

GPT-4 & GPT-3.5
Function calling
Vision capabilities
Industry standard

🧠 Anthropic
Safe AI Systems

Claude 3.5 Sonnet
Built-in safety
Long context windows
Ethical AI approach

🌐 Flask Ecosystem
Python Web Framework

Lightweight & flexible
Extensive ecosystem
Production-ready
Microservices friendly

🎨 Modern Web Standards
Progressive Web Apps

Service Workers
Web App Manifest
Responsive design
Accessibility first

🛠️ Open Source Dependencies

🐍 Python Ecosystem

  • Flask - Web framework
  • SQLAlchemy - Database ORM
  • Transformers - ML models
  • FAISS - Vector search
  • PyTorch - Deep learning
  • Pandas - Data processing

🌐 Web Technologies

  • Font Awesome - Icon library
  • Google Fonts - Typography
  • CSS Grid & Flexbox - Layout
  • Service Workers - PWA functionality
  • IndexedDB - Client storage
  • Web APIs - Modern browser features

🔧 Development Tools

  • Docker - Containerization
  • Nginx - Reverse proxy
  • Prometheus - Monitoring
  • Grafana - Visualization
  • Git - Version control
  • GitHub - Code hosting

🌟 Special Recognition

🤖 AI Research Community
For advancing open AI models and safety research
🔬 Hugging Face
For democratizing AI and providing model infrastructure
🌍 Open Source Contributors
For building the tools and libraries we depend on
👥 Developer Community
For feedback, testing, and continuous improvement

📄 License

MIT License - see the LICENSE file for details



🚀 Ready to deploy enterprise-grade AI?


🎯 Next Steps

1️⃣ Quick Deploy: git clone <repository-url> && cd nexusai && ./run-local.sh (2 minutes to running)
2️⃣ Configure Providers: add OpenAI and Anthropic keys for multi-provider power
3️⃣ Upload Knowledge: add documents to the RAG system for a custom knowledge base
4️⃣ Fine-tune Models: create custom LoRA adapters for personalized AI


⭐ Star this repository if NexusAI powers your AI projects! ⭐

Built with ❤️ for the AI community • Production-ready • Enterprise-grade • Open Source
