CLAUDE.md

This file provides guidance to Claude (claude.ai) and other AI assistants when working with code in this repository.

Project Overview

AI LaunchKit is a Docker Compose-based toolkit that builds a complete self-hosted AI development and automation environment. It turns any Ubuntu server into an AI development platform with 20+ pre-configured services that can be selectively deployed with a single command.

Core Features

  • AI Development Tools: bolt.diy, OpenHands, OpenUI, ComfyUI, Dify
  • Automation Platform: n8n with 300+ pre-configured workflows, Flowise
  • LLM Infrastructure: Ollama, Open WebUI, Letta
  • Vector Databases: Qdrant, Supabase (with pgvector)
  • Monitoring & Observability: Langfuse, Grafana, Prometheus
  • Media Processing: ComfyUI (Stable Diffusion), Speech Stack (Whisper STT, OpenedAI TTS)
  • Development Tools: SearXNG, Crawl4ai, browserless (planned), LiveKit (planned)

Essential Commands

Installation and Updates

```bash
# Main installation (from project root)
sudo bash ./scripts/install.sh

# Update all services to latest versions
sudo bash ./scripts/update.sh

# Clean up unused Docker resources
sudo bash ./scripts/cleanup.sh
```

Docker Operations

```bash
# View running services
docker compose ps

# View logs for a specific service
docker compose logs [service-name]

# Restart services with specific profiles
docker compose --profile n8n --profile ai-dev up -d

# Stop all services
docker compose down
```

Python Helper Script

```bash
# Start services with automatic profile detection
python3 start_services.py
```

Service Management

```bash
# Apply configuration updates to running services
sudo bash ./scripts/apply_update.sh

# View all available Docker Compose profiles
grep -E '^\s*profiles:' docker-compose.yml

# Check service health status
docker compose ps --format "table {{.Name}}\t{{.Status}}\t{{.Ports}}"
```

Architecture

Core Installation Flow

The installation follows a strict 6-step sequence managed by scripts/install.sh:

  1. 01_system_preparation.sh - Updates system, installs dependencies, configures security
  2. 02_install_docker.sh - Installs Docker Engine and Docker Compose
  3. 03_generate_secrets.sh - Creates .env file with secure passwords and keys
  4. 04_wizard.sh - Interactive service selection with whiptail UI (20+ services)
  5. 05_run_services.sh - Deploys selected services using Docker Compose profiles
  6. 06_final_report.sh - Displays access URLs and credentials
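The sequence above can be sketched as a simple orchestration loop. This is a simplified illustration, not the actual contents of scripts/install.sh (the real script adds logging from utils.sh and richer error handling); the demo at the bottom runs stub scripts so the ordering is visible:

```bash
#!/usr/bin/env bash
# Sketch of the orchestration loop in scripts/install.sh (simplified).
set -euo pipefail

run_install_steps() {
  local dir="$1" step
  local steps=(
    01_system_preparation.sh
    02_install_docker.sh
    03_generate_secrets.sh
    04_wizard.sh
    05_run_services.sh
    06_final_report.sh
  )
  for step in "${steps[@]}"; do
    echo "==> ${step}"
    bash "${dir}/${step}"   # set -e aborts the sequence on the first failure
  done
}

# Demo with stub step scripts standing in for the real ones:
tmpdir=$(mktemp -d)
for s in 01_system_preparation 02_install_docker 03_generate_secrets \
         04_wizard 05_run_services 06_final_report; do
  printf 'echo "ran %s"\n' "$s" > "${tmpdir}/${s}.sh"
done
output=$(run_install_steps "$tmpdir")
echo "$output"
```

Each step runs only if the previous one succeeded, which is why the numbered order matters (see Critical Implementation Notes below).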

Docker Compose Architecture

  • Profiles: Services organized into logical groups (n8n, ai-dev, monitoring, langfuse, ollama, speech, etc.)
  • Environment: All configuration through .env file generated during installation
  • Volumes: Named volumes for data persistence across container rebuilds
  • Networks: All services on default Docker network with internal service discovery
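The bullets above translate into service entries of roughly this shape (a sketch: the image tag and volume path are illustrative, not copied verbatim from docker-compose.yml):

```yaml
services:
  n8n:
    image: n8nio/n8n               # illustrative tag; see docker-compose.yml
    profiles: ["n8n"]              # started only when the n8n profile is active
    env_file: .env                 # all configuration comes from the generated .env
    volumes:
      - n8n_data:/home/node/.n8n   # named volume persists across container rebuilds

volumes:
  n8n_data:
```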

Key Service Dependencies

  • n8n: Requires postgres, redis. Runs in queue mode with configurable worker count
  • Supabase: Full stack with postgres, auth, storage, analytics, edge functions
  • Langfuse: Requires postgres, redis, clickhouse, minio for LLM observability
  • Open WebUI: Integrates with Ollama for local LLMs
  • ComfyUI: Requires models downloaded separately (FLUX, SDXL, etc.)
  • bolt.diy: Standalone AI-powered full-stack development platform
  • OpenHands: AI software engineer with web interface

Important Implementation Details

Environment Variable System

  • Generated by 03_generate_secrets.sh using openssl rand -hex
  • Contains service hostnames, passwords, API keys
  • Used by both Docker Compose and Caddy for routing
  • AI-specific keys: OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.
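A hedged sketch of the generation pattern: the variable names below are illustrative, and the real 03_generate_secrets.sh writes many more entries (service hostnames, API key placeholders, etc.). The demo writes to a scratch file rather than a real .env:

```bash
#!/usr/bin/env bash
# Sketch of the secret-generation pattern in 03_generate_secrets.sh.
set -euo pipefail

ENV_FILE=$(mktemp)   # the real script writes the project's .env

gen_secret() {
  openssl rand -hex 32   # 32 random bytes -> 64 hex characters
}

{
  echo "POSTGRES_PASSWORD=$(gen_secret)"
  echo "N8N_ENCRYPTION_KEY=$(gen_secret)"
  echo "REDIS_PASSWORD=$(gen_secret)"
} > "$ENV_FILE"

echo "wrote $(wc -l < "$ENV_FILE") entries"
```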

Service Selection Wizard

  • 04_wizard.sh uses whiptail to create interactive checklist
  • Updates COMPOSE_PROFILES in .env based on selections
  • Service dependencies automatically handled
  • Grouped by category: AI Development, Automation, Databases, Monitoring

Shared File Access

  • ./shared/ directory is mounted to /data/shared in containers
  • Use this path in n8n workflows and AI tools to share files
  • Persistent storage for generated content (images, documents, models)

AI Service Configuration

  • ComfyUI: Models stored in /var/lib/docker/volumes/localai_comfyui_data/_data/models/
  • Ollama: Models persist in ollama_data volume
  • Speech Stack: Uses ports 8001 (Whisper) and 5001 (TTS)
  • Vector DBs: Qdrant on port 6333, Supabase includes pgvector

Custom n8n Configuration

  • Pre-installs Node.js libraries: cheerio, axios, moment, lodash
  • Runs in production mode with PostgreSQL backend
  • Queue mode for parallel workflow processing
  • Community packages and AI runners enabled
  • Configured for tool usage and external function calls
  • Metrics enabled for Prometheus monitoring

Network Architecture

  • Caddy handles HTTPS/TLS termination and reverse proxy
  • Services exposed via subdomains: n8n.domain.com, comfyui.domain.com, etc.
  • Internal services (redis, postgres) not exposed externally
  • Supports Cloudflare Tunnel as alternative to port exposure
  • sslip.io domains work without DNS configuration
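A plausible Caddyfile site block for one such subdomain. The hostname variable and upstream port are assumptions based on the description above (5678 is n8n's default port); the repo's Caddyfile is authoritative:

```
{$N8N_HOSTNAME} {
    # TLS certificates are obtained automatically via Let's Encrypt.
    # The upstream is the internal Docker service name on the shared network.
    reverse_proxy n8n:5678
}
```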

Common Development Tasks

Testing Profile Changes

  1. Update COMPOSE_PROFILES in .env
  2. Run docker compose --profile [profiles] up -d
  3. Check docker compose ps to verify services started
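Step 1 can be scripted. The sketch below rewrites COMPOSE_PROFILES in an env file; the profile names are examples, and the demo uses a scratch file so a real .env is never touched:

```bash
#!/usr/bin/env bash
# Sketch: switch the active profile set by rewriting COMPOSE_PROFILES.
set -euo pipefail

set_profiles() {
  local env_file="$1" profiles="$2"
  if grep -q '^COMPOSE_PROFILES=' "$env_file"; then
    # Replace the existing assignment in place
    sed -i "s/^COMPOSE_PROFILES=.*/COMPOSE_PROFILES=${profiles}/" "$env_file"
  else
    echo "COMPOSE_PROFILES=${profiles}" >> "$env_file"
  fi
}

ENV_FILE=$(mktemp)
echo 'COMPOSE_PROFILES=n8n' > "$ENV_FILE"
set_profiles "$ENV_FILE" "n8n,monitoring,ollama"
grep '^COMPOSE_PROFILES=' "$ENV_FILE"
# Then: docker compose up -d (profiles are read from COMPOSE_PROFILES)
```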

Adding New AI Services

  1. Define service in docker-compose.yml with appropriate profile
  2. Add hostname variables to Caddy environment section
  3. Update 04_wizard.sh to include in service selection
  4. Add Caddyfile routing if service needs web access
  5. Consider GPU requirements and volume mounts
  6. Document API endpoints and integration points
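Steps 1 and 5 for a hypothetical new service might look like this. The service name, image, volume, and GPU stanza are all invented for illustration; follow the actual patterns in docker-compose.yml:

```yaml
services:
  my-ai-tool:                          # hypothetical new service
    image: example/my-ai-tool:latest   # invented image name
    profiles: ["my-ai-tool"]           # step 1: its own Compose profile
    env_file: .env
    volumes:
      - my_ai_tool_data:/data          # step 5: persistent model/data storage
    deploy:
      resources:
        reservations:
          devices:                     # step 5: GPU access, if required
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  my_ai_tool_data:
```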

Working with AI Models

  • ComfyUI models: Download to model directories before use
  • Ollama models: Pull with docker exec ollama ollama pull [model]
  • Embeddings: Configure in n8n AI nodes or Flowise chains
  • Vector stores: Initialize indexes in Qdrant or Supabase

Backup/Restore

  • n8n workflows: ./n8n/backup/workflows/
  • ComfyUI workflows: Export from UI
  • Vector databases: Use respective backup tools
  • All data in Docker volumes: docker run --rm -v [volume]:/data -v $(pwd):/backup busybox tar czf /backup/[name].tar.gz -C /data .
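The volume one-liner above wraps a generic tar pattern. This demo runs the same backup and restore steps against local directories standing in for the /data and /backup container mounts:

```bash
#!/usr/bin/env bash
# Demonstrates the tar pattern behind the volume backup one-liner,
# with local directories in place of the busybox volume mounts.
set -euo pipefail

data=$(mktemp -d)     # stands in for the named volume mounted at /data
backup=$(mktemp -d)   # stands in for $(pwd) mounted at /backup
echo "hello" > "${data}/example.txt"

# Backup: archive the volume contents relative to the volume root
tar czf "${backup}/example.tar.gz" -C "$data" .

# Restore: unpack into a fresh directory (an empty volume in the real case)
restore=$(mktemp -d)
tar xzf "${backup}/example.tar.gz" -C "$restore"
cat "${restore}/example.txt"   # -> hello
```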

Troubleshooting AI Services

  1. Check service logs: docker compose logs [service-name] -f
  2. Verify GPU access (if applicable): docker exec [container] nvidia-smi
  3. Monitor resource usage: docker stats
  4. Check model loading: Service-specific logs
  5. API connectivity: Test with curl or service UI

Security Considerations

  • All services require authentication (configured during setup)
  • Firewall configured to only allow SSH, HTTP, HTTPS
  • Fail2Ban enabled for brute-force protection
  • SSL certificates managed by Caddy with Let's Encrypt
  • API keys stored in .env file (never commit to git)
  • Services isolated in Docker network namespaces
  • Regular security updates via update.sh

File Structure Notes

  • memory-bank/ - Project documentation and development notes
  • flowise/ - Flowise workflow templates and custom tools
  • n8n/ - n8n configurations and community workflows
  • comfyui/ - ComfyUI custom nodes and workflows (if added)
  • scripts/ - Installation and utility scripts (all bash)
  • shared/ - Shared directory accessible by all containers
  • openedai-config/ - TTS voice configurations
  • openedai-voices/ - Custom TTS voice models

AI LaunchKit Specific Features

Speech Stack Integration

  • Whisper (STT): OpenAI-compatible API on port 8001
  • OpenedAI (TTS): Multiple voices, OpenAI-compatible API on port 5001
  • Both services integrate seamlessly with n8n HTTP nodes

Development Platforms

  • bolt.diy: Full-stack development with AI assistance
  • OpenHands: Autonomous coding agent with web interface
  • OpenUI: AI-powered UI component generator

Planned Additions

  • LiveKit: Real-time communication and AI agents
  • browserless: Headless Chrome for web automation
  • PaddleOCR: Document OCR processing

Critical Implementation Notes

  • Never modify the installation script sequence (01-06) without understanding dependencies
  • Always use logging functions from utils.sh in new scripts
  • The .env file contains all secrets and must never be committed
  • Services use Docker health checks - respect dependency conditions
  • Profile-based deployment allows selective service activation
  • When modifying docker-compose.yml, maintain the x-templates pattern
  • AI services may require significant resources (RAM, disk, GPU)
  • Community workflows are imported during installation (20-30 minutes)
  • Model downloads for ComfyUI/Ollama happen post-installation
  • Test AI integrations with small workloads before scaling
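The x-templates pattern referenced above is typically implemented with YAML extension fields and anchors; an illustrative (not verbatim) shape:

```yaml
# Shared defaults declared once as an extension field + anchor...
x-service-defaults: &service-defaults
  restart: unless-stopped
  env_file: .env

services:
  example-service:            # hypothetical service
    <<: *service-defaults     # ...and merged into each service entry
    image: example/image:latest
```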

Contributing Guidelines

When contributing new AI services or features:

  1. Follow existing patterns in docker-compose.yml
  2. Add comprehensive health checks
  3. Document resource requirements
  4. Include example workflows or usage
  5. Test with minimal and full installations
  6. Update this CLAUDE.md file with relevant information

Useful Resources