This file provides guidance to Claude (claude.ai) and other AI assistants when working with code in this repository.
This is AI LaunchKit: a comprehensive Docker Compose-based toolkit that creates a complete self-hosted AI development and automation environment. It transforms any Ubuntu server into a powerful AI development platform with 20+ pre-configured services that can be selectively deployed with a single command.
- AI Development Tools: bolt.diy, OpenHands, OpenUI, ComfyUI, Dify
- Automation Platform: n8n with 300+ pre-configured workflows, Flowise
- LLM Infrastructure: Ollama, Open WebUI, Letta
- Vector Databases: Qdrant, Supabase (with pgvector)
- Monitoring & Observability: Langfuse, Grafana, Prometheus
- Media Processing: ComfyUI (Stable Diffusion), Speech Stack (Whisper STT, OpenedAI TTS)
- Development Tools: SearXNG, Crawl4ai, browserless (planned), LiveKit (planned)
```bash
# Main installation (from project root)
sudo bash ./scripts/install.sh

# Update all services to latest versions
sudo bash ./scripts/update.sh

# Clean up unused Docker resources
sudo bash ./scripts/cleanup.sh

# View running services
docker compose ps

# View logs for specific service
docker compose logs [service-name]

# Restart services with specific profiles
docker compose --profile n8n --profile ai-dev up -d

# Stop all services
docker compose down

# Start services with automatic profile detection
python3 start_services.py

# Apply configuration updates to running services
sudo bash ./scripts/apply_update.sh

# View all available Docker Compose profiles
grep -E '^\s*profiles:' docker-compose.yml

# Check service health status
docker compose ps --format "table {{.Name}}\t{{.Status}}\t{{.Ports}}"
```

The installation follows a strict 6-step sequence managed by scripts/install.sh:
- `01_system_preparation.sh` - Updates the system, installs dependencies, configures security
- `02_install_docker.sh` - Installs Docker Engine and Docker Compose
- `03_generate_secrets.sh` - Creates the `.env` file with secure passwords and keys
- `04_wizard.sh` - Interactive service selection with whiptail UI (20+ services)
- `05_run_services.sh` - Deploys selected services using Docker Compose profiles
- `06_final_report.sh` - Displays access URLs and credentials
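The sequence above can be sketched as a simple orchestration loop. This is a hypothetical illustration of the pattern, not the actual contents of `install.sh`, which likely adds logging and error handling:

```bash
#!/bin/bash
# Hypothetical sketch of the step sequence in scripts/install.sh.
set -eu

STEPS="01_system_preparation.sh 02_install_docker.sh 03_generate_secrets.sh \
04_wizard.sh 05_run_services.sh 06_final_report.sh"

for step in $STEPS; do
  echo "==> Running $step"
  # bash "./scripts/$step"   # commented out: only run on a real server
done
```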
- Profiles: Services organized into logical groups (n8n, ai-dev, monitoring, langfuse, ollama, speech, etc.)
- Environment: All configuration through the `.env` file generated during installation
- Volumes: Named volumes for data persistence across container rebuilds
- Networks: All services on default Docker network with internal service discovery
- n8n: Requires postgres, redis. Runs in queue mode with configurable worker count
- Supabase: Full stack with postgres, auth, storage, analytics, edge functions
- Langfuse: Requires postgres, redis, clickhouse, minio for LLM observability
- Open WebUI: Integrates with Ollama for local LLMs
- ComfyUI: Requires models downloaded separately (FLUX, SDXL, etc.)
- bolt.diy: Standalone AI-powered full-stack development platform
- OpenHands: AI software engineer with web interface
- Generated by `03_generate_secrets.sh` using `openssl rand -hex`
- Contains service hostnames, passwords, API keys
- Used by both Docker Compose and Caddy for routing
- AI-specific keys: OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.
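A minimal sketch of this style of secret generation, written to a throwaway file. Variable names and key lengths here are illustrative, not necessarily what `03_generate_secrets.sh` actually uses:

```bash
# Write two randomly generated secrets to a temporary env file.
# Names and lengths are illustrative only.
ENV_FILE="$(mktemp)"
{
  echo "POSTGRES_PASSWORD=$(openssl rand -hex 16)"   # 32 hex chars
  echo "N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)"  # 64 hex chars
} >> "$ENV_FILE"
cat "$ENV_FILE"
```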
- `04_wizard.sh` uses whiptail to create an interactive checklist
- Updates `COMPOSE_PROFILES` in `.env` based on selections
- Service dependencies are handled automatically
- Grouped by category: AI Development, Automation, Databases, Monitoring
- The `./shared/` directory is mounted to `/data/shared` in containers
- Use this path in n8n workflows and AI tools to share files
- Persistent storage for generated content (images, documents, models)
- ComfyUI: Models stored in `/var/lib/docker/volumes/localai_comfyui_data/_data/models/`
- Ollama: Models persist in the `ollama_data` volume
- Speech Stack: Uses ports 8001 (Whisper) and 5001 (TTS)
- Vector DBs: Qdrant on port 6333, Supabase includes pgvector
- Pre-installs Node.js libraries: cheerio, axios, moment, lodash
- Runs in production mode with PostgreSQL backend
- Queue mode for parallel workflow processing
- Community packages and AI runners enabled
- Configured for tool usage and external function calls
- Metrics enabled for Prometheus monitoring
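The settings above map onto standard n8n environment variables, roughly as follows. The variable names are n8n's documented settings; the values shown are illustrative, so check the generated `.env` for the installer's actual configuration:

```bash
# Standard n8n variables behind queue mode with a PostgreSQL backend.
# Values are illustrative, not the installer's actual defaults.
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres
N8N_METRICS=true
```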
- Caddy handles HTTPS/TLS termination and reverse proxy
- Services exposed via subdomains: n8n.domain.com, comfyui.domain.com, etc.
- Internal services (redis, postgres) not exposed externally
- Supports Cloudflare Tunnel as alternative to port exposure
- sslip.io domains work without DNS configuration
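A hypothetical Caddyfile entry of the kind this routing implies. The hostname variable name is an assumption about this repo's conventions; 5678 is n8n's default internal port:

```
{$N8N_HOSTNAME} {
    reverse_proxy n8n:5678
}
```

Caddy obtains and renews the TLS certificate for the hostname automatically.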
- Update `COMPOSE_PROFILES` in `.env`
- Run `docker compose --profile [profiles] up -d`
- Check `docker compose ps` to verify services started
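For example, adding the `speech` profile. This sketch works on a throwaway copy so nothing real is touched; on a server you would edit `.env` itself:

```bash
# Demo on a temp file: append the "speech" profile to COMPOSE_PROFILES.
ENV_FILE="$(mktemp)"
echo 'COMPOSE_PROFILES=n8n,monitoring' > "$ENV_FILE"
grep -q 'speech' "$ENV_FILE" || sed -i 's/^COMPOSE_PROFILES=.*/&,speech/' "$ENV_FILE"
cat "$ENV_FILE"   # COMPOSE_PROFILES=n8n,monitoring,speech
# Then: docker compose --profile n8n --profile monitoring --profile speech up -d
```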
- Define service in `docker-compose.yml` with appropriate profile
- Add hostname variables to Caddy environment section
- Update `04_wizard.sh` to include the service in the selection
- Add Caddyfile routing if the service needs web access
- Consider GPU requirements and volume mounts
- Document API endpoints and integration points
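A hypothetical skeleton for such a service entry. The name, image, port, and volume are placeholders; follow the repository's actual x-templates pattern when writing the real entry:

```yaml
services:
  myservice:                      # placeholder name
    image: example/myservice:latest
    profiles: ["myservice"]       # enables selective activation
    env_file: .env
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 5
    volumes:
      - myservice_data:/data

volumes:
  myservice_data:
```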
- ComfyUI models: Download to model directories before use
- Ollama models: Pull with `docker exec ollama ollama pull [model]`
- Embeddings: Configure in n8n AI nodes or Flowise chains
- Vector stores: Initialize indexes in Qdrant or Supabase
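A sketch of bootstrapping Ollama models. The model names are examples, not the kit's defaults, and the `docker exec` line is shown but not executed here:

```bash
# Example model list - adjust to your hardware and needs.
MODELS="llama3.1 nomic-embed-text"
for m in $MODELS; do
  echo "Would pull: $m"
  # docker exec ollama ollama pull "$m"   # run this on the server
done
```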
- n8n workflows: `./n8n/backup/workflows/`
- ComfyUI workflows: Export from the UI
- Vector databases: Use respective backup tools
- All data in Docker volumes: `docker run --rm -v [volume]:/data -v $(pwd):/backup busybox tar czf /backup/[name].tar.gz -C /data .`
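The same tar pattern can be sanity-checked locally without Docker, using throwaway temp directories; the busybox one-liner does the same thing inside a container:

```bash
# Archive a directory with -C, then restore it elsewhere and verify.
SRC="$(mktemp -d)"; DST="$(mktemp -d)"; BK="$(mktemp -d)"
echo "workflow" > "$SRC/data.json"
tar czf "$BK/backup.tar.gz" -C "$SRC" .
tar xzf "$BK/backup.tar.gz" -C "$DST"
cat "$DST/data.json"   # workflow
```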
- Check service logs: `docker compose logs [service-name] -f`
- Verify GPU access (if applicable): `docker exec [container] nvidia-smi`
- Monitor resource usage: `docker stats`
- Check model loading: Service-specific logs
- API connectivity: Test with curl or service UI
- All services require authentication (configured during setup)
- Firewall configured to only allow SSH, HTTP, HTTPS
- Fail2Ban enabled for brute-force protection
- SSL certificates managed by Caddy with Let's Encrypt
- API keys stored in the `.env` file (never commit to git)
- Services isolated in Docker network namespaces
- Regular security updates via `update.sh`
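A quick way to confirm `.env` can never be committed, demonstrated here in a throwaway repository; on the real server just run `git check-ignore .env` from the repo root:

```bash
# Demo in a temporary git repo: verify .env is matched by .gitignore.
REPO="$(mktemp -d)"
cd "$REPO"
git init -q .
echo ".env" > .gitignore
git check-ignore -q .env && echo ".env is ignored"
```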
- `memory-bank/` - Project documentation and development notes
- `flowise/` - Flowise workflow templates and custom tools
- `n8n/` - n8n configurations and community workflows
- `comfyui/` - ComfyUI custom nodes and workflows (if added)
- `scripts/` - Installation and utility scripts (all bash)
- `shared/` - Shared directory accessible by all containers
- `openedai-config/` - TTS voice configurations
- `openedai-voices/` - Custom TTS voice models
- Whisper (STT): OpenAI-compatible API on port 8001
- OpenedAI (TTS): Multiple voices, OpenAI-compatible API on port 5001
- Both expose OpenAI-compatible APIs that n8n HTTP Request nodes can call directly
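Hypothetical curl calls against those ports, assuming the standard OpenAI audio routes; verify the exact paths against each service's own documentation:

```bash
# Endpoint URLs follow the OpenAI API shape; adjust host as needed.
STT_URL="http://localhost:8001/v1/audio/transcriptions"
TTS_URL="http://localhost:5001/v1/audio/speech"
echo "$STT_URL"
# curl -s -F file=@clip.wav -F model=whisper-1 "$STT_URL"
# curl -s -X POST "$TTS_URL" -H 'Content-Type: application/json' \
#   -d '{"model":"tts-1","voice":"alloy","input":"Hello from n8n"}' -o speech.mp3
```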
- bolt.diy: Full-stack development with AI assistance
- OpenHands: Autonomous coding agent with web interface
- OpenUI: AI-powered UI component generator
- LiveKit: Real-time communication and AI agents
- browserless: Headless Chrome for web automation
- PaddleOCR: Document OCR processing
- Never modify the installation script sequence (01-06) without understanding dependencies
- Always use the logging functions from `utils.sh` in new scripts
- The `.env` file contains all secrets and must never be committed
- Services use Docker health checks - respect dependency conditions
- Profile-based deployment allows selective service activation
- When modifying `docker-compose.yml`, maintain the x-templates pattern
- AI services may require significant resources (RAM, disk, GPU)
- Community workflows are imported during installation (20-30 minutes)
- Model downloads for ComfyUI/Ollama happen post-installation
- Test AI integrations with small workloads before scaling
When contributing new AI services or features:
- Follow existing patterns in docker-compose.yml
- Add comprehensive health checks
- Document resource requirements
- Include example workflows or usage
- Test with minimal and full installations
- Update this CLAUDE.md file with relevant information
- Repository: https://github.com/freddy-schuetz/ai-launchkit
- Based on: n8n-installer by @kossakovsky
- License: Apache 2.0 (commercial use allowed)