# 🐳 Docker Support for TimeCapsule-SLM

Run TimeCapsule-SLM in a containerized environment with full AI support, including Ollama integration.

## 🚀 Quick Start

### Option 1: Docker Run (Simple)
```bash
# Run the image (build it locally first — see "Build from Source" below — or pull it from Docker Hub)
docker run -d -p 3000:80 --name timecapsule-slm timecapsule-slm:latest

# Access at http://localhost:3000
```

### Option 2: Docker Compose (Recommended)
```bash
# Clone the repository
git clone https://github.com/thefirehacker/TimeCapsule-SLM.git
cd TimeCapsule-SLM

# Start TimeCapsule-SLM only
docker-compose up -d

# Start with Ollama AI support
docker-compose --profile ai-enabled up -d

# Access at http://localhost:3000
```

---

## 📋 Prerequisites

- **Docker**: Version 20.10 or higher
- **Docker Compose**: Version 2.0 or higher
- **RAM**: 4GB+ (8GB+ recommended with AI services)
- **Storage**: 2GB+ free space (10GB+ with AI models)
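
The version requirements above can be checked from a shell. `need_major` below is a hypothetical helper, not part of the project; it compares only the major version number:

```bash
# Rough prerequisite check; need_major is a hypothetical helper
# that extracts the first number from a version string and compares it.
need_major() {
  v=$(echo "$1" | grep -oE '[0-9]+' | head -n1)
  [ "${v:-0}" -ge "$2" ]
}

# With Docker installed, you would run:
#   need_major "$(docker --version)" 20 && echo "Docker OK"
#   need_major "$(docker compose version)" 2 && echo "Compose OK"
need_major "Docker version 24.0.7, build afdd53b" 20 && echo "Docker OK"
```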

---

## 🛠️ Build from Source

```bash
# Clone the repository
git clone https://github.com/thefirehacker/TimeCapsule-SLM.git
cd TimeCapsule-SLM

# Build the Docker image
docker build -t timecapsule-slm:latest .

# Run the container
docker run -d -p 3000:80 --name timecapsule-slm timecapsule-slm:latest
```

---

## 🤖 AI Integration Options

### Option 1: With Ollama (Local AI)
```bash
# Start TimeCapsule-SLM with Ollama
docker-compose --profile ai-enabled up -d

# Pull a model in the Ollama container
docker exec timecapsule-ollama ollama pull qwen2.5:0.5b

# Verify Ollama is running
curl http://localhost:11434/api/version
```
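
For orientation, the `ai-enabled` profile corresponds to an Ollama service defined roughly along these lines. This is a sketch only; the authoritative definition is the repository's `docker-compose.yml`, and apart from the container name and `OLLAMA_ORIGINS` (which appear elsewhere in this guide), every key here is an assumption:

```yaml
# Sketch only; see the repository's docker-compose.yml for the real definition
services:
  ollama:
    image: ollama/ollama
    container_name: timecapsule-ollama
    profiles: ["ai-enabled"]
    ports:
      - "11434:11434"
    environment:
      - OLLAMA_ORIGINS=http://localhost:3000
    volumes:
      - ollama_data:/root/.ollama   # Ollama's default model directory

volumes:
  ollama_data:
```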

### Option 2: External AI Services
```bash
# Start without AI containers (use external Ollama/LM Studio)
docker-compose up -d timecapsule-slm

# Configure external AI in the TimeCapsule-SLM UI:
# - Ollama: http://host.docker.internal:11434
# - LM Studio: http://host.docker.internal:1234
# - OpenAI API: enter your API key
```

---

## ⚙️ Configuration

### Environment Variables
Create a `.env` file for customization:

```bash
# .env
TIMECAPSULE_PORT=3000
OLLAMA_PORT=11434
OLLAMA_ORIGINS=http://localhost:3000
RESTART_POLICY=unless-stopped
```
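
These variables only take effect where `docker-compose.yml` references them. A minimal sketch of that wiring, using Compose's standard `${VAR:-default}` substitution (the service name follows this guide; check the actual compose file for the real keys):

```yaml
# Sketch: how a compose file can consume the .env values above
services:
  timecapsule-slm:
    ports:
      - "${TIMECAPSULE_PORT:-3000}:80"
    restart: ${RESTART_POLICY:-unless-stopped}
```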

### Custom docker-compose Override
```yaml
# docker-compose.override.yml
version: '3.8'
services:
  timecapsule-slm:
    ports:
      - "8080:80"  # Custom port
    environment:
      - CUSTOM_VAR=value
```

---

## 📊 Service Management

### Basic Commands
```bash
# Start services
docker-compose up -d

# Stop services
docker-compose down

# View logs
docker-compose logs -f

# Restart services
docker-compose restart

# Update to the latest images
docker-compose pull && docker-compose up -d
```

### Health Checks
```bash
# Check service status
docker-compose ps

# Check container health
docker inspect --format='{{.State.Health.Status}}' timecapsule-slm

# View health check logs
docker inspect --format='{{range .State.Health.Log}}{{.Output}}{{end}}' timecapsule-slm
```
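
The health status shown by `docker inspect` comes from a `healthcheck` defined on the service. A minimal sketch, assuming the image serves HTTP on port 80 and ships `wget` (true of `nginx:alpine`):

```yaml
# Sketch of a healthcheck definition in docker-compose.yml
services:
  timecapsule-slm:
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:80/"]
      interval: 30s
      timeout: 5s
      retries: 3
```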

---

## 💾 Data Persistence

### Ollama Models
Models are persisted automatically in Docker volumes:
```bash
# List Ollama models
docker exec timecapsule-ollama ollama list

# Back up Ollama data (the volume name is prefixed with the compose project name)
docker run --rm -v timecapsule-slm_ollama_data:/data -v $(pwd):/backup alpine tar czf /backup/ollama-backup.tar.gz -C /data .

# Restore Ollama data
docker run --rm -v timecapsule-slm_ollama_data:/data -v $(pwd):/backup alpine tar xzf /backup/ollama-backup.tar.gz -C /data
```

### User Data
TimeCapsule-SLM stores data in the browser's localStorage. For enterprise use, consider:
- External database integration
- Shared volume mounts
- Network storage solutions
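
As a starting point for shared volume mounts, a bind mount in a compose override can expose a host directory to the container. Both paths below are illustrative, not paths the project defines:

```yaml
# docker-compose.override.yml (illustrative paths only)
services:
  timecapsule-slm:
    volumes:
      - ./shared-data:/usr/share/nginx/html/data
```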

---

## 🌐 Production Deployment

### Reverse Proxy (Nginx)
```nginx
# nginx.conf
server {
    listen 80;
    server_name timecapsule.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

### Traefik Labels (Already Included)
```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.timecapsule.rule=Host(`timecapsule.yourdomain.com`)"
  - "traefik.http.services.timecapsule.loadbalancer.server.port=80"
```

### Docker Swarm
```bash
# Deploy to a swarm
docker stack deploy -c docker-compose.yml timecapsule-stack

# Scale services
docker service scale timecapsule-stack_timecapsule-slm=3
```
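
The replica count can also be pinned declaratively in the compose file; `deploy` settings like these are honored by swarm mode (sketch):

```yaml
# Sketch: declarative scaling for docker stack deploy
services:
  timecapsule-slm:
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
```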

---

## 🔧 Troubleshooting

### Common Issues

**Port Already in Use**
```bash
# Check what's using the port
sudo lsof -i :3000

# Use a different host port. Note that docker-compose's -p flag sets the
# project name, not port mappings; change the port via .env instead
# (assuming the compose file reads TIMECAPSULE_PORT, as in the example above)
TIMECAPSULE_PORT=8080 docker-compose up -d
```

**Ollama Connection Failed**
```bash
# Check the Ollama service
docker-compose logs ollama

# Test the Ollama API
curl http://localhost:11434/api/version

# Restart Ollama
docker-compose restart ollama
```

**CORS Issues**
```bash
# Verify the OLLAMA_ORIGINS environment variable
docker-compose exec ollama env | grep OLLAMA_ORIGINS
```

Then update the allowed origins in `docker-compose.yml`:
```yaml
environment:
  - OLLAMA_ORIGINS=http://localhost:3000,https://yourdomain.com
```

### Logs and Debugging
```bash
# View all logs
docker-compose logs

# Follow a specific service's logs
docker-compose logs -f timecapsule-slm

# Open a shell in the container
docker-compose exec timecapsule-slm /bin/sh

# Check container resource usage
docker stats
```

---

## 🚀 Performance Optimization

### Resource Limits
```yaml
# docker-compose.yml
services:
  timecapsule-slm:
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
        reservations:
          memory: 256M
          cpus: '0.25'
```

### Caching
```dockerfile
# Multi-stage build for better caching
FROM nginx:alpine AS base
# ... optimization steps
```
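
An expanded sketch of that idea. The Node-based build stage is an assumption for illustration, not the project's actual Dockerfile:

```dockerfile
# Illustrative multi-stage build; the build tooling (npm) is assumed
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci            # this layer is cached until package*.json changes
COPY . .
RUN npm run build

# Final image serves the built static files with nginx
FROM nginx:alpine AS base
COPY --from=build /app/dist /usr/share/nginx/html
```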

---

## 📚 Additional Resources

- **Docker Hub**: [timecapsule-slm](https://hub.docker.com/r/firehacker/timecapsule-slm)
- **GitHub**: [TimeCapsule-SLM](https://github.com/thefirehacker/TimeCapsule-SLM)
- **Documentation**: [Main README](README.md)
- **Issues**: [GitHub Issues](https://github.com/thefirehacker/TimeCapsule-SLM/issues)

---

## 💡 Tips for Success

1. **Start Simple**: Use `docker-compose up -d` first
2. **Add AI Gradually**: Enable Ollama with `--profile ai-enabled` once the basics work
3. **Monitor Resources**: Use `docker stats` to watch memory and CPU usage
4. **Back Up Data**: Take regular backups of Ollama models and user data
5. **Update Regularly**: Pull the latest images for security updates

---

## 💬 Need Help?

- 🎧 **Discord Community**: [discord.gg/ExQ8fCv9](https://discord.gg/ExQ8fCv9) - get real-time help with Docker setup
- 📧 **Email Support**: [[email protected]](mailto:[email protected]) - technical support and questions
- 🐛 **Report Issues**: [GitHub Issues](https://github.com/thefirehacker/TimeCapsule-SLM/issues) - bug reports and feature requests

*Our community is here to help you get TimeCapsule-SLM running smoothly!*