
Commit 69f29f5 ("Stability Fix")
1 parent 20f8d51

17 files changed: +1028 -1498 lines

DEPLOYMENT_GUIDE.md

Lines changed: 217 additions & 0 deletions
@@ -0,0 +1,217 @@

# 🚀 FFprobe API Deployment Guide

## 📋 Deployment Options Overview

### 🟢 Simple Deployment (Recommended for Small/Test Organizations)
**File**: `compose.simple.yml`
**Purpose**: Complete LLM-powered API setup without monitoring overhead

**What's Included:**
- ✅ FFprobe API service
- ✅ PostgreSQL database
- ✅ Redis cache
- ✅ **Ollama LLM (enabled by default)** - Essential for AI-powered analysis
- ✅ **OpenRouter fallback** - Automatic cloud fallback for enhanced reliability
- ❌ No Prometheus/Grafana (enterprise-only monitoring)

**Resource Usage:**
- Memory: ~4-5GB total (includes LLM)
- CPU: 2-4 cores recommended
- Storage: ~8GB base + models + uploads

**Command:**
```bash
docker compose -f compose.simple.yml up -d
```
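
After `up -d` returns, a quick status check confirms that all services came up (standard Docker Compose; the service names come from `compose.simple.yml`):

```bash
# List the status of every service in this stack
docker compose -f compose.simple.yml ps
```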

**Perfect for:**
- Small organizations
- Test/staging environments
- Cost-conscious deployments
- Quick demos with AI features

---

### 🟡 Development Deployment
**File**: `compose.yml + compose.dev.yml`
**Purpose**: Local development with debugging tools and AI features

**What's Included:**
- ✅ All simple deployment features
- ✅ Adminer (database GUI)
- ✅ Redis Commander (Redis GUI)
- ✅ Hot reload for development
- ✅ Debug logging enabled
- ✅ Full LLM capabilities

**Command:**
```bash
docker compose -f compose.yml -f compose.dev.yml up -d
```
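
Because debug logging is enabled in this profile, tailing the combined logs is the quickest way to watch hot reloads and LLM calls as they happen:

```bash
# Follow the logs of all services in the dev stack
docker compose -f compose.yml -f compose.dev.yml logs -f
```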

---

### 🟠 Production Deployment
**File**: `compose.yml + compose.production.yml`
**Purpose**: Medium-scale production with enhanced AI features

**What's Included:**
- ✅ All simple deployment features
- ✅ **Enhanced Ollama setup** - Optimized for production workloads
- ✅ **Multiple LLM models** - Greater variety in AI analysis
- ✅ Production-optimized settings
- ✅ Resource limits configured
- ✅ **Intelligent LLM fallback** - Local-first, cloud backup
- ❌ No monitoring stack (keeps it lightweight)

**Resource Usage:**
- Memory: ~6-8GB total
- CPU: 4-6 cores recommended
- Storage: ~15GB base + models + uploads

**Command:**
```bash
docker compose -f compose.yml -f compose.production.yml up -d
```
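
This tier ships with resource limits configured, so once real traffic arrives it is worth spot-checking actual usage against those limits:

```bash
# One-shot snapshot of per-container CPU and memory usage
docker stats --no-stream
```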

---

### 🔴 Enterprise Deployment
**File**: `compose.yml + compose.enterprise.yml`
**Purpose**: Full-scale enterprise with monitoring and AI intelligence

**What's Included:**
- ✅ All production deployment features
- ✅ **Prometheus monitoring**
- ✅ **Grafana dashboards**
- ✅ Load balancer (Nginx)
- ✅ Message queue (RabbitMQ)
- ✅ **Advanced LLM orchestration** - Multiple models with smart routing
- ✅ Horizontal scaling support
- ✅ Enhanced resource allocation

**Resource Usage:**
- Memory: ~12-16GB total
- CPU: 8+ cores recommended
- Storage: ~30GB base + monitoring data

**Command:**
```bash
docker compose -f compose.yml -f compose.enterprise.yml up -d
```
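
Horizontal scaling works through Docker Compose's standard `--scale` flag behind the Nginx load balancer. The API service name below is an assumption; check `docker compose ps` for the real name in your stack:

```bash
# Run three replicas of the API service (service name assumed)
docker compose -f compose.yml -f compose.enterprise.yml up -d --scale ffprobe-api=3
```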

---

## 🤖 AI/LLM Features Across All Deployments

All deployment options include **LLM-powered analysis** by default:

### 🎯 **What's AI-Powered:**
- **Video Analysis Reports** - Human-readable technical insights
- **Quality Assessment** - Professional video quality evaluation
- **Comparison Analysis** - AI-driven before/after analysis
- **Technical Recommendations** - FFmpeg optimization suggestions
- **Format Suitability** - Delivery platform recommendations

### 🔄 **Smart Fallback System:**
1. **Local LLM First** - Uses Ollama for privacy and speed
2. **OpenRouter Fallback** - Automatic cloud backup if local fails
3. **Graceful Degradation** - API continues working without AI if both fail

### ⚙️ **LLM Configuration:**
```bash
# Local LLM (default: enabled)
ENABLE_LOCAL_LLM=true
OLLAMA_URL=http://ollama:11434
OLLAMA_MODEL=phi3:mini

# OpenRouter fallback (optional)
OPENROUTER_API_KEY=your-api-key-here
```
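
To have the model ready before the first request, you can pre-pull it through the Ollama container. The service name `ollama` is inferred from the `OLLAMA_URL` above:

```bash
# Pre-download the configured model inside the Ollama service
docker compose exec ollama ollama pull phi3:mini

# Confirm the model is installed
docker compose exec ollama ollama list
```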

---

## 🎯 Which Deployment Should You Choose?

### Choose **Simple** if:
- ✅ Small team (< 10 users)
- ✅ Want AI features without complexity
- ✅ Budget/resource constraints
- ✅ Testing or staging environment
- ✅ Don't need monitoring dashboards

### Choose **Production** if:
- ✅ Medium team (10-50 users)
- ✅ Need enhanced AI performance
- ✅ Production workload with AI requirements
- ✅ Want optimized LLM processing

### Choose **Enterprise** if:
- ✅ Large team (50+ users)
- ✅ Need comprehensive monitoring
- ✅ High availability requirements
- ✅ Advanced AI orchestration needed
- ✅ Compliance/audit requirements

---

## 🔧 Quick Setup Commands

### Simple Deployment (LLM-Powered)
```bash
# 1. Clone repository
git clone https://github.com/rendiffdev/ffprobe-api.git
cd ffprobe-api

# 2. Set environment variables
cp .env.example .env
# Edit .env with your values

# 3. Deploy with AI features
docker compose -f compose.simple.yml up -d

# 4. Verify (should show LLM status)
curl http://localhost:8080/health
```
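
For readable output, pipe the health check through `jq`. The exact fields in the response depend on the build, so this simply pretty-prints whatever comes back:

```bash
# Pretty-print the health response (requires jq)
curl -s http://localhost:8080/health | jq .
```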

### Test AI Features
```bash
# Upload a video and get AI analysis
# (the multipart field name below is an assumption; check the API docs for the exact form field)
curl -X POST http://localhost:8080/api/v1/probe/file \
  -H "X-API-Key: your-api-key" \
  -F "file=@sample.mp4"

# The response will include LLM-generated insights
```

---

## 💡 Cost Optimization Tips

1. **Start Simple**: Begin with `compose.simple.yml` - it includes AI without monitoring overhead
2. **Local LLM First**: Free Ollama models handle requests by default; you only pay for the OpenRouter fallback when it is actually used
3. **Smart Resource Limits**: Each deployment tier is optimized for a different scale
4. **Optional Cloud LLM**: The OpenRouter fallback is optional - the API works well with just the local LLM

---

## 📊 Resource Requirements Summary

| Deployment | Memory | CPU | Storage | AI Features | Monitoring |
|------------|--------|-----|---------|-------------|------------|
| **Simple** | 4-5GB | 2-4 cores | 8GB+ | ✅ Full LLM | Logs only |
| **Production** | 6-8GB | 4-6 cores | 15GB+ | ✅ Enhanced LLM | Logs only |
| **Enterprise** | 12-16GB | 8+ cores | 30GB+ | ✅ Advanced LLM | Full monitoring |

---

## 🔒 Security & Privacy

- **Local LLM**: All AI processing can run locally for maximum privacy
- **Encrypted Communication**: All external LLM calls use HTTPS
- **API Key Security**: OpenRouter keys are optional and securely managed
- **No Data Leakage**: The local-first approach means your videos stay on your infrastructure

---

*The FFprobe API is designed to be **AI-first** while maintaining complete flexibility in deployment scale and privacy requirements.*
