
Commit 859b541

Move Traefik to proxy profile and create Docker Profiles documentation
Changes:
- Moved traefik from 'core' profile to 'proxy' profile
- All core services (lpr-app, prometheus, grafana, blackbox, canary) work without Traefik
- Services communicate via internal Docker network regardless of proxy
- Created comprehensive README-DOCKER-PROFILES.md documentation
- Documented all profiles: core, proxy, cpu, amd-vulkan, nvidia-cuda
- Explained profile combinations and use cases
- Updated Coolify deployment guidance

Benefits:
- Coolify can use 'core' profile without Traefik conflicts
- Traefik is optional for reverse proxy functionality
- Services maintain internal communication without proxy
- Clear documentation for different deployment scenarios
1 parent c93d103 commit 859b541

2 files changed (+195 -100 lines)


README-DOCKER-PROFILES.md

Lines changed: 194 additions & 99 deletions
@@ -1,152 +1,247 @@
-# Docker Compose Profiles for OpenLPR
+# Docker Compose Profiles Guide

-This document describes the refactored Docker Compose setup using the merge design pattern with profiles.
+This document explains the different Docker Compose profiles available for deploying OpenLPR.

-## Profiles
+## Overview

-### Core Profile (`core`)
-Contains the core infrastructure services:
-- **traefik**: Reverse proxy with dashboard (http://traefik.localhost)
-- **lpr-app**: Main Django application (http://lpr.localhost)
-- **prometheus**: Monitoring system (http://prometheus.localhost)
-- **grafana**: Visualization dashboard (http://grafana.localhost)
+OpenLPR uses Docker Compose profiles to allow flexible deployment scenarios. Each profile groups related services together.

-### Inference Profiles
-Choose one based on your hardware:
+## Available Profiles

-#### CPU Profile (`cpu`)
-- **llamacpp-cpu**: CPU-based inference server
+### `core` - Core Application and Monitoring

-#### AMD Vulkan Profile (`amd-vulkan`)
-- **llamacpp-amd-vulkan**: AMD GPU inference with Vulkan support
+**Services included:**
+- `lpr-app` - Main OpenLPR Django application
+- `prometheus` - Metrics collection and storage
+- `grafana` - Metrics visualization and dashboards
+- `blackbox-exporter` - HTTP probe for health checking
+- `lpr-canary` - Canary tests and synthetic monitoring

-#### NVIDIA CUDA Profile (`nvidia-cuda`)
-- **llamacpp-nvidia-cuda**: NVIDIA GPU inference with CUDA support
+**When to use:**
+- Production deployments
+- Development with monitoring
+- When you don't need a reverse proxy
+- Coolify deployments (which provide their own routing)

-## Usage Examples
-
-### Start with Core Infrastructure + CPU Inference
+**Deployment:**
 ```bash
-docker-compose --profile core --profile cpu up -d
+docker compose --profile core up -d
 ```

-### Start with Core Infrastructure + NVIDIA Inference
+**Access points:**
+- OpenLPR: http://localhost:8000
+- Prometheus: http://localhost:9090
+- Grafana: http://localhost:3000 (default: admin/admin)
+- Blackbox Exporter: http://localhost:9115
+- Canary Metrics: http://localhost:9100/metrics
+
+**Network Communication:**
+All services in the `core` profile communicate via the `openlpr-network` bridge network, regardless of whether a reverse proxy is used. Services use internal DNS names (e.g., `lpr-app`, `prometheus`) to communicate with each other.
+
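The internal-DNS behaviour described above can be checked from the host without the proxy. A minimal sketch, assuming the bridge network is reachable under the name `openlpr-network` (Compose may prefix it with the project name) and using the public `curlimages/curl` image purely as a throwaway client; the `/health` path is the endpoint the blackbox exporter probes later in this guide.

```bash
# List the stack's bridge network; depending on the Compose project name it
# may appear as "<project>_openlpr-network".
docker network ls | grep openlpr-network

# Resolve and probe lpr-app by its internal DNS name from a throwaway
# container attached to the same network (no published ports, no proxy).
docker run --rm --network openlpr-network curlimages/curl \
  -fsS http://lpr-app:8000/health
```
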
+### `proxy` - Reverse Proxy (Traefik)
+
+**Services included:**
+- `traefik` - Modern HTTP reverse proxy and load balancer
+
+**When to use:**
+- Local development with domain-based routing
+- When you need a reverse proxy but are not using Coolify
+- To access services via friendly URLs (e.g., `lpr.localhost`)
+- To enable HTTPS/TLS termination
+
+**Deployment:**
 ```bash
-docker-compose --profile core --profile nvidia-cuda up -d
+# Deploy core services + proxy
+docker compose --profile core --profile proxy up -d
+
+# Or add proxy to existing core deployment
+docker compose --profile proxy up -d
 ```

-### Start with Core Infrastructure + AMD Vulkan Inference
+**Access points:**
+- OpenLPR: http://lpr.localhost
+- Prometheus: http://prometheus.localhost
+- Grafana: http://grafana.localhost
+- Blackbox Exporter: http://blackbox.localhost
+- Canary: http://canary.localhost
+- Traefik Dashboard: http://traefik.localhost (or `TRAEFIK_HOST`)
+
+**Note:** Traefik uses random ports by default (e.g., 32768, 32769). Set `TRAEFIK_HTTP_PORT` and `TRAEFIK_DASHBOARD_PORT` to use specific ports (e.g., 80, 8080).
+
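If fixed ports are needed, a minimal sketch of pinning them at start-up, using the two variables from the note above (they can equally be placed in the project's `.env` file):

```bash
# Pin Traefik to fixed host ports instead of randomly assigned ones.
TRAEFIK_HTTP_PORT=80 TRAEFIK_DASHBOARD_PORT=8080 \
  docker compose --profile core --profile proxy up -d

# Confirm which host ports Traefik is actually publishing.
docker compose ps traefik
```
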
+### `cpu` - CPU Inference
+
+**Services included:**
+- `llamacpp-cpu` - Llama.cpp server for CPU inference
+
+**When to use:**
+- Systems without GPU acceleration
+- Development and testing
+- When model inference performance is not critical
+
+**Deployment:**
 ```bash
-docker-compose --profile core --profile amd-vulkan up -d
+docker compose --profile core --profile cpu up -d
 ```

-### Start Only Core Services
+**Configuration:**
+See `.env.llamacpp` for model configuration.
+
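The earlier revision of this guide (removed in this commit) created that file from a bundled example; reproduced here as a reminder, assuming `.env.llamacpp.example` still ships with the repository:

```bash
# Create the model configuration from the example, then edit it
# (HuggingFace token, model settings, Django and Grafana values).
cp .env.llamacpp.example .env.llamacpp
```
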
+### `amd-vulkan` - AMD GPU Inference
+
+**Services included:**
+- `llamacpp-amd-vulkan` - Llama.cpp server for AMD GPU inference (Vulkan)
+
+**When to use:**
+- Systems with AMD GPUs supporting Vulkan
+- Production deployments requiring faster inference
+
+**Requirements:**
+- AMD GPU with Vulkan support
+- GPU drivers installed
+- `/dev/kfd` and `/dev/dri` device access
+
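A quick pre-flight check for the requirements above, as a sketch; `vulkaninfo` comes from the optional vulkan-tools package and may not be installed on every host.

```bash
# The compose service needs these device nodes passed through; they exist on
# the host once the AMD kernel drivers are loaded.
ls -l /dev/kfd /dev/dri

# Optional: confirm the GPU is visible to Vulkan (requires vulkan-tools).
vulkaninfo --summary
```
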
+**Deployment:**
 ```bash
-docker-compose --profile core up -d
+docker compose --profile core --profile amd-vulkan up -d
 ```

-### Stop All Services
+### `nvidia-cuda` - NVIDIA GPU Inference
+
+**Services included:**
+- `llamacpp-nvidia-cuda` - Llama.cpp server for NVIDIA GPU inference (CUDA)
+
+**When to use:**
+- Systems with NVIDIA GPUs
+- Production deployments requiring fastest inference
+- When using NVIDIA CUDA ecosystem
+
+**Requirements:**
+- NVIDIA GPU with CUDA support
+- NVIDIA Container Toolkit installed
+- GPU drivers installed
+
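Before starting this profile, the toolkit wiring can be verified with NVIDIA's usual smoke test, sketched here; the toolkit injects the driver utilities into the container, so a plain base image is enough.

```bash
# Driver visible on the host?
nvidia-smi

# Driver visible inside a container? With the NVIDIA Container Toolkit set up,
# --gpus all makes nvidia-smi available even in a plain base image.
docker run --rm --gpus all ubuntu nvidia-smi
```
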
+**Deployment:**
 ```bash
-docker-compose down
+docker compose --profile core --profile nvidia-cuda up -d
 ```

-## Environment Configuration
+## Profile Combinations

-Copy the example environment file:
+### Development Setup (No Proxy)
 ```bash
-cp .env.llamacpp.example .env.llamacpp
+docker compose --profile core up -d
 ```
+Access services directly via ports.

-Edit `.env.llamacpp` with your specific configuration:
-- HuggingFace token for model downloads
-- Django settings (SECRET_KEY, DEBUG, etc.)
-- Grafana credentials
-- Model configuration
-
-## Access Points
+### Development Setup (With Proxy)
+```bash
+docker compose --profile core --profile proxy up -d
+```
+Access services via domain names.

-After starting the services:
+### Production with CPU Inference
+```bash
+docker compose --profile core --profile cpu up -d
+```

-- **OpenLPR Application**: http://lpr.localhost
-- **Traefik Dashboard**: http://traefik.localhost
-- **Prometheus**: http://prometheus.localhost
-- **Grafana**: http://grafana.localhost (admin/admin by default)
+### Production with NVIDIA GPU
+```bash
+docker compose --profile core --profile nvidia-cuda up -d
+```

-## Service Dependencies
+### Production with AMD GPU
+```bash
+docker compose --profile core --profile amd-vulkan up -d
+```

-- `lpr-app` depends on a healthy inference service (when inference profiles are active)
-- All services share the `openlpr-network` for communication
-- Volumes are shared for data persistence
+## Environment Variables

-## Monitoring
+### Service Ports

-The setup includes comprehensive monitoring:
-- **Prometheus** collects metrics from all services
-- **Grafana** provides visualization dashboards
-- **Traefik** provides request metrics and routing
+All service ports are configurable via environment variables:

-## Hardware Requirements
+```bash
+# Core services
+LPR_APP_PORT=8000
+PROMETHEUS_PORT=9090
+GRAFANA_PORT=3000
+BLACKBOX_PORT=9115
+CANARY_PORT=9100
+
+# Proxy (only when using proxy profile)
+TRAEFIK_HTTP_PORT=80 # Leave unset for Coolify
+TRAEFIK_DASHBOARD_PORT=8080 # Leave unset for Coolify
+```
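As a concrete illustration (assuming these variables are interpolated in `docker-compose.yml` with the defaults shown), a port clash can be resolved at start time without editing any file; values set in the shell or in `.env` override the defaults:

```bash
# Publish Grafana and the app on alternative host ports for this run only.
GRAFANA_PORT=3001 LPR_APP_PORT=8001 docker compose --profile core up -d
```
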

-### CPU Profile
-- Any modern CPU with sufficient RAM
-- Recommended: 8GB+ RAM for model loading
+### Proxy Host Names

-### AMD Vulkan Profile
-- AMD GPU with Vulkan support
-- Proper GPU drivers installed
-- Access to `/dev/dri` and `/dev/kfd` devices
+When using the `proxy` profile, you can customize domain names:

-### NVIDIA CUDA Profile
-- NVIDIA GPU with CUDA support
-- NVIDIA Container Toolkit installed
-- Proper GPU drivers
+```bash
+TRAEFIK_HOST=traefik.yourdomain.com
+PROMETHEUS_HOST=prometheus.yourdomain.com
+GRAFANA_HOST=grafana.yourdomain.com
+BLACKBOX_HOST=blackbox.yourdomain.com
+CANARY_HOST=canary.yourdomain.com
+LPR_APP_HOST=lpr.yourdomain.com
+```

-## Development vs Production
+## Coolify Deployment

-### Development
-- Uses localhost domains
-- Debug mode enabled
-- Basic authentication only
+For Coolify deployments:

-### Production (TODO)
-- Configure proper domains
-- Set up SSL certificates
-- Secure authentication
-- Resource limits
-- Backup strategies
+1. **Use the `core` profile** - don't include `proxy`
+2. **DO NOT set** `TRAEFIK_HTTP_PORT` or `TRAEFIK_DASHBOARD_PORT`
+3. Coolify provides its own reverse proxy and routing
+4. Services communicate via internal Docker network
+5. Access services via Coolify's configured domains

-## Troubleshooting
+```bash
+# Coolify deployment command
+docker compose --profile core up -d
+```

-### Common Issues
+## Managing Profiles

-1. **Port Conflicts**: Ensure ports 80, 8080, 3000, 9090, 8000, 8001 are available
-2. **GPU Access**: Verify GPU drivers and container runtime for GPU profiles
-3. **Permission Issues**: Check Docker socket access for Traefik
-4. **Model Downloads**: Verify HuggingFace token and network access
+### View running services
+```bash
+docker compose ps
+```

-### Health Checks
+### Stop specific profile
+```bash
+docker compose --profile proxy down
+```

-All services include health checks. Monitor with:
+### Stop all services
 ```bash
-docker-compose ps
-docker-compose logs [service-name]
+docker compose down
 ```

-## Migration from Old Setup
+### View logs
+```bash
+# All services
+docker compose logs -f
+
+# Specific service
+docker compose logs -f lpr-app
+
+# Specific profile
+docker compose --profile proxy logs -f
+```

-The old separate compose files are deprecated. To migrate:
-1. Backup existing data in `container-data` and `container-media`
-2. Update environment file format
-3. Use new profile commands to start services
-4. Verify all functionality works as expected
+## Network Architecture

-## Customization
+All services are connected to the `openlpr-network` bridge network:

-### Adding New Services
-Add services to `docker-compose.yml` with appropriate profiles and labels.
+- **Internal communication**: Services use DNS names (e.g., `http://lpr-app:8000`)
+- **External access**: Via exposed ports or proxy routing
+- **No proxy needed**: Services communicate with each other regardless of proxy profile

-### Modifying Routes
-Update `traefik/dynamic/config.yml` for custom routing rules.
+Example internal communication:
+- Prometheus scrapes metrics from `http://lpr-app:8000/metrics`
+- Blackbox exporter probes `http://lpr-app:8000/health`
+- Canary tests `http://lpr-app:8000/api/v1/ocr/`
+- Grafana connects to `http://prometheus:9090`

-### Monitoring Configuration
-Modify `prometheus/prometheus.yml` and Grafana provisioning files for custom metrics.
+This internal communication works with or without the Traefik proxy enabled.
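Once the `core` profile is running, this wiring can be spot-checked from the host. A sketch: the Prometheus and Grafana paths are standard endpoints of those tools, while the `http_2xx` module name is an assumption about the blackbox exporter's configuration in this repository.

```bash
# Prometheus should list lpr-app, the blackbox exporter, and the canary
# among its scrape targets.
curl -fsS http://localhost:9090/api/v1/targets

# Grafana liveness (standard /api/health endpoint).
curl -fsS http://localhost:3000/api/health

# Ask the blackbox exporter to probe lpr-app the way Prometheus would
# (the "http_2xx" module name is assumed; adjust to the actual config).
curl -fsS "http://localhost:9115/probe?target=http://lpr-app:8000/health&module=http_2xx"
```
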

docker-compose.yml

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ services:
       - openlpr-network
     restart: unless-stopped
     profiles:
-      - core
+      - proxy
     labels:
       - "traefik.enable=true"
       - "traefik.http.routers.traefik.rule=${TRAEFIK_HOST:-Host(`traefik.localhost`)}"
