|
1 | | -# Docker Compose Profiles for OpenLPR |
| 1 | +# Docker Compose Profiles Guide |
2 | 2 |
|
3 | | -This document describes the refactored Docker Compose setup using the merge design pattern with profiles. |
| 3 | +This document explains the different Docker Compose profiles available for deploying OpenLPR. |
4 | 4 |
|
5 | | -## Profiles |
| 5 | +## Overview |
6 | 6 |
|
7 | | -### Core Profile (`core`) |
8 | | -Contains the core infrastructure services: |
9 | | -- **traefik**: Reverse proxy with dashboard (http://traefik.localhost) |
10 | | -- **lpr-app**: Main Django application (http://lpr.localhost) |
11 | | -- **prometheus**: Monitoring system (http://prometheus.localhost) |
12 | | -- **grafana**: Visualization dashboard (http://grafana.localhost) |
| 7 | +OpenLPR uses Docker Compose profiles to allow flexible deployment scenarios. Each profile groups related services together. |
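
The grouping is done with the `profiles:` key on each service in `docker-compose.yml`. As a rough sketch only (the service names follow this document, but the images and exact definitions are placeholders, not OpenLPR's actual Compose file):

```yaml
# Illustrative sketch of profile grouping; image names are assumptions.
services:
  lpr-app:
    image: openlpr/lpr-app:latest     # placeholder image
    profiles: ["core"]                # starts with: docker compose --profile core up
  traefik:
    image: traefik:v3.0               # placeholder image
    profiles: ["proxy"]               # starts only when --profile proxy is passed
  llamacpp-cpu:
    image: ghcr.io/ggml-org/llama.cpp:server   # placeholder image
    profiles: ["cpu"]                 # one inference profile is chosen per deployment
```

Services without a `profiles:` key always start; a profiled service starts only when its profile is activated on the command line or via `COMPOSE_PROFILES`.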
13 | 8 |
|
14 | | -### Inference Profiles |
15 | | -Choose one based on your hardware: |
| 9 | +## Available Profiles |
16 | 10 |
|
17 | | -#### CPU Profile (`cpu`) |
18 | | -- **llamacpp-cpu**: CPU-based inference server |
| 11 | +### `core` - Core Application and Monitoring |
19 | 12 |
|
20 | | -#### AMD Vulkan Profile (`amd-vulkan`) |
21 | | -- **llamacpp-amd-vulkan**: AMD GPU inference with Vulkan support |
| 13 | +**Services included:** |
| 14 | +- `lpr-app` - Main OpenLPR Django application |
| 15 | +- `prometheus` - Metrics collection and storage |
| 16 | +- `grafana` - Metrics visualization and dashboards |
| 17 | +- `blackbox-exporter` - HTTP probe for health checking |
| 18 | +- `lpr-canary` - Canary tests and synthetic monitoring |
22 | 19 |
|
23 | | -#### NVIDIA CUDA Profile (`nvidia-cuda`) |
24 | | -- **llamacpp-nvidia-cuda**: NVIDIA GPU inference with CUDA support |
| 20 | +**When to use:** |
| 21 | +- Production deployments |
| 22 | +- Development with monitoring |
| 23 | +- When you don't need a reverse proxy |
| 24 | +- Coolify deployments (Coolify provides its own routing)
25 | 25 |
|
26 | | -## Usage Examples |
27 | | - |
28 | | -### Start with Core Infrastructure + CPU Inference |
| 26 | +**Deployment:** |
29 | 27 | ```bash |
30 | | -docker-compose --profile core --profile cpu up -d |
| 28 | +docker compose --profile core up -d |
31 | 29 | ``` |
32 | 30 |
|
33 | | -### Start with Core Infrastructure + NVIDIA Inference |
| 31 | +**Access points:** |
| 32 | +- OpenLPR: http://localhost:8000 |
| 33 | +- Prometheus: http://localhost:9090 |
| 34 | +- Grafana: http://localhost:3000 (default: admin/admin) |
| 35 | +- Blackbox Exporter: http://localhost:9115 |
| 36 | +- Canary Metrics: http://localhost:9100/metrics |
| 37 | + |
| 38 | +**Network Communication:** |
| 39 | +All services in the `core` profile communicate via the `openlpr-network` bridge network, regardless of whether a reverse proxy is used. Services use internal DNS names (e.g., `lpr-app`, `prometheus`) to communicate with each other. |
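
In Compose terms this just means every service joins the same user-defined network. A minimal sketch, assuming the network and service names used throughout this document:

```yaml
# Sketch: shared bridge network for inter-service DNS resolution.
# Service bodies are elided; only the network wiring is shown.
networks:
  openlpr-network:
    driver: bridge

services:
  lpr-app:
    networks: [openlpr-network]
  prometheus:
    networks: [openlpr-network]   # can now reach the app at http://lpr-app:8000
```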
| 40 | + |
| 41 | +### `proxy` - Reverse Proxy (Traefik) |
| 42 | + |
| 43 | +**Services included:** |
| 44 | +- `traefik` - Modern HTTP reverse proxy and load balancer |
| 45 | + |
| 46 | +**When to use:** |
| 47 | +- Local development with domain-based routing |
| 48 | +- When you need a reverse proxy but are not using Coolify
| 49 | +- To access services via friendly URLs (e.g., `lpr.localhost`) |
| 50 | +- To enable HTTPS/TLS termination |
| 51 | + |
| 52 | +**Deployment:** |
34 | 53 | ```bash |
35 | | -docker-compose --profile core --profile nvidia-cuda up -d |
| 54 | +# Deploy core services + proxy |
| 55 | +docker compose --profile core --profile proxy up -d |
| 56 | + |
| 57 | +# Or add proxy to existing core deployment |
| 58 | +docker compose --profile proxy up -d |
36 | 59 | ``` |
37 | 60 |
|
38 | | -### Start with Core Infrastructure + AMD Vulkan Inference |
| 61 | +**Access points:** |
| 62 | +- OpenLPR: http://lpr.localhost |
| 63 | +- Prometheus: http://prometheus.localhost |
| 64 | +- Grafana: http://grafana.localhost |
| 65 | +- Blackbox Exporter: http://blackbox.localhost |
| 66 | +- Canary: http://canary.localhost |
| 67 | +- Traefik Dashboard: http://traefik.localhost (or `TRAEFIK_HOST`) |
| 68 | + |
| 69 | +**Note:** By default, Traefik's host ports are assigned randomly by Docker (e.g., 32768, 32769). Set `TRAEFIK_HTTP_PORT` and `TRAEFIK_DASHBOARD_PORT` to pin specific ports (e.g., 80, 8080).
| 70 | + |
| 71 | +### `cpu` - CPU Inference |
| 72 | + |
| 73 | +**Services included:** |
| 74 | +- `llamacpp-cpu` - Llama.cpp server for CPU inference |
| 75 | + |
| 76 | +**When to use:** |
| 77 | +- Systems without GPU acceleration |
| 78 | +- Development and testing |
| 79 | +- When model inference performance is not critical |
| 80 | + |
| 81 | +**Deployment:** |
39 | 82 | ```bash |
40 | | -docker-compose --profile core --profile amd-vulkan up -d |
| 83 | +docker compose --profile core --profile cpu up -d |
41 | 84 | ``` |
42 | 85 |
|
43 | | -### Start Only Core Services |
| 86 | +**Configuration:** |
| 87 | +See `.env.llamacpp` for model configuration. |
| 88 | + |
| 89 | +### `amd-vulkan` - AMD GPU Inference |
| 90 | + |
| 91 | +**Services included:** |
| 92 | +- `llamacpp-amd-vulkan` - Llama.cpp server for AMD GPU inference (Vulkan) |
| 93 | + |
| 94 | +**When to use:** |
| 95 | +- Systems with AMD GPUs supporting Vulkan |
| 96 | +- Production deployments requiring faster inference |
| 97 | + |
| 98 | +**Requirements:** |
| 99 | +- AMD GPU with Vulkan support |
| 100 | +- GPU drivers installed |
| 101 | +- `/dev/kfd` and `/dev/dri` device access |
| 102 | + |
| 103 | +**Deployment:** |
44 | 104 | ```bash |
45 | | -docker-compose --profile core up -d |
| 105 | +docker compose --profile core --profile amd-vulkan up -d |
46 | 106 | ``` |
47 | 107 |
|
48 | | -### Stop All Services |
| 108 | +### `nvidia-cuda` - NVIDIA GPU Inference |
| 109 | + |
| 110 | +**Services included:** |
| 111 | +- `llamacpp-nvidia-cuda` - Llama.cpp server for NVIDIA GPU inference (CUDA) |
| 112 | + |
| 113 | +**When to use:** |
| 114 | +- Systems with NVIDIA GPUs |
| 115 | +- Production deployments requiring the fastest inference
| 116 | +- When using the NVIDIA CUDA ecosystem
| 117 | + |
| 118 | +**Requirements:** |
| 119 | +- NVIDIA GPU with CUDA support |
| 120 | +- NVIDIA Container Toolkit installed |
| 121 | +- GPU drivers installed |
| 122 | + |
| 123 | +**Deployment:** |
49 | 124 | ```bash |
50 | | -docker-compose down |
| 125 | +docker compose --profile core --profile nvidia-cuda up -d |
51 | 126 | ``` |
52 | 127 |
|
53 | | -## Environment Configuration |
| 128 | +## Profile Combinations |
54 | 129 |
|
55 | | -Copy the example environment file: |
| 130 | +### Development Setup (No Proxy) |
56 | 131 | ```bash |
57 | | -cp .env.llamacpp.example .env.llamacpp |
| 132 | +docker compose --profile core up -d |
58 | 133 | ``` |
| 134 | +Access services directly via their published localhost ports.
59 | 135 |
|
60 | | -Edit `.env.llamacpp` with your specific configuration: |
61 | | -- HuggingFace token for model downloads |
62 | | -- Django settings (SECRET_KEY, DEBUG, etc.) |
63 | | -- Grafana credentials |
64 | | -- Model configuration |
65 | | - |
66 | | -## Access Points |
| 136 | +### Development Setup (With Proxy) |
| 137 | +```bash |
| 138 | +docker compose --profile core --profile proxy up -d |
| 139 | +``` |
| 140 | +Access services via domain names. |
67 | 141 |
|
68 | | -After starting the services: |
| 142 | +### Production with CPU Inference |
| 143 | +```bash |
| 144 | +docker compose --profile core --profile cpu up -d |
| 145 | +``` |
69 | 146 |
|
70 | | -- **OpenLPR Application**: http://lpr.localhost |
71 | | -- **Traefik Dashboard**: http://traefik.localhost |
72 | | -- **Prometheus**: http://prometheus.localhost |
73 | | -- **Grafana**: http://grafana.localhost (admin/admin by default) |
| 147 | +### Production with NVIDIA GPU |
| 148 | +```bash |
| 149 | +docker compose --profile core --profile nvidia-cuda up -d |
| 150 | +``` |
74 | 151 |
|
75 | | -## Service Dependencies |
| 152 | +### Production with AMD GPU |
| 153 | +```bash |
| 154 | +docker compose --profile core --profile amd-vulkan up -d |
| 155 | +``` |
76 | 156 |
|
77 | | -- `lpr-app` depends on a healthy inference service (when inference profiles are active) |
78 | | -- All services share the `openlpr-network` for communication |
79 | | -- Volumes are shared for data persistence |
| 157 | +## Environment Variables |
80 | 158 |
|
81 | | -## Monitoring |
| 159 | +### Service Ports |
82 | 160 |
|
83 | | -The setup includes comprehensive monitoring: |
84 | | -- **Prometheus** collects metrics from all services |
85 | | -- **Grafana** provides visualization dashboards |
86 | | -- **Traefik** provides request metrics and routing |
| 161 | +All service ports are configurable via environment variables: |
87 | 162 |
|
88 | | -## Hardware Requirements |
| 163 | +```bash |
| 164 | +# Core services |
| 165 | +LPR_APP_PORT=8000 |
| 166 | +PROMETHEUS_PORT=9090 |
| 167 | +GRAFANA_PORT=3000 |
| 168 | +BLACKBOX_PORT=9115 |
| 169 | +CANARY_PORT=9100 |
| 170 | + |
| 171 | +# Proxy (only when using proxy profile) |
| 172 | +TRAEFIK_HTTP_PORT=80 # Leave unset for Coolify |
| 173 | +TRAEFIK_DASHBOARD_PORT=8080 # Leave unset for Coolify |
| 174 | +``` |
89 | 175 |
|
90 | | -### CPU Profile |
91 | | -- Any modern CPU with sufficient RAM |
92 | | -- Recommended: 8GB+ RAM for model loading |
| 176 | +### Proxy Host Names |
93 | 177 |
|
94 | | -### AMD Vulkan Profile |
95 | | -- AMD GPU with Vulkan support |
96 | | -- Proper GPU drivers installed |
97 | | -- Access to `/dev/dri` and `/dev/kfd` devices |
| 178 | +When using the `proxy` profile, you can customize domain names: |
98 | 179 |
|
99 | | -### NVIDIA CUDA Profile |
100 | | -- NVIDIA GPU with CUDA support |
101 | | -- NVIDIA Container Toolkit installed |
102 | | -- Proper GPU drivers |
| 180 | +```bash |
| 181 | +TRAEFIK_HOST=traefik.yourdomain.com |
| 182 | +PROMETHEUS_HOST=prometheus.yourdomain.com |
| 183 | +GRAFANA_HOST=grafana.yourdomain.com |
| 184 | +BLACKBOX_HOST=blackbox.yourdomain.com |
| 185 | +CANARY_HOST=canary.yourdomain.com |
| 186 | +LPR_APP_HOST=lpr.yourdomain.com |
| 187 | +``` |
103 | 188 |
|
104 | | -## Development vs Production |
| 189 | +## Coolify Deployment |
105 | 190 |
|
106 | | -### Development |
107 | | -- Uses localhost domains |
108 | | -- Debug mode enabled |
109 | | -- Basic authentication only |
| 191 | +For Coolify deployments: |
110 | 192 |
|
111 | | -### Production (TODO) |
112 | | -- Configure proper domains |
113 | | -- Set up SSL certificates |
114 | | -- Secure authentication |
115 | | -- Resource limits |
116 | | -- Backup strategies |
| 193 | +1. **Use the `core` profile** - don't include `proxy` |
| 194 | +2. **DO NOT set** `TRAEFIK_HTTP_PORT` or `TRAEFIK_DASHBOARD_PORT` |
| 195 | +3. Coolify provides its own reverse proxy and routing |
| 196 | +4. Services communicate via internal Docker network |
| 197 | +5. Access services via Coolify's configured domains |
117 | 198 |
|
118 | | -## Troubleshooting |
| 199 | +```bash |
| 200 | +# Coolify deployment command |
| 201 | +docker compose --profile core up -d |
| 202 | +``` |
119 | 203 |
|
120 | | -### Common Issues |
| 204 | +## Managing Profiles |
121 | 205 |
|
122 | | -1. **Port Conflicts**: Ensure ports 80, 8080, 3000, 9090, 8000, 8001 are available |
123 | | -2. **GPU Access**: Verify GPU drivers and container runtime for GPU profiles |
124 | | -3. **Permission Issues**: Check Docker socket access for Traefik |
125 | | -4. **Model Downloads**: Verify HuggingFace token and network access |
| 206 | +### View running services |
| 207 | +```bash |
| 208 | +docker compose ps |
| 209 | +``` |
126 | 210 |
|
127 | | -### Health Checks |
| 211 | +### Stop specific profile |
| 212 | +```bash |
| 213 | +docker compose --profile proxy down |
| 214 | +``` |
128 | 215 |
|
129 | | -All services include health checks. Monitor with: |
| 216 | +### Stop all services |
130 | 217 | ```bash |
131 | | -docker-compose ps |
132 | | -docker-compose logs [service-name] |
| 218 | +docker compose down |
133 | 219 | ``` |
134 | 220 |
|
135 | | -## Migration from Old Setup |
| 221 | +### View logs |
| 222 | +```bash |
| 223 | +# All services |
| 224 | +docker compose logs -f |
| 225 | + |
| 226 | +# Specific service |
| 227 | +docker compose logs -f lpr-app |
| 228 | + |
| 229 | +# Specific profile |
| 230 | +docker compose --profile proxy logs -f |
| 231 | +``` |
136 | 232 |
|
137 | | -The old separate compose files are deprecated. To migrate: |
138 | | -1. Backup existing data in `container-data` and `container-media` |
139 | | -2. Update environment file format |
140 | | -3. Use new profile commands to start services |
141 | | -4. Verify all functionality works as expected |
| 233 | +## Network Architecture |
142 | 234 |
|
143 | | -## Customization |
| 235 | +All services are connected to the `openlpr-network` bridge network: |
144 | 236 |
|
145 | | -### Adding New Services |
146 | | -Add services to `docker-compose.yml` with appropriate profiles and labels. |
| 237 | +- **Internal communication**: Services use DNS names (e.g., `http://lpr-app:8000`) |
| 238 | +- **External access**: Via exposed ports or proxy routing |
| 239 | +- **No proxy needed**: Services communicate with each other regardless of proxy profile |
147 | 240 |
|
148 | | -### Modifying Routes |
149 | | -Update `traefik/dynamic/config.yml` for custom routing rules. |
| 241 | +Examples of internal communication:
| 242 | +- Prometheus scrapes metrics from `http://lpr-app:8000/metrics` |
| 243 | +- Blackbox exporter probes `http://lpr-app:8000/health` |
| 244 | +- Canary tests `http://lpr-app:8000/api/v1/ocr/` |
| 245 | +- Grafana connects to `http://prometheus:9090` |
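
The Prometheus side of the first bullet could look like the following `prometheus.yml` fragment. This is a sketch: the job name and interval are assumptions, and only the target address follows this document's actual setup:

```yaml
# Sketch of a scrape job using the internal DNS name on openlpr-network.
# job_name and scrape_interval are illustrative assumptions.
scrape_configs:
  - job_name: lpr-app
    scrape_interval: 15s
    metrics_path: /metrics
    static_configs:
      - targets: ["lpr-app:8000"]   # resolved by Docker's embedded DNS
```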
150 | 246 |
|
151 | | -### Monitoring Configuration |
152 | | -Modify `prometheus/prometheus.yml` and Grafana provisioning files for custom metrics. |
| 247 | +This internal communication works with or without the Traefik proxy enabled. |