# Distributed Configuration

This document describes how to configure the API for distributed operation.

## Overview

The API supports distributed operation using Redis or MongoDB as a backend for:

- Session storage
- Memory persistence
- Tool usage logging
- Conversation history

This allows multiple API instances to share state and work together in a horizontally scaled environment.
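To illustrate what "shared state" means here, the sketch below models a session store keyed by a configurable prefix (the `datamcp:` prefix comes from the `API_REDIS_PREFIX` setting; the `session:` key layout and class name are assumptions for illustration, not the API's actual schema). A plain dict stands in for Redis or MongoDB:

```python
class PrefixedSessionStore:
    """Dict-backed stand-in for a shared Redis/MongoDB session store.

    The prefix-plus-"session:" key layout is illustrative only; a real
    deployment would back this with the configured Redis or MongoDB instance.
    """

    def __init__(self, prefix: str = "datamcp:"):
        self.prefix = prefix
        self._data: dict = {}

    def _key(self, session_id: str) -> str:
        return f"{self.prefix}session:{session_id}"

    def save(self, session_id: str, state: dict) -> None:
        self._data[self._key(session_id)] = state

    def load(self, session_id: str):
        return self._data.get(self._key(session_id))


# Because the store is shared, any API instance can load a session
# another instance saved:
store = PrefixedSessionStore()
store.save("abc123", {"user": "alice"})
print(store.load("abc123"))  # {'user': 'alice'}
```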

## Redis Configuration

### Environment Variables

To enable distributed operation with Redis, set the following environment variables:

```bash
# Enable distributed mode
API_ENABLE_DISTRIBUTED=true
API_DISTRIBUTED_BACKEND=redis
API_SESSION_STORE=redis

# Redis connection settings
API_REDIS_HOST=localhost
API_REDIS_PORT=6379
API_REDIS_DB=0
API_REDIS_PASSWORD=your_password
API_REDIS_PREFIX=datamcp:

# Memory backend
API_MEMORY_BACKEND=redis
```
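One way an application might consume these variables is sketched below (an illustration with the defaults shown above, not the API's actual settings loader):

```python
import os


def load_redis_settings(env=None) -> dict:
    """Read the API_REDIS_* variables, falling back to the documented defaults."""
    env = os.environ if env is None else env
    return {
        "host": env.get("API_REDIS_HOST", "localhost"),
        "port": int(env.get("API_REDIS_PORT", "6379")),
        "db": int(env.get("API_REDIS_DB", "0")),
        "password": env.get("API_REDIS_PASSWORD"),  # no default: required if Redis has auth
        "prefix": env.get("API_REDIS_PREFIX", "datamcp:"),
    }


settings = load_redis_settings({"API_REDIS_HOST": "redis", "API_REDIS_PORT": "6379"})
print(settings["host"], settings["port"])  # redis 6379
```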

### Docker Compose Example

Here's an example Docker Compose configuration for running the API with Redis:

```yaml
version: '3'

services:
  api1:
    build: .
    ports:
      - "8000:8000"
    environment:
      - API_ENABLE_DISTRIBUTED=true
      - API_DISTRIBUTED_BACKEND=redis
      - API_SESSION_STORE=redis
      - API_REDIS_HOST=redis
      - API_REDIS_PORT=6379
      - API_REDIS_DB=0
      - API_REDIS_PASSWORD=your_password
      - API_REDIS_PREFIX=datamcp:
      - API_MEMORY_BACKEND=redis
    depends_on:
      - redis

  api2:
    build: .
    ports:
      - "8001:8000"
    environment:
      - API_ENABLE_DISTRIBUTED=true
      - API_DISTRIBUTED_BACKEND=redis
      - API_SESSION_STORE=redis
      - API_REDIS_HOST=redis
      - API_REDIS_PORT=6379
      - API_REDIS_DB=0
      - API_REDIS_PASSWORD=your_password
      - API_REDIS_PREFIX=datamcp:
      - API_MEMORY_BACKEND=redis
    depends_on:
      - redis

  redis:
    image: redis:7
    ports:
      - "6379:6379"
    command: redis-server --requirepass your_password
    volumes:
      - redis-data:/data

volumes:
  redis-data:
```

## MongoDB Configuration

### Environment Variables

To enable distributed operation with MongoDB, set the following environment variables:

```bash
# Enable distributed mode
API_ENABLE_DISTRIBUTED=true
API_DISTRIBUTED_BACKEND=mongodb
API_SESSION_STORE=mongodb

# MongoDB connection settings
API_MONGODB_URI=mongodb://localhost:27017
API_MONGODB_DB=datamcp

# Memory backend
API_MEMORY_BACKEND=mongodb
```
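For intuition about what `API_MONGODB_URI` encodes, here is a minimal sketch that pulls the host, port, and database name out of a simple URI (illustrative only; a real driver such as pymongo also handles replica sets, connection options, and credentials):

```python
from urllib.parse import urlsplit


def parse_mongo_target(uri: str, default_db: str = "datamcp") -> dict:
    """Extract host/port/db from a simple mongodb:// URI.

    When the URI carries no database path, fall back to the value of
    API_MONGODB_DB ("datamcp" in the examples above).
    """
    parts = urlsplit(uri)
    return {
        "host": parts.hostname or "localhost",
        "port": parts.port or 27017,
        "db": parts.path.lstrip("/") or default_db,
    }


print(parse_mongo_target("mongodb://mongodb:27017"))
# {'host': 'mongodb', 'port': 27017, 'db': 'datamcp'}
```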

### Docker Compose Example

Here's an example Docker Compose configuration for running the API with MongoDB:

```yaml
version: '3'

services:
  api1:
    build: .
    ports:
      - "8000:8000"
    environment:
      - API_ENABLE_DISTRIBUTED=true
      - API_DISTRIBUTED_BACKEND=mongodb
      - API_SESSION_STORE=mongodb
      - API_MONGODB_URI=mongodb://mongodb:27017
      - API_MONGODB_DB=datamcp
      - API_MEMORY_BACKEND=mongodb
    depends_on:
      - mongodb

  api2:
    build: .
    ports:
      - "8001:8000"
    environment:
      - API_ENABLE_DISTRIBUTED=true
      - API_DISTRIBUTED_BACKEND=mongodb
      - API_SESSION_STORE=mongodb
      - API_MONGODB_URI=mongodb://mongodb:27017
      - API_MONGODB_DB=datamcp
      - API_MEMORY_BACKEND=mongodb
    depends_on:
      - mongodb

  mongodb:
    image: mongo:6
    ports:
      - "27017:27017"
    volumes:
      - mongodb-data:/data/db

volumes:
  mongodb-data:
```

## Load Balancing

To distribute traffic across multiple API instances, you can use a load balancer such as Nginx, HAProxy, or a cloud load balancer.

### Nginx Example

Here's an example Nginx configuration for load balancing:

```nginx
upstream datamcp_api {
    server api1:8000;
    server api2:8000;
    # Add more servers as needed
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://datamcp_api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
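With no other directives, Nginx balances an upstream round-robin: each request goes to the next server in turn. Conceptually:

```python
from itertools import cycle

# Round-robin over upstream servers, as nginx does by default:
# each request is handed to the next server in the list, wrapping around.
servers = ["api1:8000", "api2:8000"]
rr = cycle(servers)

for _ in range(4):
    print(next(rr))
# Prints: api1:8000, api2:8000, api1:8000, api2:8000
```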

## Session Persistence

When using a load balancer, you should ensure that requests from the same client are routed to the same API instance (session persistence, or "sticky sessions"). This is important for WebSocket connections and streaming responses.

### Nginx Example with Sticky Sessions

```nginx
upstream datamcp_api {
    ip_hash; # This ensures requests from the same IP go to the same server
    server api1:8000;
    server api2:8000;
    # Add more servers as needed
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://datamcp_api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```
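The property `ip_hash` provides can be sketched as follows: hash the client address, then index into the server list, so the same client always lands on the same instance (Nginx's actual algorithm differs from this sketch; this only demonstrates the same-IP-to-same-server behavior):

```python
import hashlib


def pick_server(client_ip: str, servers: list) -> str:
    """Deterministically map a client IP to one upstream server.

    Uses an MD5 digest of the address modulo the server count; not nginx's
    real ip_hash algorithm, just the same stickiness property.
    """
    digest = hashlib.md5(client_ip.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]


servers = ["api1:8000", "api2:8000"]
# The same IP always maps to the same server:
print(pick_server("203.0.113.7", servers) == pick_server("203.0.113.7", servers))  # True
```

Note that hash-based stickiness redistributes some clients whenever the server list changes, which is one reason shared session storage (above) is still needed.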

## Monitoring

When running in a distributed environment, it's important to monitor the health and performance of your API instances and backend services.

You can use tools such as Prometheus, Grafana, or the ELK stack for monitoring and logging.

## Scaling

To scale the API horizontally, add more API instances to your deployment. The shared state in Redis or MongoDB ensures that all instances work together seamlessly.

You can also scale the Redis or MongoDB backends themselves for better performance and reliability.