How to deploy Crypto Vision to various environments.
- Quick Start (Local)
- Docker Compose
- Google Cloud Run
- Kubernetes
- Self-Hosted (VPS)
- Environment Variables
- CI/CD Pipeline
- Monitoring & Health Checks
- Scaling
## Quick Start (Local)

```bash
# Install dependencies
npm install

# Configure environment
cp .env.example .env
# Edit .env with your API keys

# Start development server (hot reload)
npm run dev

# Or build and run production
npm run build
npm start
```

Server starts on http://localhost:8080.
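For scripted setups, keys in `.env` can be filled non-interactively with `sed`. The snippet below is an illustrative sketch: it fabricates a stand-in template so it is self-contained; in the repo you would use the real `.env.example` directly.

```shell
# Illustrative only: create a stand-in template so this snippet is self-contained.
# In the repo, skip this line and use the real .env.example.
printf 'PORT=8080\nREDIS_URL=\n' > .env.example

cp .env.example .env
# Fill in a value without opening an editor (-i.bak works on both GNU and BSD sed)
sed -i.bak 's|^REDIS_URL=.*|REDIS_URL=redis://localhost:6379|' .env
grep '^REDIS_URL=' .env
```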
## Docker Compose

The simplest production-ready deployment. No cloud dependencies.
```bash
# Build and start all services
docker compose up -d

# View logs
docker compose logs -f api

# Stop
docker compose down
```

Services:
| Service | Image | Port | Resources |
|---|---|---|---|
| `api` | Custom (Dockerfile) | 8080 | 2Gi RAM, 4 CPU |
| `redis` | `redis:7-alpine` | 6379 | 256MB maxmemory, LRU eviction, AOF |
| `postgres` | `postgres:16-alpine` | 5432 | Default |
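As a sketch of how the redis limits in the table might be configured (the repo's actual `docker-compose.yml` is authoritative; the flags are standard `redis-server` options):

```yaml
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    # 256MB cap, evict least-recently-used keys, persist via append-only file
    command: >
      redis-server
      --maxmemory 256mb
      --maxmemory-policy allkeys-lru
      --appendonly yes
```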
```bash
# Start full stack + ingestion pipeline
docker compose -f docker-compose.yml -f docker-compose.ingest.yml up -d
```

Adds:
- Pub/Sub emulator
- 8 ingestion workers (market, defi, news, dex, derivatives, onchain, governance, macro)
- BigQuery emulator
The Dockerfile uses a 3-stage build:
```dockerfile
# Stage 1: Dependencies (cached layer)
FROM node:22-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Stage 2: Build (TypeScript compilation)
FROM node:22-alpine AS build
WORKDIR /app
COPY . .
RUN npm ci            # full install: dev deps are needed for the TypeScript build
RUN npm run build

# Stage 3: Production (minimal image)
FROM node:22-alpine AS production
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
USER node
EXPOSE 8080
HEALTHCHECK --interval=30s CMD wget -qO- http://localhost:8080/health
CMD ["node", "dist/src/index.js"]
```

## Google Cloud Run

Prerequisites:

- GCP project with billing enabled
- `gcloud` CLI authenticated
- Artifact Registry repository created
```bash
# Submit build (uses cloudbuild.yaml)
gcloud builds submit --config=cloudbuild.yaml

# This automatically:
# 1. Type-checks
# 2. Lints
# 3. Runs tests
# 4. Builds container
# 5. Pushes to Artifact Registry
# 6. Deploys canary (5% traffic)
# 7. Health checks
# 8. Promotes to 100%
```

Manual deployment:

```bash
# Build and push
docker build -t gcr.io/$PROJECT_ID/crypto-vision .
docker push gcr.io/$PROJECT_ID/crypto-vision

# Deploy to Cloud Run
gcloud run deploy crypto-vision \
  --image gcr.io/$PROJECT_ID/crypto-vision \
  --region us-central1 \
  --memory 2Gi \
  --cpu 4 \
  --min-instances 2 \
  --max-instances 500 \
  --port 8080 \
  --set-secrets="REDIS_URL=redis-url:latest,GROQ_API_KEY=groq-key:latest" \
  --allow-unauthenticated
```

Infrastructure as code:

```bash
cd infra/terraform
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars
terraform init
terraform plan
terraform apply
```

Terraform manages:
- Cloud Run service
- Redis (Memorystore)
- Secret Manager secrets
- Cloud Scheduler jobs
- IAM roles
- Pub/Sub topics
- BigQuery datasets
- Monitoring alerts
- VPC networking
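A minimal `terraform.tfvars` might look like the sketch below; the actual variable names are defined in `infra/terraform/variables.tf` and may differ:

```hcl
# Illustrative values only: check variables.tf for the real variable names
project_id = "my-gcp-project"
region     = "us-central1"
```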
```bash
gcloud builds submit --config=cloudbuild-workers.yaml
```

Deploys 8 Cloud Run Jobs for data ingestion + 1 backfill job.
```bash
cd infra
./teardown.sh
# Or: cd terraform && terraform destroy
```

## Kubernetes

Portable Kubernetes manifests in infra/k8s/.
```bash
# Create namespace
kubectl apply -f infra/k8s/namespace.yaml

# Create secrets (from template)
cp infra/k8s/secrets-template.yaml infra/k8s/secrets.yaml
# Edit secrets.yaml with base64-encoded values
kubectl apply -f infra/k8s/secrets.yaml

# Deploy all resources
kubectl apply -f infra/k8s/

# Verify
kubectl get pods -n crypto-vision
kubectl get svc -n crypto-vision
```

| Manifest | Resource | Description |
|---|---|---|
| `deployment.yaml` | Deployment | API server (2 replicas, 2Gi/4CPU) |
| `service.yaml` | Service | ClusterIP service on port 8080 |
| `hpa.yaml` | HPA | Auto-scale 2–50 pods on CPU/memory |
| `redis.yaml` | StatefulSet | Redis 7 with persistent volume |
| `cronjobs.yaml` | CronJob | 7 scheduled data refresh jobs |
| `network-policies.yaml` | NetworkPolicy | Pod-to-pod traffic rules |
| `pdb.yaml` | PDB | Max 1 unavailable during rollout |
| `inference-deployment.yaml` | Deployment | Model inference server |
| `training-job.yaml` | Job | GPU training job (GKE GPU node pool) |
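Kubernetes `Secret` values must be base64-encoded. One way to produce them for `secrets.yaml` (illustrative value shown):

```shell
# -n is important: a trailing newline would corrupt the decoded value
echo -n 'redis://localhost:6379' | base64
# prints "cmVkaXM6Ly9sb2NhbGhvc3Q6NjM3OQ=="
```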
| Platform | Status |
|---|---|
| GKE (Google) | Production |
| EKS (AWS) | Compatible |
| AKS (Azure) | Compatible |
| k3s / k3d | Compatible (lightweight) |
| minikube | Development |
## Self-Hosted (VPS)

Minimal deployment on a single VPS (2+ CPU, 4+ GB RAM).
```bash
# On the VPS
git clone https://github.com/nirholas/crypto-vision.git
cd crypto-vision
cp .env.example .env
# Edit .env
docker compose up -d
```

Or run directly under systemd:

```bash
# Build
npm install && npm run build

# Create systemd service (sudo with a plain > redirect does not elevate
# the write, so use tee instead)
sudo tee /etc/systemd/system/crypto-vision.service > /dev/null << 'EOF'
[Unit]
Description=Crypto Vision API
After=network.target redis.service

[Service]
Type=simple
User=node
WorkingDirectory=/opt/crypto-vision
EnvironmentFile=/opt/crypto-vision/.env
ExecStart=/usr/bin/node dist/src/index.js
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable crypto-vision
sudo systemctl start crypto-vision
```

Nginx reverse proxy with WebSocket support:

```nginx
server {
    listen 443 ssl http2;
    server_name cryptocurrency.cv;

    ssl_certificate /etc/ssl/certs/fullchain.pem;
    ssl_certificate_key /etc/ssl/private/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /ws/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

## Environment Variables

See the root README.md for the complete environment variable reference.
| Variable | Description |
|---|---|
| `NODE_ENV` | Set to `production` |
| `PORT` | HTTP port (default: 8080) |
| `REDIS_URL` | Redis connection string |
| `CORS_ORIGINS` | Comma-separated allowed origins |
| Variable | Description |
|---|---|
| `COINGECKO_API_KEY` | Higher rate limits for market data |
| `GROQ_API_KEY` | Fastest LLM provider for AI endpoints |
| `LOG_LEVEL` | Set to `warn` or `error` in production |
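As a sketch, a startup script could fail fast when required variables are missing. `check_env` is a hypothetical helper, not part of the repo:

```shell
# check_env: print the first missing variable and fail, or confirm all are set
check_env() {
  for v in "$@"; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing: $v"
      return 1
    fi
  done
  echo "env ok"
}

# Example: verify the required variables from the table above
NODE_ENV=production PORT=8080 REDIS_URL=redis://localhost:6379 \
  check_env NODE_ENV PORT REDIS_URL
```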
## CI/CD Pipeline

10-step pipeline:

```text
1.  npm ci                       # Install dependencies
2.  npm run typecheck            # TypeScript strict check ─┐
3.  npm run lint                 # ESLint                   ├─ Parallel
4.  npm test                     # Vitest                  ─┘
5.  docker build                 # Build container image
6.  docker push                  # Push to Artifact Registry
7.  gcloud run deploy (canary)   # Deploy with 5% traffic
8.  health check                 # Verify /health endpoint
9.  gcloud run update-traffic    # Promote to 100%
10. cleanup                      # Remove old revisions
```
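Steps 2-4 run in parallel. The fan-out/fan-in can be sketched in shell as below (illustrative; the real pipeline expresses this via Cloud Build step dependencies):

```shell
# Run each command in the background, then fail if any of them failed
run_parallel() {
  pids=""
  for cmd in "$@"; do
    sh -c "$cmd" &
    pids="$pids $!"
  done
  rc=0
  for p in $pids; do
    wait "$p" || rc=1
  done
  return $rc
}

# In CI this would be: run_parallel "npm run typecheck" "npm run lint" "npm test"
run_parallel "true" "true" "true" && echo "all checks passed"
```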
`cloudbuild-workers.yaml` builds the worker image and deploys 8 Cloud Run Jobs + 1 backfill job.
## Monitoring & Health Checks

`GET /health` returns:

```json
{
  "status": "healthy",
  "uptime": 86400,
  "version": "0.1.0",
  "cache": { "type": "redis", "connected": true },
  "sources": {
    "coingecko": "healthy",
    "defillama": "healthy",
    "mempool": "degraded"
  }
}
```

Status values:

- `healthy` — all systems operational
- `degraded` — some sources unavailable (stale cache being served)
- `unhealthy` — critical failure
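An external uptime monitor can gate on the overall status field. A minimal sketch (the real probe would `curl` the endpoint; here a sample payload is inlined so the snippet is self-contained):

```shell
# health_gate: succeed only when the payload reports an overall "healthy" status
health_gate() {
  case "$1" in
    *'"status": "healthy"'*|*'"status":"healthy"'*) return 0 ;;
    *) return 1 ;;
  esac
}

# Sample payload; in production: health_gate "$(curl -s http://localhost:8080/health)"
sample='{"status":"healthy","uptime":86400}'
health_gate "$sample" && echo "service healthy"
```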
`GET /metrics` exposes:

- `http_requests_total` — request count by method, path, status
- `http_request_duration_seconds` — latency histogram
- `http_errors_total` — error count by type
- `cache_hits_total` / `cache_misses_total` — cache performance
- `upstream_requests_total` — source adapter call counts
- `circuit_breaker_state` — circuit breaker status per source
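For example, a cache hit ratio can be derived from the two cache counters (sample numbers below; in practice you would scrape the real counter values from `/metrics`):

```shell
# Hit ratio from cumulative counters (integer percent)
hits=940
misses=60
ratio=$(( hits * 100 / (hits + misses) ))
echo "cache hit ratio: ${ratio}%"   # prints "cache hit ratio: 94%"
```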
`GET /api/ready` — returns 200 when the API is ready to serve traffic.
## Scaling

| Platform | Mechanism | Config |
|---|---|---|
| Cloud Run | Request-based autoscaling | 2–500 instances |
| Kubernetes | HPA on CPU/memory | 2–50 pods |
| Docker Compose | Manual `--scale` | `docker compose up -d --scale api=4` |
| Component | Recommended | Maximum |
|---|---|---|
| API Server | 2Gi RAM, 4 CPU | 4Gi RAM, 8 CPU |
| Redis | 256MB | 1Gi |
| PostgreSQL | 1Gi | 4Gi |
- WebSocket throttling — 5 Hz broadcast batching (see PERFORMANCE.md)
- Cache warming — 7 Cloud Scheduler jobs pre-warm popular data
- Connection pooling — Redis and PostgreSQL connections are pooled
- Response compression — gzip/brotli on all responses >1KB
- ETag caching — conditional GET reduces bandwidth
See PERFORMANCE.md for detailed performance optimization guidance.