Observability
D1 Manager supports integration with external observability platforms via Cloudflare's native OpenTelemetry (OTel) export. This allows you to send traces and logs to services like Grafana Cloud, Datadog, Honeycomb, Sentry, and Axiom.
There are two approaches to observability in D1 Manager:
- Cloudflare Native OpenTelemetry - Export traces and logs directly from Cloudflare Workers to OTLP-compatible endpoints
- Application Webhooks - Send event notifications to HTTP endpoints (see Webhooks)
Cloudflare Native OpenTelemetry
Cloudflare Workers natively supports exporting OpenTelemetry-compliant traces and logs to any OTLP endpoint.
- Go to Workers Observability in the Cloudflare Dashboard
- Click Add destination
- Configure your provider's OTLP endpoint and authentication headers
Add observability configuration to your wrangler.toml:
[observability]
enabled = true
[observability.traces]
enabled = true
destinations = ["your-traces-destination"]
[observability.logs]
enabled = true
destinations = ["your-logs-destination"]
Then build and deploy:
npm run build
npx wrangler deploy
Grafana Cloud
OTLP Endpoints:
- Traces:
https://otlp-gateway-{region}.grafana.net/otlp/v1/traces
- Logs:
https://otlp-gateway-{region}.grafana.net/otlp/v1/logs
Authentication:
- Header:
Authorization: Basic <base64(instanceId:apiKey)>
Setup:
- Get your Grafana Cloud instance ID and API key
- Create a destination in Cloudflare Dashboard with the endpoint and auth header
- Enable observability in wrangler.toml
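Grafana Cloud's `Authorization` header value is the Base64 encoding of `instanceId:apiKey`. A minimal sketch for producing it (the function name and example values are illustrative; `btoa` is available in Workers and modern Node.js):

```typescript
// Build the header value Grafana Cloud expects: "Basic " + base64(instanceId:apiKey)
function grafanaBasicAuth(instanceId: string, apiKey: string): string {
  // btoa is a global in Cloudflare Workers and Node.js 16+
  return `Basic ${btoa(`${instanceId}:${apiKey}`)}`;
}

// Paste the result into the destination's auth header field in the dashboard
console.log(grafanaBasicAuth("123456", "glc_example_key"));
```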
Sentry
OTLP Endpoints:
- Traces:
https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/traces
- Logs:
https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/logs
Authentication:
- Header:
X-Sentry-DSN: <your-dsn>
Setup:
- Get your Sentry project ID and DSN
- Create a destination in Cloudflare Dashboard
- Enable observability in wrangler.toml
Datadog
OTLP Endpoints:
- Traces: Coming soon via OTLP
- Logs:
https://otlp.{SITE}.datadoghq.com/v1/logs
Authentication:
- Header:
DD-API-KEY: <your-api-key>
Note: Datadog traces via OTLP are not yet available. Use Datadog's native integration or logs only for now.
Honeycomb
OTLP Endpoints:
- Traces:
https://api.honeycomb.io/v1/traces
- Logs:
https://api.honeycomb.io/v1/logs
Authentication:
- Header:
X-Honeycomb-Team: <your-api-key>
Axiom
OTLP Endpoints:
- Traces:
https://api.axiom.co/v1/traces
- Logs:
https://api.axiom.co/v1/logs
Authentication:
- Header:
Authorization: Bearer <your-api-key>
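Before wiring a destination into the dashboard, it can help to smoke-test the endpoint and credentials directly. The sketch below builds a minimal OTLP/HTTP JSON log record and POSTs it; the endpoint URL, headers, and service name are placeholders you would replace with your provider's values:

```typescript
// Minimal OTLP/HTTP JSON payload containing a single log record.
// Field names follow the OTLP JSON encoding (resourceLogs/scopeLogs/logRecords).
function buildOtlpLogPayload(message: string) {
  return {
    resourceLogs: [{
      resource: {
        attributes: [{ key: "service.name", value: { stringValue: "d1-manager" } }],
      },
      scopeLogs: [{
        scope: { name: "otlp-smoke-test" },
        logRecords: [{
          timeUnixNano: `${Date.now()}000000`,
          severityText: "INFO",
          body: { stringValue: message },
        }],
      }],
    }],
  };
}

// POST the record to your provider's logs endpoint with its auth header
async function smokeTest(endpoint: string, headers: Record<string, string>) {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json", ...headers },
    body: JSON.stringify(buildOtlpLogPayload("otlp smoke test")),
  });
  console.log(res.status); // a 2xx status means the destination accepted the record
}
```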
For custom metrics and usage-based analytics, use Workers Analytics Engine:
Step 1: Add binding to wrangler.toml
[[analytics_engine_datasets]]
binding = "D1_METRICS"
dataset = "d1_manager_metrics"
Step 2: Query via SQL API or Grafana
SELECT
blob1 AS database_id,
SUM(double1) AS total_export_bytes,
COUNT(*) AS operation_count
FROM d1_manager_metrics
WHERE timestamp > NOW() - INTERVAL '7' DAY
GROUP BY blob1
For real-time log processing, create a Tail Worker:
export default {
  async tail(events: TraceItem[]) {
    for (const event of events) {
      try {
        // Forward each trace event to your logging service
        await fetch('https://your-logging-service.com/ingest', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(event),
        });
      } catch (err) {
        // Don't let one failed delivery abort the rest of the batch
        console.error('Log forwarding failed:', err);
      }
    }
  },
};
Configure in wrangler.toml:
[[tail_consumers]]
service = "my-tail-worker"
For structured log export to storage destinations:
- Go to Cloudflare Dashboard > Analytics > Logs
- Create a Logpush job for Workers Trace Events
- Select your destination (R2, S3, Azure, GCS, etc.)
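The same Logpush job can be created through the Cloudflare API rather than the dashboard. A sketch, assuming an account-scoped API token with Logpush permissions; the job name and destination string are placeholders (`workers_trace_events` is the dataset for Workers Trace Events):

```typescript
// Shape of a Logpush job creation request (subset of fields)
interface LogpushJob {
  name: string;
  dataset: string;
  destination_conf: string;
  enabled: boolean;
}

// Build a job payload for the Workers Trace Events dataset
function buildWorkersTraceJob(destinationConf: string): LogpushJob {
  return {
    name: "d1-manager-trace-events",
    dataset: "workers_trace_events",
    destination_conf: destinationConf,
    enabled: true,
  };
}

// POST the job to the account-level Logpush jobs endpoint
async function createJob(accountId: string, apiToken: string, job: LogpushJob) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/logpush/jobs`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(job),
    },
  );
  return res.json();
}
```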
D1 Manager includes a centralized error logging system that produces structured logs:
[ERROR] [databases] [DB_CREATE_FAILED] Failed to create database (db: abc-123)
Log fields include:
- timestamp: ISO timestamp
- level: error, warning, or info
- code: Module-prefixed error code (e.g., DB_CREATE_FAILED, TBL_DELETE_FAILED)
- message: Human-readable error message
- context: Module, operation, database ID, user ID, metadata
These structured logs are automatically exported when OpenTelemetry is enabled.
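One way such a logger might be implemented (a sketch, not D1 Manager's actual code; the field names follow the list above and the line format matches the example):

```typescript
type Level = "error" | "warning" | "info";

interface LogContext {
  module: string;       // e.g. "databases"
  operation?: string;
  databaseId?: string;
  userId?: string;
  metadata?: Record<string, unknown>;
}

// Emits a human-readable line in the documented shape, e.g.
// [ERROR] [databases] [DB_CREATE_FAILED] Failed to create database (db: abc-123)
// and a structured JSON record alongside it for OTel export.
function logStructured(level: Level, code: string, message: string, ctx: LogContext): string {
  const suffix = ctx.databaseId ? ` (db: ${ctx.databaseId})` : "";
  const line = `[${level.toUpperCase()}] [${ctx.module}] [${code}] ${message}${suffix}`;
  const record = { timestamp: new Date().toISOString(), level, code, message, ...ctx };
  console[level === "warning" ? "warn" : level](JSON.stringify(record));
  return line;
}
```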
- Start with logs - Enable log export first, then add traces as needed
- Use sampling - For high-traffic deployments, configure trace sampling
- Set retention - Configure appropriate data retention in your observability platform
- Alert on errors - Set up alerts for job_failed events via webhooks or log queries
- Monitor latency - Use traces to identify slow operations
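The sampling recommendation above can be expressed in wrangler.toml. Cloudflare's observability settings support head-based sampling of logs via head_sampling_rate; the rate shown is illustrative:

```toml
[observability.logs]
enabled = true
# Keep roughly 10% of log events; sampling is applied before export
head_sampling_rate = 0.1
```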
Troubleshooting
If data is not reaching your platform:
- Verify destination is configured correctly in Cloudflare Dashboard
- Check that destination name matches wrangler.toml
- Redeploy after configuration changes
If delivery fails or authentication errors occur:
- Check Cloudflare Dashboard for delivery status
- Verify API key/token is correct
- Check header format matches provider requirements
- Ensure endpoint URL is correct for your region
If traces are missing:
- Confirm traces are enabled in wrangler.toml
- Check trace sampling configuration
- Verify your observability platform supports OTLP traces
See also:
- Webhooks - Application-level event notifications
- Job History - Track all database operations
- Troubleshooting - General troubleshooting guide