Last Updated January 7, 2026 - Production/Stable v2.3.0
This Docker image provides a modern, full-featured web application for managing Cloudflare D1 databases with enterprise-grade authentication via Cloudflare Access (Zero Trust). Run D1 Database Manager in Docker for development, testing, or self-hosted deployments.
Live Demo • Docker • Wiki • Changelog • Release Article
Frontend: React 19.2.3 | Vite 7.3.1 | TypeScript 5.9.3 | Tailwind CSS 4.1.17 | shadcn/ui
Backend: Cloudflare Workers + KV + D1 + R2 + Durable Objects + Zero Trust
- Create, rename, clone, delete, and optimize databases
- Export Database - SQL/JSON/CSV formats with portable schema + data
- Import Database - SQL/JSON/CSV/ZIP with create new or update existing
- Quick Actions - Icon buttons on each database card for instant access to all operations
- Bulk Operations - Multi-select for batch export (SQL/JSON/CSV), optimize, and delete
- Clone Database - Multi-step wizard with selective table cloning and granular options
- Grid/List View Toggle - Switch between card grid and compact sortable table view
- Copyable Database IDs - Click to copy database IDs with visual feedback
- Expanded Color Picker - 27 colors organized by hue family for visual organization
- R2 Backup/Restore + Unified Hub - Manual backups, undo history, and R2 snapshots in one tabbed dialog with multi-select, backup counts, and orphaned-backup visibility
- Scheduled R2 Backups - Per-database daily/weekly/monthly schedules with cron triggers, next-run tracking, and enable/disable controls
- Safety Backups - Delete, rename, and STRICT mode operations automatically create R2 backups before proceeding
- Automated Migration System - Auto-detects and applies schema upgrades with one-click "Upgrade Now" banner
- Database search filter & job history - Quickly find databases and audit all operations with full date/time and duration tracking
- Visual schema designer with STRICT mode and generated column support
- NEW! Clone, export (SQL/CSV/JSON), import (CSV/JSON/SQL), and bulk operations
- Column management (add, modify, rename, delete) with UNIQUE constraint support
- Foreign key dependency analysis
- Quick Actions - Icon buttons on each table card for instant access to all operations
- Import Data - Import CSV, JSON, or SQL into new or existing tables with duplicate handling (Fail/Replace/Skip) and auto-add missing columns
- STRICT Mode - Create new tables with STRICT mode or convert existing tables with automatic type mapping and validation
- Generated Columns - Create STORED or VIRTUAL computed columns with expression editor
- Table-level R2 Backup/Restore - Backup or restore individual tables directly from table cards (Grid and List views)
- Grid/List View Toggle - Switch between card grid and compact sortable table view (list view is default)
- Row search filter + table search - Quickly filter visible rows and find tables
- Table Row Counts - See row counts on table cards (formatted with locale separators)
- Table color tags - Assign colors for visual organization
- Tabbed interface - Query, SQL Diff, Drizzle ORM, and Query Builder in dedicated tabs
- Drizzle ORM Console - Introspect schemas, view migration status/history, generate SQL, push changes (with dry-run), and export TypeScript schema
- SQL Formatter - One-click formatting with SQLite-aware sql-formatter library
- SQL Autocomplete Toggle - Turn suggestions on/off with preference persisted to localStorage
- Rich editor - Syntax highlighting, line numbers, find/replace, hover docs, inline error squiggles, and word wrap toggle
- SQL Diff Editor - Compare queries side-by-side with syntax highlighting
- Visual Query Builder - Build queries interactively with editable SQL output and "Send to Editor" integration
- Query history and saved queries
- CSV export
- NEW! AI Search Integration - Semantic search over database schemas and data using Cloudflare AI Search (AutoRAG). Export databases to R2 for indexing, then query with natural language or get AI-generated SQL suggestions
- Unified Backup & Restore Hub - Undo history and R2 backups in one dialog with bulk restore/download/delete, backup counts, source tags, and orphaned-backup visibility
- Scheduled R2 Backups - Daily/weekly/monthly schedules with per-database controls, next-run previews, last-run status, and job history integration
- Foreign Key Visualizer & ER Diagram - Interactive graphs with fullscreen mode, export (PNG/SVG/JSON), alphabetized dropdowns, and dual layouts (hierarchical/force-directed)
- Cascade Impact Simulator - Preview DELETE cascades before execution with optimized dependency checks and multi-format export (CSV/JSON/Text/PDF)
- Circular Dependency Detector - DFS-based cycle detection with severity classification and breaking suggestions
- Undo/Rollback with safeguards - Automatic undo snapshots or R2 backups for all destructive operations (delete database/table, rename, STRICT mode, FTS5 convert, modify column)
- FTS5 Full-Text Search - Manage virtual tables, convert to/from regular tables, dedicated FTS5 search mode in main Search tab, and quick actions on FTS5 table cards
- Index Analyzer - Smart index recommendations plus "Create All Indexes" one-click apply with optional R2 backup and progress tracking
- Metrics Dashboard - Query volume, P90 latency, rows read, and storage trends via GraphQL Analytics
- NEW! Query Insights Tab for slow query analysis with performance badges and sortable table
- NEW! Health Dashboard - System health score, backup coverage alerts, failed backup tracking, and replication status overview
- Time Travel - View bookmarks, checkpoint history, manual checkpoint capture, and CLI restore commands
- Read Replication - Enable/disable global read replicas with D1 Sessions API info
- Constraint Validator - Validate foreign key, NOT NULL, and UNIQUE constraints across database with guided fix workflow
- Event-driven webhooks - Send HTTP notifications on key database events
- Configurable events - database_create, database_delete, database_export, database_import, job_failed, batch_complete
- HMAC signatures - Optional secret-based request signing for security
- Test webhooks - Verify endpoint connectivity before going live
- Centralized Error Logging - Structured logging with module-prefixed error codes, automatic webhook notifications for critical errors
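Receivers can verify HMAC-signed deliveries before trusting them. A minimal sketch in TypeScript (Node), assuming the signature is a hex-encoded HMAC-SHA256 of the raw request body — the exact header name and encoding your deployment uses may differ, so check the Wiki before relying on this:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Receiver-side verification sketch. The hex encoding and raw-body signing
// scheme are assumptions about the webhook signature format.
function verifySignature(rawBody: string, receivedHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(receivedHex, "hex");
  // Length check first: timingSafeEqual throws on mismatched lengths.
  return received.length === expected.length && timingSafeEqual(received, expected);
}

// Simulate a delivery signed with the shared secret:
const secret = "my-webhook-secret";
const body = '{"event":"database_create","name":"demo"}';
const signature = createHmac("sha256", secret).update(body).digest("hex");
console.log(verifySignature(body, signature, secret)); // true
```

Compare with `timingSafeEqual` rather than `===` so the check doesn't leak timing information about the expected signature.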
📚 Full documentation: Wiki
- Node.js 18+
- Cloudflare account (for production)
```
git clone https://github.com/neverinfamous/d1-manager.git
cd d1-manager
npm install
```

Start the servers (2 terminals):

Terminal 1 - Frontend:

```
npm run dev
```

Terminal 2 - Worker API:

```
npx wrangler dev --config wrangler.dev.toml --local
```

Open http://localhost:5173 - no auth required, mock data included.
```
npx wrangler login
npx wrangler d1 create d1-manager-metadata
npx wrangler d1 execute d1-manager-metadata --remote --file=worker/schema.sql
cp wrangler.toml.example wrangler.toml
```

Edit `wrangler.toml` with your `database_id` from step 2.
To enable database backups to R2 storage:
```
npx wrangler r2 bucket create d1-manager-backups
```

The `wrangler.toml.example` already includes the R2 and Durable Object configuration needed for backups. Features include:
- Backup databases to R2 before rename, STRICT mode, or FTS5 conversion operations
- Manual backup/restore from database cards
- Full backup history with restore capability
- Go to Cloudflare Zero Trust
- Configure authentication (GitHub OAuth, etc.)
- Create an Access Application for your domain
- Copy the Application Audience (AUD) tag
- Go to Cloudflare API Tokens
- Create Custom Token with Account → D1 → Edit permission
```
npx wrangler secret put ACCOUNT_ID
npx wrangler secret put API_KEY
npx wrangler secret put TEAM_DOMAIN
npx wrangler secret put POLICY_AUD
npm run build
npx wrangler deploy
```

D1 Manager includes an automated migration system. When you upgrade to a new version:
- Deploy the new version - The app will automatically detect pending migrations
- Click "Upgrade Now" - A banner appears if schema updates are needed
- Done! - Migrations are applied safely with rollback on failure
The app automatically handles:
- Fresh installations (applies all migrations)
- Legacy installations (detects existing tables and marks appropriate migrations as applied)
- Incremental upgrades (only applies new migrations)
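The upgrade check can also be scripted, e.g. in CI before a deploy, by polling the status endpoint. A sketch — `baseUrl` is a placeholder, and the example assumes a deployment behind Cloudflare Access authenticated with a service token (the `CF-Access-Client-Id`/`CF-Access-Client-Secret` headers are the standard way non-interactive clients reach an Access-protected app):

```typescript
// Build the status URL; trailing slashes on the base are tolerated.
function migrationStatusUrl(baseUrl: string): string {
  return `${baseUrl.replace(/\/+$/, "")}/api/migrations/status`;
}

// Fetch migration status with Cloudflare Access service-token credentials.
async function fetchMigrationStatus(
  baseUrl: string,
  clientId: string,
  clientSecret: string,
): Promise<unknown> {
  const res = await fetch(migrationStatusUrl(baseUrl), {
    headers: {
      "CF-Access-Client-Id": clientId,
      "CF-Access-Client-Secret": clientSecret,
    },
  });
  if (!res.ok) throw new Error(`status check failed: HTTP ${res.status}`);
  return res.json();
}
```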
If you prefer manual control, run the full schema refresh:
```
npx wrangler d1 execute d1-manager-metadata --remote --file=worker/schema.sql
```

Or apply individual migrations:

```
npx wrangler d1 execute d1-manager-metadata --remote --file=worker/migrations/002_add_color_tags.sql
```

| Migration | When to Run |
|---|---|
| `001_add_job_history.sql` | If you don't have job history tables yet |
| `002_add_color_tags.sql` | If you don't have color tags yet |
| `003_add_error_message_column.sql` | If you have job history but no error_message column |
| `004_add_webhooks.sql` | If you don't have the webhooks table yet |
| `005_add_scheduled_backups.sql` | If you don't have the scheduled_backups table yet |
Note: New installations should use the automated migration system or `schema.sql`.
When upgrading, check if new wrangler.toml bindings are required:
| Version | Required Bindings |
|---|---|
| v1.0.0 | METADATA (D1 database) |
| v1.1.0 | Same as v1.0.0 |
| v1.2.0+ | METADATA (D1), BACKUP_BUCKET (R2, optional), BACKUP_DO (Durable Object, optional) |
| v2.0.0 | Same as v1.2.0 (optional cron trigger for scheduled backups) |
| v2.1.0 | Same as v2.0.0 (optional [ai] binding for AI Search) |
If you want R2 backup features (backup before delete/rename, scheduled backups, restore from R2), you must add a Durable Object binding to your wrangler.toml. The Durable Object handles long-running backup operations that exceed Worker CPU limits.
If you skip this step, the app will work normally but R2 backup/restore features will be unavailable. You can still use Download/Import for manual backups.
Upgrading from v1.0.0 or v1.1.x to v2.0.0:
- Add the following to your `wrangler.toml`:

```toml
# R2 bucket for backups
[[r2_buckets]]
binding = "BACKUP_BUCKET"
bucket_name = "d1-manager-backups"

# Durable Object for async backup operations (REQUIRED for R2 backups)
# The Durable Object handles large backup operations that exceed Worker limits
[[durable_objects.bindings]]
name = "BACKUP_DO"
class_name = "BackupDO"
script_name = "d1-manager"

# Migration tag tells Cloudflare to create the Durable Object class
# This MUST be included on first deploy with Durable Objects
[[migrations]]
tag = "v2_backup_do"
new_classes = ["BackupDO"]
```

- Create the R2 bucket:

```
npx wrangler r2 bucket create d1-manager-backups
```

- Deploy to apply the Durable Object migration:

```
npx wrangler deploy
```

Note: The `[[migrations]]` block with `tag = "v2_backup_do"` is required on your first deploy after adding the Durable Object. Cloudflare uses this to initialize the Durable Object class. You can remove it from `wrangler.toml` after the first successful deploy, but keeping it is harmless.
Optional: Add scheduled backups
To enable automated scheduled backups, also add the cron trigger:
```toml
[triggers]
crons = ["0 * * * *"]  # Runs hourly to check for due backups
```

To update an existing deployment, pull the latest code and redeploy:

```
git pull origin main
npm install
npm run build
npx wrangler deploy
```

Or run the published Docker image:

```
docker pull writenotenow/d1-manager:latest

docker run -d -p 8080:8080 \
  -e ACCOUNT_ID=your_account_id \
  -e API_KEY=your_api_token \
  -e TEAM_DOMAIN=https://yourteam.cloudflareaccess.com \
  -e POLICY_AUD=your_aud_tag \
  writenotenow/d1-manager:latest
```

See DOCKER_README.md for complete Docker instructions.
| Endpoint | Description |
|---|---|
| `GET /api/databases` | List databases |
| `POST /api/databases` | Create database |
| `DELETE /api/databases/:dbId` | Delete database |
| `PUT /api/databases/:dbId/replication` | Set read replication mode |
| `GET /api/tables/:dbId/list` | List tables |
| `GET /api/tables/:dbId/foreign-keys` | Get FK graph with optional schemas and cycles |
| `POST /api/tables/:dbId/:tableName/strict-check` | Validate STRICT mode compatibility |
| `POST /api/query/:dbId/execute` | Execute SQL |
| `GET /api/webhooks` | List configured webhooks |
| `POST /api/webhooks` | Create a new webhook |
| `PUT /api/webhooks/:id` | Update a webhook |
| `DELETE /api/webhooks/:id` | Delete a webhook |
| `POST /api/webhooks/:id/test` | Send a test webhook |
| `GET /api/migrations/status` | Get migration status |
| `POST /api/migrations/apply` | Apply pending migrations |
| `GET /api/metrics` | Get D1 analytics (query volume, latency, storage) |
| `GET /api/scheduled-backups` | List all scheduled backups |
| `POST /api/scheduled-backups` | Create or update a scheduled backup |
| `DELETE /api/scheduled-backups/:dbId` | Delete a scheduled backup |
| `POST /api/drizzle/:dbId/introspect` | Introspect database and generate Drizzle schema |
| `GET /api/drizzle/:dbId/migrations` | Get Drizzle migration status |
| `POST /api/drizzle/:dbId/push` | Push schema changes to database (supports dry-run) |
| `POST /api/drizzle/:dbId/check` | Validate schema against database state |
| `GET /api/drizzle/:dbId/export` | Export Drizzle schema as TypeScript file |
| `GET /api/r2-backup/orphaned` | List backups from deleted databases |
| `DELETE /api/r2-backup/:databaseId/bulk` | Bulk delete R2 backups |
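For example, `POST /api/query/:dbId/execute` can be called from a script. A sketch — the `{ sql }` request-body shape is an assumption here, and auth headers are omitted (add your Cloudflare Access credentials as appropriate); confirm both against the Wiki API reference:

```typescript
// Hypothetical helper around the execute endpoint.
function executePath(dbId: string): string {
  return `/api/query/${encodeURIComponent(dbId)}/execute`;
}

async function runQuery(baseUrl: string, dbId: string, sql: string): Promise<unknown> {
  const res = await fetch(baseUrl + executePath(dbId), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sql }), // assumed body shape
  });
  if (!res.ok) throw new Error(`query failed: HTTP ${res.status}`);
  return res.json();
}
```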
📚 Full API docs: Wiki - API Reference
D1 Manager supports integration with external observability platforms via Cloudflare's native OpenTelemetry export. This allows you to send traces and logs to services like Grafana Cloud, Datadog, Honeycomb, Sentry, and Axiom.
Cloudflare Workers natively supports exporting OpenTelemetry-compliant traces and logs to any OTLP endpoint.
Step 1: Create a destination in Cloudflare Dashboard
- Go to Workers Observability
- Click Add destination
- Configure your provider's OTLP endpoint and authentication headers
Common OTLP Endpoints:
| Provider | Traces Endpoint | Logs Endpoint |
|---|---|---|
| Grafana Cloud | `https://otlp-gateway-{region}.grafana.net/otlp/v1/traces` | `https://otlp-gateway-{region}.grafana.net/otlp/v1/logs` |
| Honeycomb | `https://api.honeycomb.io/v1/traces` | `https://api.honeycomb.io/v1/logs` |
| Axiom | `https://api.axiom.co/v1/traces` | `https://api.axiom.co/v1/logs` |
| Sentry | `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/traces` | `https://{HOST}/api/{PROJECT_ID}/integration/otlp/v1/logs` |
| Datadog | Coming soon | `https://otlp.{SITE}.datadoghq.com/v1/logs` |
Step 2: Update wrangler.toml
```toml
[observability]
enabled = true

[observability.traces]
enabled = true
destinations = ["your-traces-destination"]

[observability.logs]
enabled = true
destinations = ["your-logs-destination"]
```

Use the built-in webhook system to send event notifications to any HTTP endpoint (Slack, Discord, custom services).
Configure webhooks in the UI under the Webhooks tab.
📚 Full observability docs: Wiki - Observability
| Frontend | Backend |
|---|---|
| React 19.2.0 | Cloudflare Workers |
| TypeScript 5.9.3 | Cloudflare D1 |
| Vite 7.2.4 | Cloudflare Access |
| Tailwind CSS + shadcn/ui | Cloudflare R2 + Durable Objects |
"Failed to list databases"
- Verify
ACCOUNT_IDis correct - Ensure API token has D1 Edit permission (not just Read)
Authentication loop
- Check
TEAM_DOMAINincludeshttps:// - Verify
POLICY_AUDmatches your Access application
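When debugging token problems, it helps to confirm the token itself is valid before re-checking its scopes. A sketch against Cloudflare's token-verify endpoint — note this only confirms the token is active, not that it carries the D1 Edit permission, so still review its scopes in the dashboard:

```typescript
// Relevant part of Cloudflare's verify response envelope.
interface VerifyResponse {
  success: boolean;
  result?: { status?: string };
}

function tokenIsActive(resp: VerifyResponse): boolean {
  return resp.success && resp.result?.status === "active";
}

async function verifyToken(token: string): Promise<boolean> {
  const res = await fetch("https://api.cloudflare.com/client/v4/user/tokens/verify", {
    headers: { Authorization: `Bearer ${token}` },
  });
  return tokenIsActive(await res.json() as VerifyResponse);
}
```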
📚 More solutions: Wiki - Troubleshooting
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push to branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
📚 Guidelines: Wiki - Contributing Guide
MIT License - see LICENSE for details.
- 🐛 Bug Reports: GitHub Issues
- 💡 Feature Requests: GitHub Discussions
- 📧 Email: [email protected]
Made with ❤️ for the Cloudflare community