Database Management
Complete guide to managing Cloudflare D1 databases with D1 Database Manager.
D1 Manager provides comprehensive database management capabilities:
- Grid/List database views with search, badges (FTS5), and color tags
- Create new databases
- Rename databases (migration-based with automatic safety backups)
- Delete databases (with optional R2 safety backup)
- R2 Backup/Restore + Backup Hub - Cloud backups with multi-select restore/download and orphaned-backup visibility
- Scheduled R2 Backups - Daily/weekly/monthly schedules with next-run/last-run tracking
- Bulk operations (download, delete, optimize)
- Upload/import SQL files (upload or paste)
- Optimize database performance
Use the search box to quickly find databases by name:
┌──────────────────────────────┐
│ Search databases...          │
└──────────────────────────────┘
Showing 3 of 12 databases matching "prod"
Features:
- Instant filtering - Results update as you type
- Case-insensitive - "PROD", "prod", and "Prod" all match
- Partial matching - Search "app" matches "my-app-db", "app-data", etc.
- Clear button - Click X to clear the search and show all databases
This is particularly useful when managing many databases, helping you quickly locate specific databases without scrolling through a long list.
Toggle between grid and list layouts. Each database entry shows:
┌─────────────────────────────────┐
│ [ ] my-database      production │
│                                 │
│ UUID: a1b2c3d4-...              │
│ Created: Nov 2, 2024            │
│ Size: 2.4 MB                    │
│ Tables: 12                      │
│                                 │
│ [Browse] [Query] [Rename]       │
└─────────────────────────────────┘
Card/List Information:
- Checkbox - For bulk operations
- Name - Database name
- Badges - FTS5 indicator, replication, STRICT/FTS5 counts, version
- Color - Custom color tag (where set)
- UUID - Unique identifier
- Created - Creation date
- Size - Database file size
- Tables - Number of tables
Quick Actions (grid and list):
- Browse / Query / Rename / Clone - Primary actions
- Import / Download / Optimize / Delete - Secondary actions
- Backup/Restore (R2) - Create or restore backups (when R2 is configured)
- Backup & Restore Hub - Open unified hub to see undo + R2 backups (includes backup counts and orphaned backups)
Use checkboxes to select multiple databases for bulk operations:
- Click checkbox on each database card
- Or click "Select All" to select all databases
- Perform bulk operations (see below)
- Click "Create Database" button
- Enter a database name
- Click "Create"
Name Requirements:
- 3-63 characters
- Lowercase letters (a-z)
- Numbers (0-9)
- Hyphens (-) allowed (not at start/end)
- Must be unique
Valid Names:
✅ my-database
✅ app-db-2024
✅ production-data
✅ test-db-001
Invalid Names:
❌ MyDatabase (uppercase)
❌ db (too short)
❌ -database (starts with hyphen)
❌ database- (ends with hyphen)
❌ my_database (underscores not allowed)
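The rules above map to a simple pattern check. A minimal shell sketch (the script body and messages are illustrative, not part of D1 Manager):

#!/usr/bin/env bash
# Sketch: validate a proposed database name against the rules above
# (3-63 chars, lowercase letters, digits, hyphens not at the start or end).
name="$1"
if [[ "$name" =~ ^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$ ]]; then
  echo "ok: $name"
else
  echo "invalid: $name" >&2
  exit 1
fi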
API Call:
POST /api/databases
Content-Type: application/json
{
"name": "my-database"
}

D1 doesn't natively support database renaming, so D1 Manager uses a migration-based approach with built-in safety backups (undo snapshot and optional R2 backup). The rename process (a manual Wrangler equivalent is sketched after these steps):
1. Validate new name
2. Create new database with desired name
3. Export all data from original database
4. Import data into new database
5. Verify data integrity
6. Delete original database
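For reference, a roughly equivalent manual migration with the Wrangler CLI might look like the following sketch. The database names are placeholders, and keep in mind the FTS5 export limitation described further below:

# 1. Create the target database
wrangler d1 create my-new-database
# 2. Export the source database
wrangler d1 export my-old-database --remote --output=dump.sql
# 3. Import into the target database
wrangler d1 execute my-new-database --remote --file=dump.sql
# 4. Spot-check the result before deleting anything
wrangler d1 execute my-new-database --remote --command="SELECT count(*) FROM sqlite_master WHERE type='table'"
# 5. Delete the source database once verified
wrangler d1 delete my-old-database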
- Click "Rename" on a database card or row (grid/list)
- Review the important warnings
- Choose backup option (R2 backup, download, or skip) β automatic R2 backup is recommended
- Enter the new database name
- Check the confirmation boxes
- Click "Rename Database"
The rename dialog shows real-time progress:
Step 1/6: Validating and preparing... 16%
Step 2/6: Creating new database... 33%
Step 3/6: Exporting data... 50%
Step 4/6: Importing data... 66%
Step 5/6: Verifying data integrity... 83%
Step 6/6: Cleaning up... 100%
After importing data, the system automatically verifies integrity:
✅ Table Count Verification
- Ensures all tables were migrated
- Checks source and target table counts match
✅ Row Count Verification
- Validates row counts for each table
- Confirms no data loss during migration
✅ Schema Verification
- Checks column counts match
- Ensures table structures are identical
If Verification Fails:
- New database is automatically deleted
- Original database remains untouched
- Detailed error message shows what failed
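The same kinds of checks can be reproduced by hand with Wrangler if you want to double-check a migrated database yourself (a sketch; the database and table names are examples):

# Table count
wrangler d1 execute my-new-database --remote --command="SELECT count(*) FROM sqlite_master WHERE type='table'"
# Row count for a specific table
wrangler d1 execute my-new-database --remote --command="SELECT count(*) FROM users"
# Column structure of a specific table
wrangler d1 execute my-new-database --remote --command="PRAGMA table_info(users)"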
🚫 Databases with FTS5 tables cannot be renamed
Cloudflare D1's export API does not support virtual tables (FTS5). If your database contains FTS5 (Full-Text Search) tables, the rename operation will be immediately blocked with a clear error message listing the FTS5 tables.
Workaround:
- Create a new database manually
- Migrate non-FTS5 tables using the migration wizard
- Recreate FTS5 tables manually in the new database
- Delete the original database
See FTS5-Full-Text-Search for more information about FTS5 tables.
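To check whether a database contains FTS5 virtual tables before attempting a rename, a query along these lines can help (run via Wrangler; the database name is a placeholder):

# List tables whose definition uses the FTS5 module
wrangler d1 execute my-database --remote --command="SELECT name FROM sqlite_master WHERE type='table' AND sql LIKE '%USING fts5%'"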
Additional notes on the rename migration:
- Both databases exist during migration
- Counts toward your D1 quota temporarily
- Original deleted after successful migration
- DEFAULT values (like datetime('now')) are not preserved
- Column structure, types, and constraints are migrated correctly
- All data is migrated
- Indexes are recreated
- Always download a backup first
- Migration can fail if quota exceeded
- Network issues can interrupt process
You have two backup options before renaming:
- Automatic R2 backup (recommended) - shown inline in the dialog; stored in R2 with a "Before Rename" source tag and visible in the Backup & Restore hub (including orphaned backups)
- Download backup - export as SQL for an offline copy
See R2-Backup-Restore for complete backup documentation.
If the rename fails:
- The new database is automatically deleted
- The original database remains unchanged
- Error message displayed with details
API Call:
POST /api/databases/:dbId/rename
Content-Type: application/json
{
"newName": "my-new-database"
}

To delete a single database:
- Select database using checkbox
- Click "Delete Selected"
- Review the confirmation dialog
- Click "Delete Database"
- (Optional, recommended) Create an R2 backup from the dialog before confirming
To delete multiple databases:
- Select multiple databases using checkboxes
- Click "Delete Selected"
- Review the list of databases to delete
- Click "Delete X Databases"
For bulk deletes:
Deleting database 1 of 3...
Deleting database 2 of 3...
Deleting database 3 of 3...
Progress bar shows completion percentage.
Safety Backups:
- Delete dialogs offer an R2 backup option (when configured) and automatically create an undo snapshot
- Backups appear in the Backup & Restore hub with "Delete Database" source tags, including orphaned backups for deleted databases
The d1-manager-metadata database is protected:
- Hidden - Doesn't appear in database list
- Delete Protection - Returns 403 Forbidden if deletion attempted
- Rename Protection - Returns 403 Forbidden if rename attempted
- Export Protection - Silently skipped in bulk exports
DELETE /api/databases/:dbId

Select All:
- Click "Select All" button
- All databases selected instantly
Clear Selection:
- Click "Clear Selection" button
- All checkboxes unchecked
Export multiple databases in your chosen format (SQL, JSON, or CSV) as a ZIP archive.
Steps:
- Select databases using checkboxes
- Choose export format from dropdown (SQL/JSON/CSV)
- Click "Download Selected"
- Wait for export (progress shows current database name)
- ZIP file downloads automatically
Format Selection:
┌─────────┐ ┌──────────────────────┐
│ SQL   ▼ │ │  Download Selected   │
└─────────┘ └──────────────────────┘
Progress Indicator:
┌───────────────────────────────────────────────────┐
│ Exporting my-database...          2 / 5 databases │
│ ██████████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░ 40% │
└───────────────────────────────────────────────────┘
SQL Format (Default)
- Complete database dump with CREATE TABLE + INSERT statements
- Fully compatible with D1 import
- Best for exact database recreation
databases-sql-1704614400000.zip
├── my-database.sql
├── test-database.sql
└── production-db.sql
JSON Format
- Portable JSON with metadata and data as arrays of objects
- Preserves column types in metadata
- Good for data transformation pipelines
databases-json-1704614400000.zip
├── my-database.json
├── test-database.json
└── production-db.json
JSON structure:
{
"metadata": {
"databaseName": "my-database",
"exportedAt": "2024-01-07T10:00:00.000Z",
"tables": [{
"name": "users",
"columns": [
{"name": "id", "type": "INTEGER"},
{"name": "email", "type": "TEXT"}
],
"rowCount": 100
}]
},
"tables": {
"users": [
{"id": 1, "email": "alice@example.com"},
{"id": 2, "email": "bob@example.com"}
]
}
}

CSV Format
- ZIP containing _metadata.json plus one CSV per table
- Spreadsheet-compatible for data analysis
- Empty tables included in metadata (no CSV file)
databases-csv-1704614400000.zip
└── my-database/
    ├── _metadata.json
    ├── users.csv
    └── orders.csv
Metadata includes column types for schema reconstruction:
{
"databaseName": "my-database",
"tables": [{
"name": "users",
"columns": [
{"name": "id", "type": "INTEGER"},
{"name": "email", "type": "TEXT"}
],
"rowCount": 100
}]
}

Rate Limiting:
- 300ms delay between each database export
- Exponential backoff (2s → 4s → 8s) on 429 errors
- Prevents API throttling for large batch exports
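If you script exports against the API yourself, the same retry pattern can be applied client-side. A minimal curl sketch (the worker URL is a placeholder, and this is not the manager's internal implementation):

url="https://your-worker.workers.dev/api/databases"
for delay in 2 4 8; do
  # Capture the HTTP status; retry only on 429 (rate limited)
  status=$(curl -s -o response.json -w "%{http_code}" "$url")
  [ "$status" != "429" ] && break
  echo "Rate limited, retrying in ${delay}s..." >&2
  sleep "$delay"
done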
Progress Tracking:
Preparing... 0%
Exporting my-database (1/3)... 33%
Exporting test-db (2/3)... 66%
Exporting production (3/3)... 100%
Download Complete
Delete multiple databases with progress tracking.
Steps:
- Select databases using checkboxes
- Click "Delete Selected"
- Review confirmation dialog
- Click "Delete X Databases"
- Watch progress
Error Handling:
- Continues even if one database fails
- Shows which databases failed with error messages
- Successful deletions are completed
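The same bulk delete can be scripted against the endpoint shown under "API Call" below. A curl sketch (the worker URL and UUIDs are placeholders):

# Delete several databases in one request
curl -X POST "https://your-worker.workers.dev/api/databases/bulk-delete" \
  -H "Content-Type: application/json" \
  -d '{"databaseIds": ["uuid1", "uuid2", "uuid3"]}'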
API Call:
POST /api/databases/bulk-delete
Content-Type: application/json
{
"databaseIds": ["uuid1", "uuid2", "uuid3"]
}

Run ANALYZE on multiple databases to improve query performance.
What it Does:
- Runs PRAGMA optimize (equivalent to ANALYZE)
- Updates query statistics
- Helps SQLite query planner choose better execution plans
- No data modification
Steps:
- Select databases using checkboxes
- Click "Optimize Selected"
- Review optimization dialog
- Click "Optimize X Databases"
- Wait for completion
Progress:
Running ANALYZE (Database 1 of 3)...
Running ANALYZE (Database 2 of 3)...
Running ANALYZE (Database 3 of 3)...
Note about VACUUM:
- VACUUM is not available via D1 REST API
- D1 automatically manages space reclamation
- For manual VACUUM, use Wrangler CLI:
wrangler d1 execute my-database --remote --command="VACUUM"

When to Optimize:
- After bulk data imports
- After creating new indexes
- After significant schema changes
- Periodically for large databases
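You can also run the same optimization on a single database from the command line with Wrangler (a sketch; the database name is a placeholder):

wrangler d1 execute my-database --remote --command="PRAGMA optimize"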
Upload SQL files to create new databases or import into existing ones.
- Click "Upload Database" button
- Select SQL file (up to 5GB)
- Choose import mode:
- Create new database - Creates a new database
- Import into existing - Adds to existing database
Steps:
- Select "Create new database"
- Enter database name
- Choose SQL file
- Click "Upload"
Process:
1. Create new database
2. Parse SQL file
3. Execute CREATE TABLE statements
4. Execute INSERT statements
5. Create indexes
6. Verify import
Steps:
- Select "Import into existing database"
- Choose target database from dropdown
- Select SQL file
- Click "Upload"
Behavior:
- Executes SQL statements in order
- Adds to existing tables (doesn't replace)
- Creates new tables if they don't exist
- May fail if conflicts with existing schema
D1 Manager accepts standard SQL dumps:
-- Comments are supported
CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
email TEXT UNIQUE
);
INSERT INTO users (name, email) VALUES
('Alice', 'alice@example.com'),
('Bob', 'bob@example.com');
CREATE INDEX idx_users_email ON users(email);

Size limits:
- Maximum file size: 5GB
- API limitation: imports run through the D1 REST API and are subject to its request limits
- For larger databases, use Wrangler CLI:
wrangler d1 execute my-database --remote --file=large-dump.sql

Common errors:
- Syntax errors - Invalid SQL
- Duplicate keys - Data conflicts with existing rows
- Type mismatches - Data doesn't match column types
- Quota exceeded - Account D1 storage limit reached
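Uploads can also be scripted against the import endpoint documented below. A curl sketch (the worker URL and file name are placeholders; field names follow the spec that follows):

# Create a new database from a local SQL dump
curl -X POST "https://your-worker.workers.dev/api/databases/import" \
  -F "file=@dump.sql" \
  -F "createNew=true" \
  -F "databaseName=my-database"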
API Call:
POST /api/databases/import
Content-Type: multipart/form-data
{
"file": [SQL file],
"createNew": true,
"databaseName": "my-database"
}

Or, to import into an existing database:
POST /api/databases/import
Content-Type: multipart/form-data
{
"file": [SQL file],
"createNew": false,
"targetDatabaseId": "database-uuid"
}

Click on a database card to view:
- All tables in the database
- Table count
- Quick access to tables
Metadata shown on each card:
UUID:
a1b2c3d4-e5f6-7890-abcd-ef1234567890
Unique identifier for API calls and Wrangler commands.
Created Date:
Nov 2, 2024
When the database was created.
File Size:
2.4 MB
Total storage used (data + indexes + overhead).
Table Count:
12 tables
Number of tables in the database.
Version Badge:
- production - Production databases (green)
- alpha - Alpha databases (yellow)
- beta - Beta databases (yellow)
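The same metadata is available programmatically from the info endpoint shown below. A curl sketch (the worker URL and database ID are placeholders):

curl "https://your-worker.workers.dev/api/databases/a1b2c3d4-e5f6-7890-abcd-ef1234567890/info"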
GET /api/databases/:dbId/info

Response:
{
"result": {
"uuid": "a1b2c3d4-...",
"name": "my-database",
"version": "production",
"created_at": "2024-11-02T12:00:00.000Z",
"file_size": 2457600,
"num_tables": 12
}
}

Use descriptive, consistent names:
✅ production-users-db
✅ staging-app-data
✅ test-integration-2024
✅ analytics-warehouse
Prefix databases by environment:
prod-app-main
prod-app-analytics
stage-app-main
stage-app-analytics
dev-app-main
test-app-main
- Before Major Operations:
  - Always backup before rename
  - Backup before bulk imports
  - Backup before schema changes
- Regular Backups:
  - Daily backups for production
  - Weekly backups for staging
  - On-demand for development
- Backup Methods:
  - R2 Cloud Backup - Recommended for production (see R2-Backup-Restore)
  - Bulk download as ZIP
  - Individual SQL exports
  - Wrangler CLI exports (see the sketch after this list)
- Optimize Regularly:
  - After bulk data changes
  - After index creation
  - Monthly for active databases
- Monitor Size:
  - Watch database file size
  - Plan for growth
  - Archive old data
- Clean Up:
  - Delete unused databases
  - Remove test databases
  - Keep development databases small
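For the Wrangler CLI export option mentioned above, an on-demand backup might look like this (a sketch; the database name is a placeholder):

# Export the database to a dated SQL file
wrangler d1 export my-database --remote --output=backup-$(date +%Y%m%d).sql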
Database creation fails
Causes:
- Name already exists
- Invalid name format
- Quota exceeded
- API permissions issue
Solutions:
# Check quota
wrangler d1 list
# Verify API token has D1 Edit permission
# Check name follows naming rules

Rename fails due to insufficient space
Cause: Not enough space for temporary duplicate
Solution:
# Free up space by deleting unused databases
# Or upgrade your Cloudflare plan

Import fails
Cause: Invalid SQL in uploaded file
Solution:
- Validate SQL syntax
- Check for unsupported SQLite features
- Remove comments if causing issues
Databases not loading or API errors
Causes:
- API credentials incorrect
- Network issue
- Worker not deployed
Solutions:
# Verify secrets
npx wrangler secret list
# Check Worker status
npx wrangler tail
# Test API endpoint
curl https://your-worker.workers.dev/api/databases

API quick reference:

List databases:
GET /api/databases

Create database:
POST /api/databases
Content-Type: application/json
{"name": "my-database"}

Get database info:
GET /api/databases/:dbId/info

Rename database:
POST /api/databases/:dbId/rename
Content-Type: application/json
{"newName": "new-name"}

Delete database:
DELETE /api/databases/:dbId

Bulk export:
POST /api/databases/export
Content-Type: application/json
{
"databases": [
{"uuid": "id1", "name": "db1"},
{"uuid": "id2", "name": "db2"}
]
}

Import:
POST /api/databases/import
Content-Type: multipart/form-data
{
"file": [SQL file],
"createNew": true|false,
"databaseName": "name" (if createNew),
"targetDatabaseId": "uuid" (if !createNew)
}

Related pages:
- Table Operations - Manage tables within databases
- Query Console - Execute SQL queries
- Bulk Operations - Advanced bulk operations
- API Reference - Complete API documentation
Need Help? See Troubleshooting or open an issue.