Django database and media backup management commands, powered by rclone.
django-rclone bridges Django's database layer with rclone's file transfer layer. You get native database dumps piped directly to any of rclone's 70+ supported cloud storage backends -- no temp files, no intermediate archives, no Python reimplementations of what rclone already does.
django-dbbackup is a mature and well-regarded backup solution. It wraps Django Storages for upload, implements GPG encryption in Python, handles gzip compression, and parses filenames with regex to manage backups.
django-rclone takes a different approach: delegate everything that isn't Django-specific to rclone.
| Concern | django-dbbackup | django-rclone |
|---|---|---|
| Storage backends | Django Storages (S3, GCS, etc.) | rclone (70+ backends natively) |
| Encryption | GPG subprocess wrapper in Python | rclone crypt remote |
| Compression | gzip in Python | rclone compress remote or --compress flag |
| Media backup | Tar archive, then upload | rclone sync (incremental, no archiving) |
| Backup listing | Filename regex parsing | rclone lsjson (structured JSON) |
| Temp files | SpooledTemporaryFile | None -- pipes directly via rclone rcat |
The result is significantly less code doing significantly less work. Storage abstraction, encryption, compression, and incremental sync are all rclone's problem -- django-rclone only owns what Django must own: database connectors, management commands, and signals.
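The backup-listing row is a good example of the payoff: `rclone lsjson` returns structured JSON, so retention becomes a sort and a slice rather than filename parsing. A minimal sketch of the idea, with sample `lsjson` output inlined instead of a live rclone call (django-rclone's actual cleanup logic may differ):

```python
import json

# Sample `rclone lsjson` output, inlined instead of a live rclone call
listing = json.loads("""[
  {"Name": "default-2024-01-15-120000.dump", "ModTime": "2024-01-15T12:00:00Z"},
  {"Name": "default-2024-01-14-120000.dump", "ModTime": "2024-01-14T12:00:00Z"},
  {"Name": "default-2024-01-13-120000.dump", "ModTime": "2024-01-13T12:00:00Z"}
]""")

keep = 2  # e.g. DB_CLEANUP_KEEP
newest_first = sorted(listing, key=lambda entry: entry["ModTime"], reverse=True)
stale = [entry["Name"] for entry in newest_first[keep:]]  # candidates for deletion
print(stale)
```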
- Python 3.12+
- Django 5.2+
- rclone installed and configured
```shell
pip install django-rclone
```

Add to your INSTALLED_APPS:

```python
INSTALLED_APPS = [
    # ...
    "django_rclone",
]
```

Configure your rclone remote (see rclone docs):

```shell
rclone config
```

Then point django-rclone at it:

```python
DJANGO_RCLONE = {
    "REMOTE": "myremote:backups",
}
```

```shell
# Backup the default database
python manage.py dbbackup

# Backup a specific database
python manage.py dbbackup --database analytics

# Backup and clean old backups beyond retention count
python manage.py dbbackup --clean
```

```shell
# Restore from the latest backup
python manage.py dbrestore

# Restore a specific backup
python manage.py dbrestore --input-path default-2024-01-15-120000.dump

# Non-interactive restore (for automation)
python manage.py dbrestore --noinput --input-path default-2024-01-15-120000.dump
```

```shell
# Sync MEDIA_ROOT to remote (incremental -- only changed files transfer)
python manage.py mediabackup

# Sync remote back to MEDIA_ROOT
python manage.py mediarestore
```

```shell
# List all database backups
python manage.py listbackups

# Filter by database
python manage.py listbackups --database default

# List media files on remote
python manage.py listbackups --media
```

All settings live under the DJANGO_RCLONE dict in your Django settings:
```python
DJANGO_RCLONE = {
    # Required -- rclone remote and base path
    "REMOTE": "myremote:backups",

    # Optional -- rclone binary and config
    "RCLONE_BINARY": "rclone",  # Path to rclone binary
    "RCLONE_CONFIG": None,      # Path to rclone.conf (None uses default)
    "RCLONE_FLAGS": [],         # Extra flags for every rclone call

    # Database backup settings
    "DB_BACKUP_DIR": "db",      # Subdirectory for DB backups
    "DB_FILENAME_TEMPLATE": "{database}-{datetime}.{ext}",  # Must start with {database}
    "DB_DATE_FORMAT": "%Y-%m-%d-%H%M%S",
    "DB_CLEANUP_KEEP": 10,      # Keep N most recent backups per database

    # Media backup settings
    "MEDIA_BACKUP_DIR": "media",  # Subdirectory for media backups

    # Database connector overrides
    "CONNECTORS": {},           # Per-database connector class overrides
    "CONNECTOR_MAPPING": {},    # Engine-to-connector class overrides
}
```

django-rclone does not implement encryption or compression. Instead, configure these at the rclone level, where they belong:
Encryption -- use a crypt remote:

```shell
rclone config create myremote-crypt crypt remote=myremote:backups password=your-password
```

Then set "REMOTE": "myremote-crypt:" in your Django settings.

Compression -- use a compress remote:

```shell
rclone config create myremote-compressed compress remote=myremote:backups
```

Or pass --compress-level via RCLONE_FLAGS.
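If you go the flag route, anything placed in RCLONE_FLAGS rides along on every rclone invocation django-rclone makes. A sketch (the flag value here is an example, not a recommendation):

```python
DJANGO_RCLONE = {
    "REMOTE": "myremote-compressed:",
    # Example only -- appended to every rclone call django-rclone makes
    "RCLONE_FLAGS": ["--compress-level", "9"],
}
```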
See Storage Providers for provider-specific configuration notes (Cloudflare R2, etc.).
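A note on names: DB_FILENAME_TEMPLATE and DB_DATE_FORMAT compose like ordinary Python format strings. A quick illustration of the resulting backup name, mimicking the template by hand (the internal helper that does this is not part of the public API):

```python
from datetime import datetime

template = "{database}-{datetime}.{ext}"                       # DB_FILENAME_TEMPLATE
stamp = datetime(2024, 1, 15, 12, 0, 0).strftime("%Y-%m-%d-%H%M%S")  # DB_DATE_FORMAT
filename = template.format(database="default", datetime=stamp, ext="dump")
print(filename)  # default-2024-01-15-120000.dump
```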
| Database | Connector | Dump tool | Format |
|---|---|---|---|
| PostgreSQL | PgDumpConnector | pg_dump / pg_restore | Custom (binary) |
| PostGIS | PgDumpGisConnector | pg_dump / pg_restore | Custom (binary) |
| MySQL / MariaDB | MysqlDumpConnector | mysqldump / mysql | SQL text |
| SQLite | SqliteConnector | sqlite3 .dump | SQL text |
| MongoDB | MongoDumpConnector | mongodump / mongorestore | Archive (binary) |
GIS backends (postgis, spatialite, gis/mysql) and django-prometheus wrappers are also mapped automatically. See connectors documentation for the full engine mapping table.
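Overrides hook in through the CONNECTORS and CONNECTOR_MAPPING settings shown earlier. A hedged sketch of the shape -- the dotted class paths below are assumptions for illustration; check the connectors documentation for the real import locations and whether strings or class objects are expected:

```python
DJANGO_RCLONE = {
    "REMOTE": "myremote:backups",
    # Assumed dotted paths -- verify against the connectors documentation
    "CONNECTORS": {"analytics": "myapp.backup.AnalyticsPgConnector"},
    "CONNECTOR_MAPPING": {
        "django.db.backends.postgresql": "django_rclone.connectors.PgDumpConnector",
    },
}
```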
django-rclone sends Django signals before and after each operation:
```python
import logging

from django.dispatch import receiver

from django_rclone.signals import pre_db_backup, post_db_backup

logger = logging.getLogger(__name__)


@receiver(post_db_backup)
def notify_on_backup(sender, database, path, **kwargs):
    logger.info("Database %s backed up to %s", database, path)
```

Available signals: pre_db_backup, post_db_backup, pre_db_restore, post_db_restore, pre_media_backup, post_media_backup, pre_media_restore, post_media_restore.
```
Management Commands (dbbackup, dbrestore, mediabackup, mediarestore, listbackups)
         |                                  |
   DB Connectors                        rclone.py
   (pg, mysql, sqlite,            (subprocess wrapper)
    mongodb)
         |                                  |
   Database binary                    rclone binary
                                (70+ storage backends)
```
Database dumps stream directly from the dump process into rclone rcat via Unix pipes. No intermediate files are written. Restores work in reverse: rclone cat streams into the database restore process. Subprocess finalization is centralized and deadlock-safe (pipe draining + stderr collection).
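The streaming model is standard Unix pipe wiring between two subprocesses. A portable sketch of the idea, with small Python one-liners standing in for pg_dump and rclone rcat (an illustration of the wiring, not django-rclone's actual code):

```python
import subprocess
import sys

# Stand-ins for the real pair: `pg_dump ...` piped into `rclone rcat remote:path`.
# The point is the wiring -- producer stdout feeds consumer stdin, no temp file.
producer = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write('dump-bytes' * 3)"],
    stdout=subprocess.PIPE,
)
consumer = subprocess.Popen(
    [sys.executable, "-c", "import sys; print(len(sys.stdin.buffer.read()))"],
    stdin=producer.stdout,
    stdout=subprocess.PIPE,
)
producer.stdout.close()  # drop our handle so the producer sees a broken pipe if the consumer dies
out, _ = consumer.communicate()
producer.wait()
print(out.decode().strip())  # 30 bytes flowed through the pipe
```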
Media backups use rclone sync, which is incremental by default -- only changed files are transferred.
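Conceptually, the media commands reduce to a plain rclone sync in each direction -- roughly equivalent to running (paths and remote name are examples):

```shell
# mediabackup: local MEDIA_ROOT -> remote (only changed files transfer)
rclone sync /srv/app/media myremote:backups/media

# mediarestore: remote -> local MEDIA_ROOT
rclone sync myremote:backups/media /srv/app/media
```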
Contributions are welcome. This project enforces 100% test coverage -- all new code must be fully covered by tests. The CI pipeline will fail if coverage drops below 100%.
CI also includes subprocess guardrail tests to prevent wait()-based pipe deadlocks and to keep raw Popen(...) usage confined to the wrapper modules.
```shell
uv sync                                            # Install runtime + dev deps (tests/lint/type/docs)
uv run pytest --cov --cov-branch                   # Unit tests (integration excluded by default)
uv sync --group integration                        # Install DB integration test dependencies
uv run pytest tests/integration -m integration -q  # Integration tests (requires tools/services)
uv run ruff check .                                # Lint
uv run ruff format --check .                       # Check formatting
uv run ty check                                    # Type check
uv run mkdocs build --strict                       # Build docs
```

For full integration setup details (Docker services, required binaries, env vars), see docs/testing.md.
MIT