DockerVault is a containerized backup system for Docker volumes and host paths. It provides automatic detection of containers, volumes, and Compose stacks, with flexible scheduling, GFS retention policies, and remote storage synchronization.
- Docker Integration — Automatic detection of containers, volumes, and Compose stacks
- Flexible Targets — Back up containers, volumes, host paths, or entire stacks
- Dependency Management — Respects `depends_on` relationships when stopping/starting containers
- Cron Scheduling — Automated backups with duration estimation
- GFS Retention — Grandfather-Father-Son retention strategy per backup target
- Remote Storage — Sync to SSH, S3, WebDAV, FTP, or 40+ providers via Rclone
- Real-time UI — WebSocket-based live updates in the web interface
- Komodo Integration — Optional integration with Komodo for container orchestration
- Security First — Docker socket and volumes mounted read-only
- Docker 20.10+
- Docker Compose 2.0+
- Linux host (for Docker socket access)
1. Clone the repository

   ```bash
   git clone https://github.com/Serph91P/DockerVault.git
   cd DockerVault
   ```

2. Configure environment

   ```bash
   cp .env.example .env
   ```

   Edit `.env` with your settings:

   ```bash
   # Docker group ID (find with: getent group docker | cut -d: -f3)
   DOCKER_GID=999

   # Backup storage location
   BACKUP_PATH=/path/to/backups

   # Web interface port
   PORT=8080
   ```
3. Start DockerVault

   ```bash
   docker compose up -d
   ```

4. Access the web interface at `http://localhost:8080`
> [!TIP]
> Use `docker compose logs -f` to monitor startup and check for any configuration issues.
DockerVault uses a GFS (Grandfather-Father-Son) retention strategy. Each backup target can have its own policy:
| Option | Description | Default |
|---|---|---|
| `keep_last` | Keep the last N backups | 3 |
| `keep_daily` | Keep one backup per day for N days | 7 |
| `keep_weekly` | Keep one backup per week for N weeks | 4 |
| `keep_monthly` | Keep one backup per month for N months | 6 |
| `keep_yearly` | Keep one backup per year for N years | 2 |
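As an illustration of how a GFS policy selects which backups survive a pruning pass, here is a minimal sketch. This is not DockerVault's actual implementation; `select_backups_to_keep` is a hypothetical helper that keeps the newest backup in each daily/weekly/monthly/yearly bucket, up to the configured count per tier, plus the most recent `keep_last`.

```python
from datetime import datetime

def select_backups_to_keep(timestamps, keep_last=3, keep_daily=7,
                           keep_weekly=4, keep_monthly=6, keep_yearly=2):
    """Return the subset of backup timestamps a GFS policy would retain."""
    ordered = sorted(timestamps, reverse=True)  # newest first
    keep = set(ordered[:keep_last])

    def bucketed(key_fn, limit):
        seen = []
        for ts in ordered:  # newest first, so the first hit per bucket wins
            key = key_fn(ts)
            if key not in seen:
                seen.append(key)
                if len(seen) > limit:
                    break
                keep.add(ts)

    bucketed(lambda t: t.date(), keep_daily)
    bucketed(lambda t: t.isocalendar()[:2], keep_weekly)   # (ISO year, week)
    bucketed(lambda t: (t.year, t.month), keep_monthly)
    bucketed(lambda t: t.year, keep_yearly)
    return keep
```

Everything outside the returned set is eligible for deletion; a backup only needs to qualify under one tier to be retained.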
Configure defaults via environment variables:

```bash
DEFAULT_KEEP_LAST=3
DEFAULT_KEEP_DAILY=7
DEFAULT_KEEP_WEEKLY=4
DEFAULT_KEEP_MONTHLY=6
DEFAULT_KEEP_YEARLY=2
```

Sync backups to external storage providers:
| Type | Description | Example |
|---|---|---|
| Local/NFS | Local directory or NFS mount | /mnt/nas/backups |
| SSH/SFTP | SSH server with rsync | user@server:/backups |
| S3 | AWS S3, MinIO, Backblaze B2 | s3://bucket/path |
| WebDAV | Nextcloud, ownCloud | https://cloud.example.com/dav/ |
| FTP/FTPS | FTP server | ftp://server/path |
| Rclone | 40+ providers (GDrive, Dropbox, OneDrive, ...) | remote:path |
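For the Rclone-backed targets, a sync boils down to invoking `rclone sync` against a configured remote. The sketch below builds such a command without running it; `build_rclone_sync` is a hypothetical helper (not part of DockerVault), and the remote name is assumed to already exist in your `rclone.conf`.

```python
def build_rclone_sync(source_dir, remote, extra_flags=None):
    """Build (but do not run) an `rclone sync` command for a backup directory.

    `remote` is an rclone target such as "nas:backups". --checksum makes
    rclone compare file hashes instead of modtime/size.
    """
    cmd = ["rclone", "sync", source_dir, remote, "--checksum"]
    if extra_flags:
        cmd.extend(extra_flags)
    return cmd
```

The returned list can be handed to `subprocess.run(cmd, check=True)`; building argv as a list (rather than a shell string) avoids quoting issues with paths.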
Schedule format: `Minute Hour Day Month Weekday`

| Expression | Description |
|---|---|
| `0 2 * * *` | Daily at 02:00 |
| `0 3 * * 0` | Sundays at 03:00 |
| `0 */6 * * *` | Every 6 hours |
| `30 1 1 * *` | 1st of every month at 01:30 |
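DockerVault's scheduler (APScheduler) handles the actual cron parsing; as a rough illustration of how the five fields above are interpreted, here is a minimal matcher. It is a sketch supporting only `*`, `*/n`, and literal numbers — no ranges or lists — and both functions are hypothetical helpers, not DockerVault code.

```python
from datetime import datetime

def cron_field_matches(field, value, minimum=0):
    """Check one cron field (supporting *, */n, and plain numbers) against a value."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return (value - minimum) % int(field[2:]) == 0
    return int(field) == value

def cron_matches(expr, dt):
    """True if datetime `dt` matches a five-field cron expression.

    Weekday uses cron's convention: 0 = Sunday.
    """
    minute, hour, dom, month, dow = expr.split()
    cron_weekday = (dt.weekday() + 1) % 7  # Python: Mon=0 → cron: Sun=0
    return (cron_field_matches(minute, dt.minute)
            and cron_field_matches(hour, dt.hour)
            and cron_field_matches(dom, dt.day, minimum=1)
            and cron_field_matches(month, dt.month, minimum=1)
            and cron_field_matches(dow, cron_weekday))
```

A scheduler built on this would simply check the expression against each upcoming minute to find the next run time.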
Enable optional integration with Komodo:

```bash
KOMODO_ENABLED=true
KOMODO_API_URL=http://komodo:8080
KOMODO_API_KEY=your-api-key
```

DockerVault follows security best practices:

- Docker socket — Mounted read-only (`/var/run/docker.sock:ro`)
- Docker volumes — Mounted read-only (`/var/lib/docker/volumes:ro`)
- Root user required — Container runs as root to access Docker volumes (Docker's volume directory permissions require root access)
DockerVault supports end-to-end encryption for your backups using AES-256-CBC with envelope encryption:
- Per-backup keys — Each backup gets a unique Data Encryption Key (DEK)
- Asymmetric wrapping — DEKs are encrypted with your public key using age
- Disaster recovery — Backups can be restored without DockerVault using standard command-line tools
- Navigate to Settings → Backup Encryption
- Click Set Up Encryption to generate a new key pair
- Download your private key and store it securely (password manager, encrypted drive)
- Confirm that you have saved the private key
> [!CAUTION]
> Your private key is only shown once during setup. If lost, encrypted backups cannot be recovered.
With DockerVault:
- Navigate to Backups and select the backup
- Click Restore — DockerVault handles decryption automatically
Without DockerVault (Disaster Recovery):
If you lose access to DockerVault, you can still recover encrypted backups using standard command-line tools:
```bash
# Prerequisites: age (https://github.com/FiloSottile/age) and openssl

# 1. Save your private key to a file
cat > private_key.txt << 'EOF'
AGE-SECRET-KEY-1XXXXXX...
EOF
chmod 600 private_key.txt

# 2. Decrypt the DEK (Data Encryption Key)
age -d -i private_key.txt backup.tar.gz.key > dek.txt

# 3. Decrypt the backup
openssl enc -d -aes-256-cbc -pbkdf2 -iter 100000 \
  -in backup.tar.gz.enc \
  -out backup.tar.gz \
  -pass file:dek.txt

# 4. Extract the backup
tar xzf backup.tar.gz

# 5. Clean up (don't leave keys lying around)
rm dek.txt private_key.txt
```

> [!TIP]
> The downloaded private key file includes these recovery instructions.
The DockerVault container starts as root in order to access Docker volumes and the Docker socket: the Docker volume directory (`/var/lib/docker/volumes`) is owned by root, and reading volume data at the filesystem level requires root access. Inside the container, the backend process itself runs as the `dockervault` user under supervisord; only the Docker socket interaction requires elevated privileges.
Mounting /var/run/docker.sock into the container grants full Docker API access. This is required for container discovery, volume enumeration, and stop/start operations during backups. To reduce risk:
- Do not expose the DockerVault port to the public internet without a reverse proxy and TLS
- Restrict network access to trusted clients only
- Monitor Docker daemon audit logs for unexpected API calls
- Deploy behind a reverse proxy (nginx, Traefik, Caddy) with TLS termination
- Set `COOKIE_SECURE=true` so session cookies are only sent over HTTPS
- Set `CORS_ORIGINS` to the exact frontend origin (e.g., `https://vault.example.com`)
- Back up the credential encryption key file (`/app/data/.credential_key`) — losing it makes encrypted remote storage credentials unrecoverable
```yaml
services:
  dockervault:
    image: ghcr.io/serph91p/dockervault:latest
    container_name: dockervault
    restart: unless-stopped
    user: root
    environment:
      - TZ=Europe/Berlin
      - COOKIE_SECURE=true
      - CORS_ORIGINS=https://vault.example.com
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - backup-data:/app/data
      - /path/to/backups:/backups
      - /var/lib/docker/volumes:/var/lib/docker/volumes:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dockervault.rule=Host(`vault.example.com`)"
      - "traefik.http.routers.dockervault.entrypoints=websecure"
      - "traefik.http.routers.dockervault.tls.certresolver=letsencrypt"
      - "traefik.http.services.dockervault.loadbalancer.server.port=80"
    networks:
      - traefik
      - backup-network

volumes:
  backup-data:
    name: dockervault-data

networks:
  traefik:
    external: true
  backup-network:
    driver: bridge
```

With Caddy, the equivalent reverse-proxy configuration is:

```
vault.example.com {
    reverse_proxy dockervault:80
}
```
Start Caddy in the same Docker network as DockerVault, and it handles TLS certificates automatically.
| Variable | Default | Description |
|---|---|---|
| `CORS_ORIGINS` | `http://localhost` | Comma-separated list of allowed CORS origins |
| `COOKIE_SECURE` | `false` | Set the session cookie `Secure` flag. Set to `true` behind a TLS-terminating reverse proxy |
| `CREDENTIAL_ENCRYPTION_KEY` | Auto-generated | Fernet key for encrypting remote storage credentials at rest. If empty, a key is generated and saved to `/app/data/.credential_key` |
| `ALLOWED_HOOK_COMMANDS` | `pg_dump,pg_dumpall,mysqldump,mongodump,redis-cli,mariadb-dump` | Comma-separated allowlist of binaries permitted in pre/post backup hooks |
| `LOG_FORMAT` | `text` | Log output format: `text` or `json` (structured logging for production) |
| `LOG_LEVEL` | `INFO` | Logging level: `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL` |
| `SHUTDOWN_TIMEOUT` | `30` | Seconds to wait for in-progress backups to finish during shutdown |
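To make the `ALLOWED_HOOK_COMMANDS` behaviour concrete, here is a sketch of the kind of check such an allowlist implies. This is illustrative only, not DockerVault's actual validation; `hook_command_allowed` is a hypothetical helper that checks only the executable name, not its arguments.

```python
import shlex

DEFAULT_ALLOWED = "pg_dump,pg_dumpall,mysqldump,mongodump,redis-cli,mariadb-dump"

def hook_command_allowed(command, allowed_csv=DEFAULT_ALLOWED):
    """Return True if the hook command's binary is on the allowlist.

    Only the first token (the executable name) is compared against the
    comma-separated allowlist; empty commands are rejected.
    """
    allowed = {name.strip() for name in allowed_csv.split(",")}
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] in allowed
```

An allowlist of binaries is a deliberately narrow defence: it stops arbitrary shell commands in hooks while still permitting the common database dump tools.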
Backend:

```bash
cd backend
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
uvicorn app.main:app --reload
```

Frontend:

```bash
cd frontend
npm install
npm run dev
```

When running, API documentation is available at:

- Swagger UI: `http://localhost:8000/docs`
- ReDoc: `http://localhost:8000/redoc`
```
DockerVault/
├── backend/
│   └── app/
│       ├── api/                # REST API endpoints
│       ├── backup_engine.py    # Backup logic
│       ├── docker_client.py    # Docker SDK wrapper
│       ├── remote_storage.py   # Remote storage backends
│       ├── retention.py        # Retention manager
│       ├── scheduler.py        # APScheduler integration
│       └── websocket.py        # Real-time updates
├── frontend/
│   └── src/
│       ├── components/         # React components
│       ├── pages/              # Application pages
│       └── api/                # API client
├── docker-compose.yml
└── Dockerfile
```
- FastAPI — Backend framework
- React — Frontend library
- Docker SDK for Python — Docker integration
- APScheduler — Job scheduling
- TailwindCSS — UI styling