Commit b11716b

Update README with new destination drivers and RDS PostgreSQL source
1 parent 23cf39d commit b11716b

File tree

1 file changed: +221 -39 lines changed


README.md

Lines changed: 221 additions & 39 deletions
@@ -14,15 +14,22 @@ Currently implemented sources are:
 * folders via SSH (using tar)
 * MySQL databases (using mysqldump)
 * MySQL databases via SSH (using mysqldump)
-* RDS database snapshots (using mysqldump)
 * PostgreSQL databases (using pg_dump)
+* RDS MySQL database snapshots (using mysqldump)
+* RDS PostgreSQL database snapshots (using pg_dump)
 * Azure Managed Disks
+* LVM snapshots over SSH

 Currently implemented destinations are:

 * an S3 bucket (uses aws-cli)
 * a GS bucket (uses gsutil)
-* a Samba share (uses pysmbc)
+* a Samba share (uses smbclient)
+* a local filesystem path
+* a Backblaze B2 bucket (uses b2sdk)
+* a Minio/S3-compatible bucket (uses minio SDK; works with DigitalOcean Spaces etc.)
+* a Dropbox folder (uses dropbox SDK)
+* a Google Drive folder (uses google-api-python-client with service account)

 Currently implemented notifications are:

@@ -57,7 +64,7 @@ For security purposes, the script is designed to run as a non-privileged user. B
 Installation
 ------------

-1. Install me. Unpack me, run 'python setup.py' as root.
+1. Install me. Unpack me, run `pip install .` as root (or in a virtualenv).

 1. Create a configuration file listing the folders/databases you want to back up, an encryption passphrase, and one or more places you want the encrypted dump file uploaded to, and details of any notification methods you wish to use. See docs further down for examples. You can install the dependencies now, or you can wait until you get errors later :)

@@ -255,65 +262,92 @@ Parameters available in 'pgsql':
 | dbpass | Password to connect to PostgreSQL with. |


-Source - RDS Database Snapshots
------------------------
-
-You can specify one or more Amazon RDS databases to be backed up, where they have automated rolling backups enabled. Currently, this only supports MySQL-based RDS instances.
+Source - RDS MySQL Database Snapshots
+-------------------------------------

-The last automatic snapshot of the given database will be identified using the AWS credentials provided. That will then be restored to a temporary instance, in a security group that allows access to the agent machine. The agent machine (running this script), will then 'mysqldump' the data to a '.sql.gz' file, and destroy the temporary instance. This avoids the need to run any backup queries that would adversely affect the performance of the live database.
+You can specify one or more Amazon RDS MySQL instances to be backed up using automated snapshots. The most recent automatic snapshot is restored to a temporary RDS instance, `mysqldump` is run against it, then the temporary instance is deleted. This avoids any load on the live database.

-WARNING: This can take a very long time in many cases.
+WARNING: This can take a very long time.

 ```json
 {
   "id": "livecompanydb",
   "name": "Live Company Data",
-  "type": "pgsql",
-  "dbhost": "localhost",
+  "type": "rds",
+  "instancename": "livedb1",
+  "region": "eu-west-1",
+  "security_group": "sg-xxxxxxxx",
+  "instance_class": "db.t3.small",
   "dbname": "livecompany",
   "dbuser": "backups",
   "dbpass": "zzzuserwithreadonlyperms",
-  "instancename": "livedb1",
-  "region": "eu-west-1",
-  "security_group": "livedbbackup",
-  "instance_class": "db.m1.small",
   "passphrase": "64b0c7405f2d8051e2b9f02aa4898acc"
 }
 ```

-This module uses the 'boto' package, and expects auth credentials to be provided in the '~/.boto' file. This needs to be configured:
+AWS credentials are taken from the environment or `~/.aws/credentials`. You can also supply them directly in the config:

+```json
+{
+  "credentials": {
+    "aws_access_key_id": "YOURACCESSKEY",
+    "aws_secret_access_key": "YOURSECRETKEY"
+  }
+}
 ```
-# cat >/home/backups/.boto <<EOF
-[Credentials]
-aws_access_key_id = YOURACCESSKEY
-aws_secret_access_key = YOURSECRETKEY
-EOF
-# chown backups /home/backups/.boto
-# chmod 400 /home/backups/.boto
-```
-
-By default, the '--events' flag is passed to mysqldump. This may break older versions of mysqldump (prior to version 5.1, IIRC), so you can disable this flag with the 'noevents' parameter.

-The 'security_group' must allow access from the host running mysqldump.
+The `security_group` must allow inbound access from the host running this script.

 Parameters available in 'rds':

 | Config key | Purpose |
 |------------|---------|
 | name | Description of data being backed up (for reporting purposes). |
-| dbhost | Name of the mySQL host to connect to. |
-| dbname | Name of the mySQL database to back up. |
-| dbuser | Username to connect to mySQL as. |
-| dbpass | Password to connect to mySQL with. |
-| defaults | The location of an 'mysqlclient' credentials file to use instead of creating a temporary one with using above 'db*' variables. |
-| noevents | Don't pass the '--events' flag to 'mysqldump'. |
-| aws_access_key_id | AWS access key |
-| aws_secret_access_key | AWS secret access key |
-| instancename | RDS instance name to clone backup of. |
-| region | AWS hosting region of RDS database. |
-| security_group | VPC security group for replica to provide access to. |
-| instance_class | RDS instance class to create replica as (defaults to 'db.m1.small'). |
+| instancename | RDS instance identifier to restore a snapshot of. |
+| region | AWS region of the RDS instance. |
+| security_group | VPC security group ID to assign to the temporary instance. |
+| instance_class | RDS instance class for the temporary instance (default: `db.t3.small`). |
+| dbname | MySQL database name to dump. |
+| dbuser | MySQL username. |
+| dbpass | MySQL password. |
+| noevents | Don't pass the `--events` flag to `mysqldump`. |
+| credentials | Object with `aws_access_key_id` and `aws_secret_access_key` (optional). |
+
+
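The restore → dump → delete cycle that the new RDS text above describes can be sketched with boto3. This is a minimal illustration, not code from this repository: the function names and the `-backuptmp` identifier suffix are assumptions.

```python
def latest_automated_snapshot(rds, instance_id):
    """Pick the newest automated snapshot of the given RDS instance."""
    snaps = rds.describe_db_snapshots(
        DBInstanceIdentifier=instance_id, SnapshotType="automated"
    )["DBSnapshots"]
    return max(snaps, key=lambda s: s["SnapshotCreateTime"])

def restore_temporary_instance(region, instance_id, security_group, instance_class):
    """Restore the latest snapshot to a throwaway instance the backup host can reach."""
    import boto3  # pip install boto3
    rds = boto3.client("rds", region_name=region)
    snap = latest_automated_snapshot(rds, instance_id)
    temp_id = instance_id + "-backuptmp"
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=temp_id,
        DBSnapshotIdentifier=snap["DBSnapshotIdentifier"],
        DBInstanceClass=instance_class,
    )
    # Block until the clone is reachable, then attach the permissive security group.
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=temp_id)
    rds.modify_db_instance(
        DBInstanceIdentifier=temp_id, VpcSecurityGroupIds=[security_group]
    )
    return temp_id

def drop_temporary_instance(region, temp_id):
    """After mysqldump finishes, delete the clone without a final snapshot."""
    import boto3
    rds = boto3.client("rds", region_name=region)
    rds.delete_db_instance(DBInstanceIdentifier=temp_id, SkipFinalSnapshot=True)
```

The waiter is what makes the "very long time" warning concrete: restoring a snapshot to a fresh instance routinely takes many minutes before `mysqldump` can even start.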
+Source - RDS PostgreSQL Database Snapshots
+------------------------------------------
+
+Identical workflow to the RDS MySQL source above, but uses `pg_dump` instead of `mysqldump`. Use `type: rds-pgsql`.
+
+```json
+{
+  "id": "livecompanydb",
+  "name": "Live Company Data",
+  "type": "rds-pgsql",
+  "instancename": "livedb1",
+  "region": "eu-west-1",
+  "security_group": "sg-xxxxxxxx",
+  "instance_class": "db.t3.small",
+  "dbname": "livecompany",
+  "dbuser": "backups",
+  "dbpass": "zzzuserwithreadonlyperms",
+  "passphrase": "64b0c7405f2d8051e2b9f02aa4898acc"
+}
+```
+
+Parameters available in 'rds-pgsql':
+
+| Config key | Purpose |
+|------------|---------|
+| name | Description of data being backed up (for reporting purposes). |
+| instancename | RDS instance identifier to restore a snapshot of. |
+| region | AWS region of the RDS instance. |
+| security_group | VPC security group ID to assign to the temporary instance. |
+| instance_class | RDS instance class for the temporary instance (default: `db.t3.small`). |
+| dbname | PostgreSQL database name to dump. |
+| dbuser | PostgreSQL username. |
+| dbpass | PostgreSQL password. |
+| credentials | Object with `aws_access_key_id` and `aws_secret_access_key` (optional). |


 Source - Volume snapshots
@@ -529,6 +563,154 @@ Parameters available in 'samba':
 | suffix | Suffix for created files. |


+Destination - Local Filesystem
+------------------------------
+
+You can specify a local filesystem path to copy backups to. This is useful for writing to a mounted NFS share, USB drive, or any other locally accessible storage.
+
+```json
+{
+  "id": "local-backup",
+  "type": "local",
+  "path": "/mnt/backup-drive",
+  "retention_copies": 7
+}
+```
+
+Parameters available in 'local':
+
+| Config key | Purpose |
+|------------|---------|
+| path | Local filesystem path to copy backups to. |
+| retention_copies | How many timestamped backup directories to keep. |
+| retention_days | How many days of backups to keep. |
+
+
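A 'local' destination of this shape amounts to a copy plus retention pruning. A minimal sketch, assuming a timestamped-directory layout (the helper name and layout are illustrative, not this repository's code):

```python
import os
import shutil
import time

def store_local(src_file, dest_path, retention_copies=None):
    """Copy a backup into a timestamped directory under dest_path,
    then prune the oldest directories beyond retention_copies."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target_dir = os.path.join(dest_path, stamp)
    os.makedirs(target_dir, exist_ok=True)
    shutil.copy2(src_file, target_dir)
    if retention_copies is not None:
        # Timestamped names sort chronologically, so the oldest come first.
        dirs = sorted(d for d in os.listdir(dest_path)
                      if os.path.isdir(os.path.join(dest_path, d)))
        for old in dirs[:-retention_copies]:
            shutil.rmtree(os.path.join(dest_path, old))
    return target_dir
```

`retention_days` pruning would work the same way, comparing each directory's timestamp against a cutoff instead of counting entries.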
+Destination - Backblaze B2
+--------------------------
+
+You can specify a Backblaze B2 bucket to back up to using the native B2 SDK.
+
+```json
+{
+  "id": "b2-backup",
+  "type": "b2",
+  "bucket": "my-backup-bucket",
+  "credentials": {
+    "application_key_id": "your-application-key-id",
+    "application_key": "your-application-key"
+  },
+  "retention_copies": 5,
+  "retention_days": 30
+}
+```
+
+Create an application key at https://secure.backblaze.com/app_keys.htm with read/write access to the target bucket.
+
+Parameters available in 'b2':
+
+| Config key | Purpose |
+|------------|---------|
+| bucket | B2 bucket name. |
+| credentials.application_key_id | B2 application key ID. |
+| credentials.application_key | B2 application key. |
+| retention_copies | How many copies of older backups to keep. |
+| retention_days | How many days of backups to keep. |
+
+
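The b2sdk upload path is short. A sketch under assumptions: the object-name layout (`<destination id>/<date>-<file>`) and function names are hypothetical, only the b2sdk v2 calls are real.

```python
import os
import time

def b2_remote_name(backup_id, local_file):
    """Assumed object layout: <destination id>/<YYYYMMDD>-<dump file name>."""
    stamp = time.strftime("%Y%m%d")
    return "%s/%s-%s" % (backup_id, stamp, os.path.basename(local_file))

def upload_to_b2(key_id, app_key, bucket_name, backup_id, local_file):
    from b2sdk.v2 import B2Api, InMemoryAccountInfo  # pip install b2sdk
    api = B2Api(InMemoryAccountInfo())
    api.authorize_account("production", key_id, app_key)
    bucket = api.get_bucket_by_name(bucket_name)
    bucket.upload_local_file(
        local_file=local_file,
        file_name=b2_remote_name(backup_id, local_file),
    )
```

`upload_local_file` handles large-file chunking internally, so no special casing is needed for big dumps.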
+Destination - Minio / S3-Compatible
+-----------------------------------
+
+You can specify a Minio bucket (or any S3-compatible service such as DigitalOcean Spaces) to back up to using the native Minio SDK.
+
+```json
+{
+  "id": "minio-backup",
+  "type": "minio",
+  "endpoint": "nyc3.digitaloceanspaces.com",
+  "bucket": "my-backup-bucket",
+  "secure": true,
+  "credentials": {
+    "access_key": "your-access-key",
+    "secret_key": "your-secret-key"
+  },
+  "retention_copies": 5
+}
+```
+
+For a local Minio instance, set `"secure": false` and use the host:port as the endpoint (e.g. `"localhost:9000"`).
+
+Parameters available in 'minio':
+
+| Config key | Purpose |
+|------------|---------|
+| endpoint | Minio/S3-compatible endpoint (host or host:port, no scheme). |
+| bucket | Bucket name. |
+| secure | Whether to use TLS (default: `true`). |
+| credentials.access_key | Access key. |
+| credentials.secret_key | Secret key. |
+| retention_copies | How many copies of older backups to keep. |
+| retention_days | How many days of backups to keep. |
+
+
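The "no scheme" rule for `endpoint` is the usual trip-up with the minio client, so a sketch can validate it up front. Only the `Minio(...)` constructor and `fput_object` are real minio SDK calls; the helper names are assumptions.

```python
def check_endpoint(endpoint):
    """The minio SDK expects a bare host[:port] endpoint, without a scheme."""
    if endpoint.startswith(("http://", "https://")):
        raise ValueError("endpoint must not include a scheme: %r" % endpoint)
    return endpoint

def upload_to_minio(endpoint, bucket, access_key, secret_key,
                    local_file, object_name, secure=True):
    from minio import Minio  # pip install minio
    client = Minio(check_endpoint(endpoint), access_key=access_key,
                   secret_key=secret_key, secure=secure)
    # fput_object streams the file from disk, so large dumps are fine.
    client.fput_object(bucket, object_name, local_file)
```

For DigitalOcean Spaces the endpoint is the region host (e.g. `nyc3.digitaloceanspaces.com`) with `secure=True`; for a local Minio it is `localhost:9000` with `secure=False`, matching the config example above.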
+Destination - Dropbox
+---------------------
+
+You can specify a Dropbox folder to back up to using the official Dropbox SDK.
+
+```json
+{
+  "id": "dropbox-backup",
+  "type": "dropbox",
+  "access_token": "your-long-lived-access-token",
+  "folder": "/backups",
+  "retention_copies": 7
+}
+```
+
+Generate a long-lived access token from https://www.dropbox.com/developers/apps.
+
+Parameters available in 'dropbox':
+
+| Config key | Purpose |
+|------------|---------|
+| access_token | Dropbox long-lived access token. |
+| folder | Root folder path within Dropbox (default: `/backups`). |
+| retention_copies | How many timestamped backup directories to keep. |
+
+
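A sketch of the Dropbox upload, assuming the SDK's simple `files_upload` call (the path helper is an illustration, not this repository's code):

```python
import os

def dropbox_path(folder, filename):
    """Dropbox API paths are absolute and '/'-separated."""
    return "/" + "/".join(p for p in (folder.strip("/"), filename) if p)

def upload_to_dropbox(access_token, folder, local_file):
    import dropbox  # pip install dropbox
    dbx = dropbox.Dropbox(access_token)
    with open(local_file, "rb") as f:
        dbx.files_upload(
            f.read(),
            dropbox_path(folder, os.path.basename(local_file)),
            mode=dropbox.files.WriteMode("overwrite"),
        )
```

Note that `files_upload` holds the whole file in memory and is limited to moderate sizes; for very large dumps the SDK's upload-session API is the appropriate route.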
+Destination - Google Drive
+--------------------------
+
+You can specify a Google Drive folder to back up to using a service account and `google-api-python-client`.
+
+```json
+{
+  "id": "gdrive-backup",
+  "type": "gdrive",
+  "creds_file": "/etc/backups/gdrive-service-account.json",
+  "folder_id": "your-google-drive-folder-id",
+  "retention_copies": 10
+}
+```
+
+To set up:
+1. Create a service account at https://console.cloud.google.com
+2. Enable the Google Drive API for the project
+3. Download the JSON key file and place it on the backup host
+4. Share the target Drive folder with the service account email address (with Editor permission)
+5. Use the folder ID from the Drive URL as `folder_id`
+
+Parameters available in 'gdrive':
+
+| Config key | Purpose |
+|------------|---------|
+| creds_file | Path to the service account JSON key file. |
+| folder_id | Google Drive folder ID to store backups in. |
+| retention_copies | How many timestamped backup directories to keep. |
+| retention_days | How many days of backups to keep. |
+
+
 Notification - Email
 --------------------

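The Google Drive setup steps above boil down to a service-account credential plus one `files().create` call. A minimal sketch, assuming the standard `google-auth` and `google-api-python-client` packages (helper names are illustrative):

```python
import os

def drive_metadata(local_file, folder_id):
    """files.create metadata: the file name plus the parent folder ID."""
    return {"name": os.path.basename(local_file), "parents": [folder_id]}

def upload_to_gdrive(creds_file, folder_id, local_file):
    # pip install google-api-python-client google-auth
    from google.oauth2 import service_account
    from googleapiclient.discovery import build
    from googleapiclient.http import MediaFileUpload
    creds = service_account.Credentials.from_service_account_file(
        creds_file, scopes=["https://www.googleapis.com/auth/drive"])
    service = build("drive", "v3", credentials=creds)
    service.files().create(
        body=drive_metadata(local_file, folder_id),
        media_body=MediaFileUpload(local_file),
        fields="id",
    ).execute()
```

Sharing the target folder with the service account's email (step 4) is what makes `parents: [folder_id]` writable; without it the create call fails with a permission error.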