* a Minio/S3-compatible bucket (uses minio SDK; works with DigitalOcean Spaces etc.)
* a Dropbox folder (uses dropbox SDK)
* a Google Drive folder (uses google-api-python-client with service account)

Currently implemented notifications are:
Installation
------------

1. Install me. Unpack me, run `pip install .` as root (or in a virtualenv).

1. Create a configuration file listing the folders/databases you want to back up, an encryption passphrase, one or more places to upload the encrypted dump file to, and details of any notification methods you wish to use. See the docs further down for examples. You can install the dependencies now, or you can wait until you get errors later :)
| dbpass | Password to connect to PostgreSQL with. |

Source - RDS MySQL Database Snapshots
-------------------------------------

You can specify one or more Amazon RDS MySQL instances to be backed up using automated snapshots. The most recent automatic snapshot is restored to a temporary RDS instance, `mysqldump` is run against it, then the temporary instance is deleted. This avoids any load on the live database.

WARNING: This can take a very long time.

```json
{
    "id": "livecompanydb",
    "name": "Live Company Data",
    "type": "rds",
    "instancename": "livedb1",
    "region": "eu-west-1",
    "security_group": "sg-xxxxxxxx",
    "instance_class": "db.t3.small",
    "dbname": "livecompany",
    "dbuser": "backups",
    "dbpass": "zzzuserwithreadonlyperms",
    "passphrase": "64b0c7405f2d8051e2b9f02aa4898acc"
}
```

AWS credentials are taken from the environment or `~/.aws/credentials`. You can also supply them directly in the config:

```json
{
    "credentials": {
        "aws_access_key_id": "YOURACCESSKEY",
        "aws_secret_access_key": "YOURSECRETKEY"
    }
}
```

By default, the `--events` flag is passed to `mysqldump`. This may break older versions of mysqldump (prior to version 5.1, IIRC), so you can disable this flag with the `noevents` parameter.
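For illustration, assembling the dump command for one config entry might look like this (a sketch; `mysqldump_args` is a hypothetical helper, not part of the script):

```python
def mysqldump_args(cfg, host):
    """Build the mysqldump argv for a config entry (illustrative only)."""
    args = ["mysqldump", "-h", host,
            "-u", cfg["dbuser"], "-p" + cfg["dbpass"]]
    if not cfg.get("noevents"):
        args.append("--events")  # set "noevents" to skip this for pre-5.1 clients
    args.append(cfg["dbname"])
    return args
```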

The `security_group` must allow inbound access from the host running this script.

Parameters available in 'rds':

| Config key | Purpose |
|------------|---------|
| name | Description of data being backed up (for reporting purposes). |
| instancename | RDS instance identifier to restore a snapshot of. |
| region | AWS region of the RDS instance. |
| security_group | VPC security group ID to assign to the temporary instance. |
| instance_class | RDS instance class for the temporary instance (default: `db.t3.small`). |
| dbname | MySQL database name to dump. |
| dbuser | MySQL username. |
| dbpass | MySQL password. |
| noevents | Don't pass the `--events` flag to `mysqldump`. |
| credentials | Object with `aws_access_key_id` and `aws_secret_access_key` (optional). |

Source - RDS PostgreSQL Database Snapshots
------------------------------------------

Identical workflow to the RDS MySQL source above, but uses `pg_dump` instead of `mysqldump`. Use `type: rds-pgsql`.

```json
{
    "id": "livecompanydb",
    "name": "Live Company Data",
    "type": "rds-pgsql",
    "instancename": "livedb1",
    "region": "eu-west-1",
    "security_group": "sg-xxxxxxxx",
    "instance_class": "db.t3.small",
    "dbname": "livecompany",
    "dbuser": "backups",
    "dbpass": "zzzuserwithreadonlyperms",
    "passphrase": "64b0c7405f2d8051e2b9f02aa4898acc"
}
```

Parameters available in 'rds-pgsql':

| Config key | Purpose |
|------------|---------|
| name | Description of data being backed up (for reporting purposes). |
| instancename | RDS instance identifier to restore a snapshot of. |
| region | AWS region of the RDS instance. |
| security_group | VPC security group ID to assign to the temporary instance. |
| instance_class | RDS instance class for the temporary instance (default: `db.t3.small`). |
| dbname | PostgreSQL database name to dump. |
| dbuser | PostgreSQL username. |
| dbpass | PostgreSQL password. |
| credentials | Object with `aws_access_key_id` and `aws_secret_access_key` (optional). |

Source - Volume snapshots
| suffix | Suffix for created files. |

Destination - Local Filesystem
------------------------------

You can specify a local filesystem path to copy backups to. This is useful for writing to a mounted NFS share, USB drive, or any other locally accessible storage.

```json
{
    "id": "local-backup",
    "type": "local",
    "path": "/mnt/backup-drive",
    "retention_copies": 7
}
```

Parameters available in 'local':

| Config key | Purpose |
|------------|---------|
| path | Local filesystem path to copy backups to. |
| retention_copies | How many timestamped backup directories to keep. |
| retention_days | How many days of backups to keep. |
Destination - Backblaze B2
--------------------------

You can specify a Backblaze B2 bucket to back up to using the native B2 SDK.

```json
{
    "id": "b2-backup",
    "type": "b2",
    "bucket": "my-backup-bucket",
    "credentials": {
        "application_key_id": "your-application-key-id",
        "application_key": "your-application-key"
    },
    "retention_copies": 5,
    "retention_days": 30
}
```

Create an application key at https://secure.backblaze.com/app_keys.htm with read/write access to the target bucket.
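An upload via the `b2sdk` package might look like the sketch below. The helper names are hypothetical, and `remote_name` is an assumed naming scheme, not necessarily the one this script uses:

```python
import os
import time

def remote_name(backup_id, local_path, when=None):
    """Object name for an uploaded dump, e.g. 'b2-backup/20240501-020000/dump.sql.gpg'."""
    stamp = time.strftime("%Y%m%d-%H%M%S", time.localtime(when))
    return "%s/%s/%s" % (backup_id, stamp, os.path.basename(local_path))

def upload_to_b2(cfg, local_path):
    """Upload one encrypted dump file to the configured B2 bucket."""
    from b2sdk.v2 import B2Api, InMemoryAccountInfo  # lazy: only needed at upload time
    api = B2Api(InMemoryAccountInfo())
    api.authorize_account("production",
                          cfg["credentials"]["application_key_id"],
                          cfg["credentials"]["application_key"])
    bucket = api.get_bucket_by_name(cfg["bucket"])
    bucket.upload_local_file(local_file=local_path,
                             file_name=remote_name(cfg["id"], local_path))
```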