
Commit e85d434

update to aws-cli:v2
Parent: 23b086f

5 files changed (+15, −48 lines)

Dockerfile

Lines changed: 5 additions & 15 deletions
@@ -1,35 +1,25 @@
-FROM mohamnag/aws-cli
-MAINTAINER Luca Mattivi <luca@smartdomotik.com>
+FROM amazon/aws-cli:2.15.35
+LABEL Author="Luca Mattivi <luca@smartdomotik.com>"
 
-# change these to fit your need
-RUN apt-get update -q && apt-get install cron --yes
-
-# m h dom mon dow
-ENV BACKUP_CRON_SCHEDULE="* * * * *"
+RUN yum update -y && yum install tar gzip -y
 
 ENV BACKUP_TGT_DIR=/backup/
 ENV BACKUP_SRC_DIR=/data/
-ENV BACKUP_FILE_NAME='host_volumes'
+ENV BACKUP_FILE_NAME='backup'
 
 # bucket/path/to/place/
 ENV BACKUP_S3_BUCKET=
 ENV AWS_DEFAULT_REGION=
 ENV AWS_ACCESS_KEY_ID=
 ENV AWS_SECRET_ACCESS_KEY=
 
-ADD crontab /etc/cron.d/backup-cron
 ADD backup.sh /opt/backup.sh
 ADD restore.sh /opt/restore.sh
-ADD cron.sh /opt/cron.sh
-
-RUN chmod 0644 /etc/cron.d/backup-cron
-# Create the log file to be able to run tail
-RUN touch /var/log/cron.log
 RUN chmod +x /opt/*.sh
 
 VOLUME $BACKUP_TGT_DIR
 VOLUME $BACKUP_SRC_DIR
 
 WORKDIR /opt/
 
-CMD /opt/cron.sh
+ENTRYPOINT ["/opt/backup.sh"]

README.md

Lines changed: 8 additions & 18 deletions
@@ -4,10 +4,6 @@ This avoids to upload multiple backups that are all equals.
 
 You can also exclude one or more directories from the backup just adding an empty file `exclude_dir_from_backup` inside every directory.
 
-Image runs as a cron job by default evey minute. Period may be changed by tuning `BACKUP_CRON_SCHEDULE` environment variable.
-
-May also be run as a one time backup job by using `backup.sh` script as command.
-
 Following environemnt variables should be set for backup to work:
 ```
 BACKUP_S3_BUCKET= // no trailing slash at the end!
@@ -27,7 +23,7 @@ Flowing environment variables can be set to change the functionality:
 BACKUP_CRON_SCHEDULE=* * * * *
 BACKUP_TGT_DIR=/backup/ // always with trailing slash at the end!
 BACKUP_SRC_DIR=/data/ // always with trailing slash at the end!
-BACKUP_FILE_NAME=host_volumes
+BACKUP_FILE_NAME=backup
 ```
 ## Usage
 ### Backup
@@ -38,14 +34,9 @@ If you want to store files on S3 under a subdirectory, just add it to the `BACKU
 
 #### Examples
 
-Mount the dir you want to be backed up on `BACKUP_SRC_DIR` and run image as daemon for periodic backup:
-```
-$ docker run -d -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -v /dir/to/be/backedup/:/data/ mohamnag/s3-dir-backup
-```
-
-or for one time backup (using default values and not keeping the backup archive):
+Mount the dir you want to be backed up on `BACKUP_SRC_DIR` and run image for one time backup (using default values and not keeping the backup archive):
 ```
-$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -v /dir/to/be/backedup/:/data/ mohamnag/s3-dir-backup /opt/backup.sh
+$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -v /dir/to/be/backedup/:/data/ leen15/docker-s3-dir-backup
 ```
 
 ### Restore
@@ -65,25 +56,24 @@ Works exactly like auto restore but container will stop after restoring and ther
 If you know the file path of backup (relative to `BACKUP_S3_BUCKET`) you can use this functionality to restore that specific status. Container will stop after restoring and there will be no future backups.
 
 #### Examples
-To run any of the restore tasks, proper environment variables shall be set and `/opt/restore.sh` shall be run as command.
+To run any of the restore tasks, proper environment variables shall be set and `/opt/restore.sh` shall be run as command.
 
 Restore an specific backup and exit:
 ```
-$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -e RESTORE_FILE_PATH=2016-02-23/2016-02-23-12-00-01.tar.gz -v /dir/to/be/restored/:/data/ mohamnag/s3-dir-backup /opt/restore.sh
+$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -e RESTORE_FILE_PATH=2016-02-23/2016-02-23-12-00-01.tar.gz -v /dir/to/be/restored/:/data/ --entrypoint /opt/restore.sh leen15/docker-s3-dir-backup
 ```
 
 Restore latest backup and exit:
 ```
-$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -v /dir/to/be/restored/:/data/ mohamnag/s3-dir-backup /opt/restore.sh
+$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -v /dir/to/be/restored/:/data/ --entrypoint /opt/restore.sh leen15/docker-s3-dir-backup
 ```
 
 Restoring an specific backup and start scheduled backup:
 ```
-$ docker run -d -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -e RESTORE_FILE_PATH=2016-02-23/2016-02-23-12-00-01.tar.gz -e RESTORE_RESUME_BACKUP=1 -v /dir/to/be/restored/:/data/ mohamnag/s3-dir-backup /opt/restore.sh
+$ docker run -d -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -e RESTORE_FILE_PATH=2016-02-23/2016-02-23-12-00-01.tar.gz -e RESTORE_RESUME_BACKUP=1 -v /dir/to/be/restored/:/data/ --entrypoint /opt/restore.sh leen15/docker-s3-dir-backup
 ```
 
 Restoring latest and starting scheduled backup:
 ```
-$ docker run -d -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -e RESTORE_RESUME_BACKUP=1 -v /dir/to/be/restored/:/data/ mohamnag/s3-dir-backup /opt/restore.sh
+$ docker run -d -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -e RESTORE_RESUME_BACKUP=1 -v /dir/to/be/restored/:/data/ --entrypoint /opt/restore.sh leen15/docker-s3-dir-backup
 ```
-
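The `exclude_dir_from_backup` mechanism described in the README is implemented in `backup.sh` via GNU tar's `--exclude-tag-all` option. A minimal sketch of that behavior (the `/tmp/tagdemo` paths are invented for the demo):

```shell
# Any directory containing a file named exclude_dir_from_backup is
# omitted from the archive entirely, tag file included (GNU tar).
set -e
root=/tmp/tagdemo
rm -rf "$root" /tmp/tagdemo.tar.gz
mkdir -p "$root/keep" "$root/skip"
echo kept    > "$root/keep/file.txt"
echo skipped > "$root/skip/file.txt"
touch "$root/skip/exclude_dir_from_backup"   # the empty tag file
tar -czf /tmp/tagdemo.tar.gz --exclude-tag-all=exclude_dir_from_backup -C "$root" .
tar -tzf /tmp/tagdemo.tar.gz                 # lists ./keep/file.txt, nothing under ./skip/
```

Note GNU tar's related `--exclude-tag` (without `-all`) would still archive the directory entry and the tag file itself; the `-all` variant used here drops the whole directory.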

backup.sh

Lines changed: 2 additions & 2 deletions
@@ -13,7 +13,7 @@ eval "export COMPARE_DST_FULL_PATH=${COMPARE_DIR}${BACKUP_FILE_NAME}.tar.gz"
 BACKUP_DST_DIR=$(dirname "${BACKUP_DST_FULL_PATH}")
 
 mkdir -p ${COMPARE_DIR}
-echo "Gzipping ${BACKUP_SRC_DIR} into ${COMPARE_DST_FULL_PATH}"
+echo "Gzipping ${BACKUP_SRC_DIR} into ${COMPARE_DST_FULL_PATH}"
 tar -czf ${COMPARE_DST_FULL_PATH} --exclude-tag-all=exclude_dir_from_backup -C ${BACKUP_SRC_DIR} .
 
 if cmp -s -i 8 "$BACKUP_DST_FULL_PATH" "$COMPARE_DST_FULL_PATH"
@@ -24,7 +24,7 @@ else
 mkdir -p ${BACKUP_DST_DIR}
 mv "$COMPARE_DST_FULL_PATH" "$BACKUP_DST_FULL_PATH"
 #echo "archive created, uploading..."
-/usr/bin/aws s3 sync ${BACKUP_TGT_DIR} s3://${BACKUP_S3_BUCKET} --region ${AWS_DEFAULT_REGION}
+/usr/local/bin/aws s3 sync ${BACKUP_TGT_DIR} s3://${BACKUP_S3_BUCKET} --region ${AWS_DEFAULT_REGION}
 fi
 
 
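The `cmp -s -i 8` check that survives this change deserves a note: bytes 4-7 of a gzip header hold a modification timestamp (RFC 1952), so two gzips of identical data made at different times differ only there, and skipping the first 8 bytes makes them compare equal. That is how `backup.sh` detects "nothing changed since the last backup" without extracting anything. A throwaway sketch (the `/tmp/cmpdemo*` file names are invented):

```shell
# Re-gzipping unchanged data only changes the 4-byte MTIME field in
# the gzip header; GNU `cmp -i 8` (--ignore-initial) skips past it.
set -e
printf 'same content\n' > /tmp/cmpdemo.txt
gzip -c < /tmp/cmpdemo.txt > /tmp/cmpdemo-a.gz   # stdin input: header stamped with "now"
sleep 1                                          # force a different timestamp
gzip -c < /tmp/cmpdemo.txt > /tmp/cmpdemo-b.gz
cmp -s /tmp/cmpdemo-a.gz /tmp/cmpdemo-b.gz    || echo "raw archives differ"
cmp -s -i 8 /tmp/cmpdemo-a.gz /tmp/cmpdemo-b.gz && echo "equal past the header"
```

`-i` is a GNU cmp flag, which the Amazon Linux base of `amazon/aws-cli` provides; BSD cmp spells the skip as trailing operands instead.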
cron.sh

Lines changed: 0 additions & 11 deletions
This file was deleted.

crontab

Lines changed: 0 additions & 2 deletions
This file was deleted.
