[NSF-1948926](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1948926)
[NSF-2410961](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2410961)

[Lifecycle: Stable](https://www.tidyverse.org/lifecycle/#stable)

# Neotoma Anonymized Backups

This repository generates a container service for Neotoma that copies the [Neotoma Paleoecology Database](https://neotomadb.org) into a Docker container and overwrites sensitive data with random `md5` hashes. A bash script running in the container then uploads the data to a Neotoma AWS S3 bucket, where the snapshot is made publicly available through a URL shared on the Neotoma website.

The compressed file (`neotoma_clean_{DATETIME}.tar.gz`) includes a [bash script](archives/regenbash.sh) that will rebuild the database in a user's local Postgres instance. Currently the bash script runs only on Mac and Linux; there is an experimental [Windows batch script](archives/experimental_win_restore.bat) that should be used with caution.

We welcome user contributions; see the [contributors guide](CONTRIBUTING.md).

## Restoring the Database

The most recent snapshot of the Neotoma Database is always tagged `neotoma_clean_latest` in the compressed file, but the SQL file used to restore the database is named with the date the snapshot was taken. Snapshots are generally taken monthly; if you need a more recent snapshot, please contact the database administrators.
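
As a concrete sketch of the naming convention (assuming, based on the filenames elsewhere in this README, that the date stamp is `YYYY-MM-DD`):

```shell
# Sketch only: how a dated snapshot name relates to the "latest" archive name.
# The YYYY-MM-DD date format is an assumption based on filenames in this README.
SNAPSHOT_DATE=$(date +%Y-%m-%d)
SNAPSHOT_SQL="neotoma_clean_${SNAPSHOT_DATE}.sql"
LATEST_ARCHIVE="neotoma_clean_latest.tar.gz"
echo "Dated SQL dump: ${SNAPSHOT_SQL}"
echo "Stable archive: ${LATEST_ARCHIVE}"
```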

### Postgres Extensions Used

The Docker container uses Postgres 15, and the current RDS database runs PostgreSQL v15.14. The local database requires the following extensions to be installed before you can restore Neotoma locally:

* [pg_trgm](https://www.postgresql.org/docs/current/pgtrgm.html): Helps with full-text searching of publications.
* [intarray](https://www.postgresql.org/docs/current/intarray.html): Helps with indexing and searching arrays of integers.
* [unaccent](https://www.postgresql.org/docs/current/unaccent.html): Helps with searches for terms that may include accents (site names, contact names).
* External: [postgis](https://postgis.net/): Helps manage spatial data.

These extensions improve functionality within the Neotoma Database. The `pg_trgm`, `intarray`, and `unaccent` extensions ship with PostgreSQL; external tools such as `postgis` must be installed on the system before the extension can be created within the Postgres server.

The [regenbash.sh](archives/regenbash.sh) script automates the creation of these extensions within the restored database.
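
The extension setup performed by the script can be sketched as the following SQL, run against the newly created local database (a sketch; the authoritative statements are in the script itself):

```sql
-- Run against the new local "neotoma" database, as a user allowed to create extensions.
-- IF NOT EXISTS makes the statements safe to re-run.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS intarray;
CREATE EXTENSION IF NOT EXISTS unaccent;
CREATE EXTENSION IF NOT EXISTS postgis;  -- requires the PostGIS packages to be installed first
```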

### Restoring from the Cloud

The *most recent* version of the clean database is always uploaded as a `.tar.gz` file to Neotoma S3 cloud storage. You can download it directly using the link below. Note that the download is over 2 GB in size.

[Download `neotoma_clean_latest.tar.gz`](https://neotoma-remote-store.s3.us-east-2.amazonaws.com/neotoma_clean_latest.tar.gz)

Once the file is downloaded, you can extract it locally. The archive contains the following files (the date in the SQL file name may differ):

* `dbsetup.sql`
* `experimental_win_restore.bat`
* `regenbash.sh`
* `neotoma_clean_2025-07-01.sql`

Once you execute `regenbash.sh` (Mac/Linux) or `experimental_win_restore.bat` (Windows), the database will be restored from the SQL file into a local database named `neotoma`, at which point you can work with it from whichever database management tool you prefer.
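
If you would rather not run the script, the restore can also be performed manually. The sketch below is a dry run that only prints the commands it would execute; swap the `echo` in the helper for `"$@"` to run them for real, and substitute your own Postgres username and the dated `.sql` filename from the archive:

```shell
# Dry-run sketch of a manual restore; 'username' and the dated filename are placeholders.
run() { echo "+ $*"; }  # replace 'echo "+ $*"' with '"$@"' to actually execute

run dropdb --if-exists neotoma -h localhost -U username
run createdb neotoma -h localhost -U username
run psql -h localhost -d neotoma -U username -c "CREATE EXTENSION postgis;"
run psql -h localhost -d neotoma -U username -c "CREATE EXTENSION pg_trgm;"
run psql -h localhost -d neotoma -U username -f neotoma_clean_2025-07-01.sql
```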

## AWS Infrastructure

The backup itself is generated through AWS. There are two steps: first, the Docker image is packaged and pushed to ECR; second, a Batch job is initiated, which runs the scripts in the Docker container.

All files, with the exception of files that directly expose secrets, are available in this repository. All secrets are contained in a `parameters.yaml` file in the `./infrastructure` folder. We provide a [`parameters-template.yaml`](./infrastructure/parameters-template.json) file for convenience, so that users can see which key-value pairs are needed for a full implementation of the workflow.

### Docker Configuration

The Docker [configuration file](batch.Dockerfile) sets up a container with PostgreSQL 15 and PostGIS. The container creates a connection to a containerized Postgres database and then uses `pg_dump` to create a plaintext SQL dump of the remote Neotoma database, which is restored within the container. To sanitize the database of sensitive data, we execute the script [`app/scrubbed_database.sh`](app/scrubbed_database.sh); its SQL statements overwrite rows in the Data Stewards and Contacts tables.
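
The scrubbing statements take roughly the following shape (a sketch only; the actual table and column names used in `app/scrubbed_database.sh` may differ):

```sql
-- Illustrative only: overwrite identifying fields with random md5 hashes.
-- The table and column names here are assumptions, not the script's actual targets.
UPDATE ndb.contacts
   SET email   = md5(random()::text),
       address = md5(random()::text);
```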

The Docker container is built and deployed to AWS ECR using the script [`build-and-push.sh`](build-and-push.sh). For this script to work, the user must have the AWS CLI installed and have permissions to access Neotoma AWS services.

### AWS Infrastructure Builder

The scripts [`deploy.sh`](deploy.sh) and [`update.sh`](update.sh) deploy the [Batch Infrastructure](infrastructure/batch-infrastructure.yaml) configuration to CloudFormation, which is then used to define the AWS Batch run when a job is submitted.

Within the infrastructure file there is a defined `ScheduleRule`, which uses the EventBridge [`cron()`](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-scheduled-rule-pattern.html) scheduler to execute the backup snapshot at 2am on the first day of each month. Single instances of the job can also be executed using [`test_job.sh`](test_job.sh).
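
A schedule of this kind corresponds to a CloudFormation rule along the following lines (a sketch; the actual `ScheduleRule` in `batch-infrastructure.yaml` may differ, and note that EventBridge cron expressions are evaluated in UTC):

```yaml
# Sketch of a monthly EventBridge rule; cron fields are:
# minute hour day-of-month month day-of-week year
ScheduleRule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: cron(0 2 1 * ? *)  # 02:00 on day 1 of every month
    State: ENABLED
```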

## Final Overview

With this repository, we implement a monthly backup system using AWS infrastructure to provide Neotoma users with a sanitized version of the database for local use on their own systems.