4 changes: 2 additions & 2 deletions .gitattributes
@@ -1,5 +1,5 @@
#ensure that needed psql.sh script (which is used within the docker container (which is based on unix system)) is correctly set tp LF
/sql/pg_restore.sh text eol=lf
#ensure that *.sh scripts (which are used on docker containers (unix) or directly on unix) are correctly set to LF lineendings
*.sh text eol=lf

#ensure that data dump is not modified by git
/sql/data binary
71 changes: 71 additions & 0 deletions .github/workflows/docker-publish-db-hardrestore.yml
@@ -0,0 +1,71 @@
name: Build and Push Docker image for Database Hardrestore

on:
workflow_dispatch:

permissions:
id-token: write
contents: write
packages: write
attestations: write

jobs:
build-and-push:
runs-on: ubuntu-latest

steps:
- name: Checkout repository
uses: actions/checkout@v6

- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}

- name: prepare docker metadata
uses: docker/metadata-action@v5
id: meta
with:
images: |
ghcr.io/${{ github.repository }}/db-hardrestore
tags: |
type=sha
type=ref,event=tag
type=semver,pattern={{version}}

- name: Build and push image
id: push
uses: docker/build-push-action@v6
with:
context: sql/
file: sql/Dockerfile-hardrestore-without-extensions
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

- name: Get first tag
id: firsttag
uses: actions/github-script@v8
env:
ALLTAGS: ${{ steps.meta.outputs.tags }}
with:
script: |
const firsttag = process.env.ALLTAGS.split('\n')[0]
core.setOutput('firsttag', firsttag)

- name: Generate SBOM
uses: anchore/sbom-action@v0
with:
image: ${{ steps.firsttag.outputs.firsttag }}
format: 'cyclonedx-json'
output-file: 'sbom.cyclonedx.json'

- name: Attest
uses: actions/attest-sbom@v3
id: attest
with:
subject-name: ghcr.io/${{ github.repository }}/db-hardrestore
subject-digest: ${{ steps.push.outputs.digest }}
sbom-path: 'sbom.cyclonedx.json'
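The `Get first tag` step above relies on docker/metadata-action emitting one tag per line and keeps only the first one for the SBOM scan. The same selection can be sketched in plain shell (the tag values are made up for illustration):

```bash
# Newline-separated tag list, in the shape docker/metadata-action outputs
# (hypothetical values):
ALLTAGS='ghcr.io/owner/repo/db-hardrestore:sha-abc123
ghcr.io/owner/repo/db-hardrestore:1.2.3'

# Keep only the first line, like the github-script step's split('\n')[0]:
FIRSTTAG=$(printf '%s\n' "$ALLTAGS" | head -n 1)
echo "$FIRSTTAG"
```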
2 changes: 1 addition & 1 deletion docker-compose.yml
@@ -31,7 +31,7 @@ services:
- postgres_data:/var/lib/postgresql/data # Persistent data

healthcheck:
test: ["CMD-SHELL", "pg_isready -U \"$$POSTGRES_USER\""]
test: ["CMD-SHELL", "pg_isready --username=\"$$POSTGRES_USER\" --dbname=\"$$POSTGRES_DB\""] # specifying user & db does not fail the healthcheck if they don't exist, but adds a log entry
interval: 60s
timeout: 10s
retries: 3
7 changes: 7 additions & 0 deletions sql/Dockerfile-hardrestore-without-extensions
@@ -0,0 +1,7 @@
FROM alpine/psql:15.5

WORKDIR /db_restore/
COPY data /db_restore/
COPY --chmod=777 pg_hardrestore_by_connection_uri.sh /db_restore/

ENTRYPOINT ["/db_restore/pg_hardrestore_by_connection_uri.sh"]
20 changes: 20 additions & 0 deletions sql/README.md
@@ -0,0 +1,20 @@
# Dataset and scripts for docker compose

The docker compose dev setup mounts some scripts and the `data` file into the database container. The postgres db in the container then sets itself up with the default dataset. That dataset is compatible with the GeospatialAnalyzer backend when the backend is configured with the default `topic.json`. The data is restored from the `data` file in this repository, which was created with [`pg_dump`](https://www.postgresql.org/docs/15/app-pgdump.html) and uses the custom postgres backup file format.
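
For reference, a dump in this custom format can be produced roughly like this (the connection string is a placeholder, not taken from this repository):

```bash
# Hypothetical source database; --format=custom writes the custom backup
# format that pg_restore expects.
SRC_DB='postgresql://user:secret@localhost:5432/gsa'
DUMP_CMD="pg_dump --format=custom --file=data --dbname=$SRC_DB"
echo "$DUMP_CMD"   # run the printed command against the source database
```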

# Dockerfile-hardrestore-without-extensions - Hardrestore database utility image

This directory contains a Dockerfile to build a database utility container image. It uses the same default dataset as the docker compose dev setup. It is especially useful for initializing or restoring a database for use with the GeospatialAnalyzer in an orchestrated, isolated environment (e.g. a kubernetes cluster).

This image does not contain a database itself. It takes a [postgres connection string](https://www.postgresql.org/docs/15/libpq-connect.html#LIBPQ-CONNSTRING) pointing to an external database server and restores the default data dump to that database. It will first _wipe clean all relations from the specified database that are also contained in the dump_. Deploy the image as a container in an environment where it can reach the postgres database and set the `POSTGRES_DB_URI` environment variable to the connection string.
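
A minimal run sketch (registry path, tag, and credentials below are placeholders, not values from this repository):

```bash
# Hypothetical connection string; adjust host, credentials, and db name.
POSTGRES_DB_URI="postgresql://gsa_user:secret@db.internal:5432/gsa"

# Deploy the utility image with that URI (illustrative; requires the image
# to be pulled or built first, so it is left commented out here):
# docker run --rm -e POSTGRES_DB_URI="$POSTGRES_DB_URI" ghcr.io/<owner>/<repo>/db-hardrestore:<tag>
echo "$POSTGRES_DB_URI"
```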

The specified database must already have the `postgis` and `postgis_raster` extensions installed. This image can be used to load test data into an existing database, or into one created automatically by other means. It is used this way to test kubernetes deployments with managed postgres clusters, cf. [the kubernetes deployment configuration for the geospatialanalyzer backend](https://github.com/geobakery/gsa-deployment/blob/main/README.md#database-initialization-or-restoration).
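
If the extensions are missing, they can be installed beforehand, for example like this (assuming sufficient privileges on the target database):

```bash
# SQL that enables the two required extensions; run it against the target
# database, e.g. via psql (commented out here because it needs a live server):
SQL='CREATE EXTENSION IF NOT EXISTS postgis; CREATE EXTENSION IF NOT EXISTS postgis_raster;'
# psql --dbname="$POSTGRES_DB_URI" -c "$SQL"
echo "$SQL"
```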

Build command (from the parent folder; remove `sql` from both paths if run in this folder):
```bash
docker buildx build -f sql/Dockerfile-hardrestore-without-extensions -t <your-tag> sql/
```

## Building custom utility images

To supply a custom dataset, fork the repository and swap out the `data` file in this directory before building the utility Dockerfile. You can use any file compatible with [`pg_restore`](https://www.postgresql.org/docs/15/app-pgrestore.html).
26 changes: 26 additions & 0 deletions sql/pg_hardrestore_by_connection_uri.sh
@@ -0,0 +1,26 @@
#!/bin/sh
set -e

waiting_for()
{
if [ $i -eq 6 ]; then
>&2 echo "$1 still not ready. Aborting."
exit 1
fi

>&2 echo "Waiting for $1 ..."
sleep 5
i=$(($i+1))
}

i=0
until pg_isready --dbname="$POSTGRES_DB_URI"; do
waiting_for "Database (pg_ready)"
done

i=0
until psql --dbname="$POSTGRES_DB_URI" -c "\dx" | grep -w postgis; do
waiting_for "postgis extension"
done

pg_restore --dbname="$POSTGRES_DB_URI" --no-owner --clean --if-exists --verbose ./data
5 changes: 3 additions & 2 deletions sql/pg_restore.sh
@@ -4,8 +4,9 @@ set -e
echo "Starting database restore from custom format dump..."

# Restore the database dump
pg_restore -U "postgres" -d "postgres" -v '/docker-entrypoint-initdb.d/data' 2>&1 || {
# use docker image env vars, c.f. https://hub.docker.com/_/postgres#initialization-scripts
pg_restore --username="$POSTGRES_USER" --dbname="$POSTGRES_DB" --no-owner -v '/docker-entrypoint-initdb.d/data' 2>&1 || {
echo "Note: Some warnings are expected during restore (e.g., extensions already exist)"
}

echo "Database restore completed!"
echo "Database restore completed!"