23 changes: 23 additions & 0 deletions .env-k8s-sample
@@ -0,0 +1,23 @@
PROJECT_ID="ncpi-rti-p01-007-ohsu"
REGION="us-central1" # Choose your preferred region
CLUSTER_NAME="my-hapi-fhir-cluster"
NUM_NODES=3 # Adjust as needed

ZONE="<your-gke-cluster-zone>"
NAMESPACE="fhir" # helm namespace
CLOUD_SQL_INSTANCE="<your-cloud-sql-instance-connection-name>"
DATABASE_NAME="<your-database-name>"
DATABASE_USER="<your-database-username>"
DATABASE_PASSWORD="<your-database-password>"
CHART_REPO="https://hapifhir.github.io/hapi-fhir-jpaserver-starter/"
CHART_NAME="hapi-fhir-jpaserver"

# Machine Type for the Cloud SQL (Postgres) instance
DATABASE_TIER="db-custom-1-3840"
DATABASE_VERSION="POSTGRES_14"

# ingress, letsencrypt (NAMESPACE is defined above)
EMAIL="your-email@example.com"
DOMAIN1="hapi.example.com"
DOMAIN2="google-fhir.example.com"
54 changes: 54 additions & 0 deletions README-hapi-k8s.md
@@ -0,0 +1,54 @@
# Deploying HAPI FHIR JPA Server on GKE with Cloud SQL

These scripts deploy the HAPI FHIR JPA server starter on a Google Kubernetes Engine (GKE) cluster, using Cloud SQL for PostgreSQL as the database.

## Prerequisites

* A Google Cloud project with billing enabled.
* A GKE cluster created.
* A Cloud SQL for PostgreSQL instance created.
* A service account with the necessary permissions to access Cloud SQL.
* `kubectl` and `helm` installed and configured.
* The `hapi-fhir-jpaserver` Helm chart added to your Helm repositories. (If using the repo from the examples, make sure it's added.)

Before running:

* Install gcloud: ensure the Google Cloud SDK is installed and configured with the correct credentials for your project (`ncpi-rti-p01-007-ohsu`).
* Enable the Kubernetes Engine API: make sure the Kubernetes Engine API is enabled in your project.
* Choose a region: select an appropriate region (`REGION`) for your cluster, considering latency and availability requirements.
* Node count: adjust `NUM_NODES` based on your application's resource needs. Start small and scale up as needed.

Important considerations:

* Networking: this script uses the default VPC network. For more complex networking scenarios (e.g., private clusters), you'll need to specify additional network parameters.
* Node pools: this script uses cluster autoscaling. For more fine-grained control over node configuration (machine type, etc.), explicitly create node pools with `gcloud container node-pools create`.
* Security: consider adding appropriate security settings (e.g., IP allowlisting, Kubernetes Roles and RBAC) to secure your cluster.

After the cluster-creation script runs successfully, you can authenticate with the newly created cluster using `gcloud container clusters get-credentials`. Review the Google Cloud documentation for best practices and more advanced configuration options.
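For example, using the same variable names as `.env-k8s-sample`:

```shell
# Fetch cluster credentials so kubectl/helm talk to the new cluster.
gcloud container clusters get-credentials "${CLUSTER_NAME}" \
  --region "${REGION}" \
  --project "${PROJECT_ID}"
```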

## Configuration

Before running the scripts, export your project-specific values as environment variables (see `.env-k8s-sample` for the full list):

* `PROJECT_ID`: Your Google Cloud project ID.
* `CLUSTER_NAME`: The name of your GKE cluster.
* `ZONE`: The zone of your GKE cluster.
* `NAMESPACE`: The Kubernetes namespace for the deployment (default: `fhir`).
* `CLOUD_SQL_INSTANCE`: The connection name of your Cloud SQL instance.
* `DATABASE_NAME`: The name of your PostgreSQL database.
* `DATABASE_USER`: The username for your PostgreSQL database.
* `DATABASE_PASSWORD`: The password for your PostgreSQL database.
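One way to load these values, assuming `.env-k8s-sample` has been copied to `.env` and filled in (the fallback file written below is only a stand-in so the snippet runs on its own):

```shell
# Create a stand-in .env only if none exists (normally you copy .env-k8s-sample and edit it).
[ -f .env ] || printf 'PROJECT_ID="demo-project"\nREGION="us-central1"\n' > .env

set -a        # auto-export every variable assigned while sourcing
. ./.env
set +a

echo "Project: ${PROJECT_ID}, region: ${REGION}"
```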


## Deployment

Run the script: `./deploy_hapi_k8s.sh`.
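If you are standing the stack up from scratch, the scripts in this change can be run in roughly this order (a sketch; each script reads its settings from the environment, and `.env` is assumed to be a filled-in copy of `.env-k8s-sample`):

```shell
set -a; . ./.env; set +a            # load settings
./create_hapi_k8s.sh                # 1. create the GKE cluster
./create_hapi_k8s_postgres.sh       # 2. create the Cloud SQL instance, database, and user
./deploy_hapi_k8s.sh                # 3. deploy the HAPI FHIR Helm release
./create_hapi_k8s_ingress.sh        # 4. set up ingress and TLS via cert-manager
```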


## Verification

After the script completes, verify the deployment:

```bash
kubectl get pods -n fhir
```
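To go beyond pod status, you can check the service and smoke-test the FHIR endpoint. This is a hedged sketch: the service name `hapi-fhir-jpaserver` is the chart's default and may differ in your release.

```shell
kubectl get svc -n fhir

# Port-forward and hit the FHIR metadata endpoint; a healthy server returns a CapabilityStatement.
kubectl port-forward svc/hapi-fhir-jpaserver 8080:8080 -n fhir &
PF_PID=$!
sleep 5
curl -s http://localhost:8080/fhir/metadata | head -c 200
kill "$PF_PID"
```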
37 changes: 37 additions & 0 deletions create_hapi_k8s.sh
@@ -0,0 +1,37 @@
#!/bin/bash

# https://bertvv.github.io/cheat-sheets/Bash.html#writing-robust-scripts-and-debugging
set -euo pipefail

# Check if required environment variables are set
# See .env-k8s-sample

# --- Configuration ---
# see .env-k8s-sample file
: "${PROJECT_ID:?Need to set PROJECT_ID}"
: "${REGION:?Need to set REGION}"
: "${CLUSTER_NAME:?Need to set CLUSTER_NAME}"
: "${NUM_NODES:?Need to set NUM_NODES}"


# --- Create GKE Cluster ---

gcloud container clusters create "${CLUSTER_NAME}" \
  --project="${PROJECT_ID}" \
  --region="${REGION}" \
  --enable-autoscaling \
> **@lbeckman314** (Collaborator, Feb 25, 2025): Changed `--cluster-autoscaling` to `--enable-autoscaling` to resolve this error:
>
> ```
> ➜ ./create_hapi_k8s.sh
> ERROR: (gcloud.container.clusters.create) unrecognized arguments: --cluster-autoscaling (did you mean '--enable-autoscaling'?)
> ```
  --min-nodes=1 \
  --max-nodes="${NUM_NODES}"


# --- Check Cluster Status ---

echo "Checking cluster status..."
gcloud container clusters describe "${CLUSTER_NAME}" \
  --project="${PROJECT_ID}" \
  --region="${REGION}"


echo "Cluster creation complete. You can now connect to your cluster using:"
echo "gcloud container clusters get-credentials ${CLUSTER_NAME} --region ${REGION} --project ${PROJECT_ID}"

160 changes: 160 additions & 0 deletions create_hapi_k8s_ingress.sh
@@ -0,0 +1,160 @@
#!/bin/bash

# https://bertvv.github.io/cheat-sheets/Bash.html#writing-robust-scripts-and-debugging
set -euo pipefail

# Check if required environment variables are set
# See .env-k8s-sample

# --- Configuration ---
# see .env-k8s-sample file
: "${NAMESPACE:?Need to set NAMESPACE}"
: "${EMAIL:?Need to set EMAIL}"
: "${DOMAIN1:?Need to set DOMAIN1}"
: "${DOMAIN2:?Need to set DOMAIN2}"

# Create the namespace
kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -  # idempotent: succeeds even if the namespace exists

# Install cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl create namespace cert-manager
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.11.0 --set installCRDs=true

# Create the ClusterIssuer
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: $EMAIL
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
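# (Optional, a hedged addition) Wait until the ClusterIssuer is registered with the
# ACME server before creating the Ingress that references it:
kubectl wait --for=condition=Ready clusterissuer/letsencrypt-prod --timeout=120s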

# Create the Ingress resource
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: $NAMESPACE
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - $DOMAIN1
    - $DOMAIN2
    secretName: tls-secret
  rules:
  - host: $DOMAIN1
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hapi
            port:
              number: 8080
  - host: $DOMAIN2
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: google-fhir
            port:
              number: 8080
EOF

# Create the Service resource for google-fhir-proxy
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: google-fhir-proxy
  namespace: $NAMESPACE
spec:
  selector:
    app: google-fhir-proxy
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
EOF

# Create the ServiceAccount for google-fhir-proxy
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: google-fhir-proxy-sa
  namespace: $NAMESPACE
EOF

# Create the Role for google-fhir-proxy
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: $NAMESPACE
  name: google-fhir-proxy-role
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
EOF

# Create the RoleBinding for google-fhir-proxy
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: google-fhir-proxy-rolebinding
  namespace: $NAMESPACE
subjects:
- kind: ServiceAccount
  name: google-fhir-proxy-sa
  namespace: $NAMESPACE
roleRef:
  kind: Role
  name: google-fhir-proxy-role
  apiGroup: rbac.authorization.k8s.io
EOF

# Create the Deployment for google-fhir-proxy
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: google-fhir-proxy
  namespace: $NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      app: google-fhir-proxy
  template:
    metadata:
      labels:
        app: google-fhir-proxy
    spec:
      serviceAccountName: google-fhir-proxy-sa
      containers:
      - name: google-fhir-proxy
        image: your-google-fhir-proxy-image  # placeholder: replace with your image
        ports:
        - containerPort: 8080
EOF
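# (Optional follow-up, a hedged sketch) Once DNS for $DOMAIN1 and $DOMAIN2 points at the
# ingress controller's external IP, confirm that cert-manager issued the TLS certificate:
kubectl get certificate -n "$NAMESPACE"
kubectl describe ingress my-ingress -n "$NAMESPACE"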
47 changes: 47 additions & 0 deletions create_hapi_k8s_postgres.sh
@@ -0,0 +1,47 @@
#!/bin/bash

# https://bertvv.github.io/cheat-sheets/Bash.html#writing-robust-scripts-and-debugging
set -euo pipefail


# Check if required environment variables are set
# See .env-k8s-sample
: "${CLOUD_SQL_INSTANCE:?Need to set CLOUD_SQL_INSTANCE}"
: "${DATABASE_NAME:?Need to set DATABASE_NAME}"
: "${DATABASE_PASSWORD:?Need to set DATABASE_PASSWORD}"
: "${DATABASE_TIER:?Need to set DATABASE_TIER}"
: "${DATABASE_VERSION:?Need to set DATABASE_VERSION}"

# Possibility of only logging in if user is not already authenticated (avoids browser opening every debugging run)?
> **Collaborator:** This line should be removed or commented out:
>
> ```
> ➜ ./create_hapi_k8s_postgres.sh
> ./create_hapi_k8s_postgres.sh: line 15: syntax error near unexpected token `('
> ```

# 1. Authentication (if necessary)
# Skip logging in if user is already authenticated
# ref: https://stackoverflow.com/a/78138012/7656815
if gcloud projects list &> /dev/null; then
  echo "User is authenticated with gcloud"
else
  echo "Logging in with gcloud auth application-default login..."
  gcloud auth application-default login
fi

# 2. Create Cloud SQL instance
gcloud sql instances create "${CLOUD_SQL_INSTANCE}" \
  --database-version="${DATABASE_VERSION}" \
  --region=us-central1 \
  --tier="${DATABASE_TIER}" \
  --activation-policy=ALWAYS \
  --database-flags="cloudsql.iam_authentication=on"  # Consider Private IP for better security

# 3. Create a database within the instance
gcloud sql databases create "${DATABASE_NAME}" --instance="${CLOUD_SQL_INSTANCE}"

# 4. Create a user (uses DATABASE_USER if set, otherwise hapi_user)
gcloud sql users create "${DATABASE_USER:-hapi_user}" --instance="${CLOUD_SQL_INSTANCE}" --password="${DATABASE_PASSWORD}"

# 5. Connect to the instance and grant privileges
gcloud sql connect "${CLOUD_SQL_INSTANCE}" --user=postgres <<EOF
GRANT ALL PRIVILEGES ON DATABASE ${DATABASE_NAME} TO ${DATABASE_USER:-hapi_user};
EOF

# 6. Check instance status
gcloud sql instances describe "${CLOUD_SQL_INSTANCE}"
46 changes: 46 additions & 0 deletions deploy_hapi_k8s.sh
@@ -0,0 +1,46 @@
#!/bin/bash

# https://bertvv.github.io/cheat-sheets/Bash.html#writing-robust-scripts-and-debugging
set -euo pipefail


# Check if required environment variables are set
# See .env-k8s-sample
: "${PROJECT_ID:?Need to set PROJECT_ID}"
: "${CLUSTER_NAME:?Need to set CLUSTER_NAME}"
: "${ZONE:?Need to set ZONE}"
: "${NAMESPACE:?Need to set NAMESPACE}"
: "${CLOUD_SQL_INSTANCE:?Need to set CLOUD_SQL_INSTANCE}"
: "${DATABASE_NAME:?Need to set DATABASE_NAME}"
: "${DATABASE_USER:?Need to set DATABASE_USER}"
: "${DATABASE_PASSWORD:?Need to set DATABASE_PASSWORD}"
: "${CHART_REPO:?Need to set CHART_REPO}"
: "${CHART_NAME:?Need to set CHART_NAME}"

# --- Functions ---

create_secret() {
  local secret_name="$1"
  local key="$2"
  local value="$3"
  # "|| true" keeps the script idempotent when the secret already exists
  kubectl create secret generic "$secret_name" --from-literal="$key=$value" -n "$NAMESPACE" || true
}

deploy_chart() {
  # Install by chart name from the repository URL (avoids a separate "helm repo add").
  # NOTE: CLOUD_SQL_INSTANCE must resolve as a hostname here (e.g. a private IP or a
  # Cloud SQL Auth Proxy address), not the "project:region:instance" connection name.
  helm install "$CHART_NAME" "$CHART_NAME" --repo "$CHART_REPO" \
    -n "$NAMESPACE" \
    --set spring.datasource.url="jdbc:postgresql://${CLOUD_SQL_INSTANCE}:5432/${DATABASE_NAME}" \
    --set spring.datasource.username="${DATABASE_USER}" \
    --set spring.datasource.password="$(kubectl get secret cloud-sql-credentials -o jsonpath='{.data.password}' -n "$NAMESPACE" | base64 --decode)"
}

# --- Main Script ---

# Create Kubernetes secrets for Cloud SQL credentials
create_secret cloud-sql-credentials password "${DATABASE_PASSWORD}"

# Deploy the Helm chart
deploy_chart

echo "Deployment complete. Check the pods' status using:"
echo "kubectl get pods -n $NAMESPACE"
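# Further checks, echoed in the same style as above (assumes the Helm release name
# doubles as the Deployment name, which may differ for your chart):
echo "helm status ${CHART_NAME} -n ${NAMESPACE}"
echo "kubectl logs deploy/${CHART_NAME} -n ${NAMESPACE} --tail=50"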