I just wanted to write down here for future users how to use AWS DocumentDB in combination with this great container.
DocumentDB uses TLS by default (it can be disabled, but it is industry best practice to keep it enabled).
There is an environment variable, MONGO_TLS, that you can set to enable TLS communication. However, in combination with AWS DocumentDB this does not work out of the box, because AWS uses its own CA to issue the certificates.
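For context, on the container side enabling TLS is just a matter of setting that variable. A minimal sketch of what that could look like; the image name and the other MONGO_* values below are placeholders for whatever your setup uses:

docker run -d \
  -e MONGO_TLS=true \
  -e MONGO_HOST=<your-documentdb-cluster-endpoint> \
  -e MONGO_PORT=27017 \
  <your-linuxserver-image>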
So I'm sharing here how we fixed that, since it took me a bit of messing around to figure it out.
In essence, we need to add the correct CAs to the Java truststore inside the container (the cacerts keystore that keytool manages). There is, however, one caveat: keytool expects a single certificate, not a bundle like the one AWS provides. You need to retrieve the entire bundle, split it into individual certificates, and import the ones you need.
We run this container on AWS, so we also decided to import only the certificates needed for the region we are running in.
I created this script:
#!/bin/bash
# Define variables
URL="https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem"
PEM_FILE="$(mktemp)" # Temporary file for downloaded PEM
CERTS_DIR="$(mktemp -d)" # Temporary directory for split certs
DEFAULT_PASSWORD="changeit"
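# Note: AWS_REGION is assumed to already be set in the container environment (e.g. by ECS or your task definition)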
echo "Using temporary directory: $CERTS_DIR"
# Download the PEM file
echo "Downloading certificates from $URL..."
curl -s -o "$PEM_FILE" "$URL"
if [ $? -ne 0 ]; then
  echo "Error: Failed to download certificate bundle."
  exit 1
fi
echo "Download complete. Splitting certificates to find the ones for $AWS_REGION..."
# Split the PEM file into individual certificates
awk 'BEGIN {c=0;} /-----BEGIN CERTIFICATE-----/ {c++} {print > "'$CERTS_DIR'/cert" c ".pem"}' "$PEM_FILE"
# Iterate through each certificate
for CERT in "$CERTS_DIR"/*.pem; do
  # Extract the Common Name (CN) from the certificate subject
  CN=$(openssl x509 -in "$CERT" -noout -subject | sed -n 's/.*CN = \(.*\)/\1/p')

  # Only process certificates whose CN contains our current region
  if ! echo "$CN" | grep -q "$AWS_REGION"; then
    echo "Skipping certificate $CN (not for region $AWS_REGION)"
    continue
  fi

  # Clean up the CN for use as a filename
  CERT_FILENAME=$(echo "$CN" | tr -d ' ' | tr '/' '_' | tr -d '"')
  CERT_FILE="$CERTS_DIR/$CERT_FILENAME.pem"

  # Rename the certificate file
  mv "$CERT" "$CERT_FILE"

  echo "Importing $CN..."
  # Import into the Java cacerts keystore, using the CN as the alias
  keytool -importcert -cacerts -file "$CERT_FILE" -alias "$CN" -storepass "$DEFAULT_PASSWORD" -noprompt
  if [ $? -eq 0 ]; then
    echo "Successfully imported $CN"
  else
    echo "Failed to import $CN"
  fi
done
echo "All certificates have been imported. Cleaning up temporary files..."
# Cleanup temporary files
rm -rf "$PEM_FILE" "$CERTS_DIR"
echo "Cleanup complete. Process finished."The next step is to ensure that this script runs upon every container start. We can do that according to this documentation:
https://docs.linuxserver.io/general/container-customization/#custom-scripts
You need to make sure a directory named /custom-cont-init.d is mounted into your container, containing the above script, e.g. import-aws-rds-ca.sh.
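With plain Docker, the relevant parts could look something like this (the host path and image name are placeholders; adjust them to your deployment):

docker run -d \
  -e MONGO_TLS=true \
  -v /path/on/host/custom-scripts:/custom-cont-init.d:ro \
  <your-linuxserver-image>

Here /path/on/host/custom-scripts is the directory that contains import-aws-rds-ca.sh.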
The container will start and initially fail because it cannot connect to DocumentDB yet, but it keeps retrying. In the meantime our init script runs, so after a few tries the container starts up just fine.
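If you want to double-check that the certificates were actually imported, one way (the container name is a placeholder, and this assumes the default keystore password from the script) is to list the Java truststore from inside the running container:

docker exec <container-name> keytool -list -cacerts -storepass changeit | grep -i rds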
If you are using AWS in combination with CDK/EFS/ECS/..., we built a Lambda that mounts this same directory and copies the script from our project into it upon deployment.