Description
Provide environment information
This issue pertains to the Helm chart configuration and is independent of a specific local development environment (Node.js, OS, etc.). The relevant environment details are related to the deployment tools:
Helm Chart Version: 4.0.0-beta.15
K8S Version: v1.33.1-eks-595af52
Describe the bug
The Helm chart, in its current state, has several critical bugs and design flaws that prevent a secure and robust deployment when using external services. The following issues are based on commit 1019c9c.
1. Incorrect Value Paths for External Services
The helper templates consistently use incorrect paths for external service configurations. For example, the `trigger-v4.postgres.hostname` helper attempts to access `.Values.postgres.host`, while the correct path in `values.yaml` is `.Values.postgres.external.host`. This bug makes it impossible to configure external PostgreSQL, Redis, or ClickHouse instances without manually patching the templates.
From `_helpers.tpl#L102-L108`:

```
{{- define "trigger-v4.postgres.hostname" -}}
{{- if .Values.postgres.host }}
{{- .Values.postgres.host }}
{{- else if .Values.postgres.deploy }}
{{- printf "%s-postgres" .Release.Name }}
{{- end }}
{{- end }}
```
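One possible fix is to read from the `external` block instead. A minimal sketch, assuming the `postgres.external.host` key shown in `values.yaml` (the equivalent Redis and ClickHouse helpers would need the same treatment):

```
{{- define "trigger-v4.postgres.hostname" -}}
{{- if .Values.postgres.external.host }}
{{- .Values.postgres.external.host }}
{{- else if .Values.postgres.deploy }}
{{- printf "%s-postgres" .Release.Name }}
{{- end }}
{{- end }}
```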
2. Unused External S3 Credentials
When `s3.deploy` is set to `false`, the credentials configured under `s3.external.accessKeyId` and `s3.external.secretAccessKey` in `values.yaml` are ignored. The `webapp` deployment template lacks the logic to create the necessary `S3_ACCESS_KEY_ID` and `S3_SECRET_ACCESS_KEY` environment variables from these values, forcing a manual and error-prone setup using `extraEnvVars`.
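One way to address this would be for the webapp template to emit those variables whenever the internal S3 deployment is disabled. A rough sketch of the env entries only (the surrounding template context and the exact conditional are assumptions):

```yaml
{{- if not .Values.s3.deploy }}
- name: S3_ACCESS_KEY_ID
  value: {{ .Values.s3.external.accessKeyId | quote }}
- name: S3_SECRET_ACCESS_KEY
  value: {{ .Values.s3.external.secretAccessKey | quote }}
{{- end }}
```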
3. Inadequate Handling of Sensitive Information
The chart requires sensitive information, such as passwords for external databases, to be provided as plaintext strings in `values.yaml`. This is a significant security risk. The chart should support referencing Kubernetes secrets (e.g., using a `secretKeyRef` pattern) for all sensitive values.
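For example, the chart could accept a reference to an existing secret and render it into the container environment. A sketch, where `postgres.external.existingSecret` and `existingSecretKey` are hypothetical value names and `DATABASE_PASSWORD` is only illustrative (the chart currently assembles a single `DATABASE_URL`, so that logic would also need adjusting):

```yaml
- name: DATABASE_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ .Values.postgres.external.existingSecret }}
      key: {{ .Values.postgres.external.existingSecretKey | default "password" }}
```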
4. Missing TLS Support for Redis
The `webapp` deployment hardcodes the `REDIS_TLS_DISABLED` environment variable to `"true"`, which prevents connections to any Redis service requiring TLS (such as AWS ElastiCache with in-transit encryption), a standard requirement in production.
From `webapp.yaml#L192-L195`:

```yaml
- name: REDIS_TLS_DISABLED
  # @todo: Support TLS
  value: "true"
```
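A possible fix is to drive this from a value instead of hardcoding it, for example with a hypothetical `redis.external.tls` flag:

```yaml
- name: REDIS_TLS_DISABLED
  value: {{ .Values.redis.external.tls | ternary "false" "true" | quote }}
```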
5. Missing Password Authentication for Redis
The `webapp` deployment does not use the password configured in `values.yaml` (e.g., `redis.external.password`) when connecting to Redis. This makes it impossible to connect to any password-protected Redis instance.
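At a minimum, the rendered connection settings would need to embed the configured password. A sketch for the external case only (the internal-deploy branch is omitted, and a TLS-enabled setup would also need the `rediss://` scheme, per issue 4):

```yaml
- name: REDIS_URL
  value: {{ printf "redis://:%s@%s:%v" .Values.redis.external.password .Values.redis.external.host .Values.redis.external.port | quote }}
```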
6. SSL/TLS Certificate Issues with External PostgreSQL (e.g., AWS RDS)
Connecting to a PostgreSQL database that uses a custom Certificate Authority (CA), such as AWS RDS, fails with an `UNABLE_TO_GET_ISSUER_CERT_LOCALLY` error because the CA is not trusted by the container.
A potential fix could be to implement a standard pattern for Node.js applications (sketched below):
- Add a `postgres.external.caCertSecretRef` field in `values.yaml` to reference a secret containing the CA cert.
- Mount this secret as a volume into the `webapp` pod.
- Set the `NODE_EXTRA_CA_CERTS` environment variable to point to the mounted certificate file.
Note: This is a proposed solution. It would need to be tested to confirm the application stack correctly utilizes the `NODE_EXTRA_CA_CERTS` variable.
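What that could look like in the webapp Deployment template, where the value name `caCertSecretRef`, the mount path, and the `ca.crt` key inside the secret are all illustrative:

```yaml
volumes:
  - name: postgres-ca
    secret:
      secretName: {{ .Values.postgres.external.caCertSecretRef }}
containers:
  - name: webapp
    volumeMounts:
      - name: postgres-ca
        mountPath: /etc/ssl/postgres
        readOnly: true
    env:
      - name: NODE_EXTRA_CA_CERTS
        value: /etc/ssl/postgres/ca.crt
```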
Reproduction repo
To reproduce
The bugs can be reproduced by attempting to configure the Helm chart to use external, managed services.
- Create a `values.yaml` file with the following content, designed to use external services:

  ```yaml
  # values.yaml
  # Disable internal deployments for key services
  postgres:
    deploy: false
    external:
      host: "my-external-rds-host.com"
      port: 5432
      database: "triggerdb"
      username: "user"
      password: "SuperSecretPassword" # Problem: Plaintext
  redis:
    deploy: false
    external:
      host: "my-external-redis.com"
      port: 6379
      password: "AnotherSuperSecretPassword" # Problem: Not used
  s3:
    deploy: false
    external:
      endpoint: "https://s3.us-west-2.amazonaws.com"
      accessKeyId: "AKIAIOSFODNN7EXAMPLE" # Problem: Not used
      secretAccessKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" # Problem: Not used
  # Attempt to enable Redis TLS (Problem: no option exists)
  # Attempt to provide custom CA for Postgres (Problem: no option exists)
  ```
- Run `helm template` to render the Kubernetes manifests with these values:

  ```sh
  helm template triggerdotdev ./hosting/k8s/helm --values ./values.yaml
  ```
- Inspect the generated YAML output for the `webapp` Deployment. You will observe:
  - The `DATABASE_URL` environment variable is not correctly formed or is missing, because the helper template reads from the wrong value path.
  - The `REDIS_URL` does not contain the password.
  - The `REDIS_TLS_DISABLED` variable is hardcoded to `"true"`.
  - No environment variables for S3 credentials (`S3_ACCESS_KEY_ID`, `S3_SECRET_ACCESS_KEY`) are present.
Additional information
The core of these issues is that the Helm chart's logic does not fully or correctly support the "external service" configuration paths outlined in its own `values.yaml`. This makes it unsuitable for production environments where managed services are the norm.