Hi,
I just updated to the latest version of this chart to remove the Bitnami dependencies, and now the connection string for Redis (standalone) isn't configured correctly:
values.yaml
```yaml
config:
  cookieName: "oauth2-proxy"
  existingConfig: oauth2-proxy
  # kics-scan ignore-line
  existingSecret: oauth2-proxy

resources:
  limits:
    cpu: 100m
    memory: 300Mi
  requests:
    cpu: 100m
    memory: 300Mi

metrics:
  serviceMonitor:
    enabled: true

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - auth.${DOMAIN}
  tls:
    - secretName: oauth2-proxy-tls
      hosts:
        - auth.${DOMAIN}

sessionStorage:
  type: redis
  redis:
    existingSecret: oauth2-proxy-redis

# disable wait for redis for now
initContainers:
  waitForRedis:
    enabled: false

# Sub-Chart configuration
redis:
  enabled: true
  replicas: 1
  # configure storage
  redis:
    resources:
      limits:
        cpu: 300m
        memory: 512Mi
      requests:
        cpu: 100m
        memory: 256Mi
  auth: true
  existingSecret: oauth2-proxy-redis
  authKey: redis-password
  exporter:
    enabled: true
    serviceMonitor:
      enabled: true
  prometheusRule:
    enabled: true
    rules:
      - alert: RedisPodDown
        expr: |
          redis_up{job="{{ include "redis-ha.fullname" . }}"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          description: Redis pod {{ "{{ $labels.pod }}" }} is down
          summary: Redis pod {{ "{{ $labels.pod }}" }} is down
  persistentVolume:
    size: 1Gi
```
The Kubernetes manifests then looked like this:
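One way to check what connection string the chart actually renders from these values is to template it locally and search for the Redis host/port; the release name and repo alias below are assumptions for this setup:

```shell
# Render the chart with the values above and look for the Redis host/port
# the templates produce (release name and repo alias are assumptions)
helm template oauth2-proxy oauth2-proxy/oauth2-proxy -f values.yaml | grep -n '6379'
```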
```
NAME                                READY   STATUS             RESTARTS      AGE
pod/oauth2-proxy-587b8fb895-q4h4g   0/1     CrashLoopBackOff   1 (13s ago)   15s
pod/oauth2-proxy-redis-server-0     4/4     Running            0             22m

NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
service/oauth2-proxy                    ClusterIP   10.96.127.91    <none>        80/TCP,44180/TCP              124d
service/oauth2-proxy-redis              ClusterIP   None            <none>        6379/TCP,26379/TCP,9121/TCP   22m
service/oauth2-proxy-redis-announce-0   ClusterIP   10.96.137.32    <none>        6379/TCP,26379/TCP,9121/TCP   22m
service/oauth2-proxy-redis-announce-1   ClusterIP   10.96.115.250   <none>        6379/TCP,26379/TCP,9121/TCP   22m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/oauth2-proxy   0/1     1            0           15s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/oauth2-proxy-587b8fb895   1         1         0       15s

NAME                                         READY   AGE
statefulset.apps/oauth2-proxy-redis-server   1/1     22m
```
And the oauth2-proxy logs (wait-for-redis) showed the following error message:
```
kubectl logs -f oauth2-proxy-7549d9866-6v2w5 -n oauth2-proxy -c wait-for-redis
Checking standalone Redis...
Using standard Redis connection...
Checking Redis at oauth2-proxy-redis-ha:6379... Elapsed time: 0s
Redis is down at oauth2-proxy-redis-ha:6379. Retrying in 5 seconds.
Checking Redis at oauth2-proxy-redis-ha:6379... Elapsed time: 5s
Redis is down at oauth2-proxy-redis-ha:6379. Retrying in 5 seconds.
Checking Redis at oauth2-proxy-redis-ha:6379... Elapsed time: 10s
Redis is down at oauth2-proxy-redis-ha:6379. Retrying in 5 seconds.
```
As we can see, wait-for-redis is looking for a Redis service with an "-ha" suffix and therefore can't connect, because the name of the service that was actually created doesn't contain "-ha".
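To confirm the mismatch, the services that actually exist can be listed and the created one probed directly; the pod name and image below are just examples:

```shell
# List the Redis services that were actually created
kubectl get svc -n oauth2-proxy

# Probe the existing service with a throwaway redis-cli pod
# (pod name and image are examples; with auth enabled, add -a <password>)
kubectl run redis-check --rm -it --restart=Never -n oauth2-proxy \
  --image=redis:7-alpine -- redis-cli -h oauth2-proxy-redis ping
```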
Any idea what is going wrong here?
Workaround that makes it work again:
values.yaml
```yaml
sessionStorage:
  redis:
    standalone:
      connectionUrl: redis://oauth2-proxy-redis:6379
```
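With that override in place the proxy starts again, which can be verified with:

```shell
# Confirm the deployment becomes available after overriding the URL
kubectl -n oauth2-proxy rollout status deployment/oauth2-proxy
kubectl -n oauth2-proxy get pods
```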
Best regards,
Jan