Hello,

EDIT: As I kept testing, I realised that the Supabase image keeps reverting to listening on localhost/127.0.0.1 when SSL full verify is on; it listens on 0.0.0.0 with no SSL. I have tested the behaviour of the standard PG 17 image, and there the container always listens on 0.0.0.0, with or without SSL. Furthermore, the Supabase DB was spitting out a few errors:
```
host: 'hs48k0g8kswkwwkcgg0wsock', port: 5432, hostaddr: '10.0.1.2': connection failed: connection to server at "10.0.1.2", port 5432 failed: root certificate file "/etc/ssl/certs/coolify-ca.crt" does not exist
Either provide the file or change sslmode to disable server certificate verification.
```
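(Side note for anyone landing here: as I understand the libpq docs, only the `sslmode` levels that actually verify the server certificate need a root CA file, so the error above is consistent with `verify-ca`/`verify-full` being set without a valid `sslrootcert`:)

```
sslmode=disable      # no SSL at all
sslmode=require      # SSL, but no certificate verification
sslmode=verify-ca    # verifies the cert chain — needs sslrootcert
sslmode=verify-full  # verify-ca plus hostname check — needs sslrootcert
```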
So it looks like something is wrong/odd with the Supabase image, and it has probably been that way for a while without me realising it.
Apologies if this has already been answered somewhere else; I genuinely couldn't find an answer anywhere. I have been at it for about a day and still can't figure out what's happening.
I have a compose setup inherited from CookieCutter that I use to deploy a Django app, and I want to connect it to a DB deployed via Coolify (Supabase image), not via Compose.
According to this doc: https://coolify.io/docs/knowledge-base/docker/compose#connect-to-predefined-networks I need:

- to use the full name of the DB as it appears in the UI as the host in my connection string, in my case `postgresql-database-hs48k0g8kswkwwkcgg0wsock`
- to switch on the Connect To Predefined Network option in the Django stack. Done.
Using `docker network inspect coolify` I have verified that my DB and my app containers are on the same network, and they are. Although I see that the "Name" of my DB in that output is `hs48k0g8kswkwwkcgg0wsock` and not `postgresql-database-hs48k0g8kswkwwkcgg0wsock`.
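(For anyone else debugging this: a quick way to test reachability from inside the Django container, independent of psycopg/Django, is a plain TCP probe — a minimal sketch, assuming the container name resolves via Docker's embedded DNS:)

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, connection refused, timeout
        return False

# Run this inside the Django container, e.g. via `docker exec -it <django> python`;
# the hostname below is the container Name from my `docker network inspect` output.
print(can_reach("hs48k0g8kswkwwkcgg0wsock", 5432))
```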
Since I created a custom user and a custom DB, I am using a connection string along those lines in my `DATABASE_URL` env var. I have also tried with the Name from the `docker network inspect` output, and of course the standard string as it appears in the UI:
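For illustration, the shape of string I mean (myuser/mypassword/mydb are placeholders, not my real values), plus a quick stdlib sanity check that the host component really is the Coolify name Docker DNS has to resolve:

```python
from urllib.parse import urlsplit

# Placeholder credentials; only the host is the Coolify-assigned name.
url = "postgresql://myuser:mypassword@postgresql-database-hs48k0g8kswkwwkcgg0wsock:5432/mydb"

parts = urlsplit(url)
print(parts.hostname)          # the name Docker's embedded DNS must resolve
print(parts.port)              # 5432
print(parts.path.lstrip("/"))  # the database name
```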
But no dice, I keep getting signs that the Django container can't talk to my Postgres.
```
host: 'hs48k0g8kswkwwkcgg0wsock', port: 5432, hostaddr: '10.0.1.8': connection failed: connection to server at "10.0.1.8", port 5432 failed: Connection refused
```
Now, it looks like Postgres is only listening on localhost, which would explain why other containers can't reach it:
```
2026-02-12 20:53:07.803 UTC [1] LOG:  starting PostgreSQL 17.4 on aarch64-unknown-linux-gnu, compiled by gcc (GCC) 13.2.0, 64-bit
2026-02-12 20:53:07.810 UTC [1] LOG:  listening on IPv6 address "::1", port 5432
2026-02-12 20:53:07.810 UTC [1] LOG:  listening on IPv4 address "127.0.0.1", port 5432
2026-02-12 20:53:07.825 UTC [1] LOG:  listening on Unix socket "/run/postgresql/.s.PGSQL.5432"
```
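For contrast, my understanding is that the stock postgres image writes `listen_addresses = '*'` into `postgresql.conf` at initdb time, which would explain why that image binds 0.0.0.0 in my tests with or without SSL (my reading of the official image's entrypoint, not verified against the Supabase one):

```
# postgresql.conf — what I'd expect for a container reachable from other containers
listen_addresses = '*'
```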
The strange thing is that even when making the DB public, I keep getting:
```
host: 'hs48k0g8kswkwwkcgg0wsock', port: 5432, hostaddr: 'fde4:d481:aa6e::8': connection failed: connection to server at "fde4:d481:aa6e::8", port 5432 failed: Connection refused
	Is the server running on that host and accepting TCP/IP connections?
```
And the proxy logs indicate this:
```
2026/02/13 13:34:18 [error] 30#30: *21 no live upstreams while connecting to upstream, client: 10.0.1.1, server: 0.0.0.0:5432, upstream: "hs48k0g8kswkwwkcgg0wsock", bytes from/to client:0/0, bytes from/to upstream:0/0
```
Could anyone shed some light on this, please? It leaves me completely puzzled.
Thanks 🙏
My compose file for reference, with some of the CookieCutter bits taken out:
```yaml
volumes:
  production_postgres_data: {}
  production_postgres_data_backups: {}
  # production_traefik: {} # commented as not using traefik

services:
  django:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: seo_optimiser_production_django
    # depends_on:
    #   - postgres
    #   - redis
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    command: /start
    expose:
      - "5000" # <== I have added this. It is not part of the original setup

  awscli:
    build:
      context: .
      dockerfile: ./compose/production/aws/Dockerfile
    env_file:
      - ./.envs/.production/.django
    volumes:
      - production_postgres_data_backups:/backups:z

  qcluster:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: seo_optimiser_app:latest
    command: python manage.py qcluster
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    # depends_on:
    #   - postgres
    restart: always
    depends_on:
      - django

  # postgres:
  #   build:
  #     context: .
  #     dockerfile: ./compose/production/postgres/Dockerfile
  #   image: seo_optimiser_production_postgres
  #   volumes:
  #     - production_postgres_data:/var/lib/postgresql/18/docker
  #     - production_postgres_data_backups:/backups
  #   env_file:
  #     - ./.envs/.production/.postgres

  # traefik:
  #   build:
  #     context: .
  #     dockerfile: ./compose/production/traefik/Dockerfile
  #   image: seo_optimiser_production_traefik
  #   depends_on:
  #     - django
  #   volumes:
  #     - production_traefik:/etc/traefik/acme
  #   ports:
  #     - "0.0.0.0:80:80"
  #     - "0.0.0.0:443:443"

  # redis:
  #   image: docker.io/redis:7.2
```
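If I read the doc right, the Connect To Predefined Network toggle should be roughly equivalent to declaring Coolify's network as external and attaching the services to it — my understanding of what Coolify injects, not something I've confirmed in the generated compose:

```yaml
# Sketch of what the toggle should be equivalent to (assumption, not verified)
networks:
  coolify:
    external: true

services:
  django:
    networks:
      - coolify
```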