-
I have searched the existing issues, both open and closed, to make sure this is not a duplicate report.
The bug
The .env file now pins a specific Immich version (1.135.3) in order for it to work, so this was a typical update for me. I believe I wasn't affected by the recent changes, since I don't use a custom IMMICH_MEDIA_LOCATION. However, version v1.136.0 causes the immich_server container to remain permanently in a starting state. In the meantime, I also tried deleting everything and restoring from scratch using the official guide:
The OS that Immich Server is running on
Ubuntu 24.04.2 LTS
Version of Immich Server
v1.136.0
Version of Immich Mobile App
v.1.135.1
Platform with the issue
Your docker-compose.yml content
services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - 2283:2283
    depends_on:
      - redis
      - database
    restart: unless-stopped
    healthcheck:
      disable: false
  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    # extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
    #   file: hwaccel.ml.yml
    #   service: cpu # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: unless-stopped
    healthcheck:
      disable: false
  redis:
    container_name: immich_redis
    image: docker.io/valkey/valkey:8-bookworm@sha256:facc1d2c3462975c34e10fccb167bfa92b0e0dbd992fc282c29a61c3243afb11
    healthcheck:
      test: redis-cli ping || exit 1
    restart: unless-stopped
  database:
    container_name: immich_postgres
    image: ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0@sha256:5f6a838e4e44c8e0e019d0ebfe3ee8952b69afc2809b2c25f7b0119641978e91
    env_file:
      - .env
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    volumes:
      # Do not edit the next line. If you want to change the database storage location on your system, edit the value of DB_DATA_LOCATION in the .env file
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    restart: unless-stopped
volumes:
  model-cache:
  immich_pgdata:
    external: true
Your .env content
# The location where your uploaded files are stored
UPLOAD_LOCATION=/mnt/data/immich-library
# The location where your database files are stored
DB_DATA_LOCATION=immich_pgdata
# The Immich version to use. You can pin this to a specific version like "v1.71.0"
IMMICH_VERSION=v1.135.3
# Connection secret for postgres. You should change it to a random password
DB_PASSWORD=XXXXXXXXX
# The values below this line do not need to be changed
###################################################################################
DB_HOSTNAME=immich_postgres
DB_USERNAME=user
DB_DATABASE_NAME=user
REDIS_HOSTNAME=immich_redis
Reproduction steps
Relevant log output
Starting api worker
Starting microservices worker
[Nest] 7 - 07/26/2025, 4:56:23 PM LOG [Microservices:EventRepository] Initialized websocket server
[Nest] 7 - 07/26/2025, 4:56:23 PM LOG [Microservices:DatabaseRepository] targetLists=1, current=1 for clip_index of 24373 rows
[Nest] 7 - 07/26/2025, 4:56:23 PM LOG [Microservices:DatabaseRepository] targetLists=1, current=1 for face_index of 27807 rows
[Nest] 7 - 07/26/2025, 4:56:23 PM LOG [Microservices:DatabaseRepository] Running migrations, this may take a while
[Nest] 18 - 07/26/2025, 4:56:24 PM LOG [Api:EventRepository] Initialized websocket server
Query failed : {
durationMs: 4.304889000000912,
error: PostgresError: constraint "FK_420bec36fc02813bddf5c8b73d4" for table "asset_job_status" does not exist
at ErrorResponse (/usr/src/app/server/node_modules/postgres/cjs/src/connection.js:794:26)
at handle (/usr/src/app/server/node_modules/postgres/cjs/src/connection.js:480:6)
at Socket.data (/usr/src/app/server/node_modules/postgres/cjs/src/connection.js:315:9)
at Socket.emit (node:events:518:28)
at addChunk (node:internal/streams/readable:561:12)
at readableAddChunkPushByteMode (node:internal/streams/readable:512:3)
at Readable.push (node:internal/streams/readable:392:5)
at TCP.onStreamRead (node:internal/stream_base_commons:189:23) {
severity_local: 'ERROR',
severity: 'ERROR',
code: '42704',
file: 'pg_constraint.c',
line: '872',
routine: 'get_relation_constraint_oid'
},
sql: 'ALTER TABLE "asset_job_status" RENAME CONSTRAINT "FK_420bec36fc02813bddf5c8b73d4" TO "asset_job_status_assetId_fkey";',
params: []
}
[Nest] 7 - 07/26/2025, 4:56:26 PM LOG [Microservices:DatabaseRepository] Migration "1750676477029-AlbumAssetUpdateId" succeeded
[Nest] 7 - 07/26/2025, 4:56:26 PM ERROR [Microservices:DatabaseRepository] Kysely migrations failed: PostgresError: constraint "FK_420bec36fc02813bddf5c8b73d4" for table "asset_job_status" does not exist
[Nest] 7 - 07/26/2025, 4:56:26 PM LOG [Microservices:DatabaseRepository] Migration "1750694237564-AlbumAssetAuditTable" succeeded
[Nest] 7 - 07/26/2025, 4:56:26 PM LOG [Microservices:DatabaseRepository] Migration "1750780093818-AddAlbumToAssetDeleteTrigger" succeeded
[Nest] 7 - 07/26/2025, 4:56:26 PM LOG [Microservices:DatabaseRepository] Migration "1751035357937-MemorySyncChanges" succeeded
[Nest] 7 - 07/26/2025, 4:56:26 PM LOG [Microservices:DatabaseRepository] Migration "1751304834247-StackSyncChanges" succeeded
[Nest] 7 - 07/26/2025, 4:56:26 PM LOG [Microservices:DatabaseRepository] Migration "1751924596408-AddOverrides" succeeded
[Nest] 7 - 07/26/2025, 4:56:26 PM LOG [Microservices:DatabaseRepository] Migration "1752004072340-UpdateIndexOverrides" succeeded
[Nest] 7 - 07/26/2025, 4:56:26 PM LOG [Microservices:DatabaseRepository] Migration "1752152941084-PeopleAuditTable" succeeded
[Nest] 7 - 07/26/2025, 4:56:26 PM LOG [Microservices:DatabaseRepository] Migration "1752161055253-RenameGeodataPKConstraint" succeeded
[Nest] 7 - 07/26/2025, 4:56:26 PM LOG [Microservices:DatabaseRepository] Migration "1752161055254-AddActivityAssetFk" succeeded
[Nest] 7 - 07/26/2025, 4:56:26 PM LOG [Microservices:DatabaseRepository] Migration "1752169992364-AddIsPendingSyncReset" succeeded
[Nest] 7 - 07/26/2025, 4:56:26 PM LOG [Microservices:DatabaseRepository] Migration "1752250924342-UserMetadataSync" succeeded
[Nest] 7 - 07/26/2025, 4:56:26 PM WARN [Microservices:DatabaseRepository] Migration "1752267649968-StandardizeNames" failed
PostgresError: constraint "FK_420bec36fc02813bddf5c8b73d4" for table "asset_job_status" does not exist
at ErrorResponse (/usr/src/app/server/node_modules/postgres/cjs/src/connection.js:794:26)
at handle (/usr/src/app/server/node_modules/postgres/cjs/src/connection.js:480:6)
at Socket.data (/usr/src/app/server/node_modules/postgres/cjs/src/connection.js:315:9)
at Socket.emit (node:events:518:28)
at addChunk (node:internal/streams/readable:561:12)
at readableAddChunkPushByteMode (node:internal/streams/readable:512:3)
at Readable.push (node:internal/streams/readable:392:5)
at TCP.onStreamRead (node:internal/stream_base_commons:189:23) {
severity_local: 'ERROR',
severity: 'ERROR',
code: '42704',
file: 'pg_constraint.c',
line: '872',
routine: 'get_relation_constraint_oid'
}
microservices worker error: PostgresError: constraint "FK_420bec36fc02813bddf5c8b73d4" for table "asset_job_status" does not exist, stack: PostgresError: constraint "FK_420bec36fc02813bddf5c8b73d4" for table "asset_job_status" does not exist
at ErrorResponse (/usr/src/app/server/node_modules/postgres/cjs/src/connection.js:794:26)
at handle (/usr/src/app/server/node_modules/postgres/cjs/src/connection.js:480:6)
at Socket.data (/usr/src/app/server/node_modules/postgres/cjs/src/connection.js:315:9)
at Socket.emit (node:events:518:28)
at addChunk (node:internal/streams/readable:561:12)
at readableAddChunkPushByteMode (node:internal/streams/readable:512:3)
at Readable.push (node:internal/streams/readable:392:5)
at TCP.onStreamRead (node:internal/stream_base_commons:189:23)
microservices worker exited with code 1
Killing api process
Initializing Immich v1.136.0
Detected CPU Cores: 4
Additional information
No response
-
Did you modify the database manually somehow?
-
Nope, didn’t touch the DB manually — don’t even know how to.
-
I have a nearly identical setup and have the same issue. I have also never done any manual intervention to the database.
-
I had the issue yesterday. I wanted to do some more testing today, but the upgrade did not error anymore. I started by rolling back to a ZFS snapshot that I took before starting the upgrade yesterday, so I know the system is in exactly the same state as when it was failing. Can't explain it, but I have an error-free 1.136.0 system now, so nothing to see here.
-
Originally reported in #20152
-
I intentionally chose a clear and simple title (“Immich server stuck at starting state”) so that less experienced users — like myself — can actually find it when searching.
-
Yeah, just wanted to try to help by showing the other incidents as well! Cheers mate.
-
I also have the same root problem, I guess, but slightly different. I updated from 1.135.3 to 1.136.0 on my Proxmox LXC and never touched the database manually. The update itself went through without any issues (at least no error message during the actual update). When the server restarted after the update, the error occurred.
-
Are both the server and the mobile app updated to v1.136.0?
-
Any steps to successfully update Immich to 1.136.0 on TrueNAS?
-
I’m not sure how it will work on TrueNAS, but in my setup I’d try the jobs for Storage Template Migration first, then the Migration from the web admin panel, and finally attempt the update again using the usual CLI commands.
-
And did it update successfully?
-
Meh, just did it with zero luck.
-
@chamdim @deThommy Could you post your Immich DB schema from the current working Immich version? E.g. v1.135.3 is fine. We'd like to see if there is any lingering schema drift or any discrepancies.
Docker command (assuming default user and DB):
docker exec -ti --user postgres immich_postgres pg_dump --schema-only -d immich
It seems in @auberginepop's case there was no drift; perhaps something else, if the update succeeded eventually.
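If it helps, the same dump can be redirected to a file for attaching here. A sketch using the same defaults as the command above; the output filename is arbitrary, and -t is dropped because a pseudo-TTY is not needed when redirecting output and can mangle line endings:
docker exec --user postgres immich_postgres pg_dump --schema-only -d immich > immich_schema.sql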
-
Here is the dump as requested:
BTW, just tested with v1.137.1 — unfortunately, it didn't fix the issue on my end.
-
Looks partial/cropped. It begins with CREATE INDEX and has no CREATE TABLE entries. Could you check and re-upload?
-
What should I check? I ran the following command:
The strange part is that my instance works just fine. Any idea what could be causing this?
-
To all having this error:
You can check the schema drift and generate the fixing SQL statements.
More info in #20530 (comment)
docker exec immich_server sh -c 'DB_URL=postgres://$DB_USERNAME:$DB_PASSWORD@database:5432/$DB_DATABASE_NAME npm run migrations:debug && cat migrations.sql'
Output looks like this:
> immich@1.135.3 migrations:debug
> node ./dist/bin/migrations.js debug
Wrote migrations.sql
-- UP
ALTER TABLE "albums_assets_assets" ADD CONSTRAINT "FK_4bd1303d199f4e72ccdf998c621" FOREIGN KEY ("assetsId") REFERENCES "assets" ("id") ON UPDATE CASCADE ON DELETE CASCADE; -- missing in target
ALTER TABLE "asset_files" ADD CONSTRAINT "FK_e3e103a5f1d8bc8402999286040" FOREIGN KEY ("assetId") REFERENCES "assets" ("id") ON UPDATE CASCADE ON DELETE CASCADE; -- missing in target Generated SQL commands (after Applying the SQL/schema fixBefore applying you can post/share generated SQL commands for review. Have DB backup before applying. When applying:
Which means an extra cleanup step is needed for dangling records, similar to #20530 (comment).
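If it is unclear how to run the reviewed statements, here is a minimal sketch. It assumes the default immich_postgres container and the immich database name from the pg_dump command above; substitute your DB_DATABASE_NAME (e.g. user in the .env at the top) if it differs, and take a DB backup first:
# pipe the reviewed statements into psql inside the database container;
# the ALTER TABLE line is simply the first statement from the sample output above
docker exec -i --user postgres immich_postgres psql -d immich <<'SQL'
ALTER TABLE "albums_assets_assets" ADD CONSTRAINT "FK_4bd1303d199f4e72ccdf998c621" FOREIGN KEY ("assetsId") REFERENCES "assets" ("id") ON UPDATE CASCADE ON DELETE CASCADE;
SQL
If a statement fails because existing rows violate the new foreign key, that is the dangling-records case mentioned above.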
-
@skatsubo you also asked for my schema, you'll find it attached. I also tried updating to the latest version without success, so I rolled back to 1.135.3. I'm running Immich on Proxmox and installed it using the VE helper-script (so no Docker).
-
I also have an issue after upgrades. Deployment: on Proxmox, as LXC, upgraded with the update script.
I have
Specifically, the
A quick google search led me to this webpack issue. Not sure if it's related, but I thought I might link it. Though it doesn't look like a DB issue, I have also followed the DB diagnostics posted above and discovered a minor schema drift with one geo index. I've fixed it. My current migration history is this:
As you can see, this is a fairly fresh installation. The only caveat is that I migrated it from a docker-compose setup a few months ago. But it didn't have the
Any help would be appreciated here :) Also, great work on Immich so far, guys! I really love what you've done, you're rockstars! 🤘
EDIT: Ehh, how is it that as soon as one publishes an issue, they immediately find a solution to it?
-
I'll post my full fault finding and procedure in case it helps anyone else. Thanks to @skatsubo. I only saw the error and the missing constraint.
The log error:
Then I ran:
And the output:
Then I docker exec into the immich_postgres container:
Inside immich_postgres:
Inside immich_postgres:
Inside immich_postgres:
Exit the immich_postgres container with
DONE!
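For anyone following the same route, a hedged sketch of what the psql side of such a cleanup can look like. The DELETE targets rows in asset_job_status whose assetId no longer exists in assets, and the constraint definition is an assumption based on the other generated statements in this thread, so generate and review your own migrations.sql first and keep a database backup:
# run inside the database container; replace the database name with your DB_DATABASE_NAME if it differs
docker exec -i --user postgres immich_postgres psql -d immich <<'SQL'
-- remove dangling job-status rows that reference deleted assets
DELETE FROM asset_job_status s
WHERE NOT EXISTS (SELECT 1 FROM assets a WHERE a.id = s."assetId");
-- re-add the missing foreign key (definition assumed; verify against your generated migrations.sql)
ALTER TABLE asset_job_status
  ADD CONSTRAINT "FK_420bec36fc02813bddf5c8b73d4"
  FOREIGN KEY ("assetId") REFERENCES assets (id) ON UPDATE CASCADE ON DELETE CASCADE;
SQL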
I'll post my full fault finding and procedure in case it helps anyone else. Thanks to @skatsubo. I only saw the error and the missing constraint.
The log error:
[Nest] 7 - 07/26/2025, 4:56:26 PM ERROR [Microservices:DatabaseRepository] Kysely migrations failed: PostgresError: constraint "FK_420bec36fc02813bddf5c8b73d4" for table "asset_job_status" does not exist
Then I ran:
docker exec immich_server sh -c 'DB_URL=postgres://user:password@immich_postgres:5432/user npm run migrations:generate | sed -En "s/Wrote (.*)/\1/p" | xargs sed -n "/function up/,/^}/ s/.*await sql.\(.*\).\.execute.*/\1/p"'
And the output:
ALTER TABLE "asset_job_status" ADD CONSTRAINT "FK_420bec36fc02813bddf5c8b73d4" FOREIG…