
Optimisations (I WON AGAINST THAT BOT HHAHAHAH) #39

Open
dumbutdumber wants to merge 14 commits into p-stream:master from dumbutdumber:master

Conversation

Contributor

@dumbutdumber dumbutdumber commented Mar 1, 2026

Description

This PR fixes/changes a bunch of things in the backend that were unoptimized. See the fixes below for detailed explanations.

DISCLAIMER: AI was used to help make this PR, be it research, identification of bugs, or the fixes themselves. I can guarantee that anything generated by AI has been manually reviewed by me. Note: this was mainly because I found a lot of these fixes beforehand and wrote them down, then forgot where I kept my text file; after finding it, I used an AI agent to do a quick check and a broader pass, after which I came in and double-checked everything again. I also used AI to summarize all the changes I made (most of them are simple 1-2 line changes that just repeat across like 20 files).

Fixes # (issue)

1) PrismaClient singleton fix

Updated Prisma client initialization to reuse a single instance in dev/HMR.

Identified by AI. Fix by me though.
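For readers unfamiliar with the pattern, here is a minimal sketch of a dev/HMR-safe singleton (the real file is server/utils/prisma.ts; the `PrismaClient` class below is a stand-in so the sketch is self-contained, not the actual @prisma/client export):

```typescript
// Stand-in for the real client from @prisma/client, so this sketch runs on
// its own; each construction of the real client opens a new connection pool.
class PrismaClient {}

// In dev, hot module reload re-evaluates modules, but globalThis survives,
// so caching the instance there prevents spawning a new pool per reload.
const globalForPrisma = globalThis as unknown as { prisma?: PrismaClient };

function getPrismaClient(): PrismaClient {
  if (!globalForPrisma.prisma) {
    globalForPrisma.prisma = new PrismaClient();
  }
  return globalForPrisma.prisma;
}

const prisma = getPrismaClient();
```

Every import of the module then shares the one cached instance instead of exploding the connection count.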

2) UUID v7 adoption

Replaced random UUID generation paths with UUIDv7.

  • Why this was updated: Time-ordered UUIDs improve index locality and insertion behavior, which can improve write/index performance at scale.
  • Affected file: multiple files
  • Source: https://www.rfc-editor.org/rfc/rfc9562

^^ The above source is not the one I originally used (I used a blog, but got this link after asking AI because I could not find the blog again).
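For context, here is a minimal illustrative UUIDv7 generator (the PR itself presumably pulls in the `uuidv7` npm package rather than hand-rolling this; the sketch only shows why v7 values sort by creation time, which is what gives the index-locality win):

```typescript
import { randomBytes } from 'node:crypto';

// Minimal UUIDv7: 48-bit big-endian millisecond timestamp, then random bits,
// with the version/variant fields set per RFC 9562.
function uuidv7(timestampMs: number = Date.now()): string {
  const bytes = randomBytes(16);
  let ts = BigInt(timestampMs);
  for (let i = 5; i >= 0; i--) {
    bytes[i] = Number(ts & 0xffn); // timestamp occupies bytes 0..5
    ts >>= 8n;
  }
  bytes[6] = (bytes[6] & 0x0f) | 0x70; // version 7
  bytes[8] = (bytes[8] & 0x3f) | 0x80; // RFC variant
  const hex = bytes.toString('hex');
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}
```

Because the timestamp leads, lexicographic order matches creation order, so new rows land at the right edge of the B-tree instead of at random pages the way v4 ids do.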

3) Added missing DB indexes

Added hash/composite indexes for high-frequency query patterns.

The type of index was selected by AI. I tried my best to confirm these were the correct choices, however I am not the most confident when it comes to index types, so please be mindful.
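As a rough illustration of the two index kinds involved (model and field names here are hypothetical, not copied from the actual schema):

```prisma
model progress_items {
  // ...fields elided...

  // Composite B-tree index for the frequent "all progress for a user,
  // newest first" query shape; also usable for user_id-only lookups.
  @@index([user_id, updated_at])

  // Hash index: compact and fast for pure equality lookups, but unusable
  // for range scans or ORDER BY (PostgreSQL only).
  @@index([tmdb_id], type: Hash)
}
```

The main thing to double-check when reviewing the AI's choices: hash indexes only help equality predicates, so any query that sorts or ranges on the column needs a B-tree instead.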

4) Connection pool configuration

Added explicit pool size and timeout settings for PostgreSQL adapter usage.

  • Why this was updated: Prevents unbounded waits, improves load behavior, and makes connection management simpler.
  • Affected file: server/utils/prisma.ts
  • Source: https://node-postgres.com/apis/pool

This adds a new env called DB_POOL_MAX. It is mainly for the central community DB, so the default is quite high for self-hosters.
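A sketch of how DB_POOL_MAX might feed node-postgres pool options (DB_POOL_MAX comes from this PR's .env.example; the default of 25 and the timeout values below are illustrative, not the PR's actual numbers):

```typescript
// Build pg.Pool options from the environment. The result would be passed as
// `new Pool({ connectionString: env.DATABASE_URL, ...poolOptions(env) })`.
function poolOptions(env: Record<string, string | undefined>) {
  return {
    // Upper bound on open connections; the real default is tuned for the
    // central community DB, so self-hosters may want to lower it.
    max: Number(env.DB_POOL_MAX ?? 25),
    // Fail fast when no connection frees up, instead of waiting unbounded.
    connectionTimeoutMillis: 5_000,
    // Recycle connections that sit idle for too long.
    idleTimeoutMillis: 30_000,
  };
}
```

`max`, `connectionTimeoutMillis`, and `idleTimeoutMillis` are the standard node-postgres `Pool` options for bounding the pool and its waits.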

5) Batched loop-based upserts in transactions

Converted iterative upserts to transactional batches where applicable.

Just general optimization made by me :))
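The shape of the change, roughly: with Prisma, the batched form is `await prisma.$transaction(items.map(i => prisma.model.upsert({ ... })))`, which submits all writes as one atomic transaction. This stand-in (not the PR's code) just counts round-trips to show what the batching buys:

```typescript
// A stub "database" that counts round-trips.
let roundTrips = 0;
async function sendOne(_op: unknown): Promise<void> { roundTrips += 1; }
async function sendBatch(_ops: unknown[]): Promise<void> { roundTrips += 1; }

// Before: one await per item — N sequential round-trips, no atomicity.
async function loopedUpserts(items: unknown[]): Promise<void> {
  for (const item of items) await sendOne(item);
}

// After: all upserts submitted together — one batch here, and in the real
// thing Prisma wraps the statements in a single all-or-nothing transaction.
async function batchedUpserts(items: unknown[]): Promise<void> {
  await sendBatch(items);
}
```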

6) Converted find+create/update to upsert (progress routes)

Replaced conditional write flows with direct upsert() logic.

Just general optimization made by me :))
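The transformation in miniature: the real code calls something like `prisma.progress_items.upsert({ where: { <uniqueKey> }, create: { ... }, update: { ... } })` keyed on a unique constraint. This Map-backed sketch (names hypothetical) just models the semantics:

```typescript
const store = new Map<string, number>();

// Before: read, branch, then write — two queries plus a race window where a
// concurrent request can create the row between the find and the create.
// After: a single upsert keyed on the unique key, atomic at the DB level.
function upsert(
  key: string,
  create: number,
  update: (existing: number) => number,
): number {
  const existing = store.get(key);
  const next = existing === undefined ? create : update(existing);
  store.set(key, next);
  return next;
}
```

One round-trip instead of two, and no time-of-check/time-of-use gap.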

7) Removed redundant queries

Removed unnecessary pre/post reads around mutations where possible.

Just updates a bunch of queries to use better optimizations. Also note that the sources above are not the only ones; I used a bunch, case by case.

8) Added select clauses

Applied narrower select projections for existence checks and payload shaping.

Added select for certain calls, since we were never using all the data we got back from the db, so it's better to just fetch less.
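The shape of the change (model and field names hypothetical): instead of `findUnique({ where: { id } })` pulling the whole row, the query adds `select: { id: true, name: true }` and only the used columns come back. A plain-TS analogue of the projection:

```typescript
// Project a row down to only the fields the caller actually uses — the same
// payload-shrinking effect a Prisma `select` clause has at the query level.
function pick<T extends object, K extends keyof T>(row: T, keys: K[]): Pick<T, K> {
  const out = {} as Pick<T, K>;
  for (const key of keys) out[key] = row[key];
  return out;
}
```

For pure existence checks, `select: { id: true }` is the usual minimal projection.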

9) Fixed user deletion cascade coverage

Expanded user delete transaction to include missing related tables.

We were not deleting rows in certain tables related to the user's ID when a user deleted their account.
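The PR handles this inside the delete transaction; for reference, the declarative alternative is a schema-level cascade (model/field names hypothetical):

```prisma
model sessions {
  id   String @id
  user String
  // With onDelete: Cascade, deleting the users row removes its sessions
  // automatically, so a hand-written delete transaction cannot miss a table.
  users users @relation(fields: [user], references: [id], onDelete: Cascade)
}
```

The transaction approach keeps the behavior explicit in code; the cascade approach moves it into the database so new related tables can't be forgotten.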

10) Enabled relation joins preview support

Enabled relationJoins in Prisma schema to support join-based relation load strategy.

Note there are some ups and downs with this for now. I tried simulating an actual db with 50k users and 200 users sending requests every second, and my implementation won, but in reality it can depend; a future update may be needed in case performance drops.
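For reference, enabling the preview feature looks like this in the generator block, and the strategy can then be chosen per query (the join-vs-query tradeoff is exactly the "it can depend" above: one SQL join versus several simpler queries):

```prisma
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["relationJoins"]
}
```

Per query it is then e.g. `prisma.lists.findUnique({ relationLoadStrategy: 'join', where: { id }, include: { list_items: true } })`, with `'query'` as the fallback strategy.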

11) Idempotency/race-condition hardening

Applied upsert-based patterns to TOCTOU-prone endpoints.

This will force all logged-in devices to be registered as NEW devices when they relog. Note this was the main AI recommendation; I confirmed everything, and though it is very edge-case dependent, I still went ahead, since edge case handling is good.

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update

@qodo-code-review

Review Summary by Qodo

Database and query optimizations for improved performance and connection stability

✨ Enhancement


Walkthroughs

Description
• Replace UUID v4 with UUID v7 for better index locality and write performance
• Add database indexes on high-frequency query patterns (user_id, composite keys)
• Convert iterative upserts to transactional batches for reduced round-trips
• Replace find+create/update patterns with direct upsert operations
• Add select clauses to narrow query results and reduce data transfer
• Implement PrismaClient singleton pattern to prevent connection explosion
• Configure explicit PostgreSQL connection pool with size and timeout settings
• Remove redundant pre/post reads around mutations to reduce database chatter
Diagram
flowchart LR
  A["UUID v4 Generation"] -->|"Replace with"| B["UUID v7 Time-Ordered"]
  C["Iterative Upserts"] -->|"Convert to"| D["Transactional Batches"]
  E["Find + Create/Update"] -->|"Replace with"| F["Direct Upsert"]
  G["Full Row Selects"] -->|"Narrow to"| H["Select Specific Fields"]
  I["Multiple Connections"] -->|"Implement"| J["PrismaClient Singleton"]
  K["Unbounded Pool"] -->|"Configure"| L["Explicit Pool Settings"]
  B -->|"Improves"| M["Index Performance"]
  D -->|"Reduces"| N["DB Round-trips"]
  F -->|"Eliminates"| O["Race Conditions"]
  H -->|"Decreases"| P["Data Transfer"]
  J -->|"Prevents"| Q["Connection Explosion"]
  L -->|"Improves"| R["Load Behavior"]


File Changes

1. server/utils/prisma.ts ✨ Enhancement +8/-2 — PrismaClient singleton and connection pool configuration
2. server/routes/auth/register/complete.ts ✨ Enhancement +2/-2 — Replace randomUUID with uuidv7 for better performance
3. server/routes/auth/login/start/index.ts ✨ Enhancement +1/-0 — Add select clause to narrow query results
4. server/routes/users/@me.ts ✨ Enhancement +8/-0 — Add select clause for specific user fields
5. server/routes/users/[id]/bookmarks.ts ✨ Enhancement +21/-17 — Convert iterative upserts to transactional batch operations
6. server/routes/users/[id]/group-order.ts ✨ Enhancement +2/-2 — Replace randomUUID with uuidv7 generation
7. server/routes/users/[id]/index.ts ✨ Enhancement +25/-1 — Add missing cascade deletes for user data cleanup
8. server/routes/users/[id]/lists/index.post.ts ✨ Enhancement +49/-26 — Add UUID generation and transactional list creation with validation
9. server/routes/users/[id]/lists/index.patch.ts ✨ Enhancement +5/-0 — Add relationLoadStrategy and UUID generation for list items
10. server/routes/users/[id]/progress.ts ✨ Enhancement +37/-51 — Convert find+create/update to upsert with transactional batches
11. server/routes/users/[id]/progress/[tmdb_id]/index.ts ✨ Enhancement +17/-24 — Replace conditional writes with direct upsert operations
12. server/routes/users/[id]/progress/import.ts ✨ Enhancement +50/-49 — Convert iterative upserts to transactional batch with error handling
13. server/routes/users/[id]/sessions.ts ✨ Enhancement +8/-0 — Add select clause to narrow session query results
14. server/routes/sessions/[sid]/index.ts ✨ Enhancement +8/-22 — Remove redundant queries and use update return value directly
15. server/routes/users/[id]/settings.ts ✨ Enhancement +1/-0 — Add select clause to narrow user existence check
16. server/routes/users/[id]/watch-history.ts ✨ Enhancement +13/-1 — Add select clause and narrow query results for watch history
17. server/routes/users/[id]/watch-history/[tmdbid]/index.ts ✨ Enhancement +40/-64 — Convert iterative upserts to transactional batch operations
18. server/utils/auth.ts ✨ Enhancement +19/-8 — Replace randomUUID with uuidv7 and implement session upsert
19. server/utils/challenge.ts ✨ Enhancement +2/-2 — Replace randomUUID with uuidv7 for challenge code generation
20. server/utils/playerStatus.ts Formatting +1/-1 — Simplify type annotation for player status store
21. prisma/schema.prisma ✨ Enhancement +11/-3 — Add database indexes and unique constraints for query optimization
22. prisma/migrations/20260301145729_add_unique_constraints/migration.sql ✨ Enhancement +24/-0 — Create hash indexes and unique constraints for performance
23. package.json Dependencies +6/-1 — Add uuidv7, pg, and related dependencies for optimizations
24. .env.example 📝 Documentation +3/-0 — Document new DB_POOL_MAX environment variable
25. server/routes/lists/[id].get.ts Additional files +1/-0 — ...


@qodo-code-review

qodo-code-review bot commented Mar 1, 2026

Code Review by Qodo

🐞 Bugs (19) 📘 Rule violations (0) 📎 Requirement gaps (0)



Action required

1. List route missing null-check 🐞 Bug ✓ Correctness
Description
The public list GET handler accesses listInfo.public without verifying listInfo exists; if the
list id is invalid, this will throw a runtime TypeError and return a 500 instead of a controlled
404/403. This is a user-visible correctness bug.
Code

server/routes/lists/[id].get.ts[R5-9]

const listInfo = await prisma.lists.findUnique({
+    relationLoadStrategy: 'join',
 where: {
   id: id,
 },
Evidence
findUnique() can return null, but the route immediately dereferences listInfo.public without
guarding, which can crash the handler for missing lists.

server/routes/lists/[id].get.ts[3-23]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`listInfo` may be `null` from `findUnique`, but the code dereferences `listInfo.public` unconditionally. This can cause a 500 error for unknown IDs.
### Issue Context
Public list route should respond with a controlled error (typically 404 for not found, 403 for not public).
### Fix Focus Areas
- server/routes/lists/[id].get.ts[3-20]
### Suggested change
- If `!listInfo`, return/throw a 404.
- Then check `listInfo.public` and return/throw 403 as appropriate.
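A plain-TS sketch of the suggested guard order (the real route would throw `createError` from h3 rather than return status numbers):

```typescript
// Minimal model of the route's decision: guard against null before touching
// .public, so unknown ids get a controlled 404 instead of a TypeError/500.
type ListInfo = { public: boolean } | null;

function listStatus(listInfo: ListInfo): number {
  if (!listInfo) return 404;        // findUnique returned null: not found
  if (!listInfo.public) return 403; // list exists but is private
  return 200;
}
```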

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Over-broad NULL→'\n' migration 🐞 Bug ✓ Correctness
Description
The normalization migration updates season_id/episode_id to the '\n' sentinel for every row with
NULL in those columns, not just movie rows. This can change semantics for non-movie rows that
legitimately omit seasonId/episodeId, and can break future upserts because app code still uses NULL
for non-movies when ids are omitted (leading to duplicates/misclassification).
Code

prisma/migrations/20260301172358_normalize_progress_movie_sentinels/migration.sql[R16-23]

+-- Now convert remaining NULL rows to '\n' (covers both fully-NULL and mixed cases)
+UPDATE "progress_items"
+SET "season_id" = E'\n'
+WHERE "season_id" IS NULL;
+
+UPDATE "progress_items"
+SET "episode_id" = E'\n'
+WHERE "episode_id" IS NULL;
Evidence
The migration rewrites all NULL season_id and episode_id values to the sentinel independently, which
can affect non-movie rows and partial-null rows. But application code uses NULL for non-movies when
seasonId/episodeId are omitted, and the schema allows these columns to be nullable; after migration,
stored '\n' will no longer match future upserts that search with NULL. Additionally, progress
cleanup logic treats episode_id='\n' as a movie item, so non-movie rows rewritten to '\n' risk being
handled as movies in cleanup.

prisma/migrations/20260301172358_normalize_progress_movie_sentinels/migration.sql[16-23]
prisma/migrations/20260301172358_normalize_progress_movie_sentinels/migration.sql[39-46]
prisma/schema.prisma[64-79]
prisma/schema.prisma[152-168]
server/routes/users/[id]/watch-history/[tmdbid]/index.ts[72-75]
server/routes/users/[id]/progress/[tmdb_id]/index.ts[32-35]
server/routes/users/[id]/progress.ts[162-165]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The migration `20260301172358_normalize_progress_movie_sentinels` rewrites `season_id` and `episode_id` from `NULL` to the movie sentinel `E'\n'` for *all* rows where either column is `NULL`. App code still writes `NULL` for non-movie rows when ids are omitted, so after this migration those rows will no longer match future upserts (and may be misclassified as movies in cleanup logic).
### Issue Context
- `season_id`/`episode_id` are nullable in Prisma schema.
- Non-movie handlers normalize missing ids to `null`.
- Cleanup logic treats `episode_id === '\n'` like a movie.
### Fix Focus Areas
- prisma/migrations/20260301172358_normalize_progress_movie_sentinels/migration.sql[16-23]
- prisma/migrations/20260301172358_normalize_progress_movie_sentinels/migration.sql[39-46]
- server/routes/users/[id]/watch-history/[tmdbid]/index.ts[72-75]
- server/routes/users/[id]/progress/[tmdb_id]/index.ts[32-35]
- server/routes/users/[id]/progress.ts[162-165]



3. Session ID rotates on login 🐞 Bug ✓ Correctness
Description
makeSession() upserts by (user, device) but updates the session primary key (id) and
created_at on the update path. This makes session IDs unstable and can invalidate existing tokens
or break session-management operations that reference a session id.
Code

server/utils/auth.ts[R43-54]

+    // Atomic upsert — backed by @@unique([user, device]) in schema
+    return await prisma.sessions.upsert({
+      where: {
+        user_device: { user, device },
+      },
+      update: {
+        id: uuidv7(),
+        user_agent: userAgent,
+        created_at: now,
+        accessed_at: now,
+        expires_at: expiryDate,
+      },
Evidence
The session token embeds sid=session.id and getCurrentSession() looks up/bump-by that sid. If
makeSession() updates id for an existing (user,device) row, any previously issued token for that
same device/user will no longer match a row, and UIs/actions holding an older session id can fail
because the id changed server-side.

server/utils/auth.ts[37-64]
server/utils/auth.ts[67-84]
server/utils/auth.ts[107-134]
prisma/schema.prisma[81-92]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`makeSession()` performs an upsert by `(user, device)` but mutates the session primary key (`id`) and `created_at` when updating an existing row. This destabilizes session identifiers and can invalidate previously issued tokens or break session-management flows.
### Issue Context
- JWT embeds `sid=session.id`.
- `getCurrentSession()` fetches/bump-by that `sid`.
### Fix Focus Areas
- server/utils/auth.ts[37-64]
- server/utils/auth.ts[67-84]
- server/utils/auth.ts[107-134]
- prisma/schema.prisma[81-92]



4. Watch-history PUT masks 400 🐞 Bug ✓ Correctness
Description
The watch-history PUT handler catches all errors (including Zod validation failures) and always
returns a 500. This breaks API semantics and makes client-side input errors look like server
failures.
Code

server/routes/users/[id]/watch-history/[tmdbid]/index.ts[R58-69]

// Accept single object (normal playback) or array (e.g. user import)
const bodySchema = z.union([
  watchHistoryItemSchema,
-        z.array(watchHistoryItemSchema),
+        z.array(watchHistoryItemSchema).max(1000),
]);
const parsed = bodySchema.parse(body);
const items = Array.isArray(parsed) ? parsed : [parsed];
-      const results = [];
-
-      for (const validatedBody of items) {
+      const upsertPromises = items.map(validatedBody => {
  const itemTmdbId = items.length === 1 ? tmdbId : (validatedBody.tmdbId ?? tmdbId);
  const watchedAt = defaultAndCoerceDateTime(validatedBody.watchedAt);
  const now = new Date();
Evidence
The Zod parse() call is inside the same try/catch as DB logic; any parse failure will land in the
catch and be turned into a 500 response.

server/routes/users/[id]/watch-history/[tmdbid]/index.ts[54-135]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The watch-history PUT endpoint wraps request validation and DB writes in one broad try/catch and always throws a 500 on error. This turns invalid input into 500 responses.
### Issue Context
Zod `parse()` throws on invalid input; those should be 4xx errors.
### Fix Focus Areas
- server/routes/users/[id]/watch-history/[tmdbid]/index.ts[54-135]



5. Watch history upsert may duplicate 🐞 Bug ✓ Correctness
Description
Watch-history writes now use a '\n' sentinel for movie season_id/episode_id and upsert on that
composite key; existing rows stored with NULLs (allowed by schema) will not match and can result in
duplicate movie rows. This risks data bloat and inconsistent reads/cleanup behavior over time.
Code

server/routes/users/[id]/watch-history/[tmdbid]/index.ts[R72-77]

const normSeasonId = validatedBody.meta.type === 'movie' ? '\n' : validatedBody.seasonId ?? null;
const normEpisodeId = validatedBody.meta.type === 'movie' ? '\n' : validatedBody.episodeId ?? null;
-        const existingItem = await prisma.watch_history.findUnique({
-          where: {
-            tmdb_id_user_id_season_id_episode_id: {
-              tmdb_id: itemTmdbId,
-              user_id: userId,
-              season_id: normSeasonId,
-              episode_id: normEpisodeId,
-            },
-          },
-        });
-
const data = {
  duration: parseFloat(validatedBody.duration),
  watched: parseFloat(validatedBody.watched),
Evidence
The schema allows watch_history.season_id and episode_id to be NULL while also being part of a
UNIQUE constraint. The PR’s own progress_items migration documents that NULLs don’t behave as equal
under UNIQUE constraints, motivating the sentinel approach—so if watch_history already contains NULL
movie rows, an upsert keyed on '\n' will not update them.

prisma/schema.prisma[152-168]
server/routes/users/[id]/watch-history/[tmdbid]/index.ts[71-105]
prisma/migrations/20260301172358_normalize_progress_movie_sentinels/migration.sql[1-4]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`watch_history` upserts for movies now use `season_id='\n'` and `episode_id='\n'`. If existing movie rows are stored with NULLs (schema allows NULL), the upsert keyed on `'\n'` will not match and will create a second row.
## Issue Context
- `watch_history.season_id` and `episode_id` are nullable but part of a composite UNIQUE.
- The PR already needed a progress_items migration to normalize NULL→'\n' because UNIQUE treats NULLs as distinct.
## Fix Focus Areas
- server/routes/users/[id]/watch-history/[tmdbid]/index.ts[71-105]
- prisma/schema.prisma[152-168]
- prisma/migrations/20260301172358_normalize_progress_movie_sentinels/migration.sql[1-23]
## Suggested change
- Add a new SQL migration for `watch_history` similar to the progress_items normalization:
- Delete duplicates where one row has NULLs and another has `'\n'` for the same `(tmdb_id,user_id)`.
- Update remaining NULL `season_id`/`episode_id` to `'\n'`.
- Ensure any other `watch_history` writers also normalize consistently (movies always store `'\n'` for both IDs).
- Consider whether `'\n'` is the best sentinel vs a more explicit value (but keep consistent across reads/writes).



6. Movie '\n' sentinel breaks import 🐞 Bug ✓ Correctness
Description
Switching movie season_id/episode_id to the '\n' sentinel without normalizing existing NULL rows can
cause progress import to fail: it reuses an existing row’s primary key (id) but changes the
composite unique key used for upsert lookups, causing Prisma to attempt a create that collides on
the primary key.
Code

server/routes/users/[id]/progress.ts[R260-287]

+      const progressItem = await prisma.progress_items.upsert({
where: {
tmdb_id_user_id_season_id_episode_id: {
  tmdb_id: tmdbId,
  user_id: userId,
-            season_id: validatedBody.seasonId || null,
-            episode_id: validatedBody.episodeId || null,
+            season_id: validatedBody.meta.type === 'movie' ? '\n' : validatedBody.seasonId || null,
+            episode_id: validatedBody.meta.type === 'movie' ? '\n' : validatedBody.episodeId || null,
},
},
+        update: {
+          duration: BigInt(validatedBody.duration),
+          watched: BigInt(validatedBody.watched),
+          meta: validatedBody.meta,
+          updated_at: now,
+        },
+        create: {
+          id: uuidv7(),
+          tmdb_id: tmdbId,
+          user_id: userId,
+          season_id: validatedBody.meta.type === 'movie' ? '\n' : validatedBody.seasonId || null,
+          episode_id: validatedBody.meta.type === 'movie' ? '\n' : validatedBody.episodeId || null,
+          season_number: validatedBody.seasonNumber || null,
+          episode_number: validatedBody.episodeNumber || null,
+          duration: BigInt(validatedBody.duration),
+          watched: BigInt(validatedBody.watched),
+          meta: validatedBody.meta,
+          updated_at: now,
+        },
Evidence
The schema allows NULL season_id/episode_id but the new write path uses '\n' for movies. In the
import route, existing rows are re-upserted by the composite unique key
(tmdb_id,user_id,season_id,episode_id) while also reusing the existing row’s id. If an existing
movie row stored NULLs, rewriting the season/episode ids to '\n' makes the upsert lookup miss, so
Prisma will try to create a new row using id: existingItem.id and fail due to duplicate primary
key.

prisma/schema.prisma[64-79]
server/routes/users/[id]/progress.ts[260-287]
server/routes/users/[id]/progress/import.ts[85-154]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Progress import can fail because it rewrites movie rows to use the '\n' sentinel but still upserts by the composite unique key while reusing `existingItem.id`. If the existing DB row has NULL season/episode ids, the composite lookup misses and Prisma attempts `create` with an already-taken primary key.
### Issue Context
- `season_id`/`episode_id` are nullable in Prisma schema.
- New code uses '\n' as a sentinel for movies.
- Import path reuses existing `id` while potentially changing season/episode ids.
### Fix Focus Areas
- server/routes/users/[id]/progress/import.ts[85-154]
- server/routes/users/[id]/progress.ts[260-287]
- prisma/schema.prisma[64-79]
### Suggested fixes (pick one)
1) **Data migration (recommended)**
- Add a SQL migration that updates existing movie rows to set `season_id='\n'` and `episode_id='\n'` where they are NULL.
- Do similarly for `watch_history` if using the same sentinel.
- Ensure your JSON `meta` has a reliable discriminator to identify movies.
2) **Code fix in import**
- For updates to existing rows, run `update({ where: { id: existingItem.id }, data: { ... , season_id: '\n', episode_id: '\n' } })` instead of upsert-by-composite.
- Only use composite upsert for brand-new rows (fresh ids).
3) **Alternative schema strategy**
- Avoid sentinels by restructuring uniqueness (e.g., separate movie vs episode uniqueness), but that’s a larger refactor.



7. Progress movie key mismatch 🐞 Bug ✓ Correctness
Description
Different endpoints write movie progress_items with episode_id/season_id as either NULL or "\n";
with the new upsert in /progress this can create parallel rows for the same movie and also makes
cleanup misclassify movie rows stored with "\n" as episodes.
Code

server/routes/users/[id]/progress.ts[R260-263]

+      const progressItem = await prisma.progress_items.upsert({
where: {
tmdb_id_user_id_season_id_episode_id: {
tmdb_id: tmdbId,
Evidence
The progress cleanup logic treats “movie rows” as those with a falsy episode_id (NULL), but other
endpoints explicitly normalize movies to use the "\n" sentinel (truthy). The updated /progress
upsert uses NULL for missing IDs, so it will not match/update rows created via the sentinel approach
and may create a second row under a different compound key.

server/routes/users/[id]/progress.ts[162-165]
server/routes/users/[id]/progress.ts[260-288]
server/routes/users/[id]/progress/import.ts[117-126]
server/routes/users/[id]/progress/[tmdb_id]/index.ts[32-35]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`progress_items` movie rows are represented inconsistently across endpoints (`NULL` vs `"\n"` for season_id/episode_id). With the new `upsert()` in `/users/[id]/progress.ts`, this can lead to parallel rows for the same logical movie progress record and also causes the cleanup routine to misclassify movie rows stored with `"\n"` as episodes.
### Issue Context
Other progress endpoints already normalize movies to use `"\n"` (and formats convert it back to null/undefined), but `/users/[id]/progress.ts` still writes `NULL`. Cleanup logic assumes movies have a falsy `episode_id`.
### Fix Focus Areas
- server/routes/users/[id]/progress.ts[162-165]
- server/routes/users/[id]/progress.ts[260-288]
- server/routes/users/[id]/progress/import.ts[117-126]
- server/routes/users/[id]/progress/[tmdb_id]/index.ts[32-35]



8. List rename may 500 🐞 Bug ✓ Correctness
Description
With the newly-added unique constraint on lists(user_id,name), renaming a list to an existing name
for that user will raise Prisma P2002, but the PATCH endpoint doesn’t translate it to a 409 (unlike
the POST endpoint). This will surface as a 500 error to clients.
Code

server/routes/users/[id]/lists/index.patch.ts[R64-69]

description:
validatedBody.description !== undefined ? validatedBody.description : list.description,
public: validatedBody.public ?? list.public,
+          updated_at: new Date(),
},
});
Evidence
The schema now enforces uniqueness on (user_id, name). The PATCH route updates name but has no
error handling for unique violations, while the POST route explicitly catches P2002 and returns
409—indicating intended API behavior.

prisma/schema.prisma[44-56]
server/routes/users/[id]/lists/index.patch.ts[54-69]
server/routes/users/[id]/lists/index.post.ts[89-95]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
PATCH /users/[id]/lists can now violate the new unique constraint on (user_id, name) and return an unhandled Prisma P2002 (500) instead of a 409.
### Issue Context
POST already implements both an app-level guard and a P2002 catch to produce a clean 409.
### Fix Focus Areas
- server/routes/users/[id]/lists/index.patch.ts[54-69]
- prisma/schema.prisma[44-56]
### Suggested approach
- Wrap the `$transaction` in a try/catch and map `err.code === 'P2002'` to `createError({statusCode:409,...})`.
- Optionally add an app-level check similar to POST when `validatedBody.name` is provided (check for another list with same name for the user, excluding the current list id).
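A minimal sketch of the error mapping (the error shape here is assumed; Prisma's actual error type is `PrismaClientKnownRequestError`, which carries a `code` field):

```typescript
// Map a caught write error to the HTTP status the route should surface.
// P2002 is Prisma's code for a unique-constraint violation.
function statusForWriteError(err: { code?: string }): number {
  return err.code === 'P2002' ? 409 : 500;
}
```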



9. Wrong upsert where key 🐞 Bug ✓ Correctness
Description
makeSession() uses sessions_user_device_unique as the compound-unique selector, but this appears
inconsistent with how Prisma generates compound selectors (field-name based). This is likely to fail
at compile-time or at runtime when login calls sessions.upsert().
Code

server/utils/auth.ts[R43-47]

+    // Atomic upsert — backed by @@unique([user, device]) in schema
+    return await prisma.sessions.upsert({
+      where: {
+        sessions_user_device_unique: { user, device },
+      },
Evidence
In your schema, the sessions uniqueness is declared with map: "sessions_user_device_unique", but
across the codebase compound-unique selectors are referenced by the concatenated field names (not
the map value). For example, bookmarks uses tmdb_id_user_id in code even though the constraint
is mapped to bookmarks_tmdb_id_user_id_unique, strongly suggesting sessions should use something
like user_device rather than sessions_user_device_unique.

server/utils/auth.ts[37-65]
prisma/schema.prisma[81-92]
prisma/schema.prisma[12-23]
server/routes/users/[id]/bookmarks/[tmdbid]/index.ts[40-43]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`prisma.sessions.upsert()` is using `sessions_user_device_unique` as the compound-unique selector key. Prisma client compound unique selector keys are typically generated from field names (and `map:` only affects the DB constraint name), so this is likely an invalid selector and will break login.
## Issue Context
- `sessions` has `@@unique([user, device], map: "sessions_user_device_unique")`.
- Other code (e.g. bookmarks) uses field-based selector keys like `tmdb_id_user_id` despite the constraint being mapped to a different DB name.
## Fix Focus Areas
- server/utils/auth.ts[43-64]
- prisma/schema.prisma[81-92]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


10. Wrong upsert where key🐞 Bug ✓ Correctness
Description
makeSession() uses sessions_user_device_unique as the compound-unique selector, but this appears
inconsistent with how Prisma generates compound selectors (field-name based). This is likely to fail
at compile-time or at runtime when login calls sessions.upsert().
Code

server/utils/auth.ts[R43-47]

+    // Atomic upsert — backed by @@unique([user, device]) in schema
+    return await prisma.sessions.upsert({
+      where: {
+        sessions_user_device_unique: { user, device },
+      },
Evidence
In your schema, the sessions uniqueness is declared with map: "sessions_user_device_unique", but
across the codebase compound-unique selectors are referenced by the concatenated field names (not
the map value). For example, bookmarks uses tmdb_id_user_id in code even though the constraint
is mapped to bookmarks_tmdb_id_user_id_unique, strongly suggesting sessions should use something
like user_device rather than sessions_user_device_unique.

server/utils/auth.ts[37-65]
prisma/schema.prisma[81-92]
prisma/schema.prisma[12-23]
server/routes/users/[id]/bookmarks/[tmdbid]/index.ts[40-43]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`prisma.sessions.upsert()` is using `sessions_user_device_unique` as the compound-unique selector key. Prisma client compound unique selector keys are typically generated from field names (and `map:` only affects the DB constraint name), so this is likely an invalid selector and will break login.
## Issue Context
- `sessions` has `@@unique([user, device], map: "sessions_user_device_unique")`.
- Other code (e.g. bookmarks) uses field-based selector keys like `tmdb_id_user_id` despite the constraint being mapped to a different DB name.
## Fix Focus Areas
- server/utils/auth.ts[43-64]
- prisma/schema.prisma[81-92]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
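The naming convention the evidence describes can be illustrated with a small sketch. This is a hypothetical helper, not Prisma's actual code generator: by default, the client's compound-unique selector key is the `@@unique` field names joined with underscores, while `map:` only renames the database-level constraint (a custom `name:` on `@@unique` would change the client key, but none is used here).

```typescript
// Hypothetical illustration of Prisma's default selector-key naming:
// the client-side key is the @@unique field names joined with "_";
// the `map:` argument only renames the database constraint.
function defaultSelectorKey(fields: string[]): string {
  return fields.join("_");
}

// sessions: @@unique([user, device], map: "sessions_user_device_unique")
console.log(defaultSelectorKey(["user", "device"])); // user_device

// bookmarks: @@unique([tmdb_id, user_id], map: "bookmarks_tmdb_id_user_id_unique")
console.log(defaultSelectorKey(["tmdb_id", "user_id"])); // tmdb_id_user_id
```

Under that convention, the upsert selector would be `where: { user_device: { user, device } }` rather than the mapped constraint name.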




13. Pool max default 100000🐞 Bug ⛯ Reliability
Description
The pg Pool is configured with a default max of 100000 connections when DB_POOL_MAX is not set.
This can exhaust DB resources and destabilize the app under load or even at startup.
Code

server/utils/prisma.ts[R5-10]

+const pool = new Pool({
connectionString: process.env.DATABASE_URL,
+  max: parseInt(process.env.DB_POOL_MAX || '100000', 10),
+  connectionTimeoutMillis: 10000,
+  idleTimeoutMillis: 300000,
});
Evidence
server/utils/prisma.ts hardcodes a very large default pool size and .env.example documents it as
the default. This is a high-risk default because it applies whenever DB_POOL_MAX is absent.

server/utils/prisma.ts[1-10]
.env.example[25-26]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The current default Postgres connection pool max is set to 100000 when `DB_POOL_MAX` is unset. This default is unsafe and can overwhelm typical Postgres configurations.
## Issue Context
`max` is derived from `parseInt(process.env.DB_POOL_MAX || '100000', 10)`. This means self-hosters who do nothing will still get a 100000 max.
## Fix Focus Areas
- server/utils/prisma.ts[1-10]
- .env.example[25-26]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools




17. Migration may fail deploy 🐞 Bug ⛯ Reliability
Description
The new migration adds unique indexes on lists(user_id,name) and sessions(user,device) and will
fail if the database already contains duplicates. This can block deploys/startup unless you
pre-deduplicate data.
Code

prisma/migrations/20260301145729_add_unique_constraints/migration.sql[R4-22]

+  - A unique constraint covering the columns `[user_id,name]` on the table `lists` will be added. If there are existing duplicate values, this will fail.
+  - A unique constraint covering the columns `[user,device]` on the table `sessions` will be added. If there are existing duplicate values, this will fail.
+
+*/
+-- CreateIndex
+CREATE INDEX "bookmarks_user_id_idx" ON "bookmarks" USING HASH ("user_id");
+
+-- CreateIndex
+CREATE UNIQUE INDEX "lists_user_id_name_unique" ON "lists"("user_id", "name");
+
+-- CreateIndex
+CREATE INDEX "progress_items_user_id_idx" ON "progress_items" USING HASH ("user_id");
+
+-- CreateIndex
+CREATE INDEX "sessions_user_idx" ON "sessions" USING HASH ("user");
+
+-- CreateIndex
+CREATE UNIQUE INDEX "sessions_user_device_unique" ON "sessions"("user", "device");
+
Evidence
The migration explicitly warns about duplicates causing failure and proceeds to create the unique
indexes. Without a prior data cleanup step, running this migration against an existing DB may be a
deployment blocker.

prisma/migrations/20260301145729_add_unique_constraints/migration.sql[1-6]
prisma/migrations/20260301145729_add_unique_constraints/migration.sql[11-22]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Migration introduces new unique constraints that will fail if duplicates already exist, potentially blocking deployment.
## Issue Context
The migration itself warns about failure on duplicates.
## Fix Focus Areas
- prisma/migrations/20260301145729_add_unique_constraints/migration.sql[1-24]
- prisma/schema.prisma[44-56]
- prisma/schema.prisma[81-92]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
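One way to unblock such a deploy is to deduplicate before the unique index is created, e.g. with a cleanup statement at the top of the migration. A hedged sketch, assuming an `id` primary key whose ordering can pick a survivor; the exact columns and keep-policy would need to match the real schema:

```sql
-- Keep one row per (user, device): delete every session that has a
-- "newer" sibling with the same pair. Here "newer" means greater id,
-- which is only time-ordered once ids are UUIDv7 (an assumption).
DELETE FROM "sessions" s
USING "sessions" newer
WHERE s."user"   = newer."user"
  AND s."device" = newer."device"
  AND s."id"     < newer."id";

-- The CREATE UNIQUE INDEX on ("user", "device") can then succeed.
```

The same pattern would apply to `lists(user_id, name)` before its unique index.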





@dumbutdumber
Contributor Author

/agentic_review

@qodo-code-review

qodo-code-review bot commented Mar 1, 2026

Persistent review updated to latest commit a624b5a

@dumbutdumber
Contributor Author

/agentic_review

@qodo-code-review

qodo-code-review bot commented Mar 1, 2026

Persistent review updated to latest commit 5ee0f9c

@dumbutdumber
Contributor Author

/agentic_review

@qodo-code-review

qodo-code-review bot commented Mar 1, 2026

Persistent review updated to latest commit ce98e2f

@dumbutdumber
Contributor Author

/agentic_review

Hopefully final one :))

@qodo-code-review

qodo-code-review bot commented Mar 1, 2026

Persistent review updated to latest commit 47de1a9

@dumbutdumber
Contributor Author

/agentic_review

Final????

@qodo-code-review

qodo-code-review bot commented Mar 1, 2026

Persistent review updated to latest commit 118b663

@dumbutdumber
Contributor Author

The most recent commit fixed something Pas did :(. In like March or something (AI checked this cause I am not going through all those commits), Pas updated import.ts to use \n instead of null but did not do the same for progress. Now Qodo forced me to normalize, which is correct, so I have done it :)).

@dumbutdumber
Contributor Author

/agentic_review

FINAL FINAL REVIEW I GIVE UP AFTER THIS

@qodo-code-review

qodo-code-review bot commented Mar 1, 2026

Persistent review updated to latest commit 16f9905

@dumbutdumber
Contributor Author

/agentic_review

About to crash out

@qodo-code-review

qodo-code-review bot commented Mar 1, 2026

Persistent review updated to latest commit 8e8fd83

@Duplicake-fyi

I know you're fighting with qodo rn but once it's done... do I really need to review this 🫩

@dumbutdumber
Contributor Author

I know you're fighting with qodo rn but once it's done... do I really need to review this 🫩

Uhh, this can be a longer review, since it does break some features.

@dumbutdumber
Contributor Author

/agentic_review

Last?

@qodo-code-review

qodo-code-review bot commented Mar 2, 2026

Persistent review updated to latest commit cd6704e

@dumbutdumber
Contributor Author

/agentic_review

FINAL one

@qodo-code-review

qodo-code-review bot commented Mar 2, 2026

Persistent review updated to latest commit cd6704e

@dumbutdumber
Contributor Author

/agentic_review

@qodo-code-review

qodo-code-review bot commented Mar 2, 2026

Persistent review updated to latest commit 8240079

@dumbutdumber
Contributor Author

All bugs show by Qodo are fixed!!!!

@dumbutdumber
Contributor Author

To the PR checker person thingamajig:

THIS IS A BREAKING PR!!!!!! What will it break?

  1. Same user and device name -> Users cannot use the same device name for 2 devices, otherwise data is overwritten (a frontend check is needed; for renaming, a check is already there).
  2. If current users have the same device name for 2 devices, this PR will FAIL. You have to run a script that changes the name to something else before you merge this PR (on the community backend); otherwise, to make it global, a script can be added to do this automatically on startup from now on.
  3. Responses and saving: new error codes, and the method of saving certain items has been normalized, i.e. changed from null to \n.
  4. A LOT OF THINGS will need to be updated on the frontend due to this PR (have fun Pas 😄).
  5. TAKE YOUR TIME REVIEWING; this PR will likely need a few days to a week to finalize. If you have questions, ping me on Discord OR add a comment here.

@dumbutdumber dumbutdumber changed the title Optimisations(Do not review now lemme fight againt that stupid QUDO) Optimisations(I WON AGAINST THAT BOT HHAHAHAH) Mar 2, 2026
@Pasithea0
Collaborator

Ok, a lot to go through. I did a cursory glance at the changes, but I'll need to take a closer look. Overall this is cleaner, and good work!
However, P-Stream's goal is still to maintain compatibility with legacy frontends (such as sudo-flix).
You mentioned frontend changes will need to be made, but I'll have to investigate what in particular is different; the paths and structure should be consistent with older versions.
I'll check out the branch and test these changes soon.

@dumbutdumber
Contributor Author

Okie, the frontend changes are not really that big, just how it sends or receives certain data. Another check will also be needed for how the new user and device name is handled.

@Pasithea0 Pasithea0 self-assigned this Mar 2, 2026