feat: refactor add bookmark api #623
base: dev
Conversation
… while opening sidepane
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/pages/api/v1/bookmarks/add/tasks/min-data.ts (1)
1-85: File length creeping past 250-line guideline
With the OpenAPI block and full handler logic, this file is ~340+ lines, above the 250-line guideline for JS/TS files. Not urgent now, but it would be good to plan a split if this route gains more responsibilities (e.g., move logging / error-response helpers or the OpenAPI block into nearby utility modules).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cspell.json (3 hunks)
src/pages/api/v1/bookmarks/add/tasks/min-data.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (7)
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/supabase-nextjs-rules.mdc)
**/*.{ts,tsx,js,jsx}: Must use @supabase/ssr package for creating browser and server clients in Next.js Supabase Auth implementation
Browser client must use createBrowserClient from @supabase/ssr with NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY
Server client must use createServerClient from @supabase/ssr with proper cookie handling using getAll and setAll methods only
NEVER use individual cookie methods (get, set, remove) for Supabase client configuration
NEVER import from @supabase/auth-helpers-nextjs as it is deprecated
Server client cookie setAll method must include try-catch error handling as it may be called from Server Components
**/*.{ts,tsx,js,jsx}: Run pnpm fix:eslint <changed-files> to auto-fix all issues in changed TypeScript files
Maximum 250 lines per file; split larger files into modules
Only use Functional Programming; never use class-based code in TypeScript files
Functions should use camelCase naming convention in TypeScript files
Constants should use UPPER_SNAKE_CASE naming convention in TypeScript files
Use fetch API; never use axios or older alternatives in TypeScript/JavaScript files
Never use var; use const/let in TypeScript and JavaScript files
Never use eval() function in TypeScript and JavaScript files
Never use arguments object; use rest parameters or array methods in TypeScript files
Use template literals instead of string concatenation in TypeScript files
Use optional chaining (?.) in TypeScript files
Use for...of loops in TypeScript files instead of traditional for or forEach
Files:
src/pages/api/v1/bookmarks/add/tasks/min-data.ts
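The getAll/setAll cookie rules above can be sketched as a small adapter in the shape createServerClient expects. This is a hedged illustration, not the repo's actual implementation: the in-memory `cookieStore` stands in for the framework's real request/response cookie plumbing, and the names are invented.

```typescript
// Hypothetical in-memory stand-in for the framework's cookie store.
type CookieRecord = { name: string; value: string };

const cookieStore: CookieRecord[] = [{ name: "sb-session", value: "abc" }];

// Adapter in the shape @supabase/ssr's createServerClient expects:
// only getAll and setAll, never individual get/set/remove methods.
export const cookieAdapter = {
	getAll(): CookieRecord[] {
		return cookieStore;
	},
	setAll(cookiesToSet: CookieRecord[]): void {
		try {
			for (const { name, value } of cookiesToSet) {
				const existing = cookieStore.find((cookie) => cookie.name === name);
				if (existing) {
					existing.value = value;
				} else {
					cookieStore.push({ name, value });
				}
			}
		} catch {
			// setAll may be called from a Server Component, where writing
			// cookies throws; the try-catch rule above covers that case.
		}
	},
};

// Usage sketch (assuming @supabase/ssr is installed):
// const supabase = createServerClient(url, publishableKey, { cookies: cookieAdapter });
```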
**/*.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/supabase-use-realtime.mdc)
**/*.{js,ts,jsx,tsx}: Use broadcast for all realtime events (database changes via triggers, messaging, notifications, game state)
Use presence sparingly for user state tracking (online status, user counters)
Use topic names that correlate with concepts and tables: scope:entity (e.g., room:123:messages)
Include unsubscribe/cleanup logic in all implementations
Set private: true for channels using database triggers or RLS policies
Give preference to private channels over public channels (better security and control)
Implement proper error handling and reconnection logic
Do not use postgres_changes for new applications (single-threaded, doesn't scale well); migrate to broadcast from database on existing applications if necessary
Do not create multiple subscriptions without proper cleanup
Do not use database functions (realtime.send, realtime.broadcast_changes) in client code
Use dedicated, granular topics to ensure messages are only sent to relevant listeners and improve scalability
Check if channel state is 'subscribed' before subscribing to prevent multiple subscriptions
Set auth with supabase.realtime.setAuth() before subscribing to private channels
Configure broadcast options: use self: true to receive your own broadcast messages and ack: true for server acknowledgment
Monitor channel states (SUBSCRIBED, TIMED_OUT, CLOSED, CHANNEL_ERROR) and rely on built-in automatic reconnection with exponential backoff
Use one channel per logical scope (e.g., room:123) rather than nested scopes to optimize channel structure
Shard high-volume topics (e.g., chat:shard:1, chat:shard:2) to distribute load and improve scalability
Enable enhanced logging by setting log_level: 'info' in Supabase Realtime client configuration for debugging
Replace postgres_changes with broadcast listeners in migration from postgres_changes to realtime.broadcast_changes
Prefer generating code using broadcast over postgres_changes in all new implementations
Alw...
Files:
src/pages/api/v1/bookmarks/add/tasks/min-data.ts
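Two of the rules above (topic names shaped as scope:entity and guarding against double subscriptions via a channel-state check) can be sketched as small helpers. These functions are illustrative only; they are not part of supabase-js, and the state strings follow the guideline's wording rather than any particular library version.

```typescript
// Build a `scope:entity` topic name, e.g. "room:123:messages".
export const buildTopic = (
	scope: string,
	id: number | string,
	entity: string,
): string => `${scope}:${id}:${entity}`;

// Channel states as described in the guideline above (illustrative union).
type ChannelState = "subscribed" | "joining" | "closed" | "errored";

// Guard against multiple subscriptions: only subscribe when the channel
// is not already subscribed (or in the middle of joining).
export const shouldSubscribe = (state: ChannelState): boolean =>
	state !== "subscribed" && state !== "joining";
```

In real code the state check would read the channel object's current state before calling subscribe, and cleanup would unsubscribe the channel when the owning component unmounts.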
**/*.{js,ts,jsx,tsx,sql}
📄 CodeRabbit inference engine (.cursor/rules/supabase-use-realtime.mdc)
**/*.{js,ts,jsx,tsx,sql}: Use snake_case for event names: entity_action (e.g., message_created)
Do not use generic event names like 'update' or 'change'; use specific event names following the entity_action pattern
Files:
src/pages/api/v1/bookmarks/add/tasks/min-data.ts
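The entity_action naming rule above can be expressed as a tiny validator; this helper is a hypothetical illustration, not part of any tooling in the repo.

```typescript
// Generic verbs the guideline forbids as event names.
const GENERIC_EVENT_NAMES = new Set(["update", "change"]);

// Valid when snake_case with at least two segments (entity_action),
// e.g. "message_created", and not a generic verb.
export const isValidEventName = (name: string): boolean =>
	/^[a-z]+(_[a-z]+)+$/.test(name) && !GENERIC_EVENT_NAMES.has(name);
```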
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use ast-grep --lang ts -p '<pattern>' for finding TypeScript code structure
Files:
src/pages/api/v1/bookmarks/add/tasks/min-data.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Only create abstractions when actually needed in TypeScript code
Prefer clear function/variable names over inline comments in TypeScript code
Avoid helper functions when a simple inline expression suffices in TypeScript code
Don't unnecessarily add try/catch blocks in TypeScript code
Prefer fetching data in RSC (React Server Components); pages can still be static in Next.js
Use next/font and next/script when applicable in Next.js files
Run pnpm lint:types to check for type errors in TypeScript files
Only use named exports; never use default exports in TypeScript files
Prefer type deduction over custom interfaces in TypeScript files
Functions with 2+ parameters should use an interface with a props parameter in TypeScript
Never use any types in TypeScript files
Never use @ts-ignore directives in TypeScript files
Never use CommonJS in TypeScript files; use ES modules exclusively
Never use enums in TypeScript; use union types instead
Never use namespaces in TypeScript; use modules and imports instead
Use type deduction from immediate parent only; never skip to grandparent types in TypeScript
When child props equal parent props, use a type alias (type Child = Parent) instead of redefining in TypeScript
Only export types that are used in other files; check with grep before exporting in TypeScript
Use utility types (Parameters<>, ReturnType<>, Pick<>, Awaited<>) in TypeScript
Files:
src/pages/api/v1/bookmarks/add/tasks/min-data.ts
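Several of the rules above (union types instead of enums, a props interface for functions with 2+ parameters, named exports only, template literals) can be shown together in one small sketch; the names are invented for illustration and do not exist in the codebase.

```typescript
// Union type instead of an enum.
type BookmarkKind = "link" | "image" | "video";

// Functions with 2+ parameters take a single props interface.
interface FormatBookmarkProps {
	kind: BookmarkKind;
	title: string;
}

// Named export only, camelCase name, template literal over concatenation.
export const formatBookmark = ({ kind, title }: FormatBookmarkProps): string =>
	`${kind}: ${title}`;
```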
**/*.{css,ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Mostly use built-in Tailwind values, occasionally allow dynamic values, rarely use globals in CSS files
Files:
src/pages/api/v1/bookmarks/add/tasks/min-data.ts
src/pages/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Organize Next.js pages and routes in the /src/pages directory
Files:
src/pages/api/v1/bookmarks/add/tasks/min-data.ts
🧠 Learnings (4)
📓 Common learnings
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts:174-201
Timestamp: 2025-11-27T07:54:42.092Z
Learning: In the queue consumer at src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts, axios calls to screenshot and remaining data APIs do not include client-side timeouts because these are intentionally long-running operations and the downstream server APIs already have their own timeout configurations. Adding client-side timeouts would prematurely abort legitimate long-running tasks.
📚 Learning: 2025-11-26T10:14:11.932Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/data.tsx:179-185
Timestamp: 2025-11-26T10:14:11.932Z
Learning: In the min-data bookmark API response at src/pages/api/v1/bookmarks/add/tasks/min-data.tsx, bookmarkData.user_id returns a string (UUID) at runtime, not the ProfilesTableTypes object as defined in the TypeScript types. The queue payload in src/pages/api/v1/bookmarks/add/data.tsx correctly uses bookmarkData.user_id directly as a string for the userId field.
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/min-data.ts
📚 Learning: 2025-11-26T10:14:58.025Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/utils/api/bookmark/remaining.ts:374-375
Timestamp: 2025-11-26T10:14:58.025Z
Learning: In src/utils/api/bookmark/remaining.ts, the updateBookmarkWithRemainingData function intentionally matches only on id (not id + user_id) because the bookmark id is unique and authorization is verified at the API handler level before calling this utility function.
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/min-data.ts
📚 Learning: 2025-11-27T07:54:42.092Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts:174-201
Timestamp: 2025-11-27T07:54:42.092Z
Learning: In the queue consumer at src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts, axios calls to screenshot and remaining data APIs do not include client-side timeouts because these are intentionally long-running operations and the downstream server APIs already have their own timeout configurations. Adding client-side timeouts would prematurely abort legitimate long-running tasks.
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/min-data.ts
🧬 Code graph analysis (1)
src/pages/api/v1/bookmarks/add/tasks/min-data.ts (4)
src/types/apiTypes.ts (3)
SingleListData (31-46), NextApiRequest (211-213), AddBookmarkMinDataPayloadTypes (217-221)
src/utils/api/bookmark/add.ts (7)
getBookmarkBodySchema (48-53), computeCategoryId (222-232), isUrlFromPreferredOgSite (281-294), checkIfUserIsCategoryOwnerOrCollaborator (56-109), checkIfBookmarkExists (112-147), scrapeUrlMetadata (150-219), processUrlMetadata (235-278)
src/utils/constants.ts (2)
MAIN_TABLE_NAME (4-4), bookmarkType (219-219)
src/async/supabaseCrudHelpers/index.ts (1)
getMediaType (807-828)
🔇 Additional comments (4)
cspell.json (1)
38-38: Spell-check configuration properly updated for new API and queue infrastructure.
The changes appropriately address the prior feedback by removing the misspelling "catagories" and adding new dictionary entries for terms introduced in this PR:
- Line 38 (APIV): new API-related acronym for the bookmark API refactor.
- Line 96 (pgmq): lowercase variant of the existing PGMQ entry to handle queue system references.
- Line 108 (postgrest): lowercase variant of the existing Postgrest entry to handle library references.
The case variations (uppercase/lowercase pairs) are appropriate for handling both environment variable conventions and tool/library naming conventions in the codebase.
Also applies to: 96-96, 108-108
src/pages/api/v1/bookmarks/add/tasks/min-data.ts (3)
150-247: Access / duplicate checks correctly use new helper contracts
The way you handle checkIfUserIsCategoryOwnerOrCollaborator and checkIfBookmarkExists via CheckResult<boolean> (ok + value) and keep all response sending inside this handler is solid. It removes the previous double-response risk and centralizes status code decisions in one place.
262-288: Minor: user_id insert shape matches runtime behavior despite SingleListData typing
The insert payload uses user_id: userId (a UUID string from Supabase auth), which aligns with the actual column shape even though SingleListData["user_id"] is typed as ProfilesTableTypes. This is consistent with the earlier min-data flow and queue payload behavior. Based on learnings, this preserves the real runtime contract while relying on the existing (slightly over-rich) SingleListData typing.
86-147: Default exports in Pages Router API route handlers are required by Next.js and correct — this file follows the established codebase pattern.
Next.js Pages Router (pages/api/) requires a default export for API route handlers. The current export default async function handler(...) pattern is correct and consistent with all other API route files in the repository (verified in src/pages/api/bookmark/, src/pages/api/category/, etc.).
The guideline "only use named exports" applies to general TypeScript files but does not apply to Next.js API routes in src/pages/api/, which are a special case with distinct requirements. Introducing a dual-export pattern would create unnecessary inconsistency with the rest of the codebase without benefit.
No changes are needed.
// Get URL metadata and process it
console.log("Scraping URL metadata:", { url });
const { scrapperResponse, scraperApiError } = await scrapeUrlMetadata(url);
const { ogImageToBeAdded, iframeAllowedValue } = await processUrlMetadata(
	url,
	isOgImagePreferred,
	scrapperResponse?.data?.OgImage,
);
console.log("URL metadata scraped:", {
	hasTitle: Boolean(scrapperResponse?.data?.title),
	hasOgImage: Boolean(ogImageToBeAdded),
});

// Insert bookmark into database
const {
	data,
	error: databaseError,
}: {
	data: SingleListData[] | null;
	error: PostgrestError | VerifyErrors | string | null;
} = await supabase
	.from(MAIN_TABLE_NAME)
	.insert([
		{
			url,
			title: scrapperResponse?.data?.title,
			user_id: userId,
			description: scrapperResponse?.data?.description,
			ogImage: ogImageToBeAdded,
			category_id: computedCategoryId,
			meta_data: {
				isOgImagePreferred,
				mediaType: await getMediaType(url),
				favIcon: scrapperResponse?.data?.favIcon,
				iframeAllowed: iframeAllowedValue,
			},
			type: bookmarkType,
		},
	])
	.select();

if (isEmpty(data) || isNull(data)) {
	console.error("Data is empty after insert:", {
		url,
		userId,
		categoryId: computedCategoryId,
	});
	Sentry.captureException(new Error("Data is empty after insert"), {
		tags: {
			operation: "insert_bookmark",
			userId,
		},
	});
	sendErrorResponse(response, 400, "Data is empty after insert");
	return;
}

if (!isNull(databaseError)) {
	console.error("Error inserting bookmark:", databaseError);
	Sentry.captureException(databaseError, {
		tags: {
			operation: "insert_bookmark",
			userId,
		},
		extra: {
			url,
			categoryId: computedCategoryId,
		},
	});
	sendErrorResponse(
		response,
		500,
		"Failed to insert bookmark",
		databaseError?.message,
	);
	return;
}
Reorder DB error and empty‑data checks to avoid misclassifying failures
Right now you treat “no data returned” as a 400 before checking databaseError. If Supabase returns { data: null, error: ... }, this will be surfaced as a 400 “Data is empty after insert” instead of a 500 “Failed to insert bookmark”, which makes failures harder to diagnose.
Check databaseError first, then fall back to the empty‑data guard:
- if (isEmpty(data) || isNull(data)) {
- console.error("Data is empty after insert:", {
- url,
- userId,
- categoryId: computedCategoryId,
- });
- Sentry.captureException(new Error("Data is empty after insert"), {
- tags: {
- operation: "insert_bookmark",
- userId,
- },
- });
- sendErrorResponse(response, 400, "Data is empty after insert");
- return;
- }
-
- if (!isNull(databaseError)) {
- console.error("Error inserting bookmark:", databaseError);
- Sentry.captureException(databaseError, {
- tags: {
- operation: "insert_bookmark",
- userId,
- },
- extra: {
- url,
- categoryId: computedCategoryId,
- },
- });
- sendErrorResponse(
- response,
- 500,
- "Failed to insert bookmark",
- databaseError?.message,
- );
- return;
- }
+ if (!isNull(databaseError)) {
+ console.error("Error inserting bookmark:", databaseError);
+ Sentry.captureException(databaseError, {
+ tags: {
+ operation: "insert_bookmark",
+ userId,
+ },
+ extra: {
+ url,
+ categoryId: computedCategoryId,
+ },
+ });
+ sendErrorResponse(
+ response,
+ 500,
+ "Failed to insert bookmark",
+ databaseError?.message,
+ );
+ return;
+ }
+
+ if (isEmpty(data) || isNull(data)) {
+ console.error("Data is empty after insert:", {
+ url,
+ userId,
+ categoryId: computedCategoryId,
+ });
+ Sentry.captureException(new Error("Data is empty after insert"), {
+ tags: {
+ operation: "insert_bookmark",
+ userId,
+ },
+ });
+ sendErrorResponse(response, 400, "Data is empty after insert");
+ return;
+ }🤖 Prompt for AI Agents
In src/pages/api/v1/bookmarks/add/tasks/min-data.ts around lines 249 to 325, the
logic checks for empty insert data before checking databaseError which can
misclassify DB failures as a 400; swap the checks so you first test if
databaseError is non-null and handle it (log, Sentry.captureException, send 500
response and return), then after that check for empty/null data and treat that
as the 400 case; ensure you still include the same contextual tags/extra info in
Sentry and return immediately after sending the error response.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
turbo.json (1 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts:174-201
Timestamp: 2025-11-27T07:54:42.092Z
Learning: In the queue consumer at src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts, axios calls to screenshot and remaining data APIs do not include client-side timeouts because these are intentionally long-running operations and the downstream server APIs already have their own timeout configurations. Adding client-side timeouts would prematurely abort legitimate long-running tasks.
📚 Learning: 2025-11-21T10:36:41.054Z
Learnt from: navin-moorthy
Repo: timelessco/recollect PR: 610
File: src/pages/api/v1/get-gemini-api-key.ts:64-69
Timestamp: 2025-11-21T10:36:41.054Z
Learning: In the recollect codebase, environment variables should be validated at build time via the Zod schema in ./scripts/env/schema.js. The API_KEY_ENCRYPTION_KEY variable was initially missing from this validation schema despite being used in production code.
Applied to files:
turbo.json
📚 Learning: 2025-11-25T08:33:12.981Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-nextjs-rules.mdc:0-0
Timestamp: 2025-11-25T08:33:12.981Z
Learning: Applies to {.env,.env.local,.env.example} : Set up environment variables for NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY
Applied to files:
turbo.json
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Supabase pre-populates the following environment variables in both local and hosted environments: SUPABASE_URL, SUPABASE_PUBLISHABLE_OR_ANON_KEY, SUPABASE_SERVICE_ROLE_KEY, and SUPABASE_DB_URL. Users do not need to manually set them
Applied to files:
turbo.json
Actionable comments posted: 3
♻️ Duplicate comments (1)
src/utils/api/bookmark/add.ts (1)
242-253: Harden computeCategoryId against non‑numeric input (avoid NaN).
Right now Number(categoryId) will produce NaN if a non-numeric string slips through (and it's not one of uncategorizedPages), which then flows into the DB as a category id. Even if callers "shouldn't" send such values, this is a brittle failure mode. Consider guarding the numeric conversion and falling back to 0 for invalid inputs:

-export const computeCategoryId = (
-	updateAccess: boolean,
-	categoryId: number | string | null | undefined,
-): number =>
-	updateAccess === true &&
-	!isNull(categoryId) &&
-	categoryId !== "null" &&
-	categoryId !== 0 &&
-	!uncategorizedPages?.includes(categoryId as string)
-		? Number(categoryId)
-		: 0;
+export const computeCategoryId = (
+	updateAccess: boolean,
+	categoryId: number | string | null | undefined,
+): number => {
+	if (
+		!updateAccess ||
+		isNull(categoryId) ||
+		categoryId === "null" ||
+		categoryId === 0 ||
+		uncategorizedPages?.includes(String(categoryId))
+	) {
+		return 0;
+	}
+
+	const numericId = Number(categoryId);
+	return Number.isFinite(numericId) ? numericId : 0;
+};

This keeps the existing semantics while avoiding NaN leaking into downstream logic.
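The guarded conversion suggested above can be exercised in isolation. In this self-contained sketch, `uncategorizedPages` and lodash's `isNull` are stubbed locally for illustration; the real module imports its own versions.

```typescript
// Local stand-ins for the module's imports, for illustration only.
const uncategorizedPages = ["trash", "uncategorized"];
const isNull = (value: unknown): value is null => value === null;

export const computeCategoryId = (
	updateAccess: boolean,
	categoryId: number | string | null | undefined,
): number => {
	if (
		!updateAccess ||
		isNull(categoryId) ||
		categoryId === undefined ||
		categoryId === "null" ||
		categoryId === 0 ||
		uncategorizedPages.includes(String(categoryId))
	) {
		return 0;
	}

	// Guard the numeric conversion so NaN never reaches the database.
	const numericId = Number(categoryId);
	return Number.isFinite(numericId) ? numericId : 0;
};
```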
🧹 Nitpick comments (4)
src/utils/api/bookmark/screenshot.ts (2)
6-6: Consider using fetch instead of axios in new utilities.
Project guidelines say to prefer the fetch API over axios for TS/JS. Since this is new code, it'd be good to align captureScreenshot and uploadRemainingBookmarkData with fetch (JSON body + headers are straightforward here), unless there's a hard requirement for axios semantics.
If you keep axios for consistency with existing calls, a follow-up cleanup to standardize on fetch across bookmark APIs would be worthwhile.
Also applies to: 83-103, 219-231
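A minimal fetch-based shape for these JSON POST calls might look like the following. The function names and payload are placeholders, and one semantic difference from axios is worth noting: fetch only rejects on network failure, so non-2xx statuses must be checked explicitly.

```typescript
// Self-contained request-init type so the builder is unit-testable
// without DOM lib types.
export interface JsonPostInit {
	body: string;
	headers: Record<string, string>;
	method: "POST";
}

// Build the init for a JSON POST; kept separate from the network call.
export const buildJsonPostInit = (
	payload: unknown,
	token?: string,
): JsonPostInit => ({
	body: JSON.stringify(payload),
	headers: {
		"Content-Type": "application/json",
		...(token ? { Authorization: `Bearer ${token}` } : {}),
	},
	method: "POST",
});

// Hypothetical replacement for an axios.post call (Node 18+ global fetch).
export const postJson = async (url: string, payload: unknown, token?: string) => {
	const response = await fetch(url, buildJsonPostInit(payload, token));
	// fetch resolves on HTTP errors, so check status explicitly.
	if (!response.ok) {
		throw new Error(`Request failed: ${response.status}`);
	}

	return response.json();
};
```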
50-76: Add error handling around base64 decode/upload in uploadScreenshot.
uploadScreenshot currently assumes base64info is always valid. If the screenshot service ever returns malformed data, decode(base64info) could throw and bypass Sentry logging for this operation.
You already have a robust pattern in uploadToR2 (in remaining.ts); consider mirroring it here:

-export const uploadScreenshot = async (
-	base64info: string,
-	uploadUserId: string,
-) => {
-	const imgName = `img-${uniqid?.time()}.jpg`;
-	const storagePath = `${STORAGE_SCREENSHOT_IMAGES_PATH}/${uploadUserId}/${imgName}`;
-
-	const { error: uploadError } = await r2Helpers.uploadObject(
-		R2_MAIN_BUCKET_NAME,
-		storagePath,
-		new Uint8Array(decode(base64info)),
-		"image/jpg",
-	);
-
-	if (uploadError) {
-		console.error("R2 upload failed:", uploadError);
-		Sentry.captureException(uploadError, {
-			tags: { operation: "screenshot_upload_to_r2" },
-			extra: { uploadUserId, storagePath },
-		});
-		return null;
-	}
-
-	const { data: storageData } = r2Helpers.getPublicUrl(storagePath);
-
-	return storageData?.publicUrl ?? null;
-};
+export const uploadScreenshot = async (
+	base64info: string,
+	uploadUserId: string,
+) => {
+	try {
+		const imgName = `img-${uniqid?.time()}.jpg`;
+		const storagePath = `${STORAGE_SCREENSHOT_IMAGES_PATH}/${uploadUserId}/${imgName}`;
+
+		const { error: uploadError } = await r2Helpers.uploadObject(
+			R2_MAIN_BUCKET_NAME,
+			storagePath,
+			new Uint8Array(decode(base64info)),
+			"image/jpeg",
+		);
+
+		if (uploadError) {
+			console.error("R2 upload failed:", uploadError);
+			Sentry.captureException(uploadError, {
+				tags: { operation: "screenshot_upload_to_r2" },
+				extra: { uploadUserId, storagePath },
+			});
+			return null;
+		}
+
+		const { data: storageData } = r2Helpers.getPublicUrl(storagePath);
+		return storageData?.publicUrl ?? null;
+	} catch (error) {
+		console.error("Error uploading screenshot to R2:", error);
+		Sentry.captureException(error, {
+			tags: { operation: "screenshot_upload_to_r2" },
+			extra: { uploadUserId },
+		});
+		return null;
+	}
+};

This both guards against decode failures and uses the canonical JPEG MIME type.
src/utils/api/bookmark/remaining.ts (2)
6-6: Consider standardizing on fetch instead of axios for image downloads.
processAndUploadImageUrl and processAndUploadCoverImage use axios, but project rules prefer the built-in fetch API for TS/JS. These calls are straightforward GET requests for binary data and map well to fetch + response.arrayBuffer() (similar to how blurhashFromURL works).
Refactoring to fetch here (and in new code generally) would align with the guideline and remove an extra dependency from this hot path.
Also applies to: 170-178, 209-216
288-341: Slightly simplify metadata target selection in processBookmarkImages.
The current condition:

if (
	!isNil(imageUrlForMetaDataGeneration) ||
	!isNil(ogImageMetaDataGeneration)
) {
	metadata = await generateImageMetadata(
		currentData?.meta_data?.isOgImagePreferred
			? ogImageMetaDataGeneration
			: imageUrlForMetaDataGeneration,
		supabase,
		userId,
	);
}

can still call generateImageMetadata with null when the other URL is present and isOgImagePreferred disagrees, leading to an extra no-op call.
A small cleanup:

-	const imageUrlForMetaDataGeneration = isUrlAnImageResult
-		? uploadedImageThatIsAUrl
-		: (currentData?.meta_data?.screenshot ?? uploadedCoverImageUrl);
-
-	let metadata = null;
-	if (
-		!isNil(imageUrlForMetaDataGeneration) ||
-		!isNil(ogImageMetaDataGeneration)
-	) {
-		metadata = await generateImageMetadata(
-			currentData?.meta_data?.isOgImagePreferred
-				? ogImageMetaDataGeneration
-				: imageUrlForMetaDataGeneration,
-			supabase,
-			userId,
-		);
-	}
+	const imageUrlForMetaDataGeneration = isUrlAnImageResult
+		? uploadedImageThatIsAUrl
+		: currentData?.meta_data?.screenshot ?? uploadedCoverImageUrl;
+
+	const chosenImageUrl =
+		currentData?.meta_data?.isOgImagePreferred === true
+			? ogImageMetaDataGeneration
+			: imageUrlForMetaDataGeneration;
+
+	let metadata = null;
+	if (!isNil(chosenImageUrl)) {
+		metadata = await generateImageMetadata(chosenImageUrl, supabase, userId);
+	}

avoids the redundant call and keeps the intent clear.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
src/utils/api/bookmark/add.ts (1 hunks)
src/utils/api/bookmark/remaining.ts (1 hunks)
src/utils/api/bookmark/screenshot.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (7)
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/supabase-nextjs-rules.mdc)
**/*.{ts,tsx,js,jsx}: Must use@supabase/ssrpackage for creating browser and server clients in Next.js Supabase Auth implementation
Browser client must usecreateBrowserClientfrom@supabase/ssrwith NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY
Server client must usecreateServerClientfrom@supabase/ssrwith proper cookie handling usinggetAllandsetAllmethods only
NEVER use individual cookie methods (get,set,remove) for Supabase client configuration
NEVER import from@supabase/auth-helpers-nextjsas it is deprecated
Server client cookiesetAllmethod must include try-catch error handling as it may be called from Server Components
**/*.{ts,tsx,js,jsx}: Runpnpm fix:eslint <changed-files>to auto-fix all issues in changed TypeScript files
Maximum 250 lines per file; split larger files into modules
Only use Functional Programming; never use class-based code in TypeScript files
Functions should usecamelCasenaming convention in TypeScript files
Constants should useUPPER_SNAKE_CASEnaming convention in TypeScript files
UsefetchAPI; never use axios or older alternatives in TypeScript/JavaScript files
Never usevar; useconst/letin TypeScript and JavaScript files
Never useeval()function in TypeScript and JavaScript files
Never useargumentsobject; use rest parameters or array methods in TypeScript files
Use template literals instead of string concatenation in TypeScript files
Use optional chaining (?.) in TypeScript files
Usefor...ofloops in TypeScript files instead of traditionalfororforEach
Files:
src/utils/api/bookmark/screenshot.tssrc/utils/api/bookmark/remaining.tssrc/utils/api/bookmark/add.ts
**/*.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/supabase-use-realtime.mdc)
**/*.{js,ts,jsx,tsx}: Usebroadcastfor all realtime events (database changes via triggers, messaging, notifications, game state)
Usepresencesparingly for user state tracking (online status, user counters)
Use topic names that correlate with concepts and tables:scope:entity(e.g.,room:123:messages)
Include unsubscribe/cleanup logic in all implementations
Setprivate: truefor channels using database triggers or RLS policies
Give preference to use private channels over public channels (better security and control)
Implement proper error handling and reconnection logic
Do not usepostgres_changesfor new applications (single-threaded, doesn't scale well); migrate tobroadcast from databaseon existing applications if necessary
Do not create multiple subscriptions without proper cleanup
Do not use database functions (realtime.send,realtime.broadcast_changes) in client code
Use dedicated, granular topics to ensure messages are only sent to relevant listeners and improve scalability
Check if channel state is 'subscribed' before subscribing to prevent multiple subscriptions
Set auth withsupabase.realtime.setAuth()before subscribing to private channels
Configure broadcast options: useself: trueto receive your own broadcast messages andack: truefor server acknowledgment
Monitor channel states (SUBSCRIBED, TIMED_OUT, CLOSED, CHANNEL_ERROR) and rely on built-in automatic reconnection with exponential backoff
Use one channel per logical scope (e.g.,room:123) rather than nested scopes to optimize channel structure
Shard high-volume topics (e.g.,chat:shard:1,chat:shard:2) to distribute load and improve scalability
Enable enhanced logging by settinglog_level: 'info'in Supabase Realtime client configuration for debugging
Replacepostgres_changeswithbroadcastlisteners in migration from postgres_changes to realtime.broadcast_changes
Prefer generating code usingbroadcastoverpostgres_changesin all new implementations
Alw...
Files:
src/utils/api/bookmark/screenshot.tssrc/utils/api/bookmark/remaining.tssrc/utils/api/bookmark/add.ts
**/*.{js,ts,jsx,tsx,sql}
📄 CodeRabbit inference engine (.cursor/rules/supabase-use-realtime.mdc)
**/*.{js,ts,jsx,tsx,sql}: Use snake_case for event names:entity_action(e.g.,message_created)
Do not use generic event names like 'update' or 'change'; use specific event names followingentity_actionpattern
Files:
src/utils/api/bookmark/screenshot.tssrc/utils/api/bookmark/remaining.tssrc/utils/api/bookmark/add.ts
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use
ast-grep --lang ts -p '<pattern>'for finding TypeScript code structure
Files:
src/utils/api/bookmark/screenshot.tssrc/utils/api/bookmark/remaining.tssrc/utils/api/bookmark/add.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Only create abstractions when actually needed in TypeScript code
Prefer clear function/variable names over inline comments in TypeScript code
Avoid helper functions when a simple inline expression suffices in TypeScript code
Don't unnecessarily addtry/catchblocks in TypeScript code
Prefer fetching data in RSC (React Server Components); pages can still be static in Next.js
Usenext/fontandnext/scriptwhen applicable in Next.js files
Runpnpm lint:typesto check for type errors in TypeScript files
Only use named exports; never use default exports in TypeScript files
Prefer type deduction over custom interfaces in TypeScript files
Functions with 2+ parameters should use an interface withpropsparameter in TypeScript
Never useanytypes in TypeScript files
Never use@ts-ignoredirectives in TypeScript files
Never use CommonJS in TypeScript files; use ES modules exclusively
Never use enums in TypeScript; use union types instead
Never use namespaces in TypeScript; use modules and imports instead
Use type deduction from immediate parent only; never skip to grandparent types in TypeScript
When child props equal parent props, use a type alias (`type Child = Parent`) instead of redefining in TypeScript
Only export types that are used in other files; check with grep before exporting in TypeScript
Use utility types (`Parameters<>`, `ReturnType<>`, `Pick<>`, `Awaited<>`) in TypeScript
Files:
src/utils/api/bookmark/screenshot.ts, src/utils/api/bookmark/remaining.ts, src/utils/api/bookmark/add.ts
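As a quick illustration of the utility-types rule above, the following sketch derives parameter and result types from a function instead of redeclaring them. All names here (`Bookmark`, `addBookmark`) are hypothetical, not from the repository:

```typescript
// Hypothetical bookmark-ish function, used only to illustrate the utility types.
type Bookmark = { id: number; url: string; title: string; userId: string };

const addBookmark = async (props: Pick<Bookmark, "url" | "userId">) => ({
	id: 1,
	title: "untitled",
	...props,
});

// Derive types from the function instead of writing custom interfaces.
type AddBookmarkProps = Parameters<typeof addBookmark>[0];
type AddBookmarkResult = Awaited<ReturnType<typeof addBookmark>>;

const props: AddBookmarkProps = { url: "https://example.com", userId: "u1" };

export const run = async (): Promise<AddBookmarkResult> => addBookmark(props);
```

If `addBookmark`'s signature changes, the derived types follow automatically, which is the point of preferring type deduction over custom interfaces.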
**/*.{css,ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Mostly use built-in Tailwind values, occasionally allow dynamic values, rarely use globals in CSS files
Files:
src/utils/api/bookmark/screenshot.ts, src/utils/api/bookmark/remaining.ts, src/utils/api/bookmark/add.ts
src/utils/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Organize utility functions in the `/src/utils` directory
Files:
src/utils/api/bookmark/screenshot.ts, src/utils/api/bookmark/remaining.ts, src/utils/api/bookmark/add.ts
🧠 Learnings (11)
📓 Common learnings
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts:174-201
Timestamp: 2025-11-27T07:54:42.092Z
Learning: In the queue consumer at src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts, axios calls to screenshot and remaining data APIs do not include client-side timeouts because these are intentionally long-running operations and the downstream server APIs already have their own timeout configurations. Adding client-side timeouts would prematurely abort legitimate long-running tasks.
📚 Learning: 2025-11-27T07:54:42.092Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts:174-201
Timestamp: 2025-11-27T07:54:42.092Z
Learning: In the queue consumer at src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts, axios calls to screenshot and remaining data APIs do not include client-side timeouts because these are intentionally long-running operations and the downstream server APIs already have their own timeout configurations. Adding client-side timeouts would prematurely abort legitimate long-running tasks.
Applied to files:
src/utils/api/bookmark/screenshot.tssrc/utils/api/bookmark/remaining.ts
📚 Learning: 2025-11-26T10:14:58.025Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/utils/api/bookmark/remaining.ts:374-375
Timestamp: 2025-11-26T10:14:58.025Z
Learning: In src/utils/api/bookmark/remaining.ts, the updateBookmarkWithRemainingData function intentionally matches only on id (not id + user_id) because the bookmark id is unique and authorization is verified at the API handler level before calling this utility function.
Applied to files:
src/utils/api/bookmark/screenshot.tssrc/utils/api/bookmark/remaining.tssrc/utils/api/bookmark/add.ts
📚 Learning: 2025-11-26T10:14:11.932Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/data.tsx:179-185
Timestamp: 2025-11-26T10:14:11.932Z
Learning: In the min-data bookmark API response at src/pages/api/v1/bookmarks/add/tasks/min-data.tsx, bookmarkData.user_id returns a string (UUID) at runtime, not the ProfilesTableTypes object as defined in the TypeScript types. The queue payload in src/pages/api/v1/bookmarks/add/data.tsx correctly uses bookmarkData.user_id directly as a string for the userId field.
Applied to files:
src/utils/api/bookmark/screenshot.tssrc/utils/api/bookmark/remaining.tssrc/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:33:12.981Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-nextjs-rules.mdc:0-0
Timestamp: 2025-11-25T08:33:12.981Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Must use `@supabase/ssr` package for creating browser and server clients in Next.js Supabase Auth implementation
Applied to files:
src/utils/api/bookmark/screenshot.tssrc/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:33:12.981Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-nextjs-rules.mdc:0-0
Timestamp: 2025-11-25T08:33:12.981Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : NEVER import from `@supabase/auth-helpers-nextjs` as it is deprecated
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : For external dependencies, prefer importing via `npm:` and `jsr:` specifiers. Minimize the use of imports from `deno.land/x`, `esm.sh`, and `unpkg.com`. If a package comes from those CDNs, replace the CDN hostname with the `npm:` specifier
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : Do NOT use bare specifiers when importing dependencies. Use `npm:` or `jsr:` prefixes for external dependencies (e.g., `npm:@supabase/supabase-js` instead of bare `@supabase/supabase-js`)
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : Use Node built-in APIs by importing them with the `node:` specifier (e.g., `import process from "node:process"`). Use Node APIs to fill gaps in Deno APIs
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : Always define a version for external imports (e.g., `npm:express@4.18.2` instead of `npm:express`)
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/!(*.test|*.spec).ts : Use Web APIs and Deno's core APIs instead of external dependencies (e.g., use fetch instead of Axios, use WebSockets API instead of node-ws)
Applied to files:
src/utils/api/bookmark/add.ts
🧬 Code graph analysis (3)
src/utils/api/bookmark/screenshot.ts (4)
- src/utils/constants.ts (6): STORAGE_SCREENSHOT_IMAGES_PATH (17-18), R2_MAIN_BUCKET_NAME (13-13), SCREENSHOT_API (128-129), MAIN_TABLE_NAME (4-4), NEXT_API_URL (44-44), ADD_REMAINING_BOOKMARK_API (71-72)
- src/utils/r2Client.ts (1): r2Helpers (37-166)
- src/types/apiTypes.ts (1): SingleListData (31-46)
- src/utils/helpers.ts (1): apiCookieParser (301-306)
src/utils/api/bookmark/remaining.ts (6)
- src/types/apiTypes.ts (2): ProfilesTableTypes (166-175), SingleListData (31-46)
- src/utils/constants.ts (5): STORAGE_SCRAPPED_IMAGES_PATH (15-16), R2_MAIN_BUCKET_NAME (13-13), IMAGE_JPEG_MIME_TYPE (339-339), getBaseUrl (42-42), MAIN_TABLE_NAME (4-4)
- src/utils/r2Client.ts (1): r2Helpers (37-166)
- src/async/ai/ocr.ts (1): ocr (15-68)
- src/async/ai/imageToText.ts (1): imageToText (15-60)
- src/utils/helpers.ts (2): checkIfUrlAnImage (348-351), checkIfUrlAnMedia (354-357)
src/utils/api/bookmark/add.ts (5)
- src/types/apiTypes.ts (2): SingleListData (31-46), ProfilesTableTypes (166-175)
- src/utils/constants.ts (5): CATEGORIES_TABLE_NAME (7-7), SHARED_CATEGORIES_TABLE_NAME (8-8), MAIN_TABLE_NAME (4-4), uncategorizedPages (280-286), OG_IMAGE_PREFERRED_SITES (317-326)
- src/utils/error-utils/bookmark-api-error.ts (1): formatErrorMessage (4-22)
- src/utils/helpers.ts (2): checkIfUrlAnMedia (354-357), checkIfUrlAnImage (348-351)
- src/async/uploads/iframe-test.ts (1): canEmbedInIframe (12-50)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: 🔬 Lint ESLint
import imageToText from "../../../async/ai/imageToText";
import ocr from "../../../async/ai/ocr";
import {
🧩 Analysis chain
🏁 Script executed:
# Find and read the remaining.ts file
find . -name "remaining.ts" -path "*/bookmark/*" | head -1
Repository: timelessco/recollect
Length of output: 101
🏁 Script executed:
# Check the imageToText and ocr modules
find . -name "imageToText.ts" -o -name "ocr.ts" | head -10
Repository: timelessco/recollect
Length of output: 115
🏁 Script executed:
# Read remaining.ts to check imports at lines 13-15 and around 242-278
wc -l ./src/utils/api/bookmark/remaining.ts
Repository: timelessco/recollect
Length of output: 105
🏁 Script executed:
# Read remaining.ts lines 1-30 to see the imports
sed -n '1,30p' ./src/utils/api/bookmark/remaining.ts
Repository: timelessco/recollect
Length of output: 864
🏁 Script executed:
# Read imageToText.ts to check export style
cat ./src/async/ai/imageToText.ts
Repository: timelessco/recollect
Length of output: 1712
🏁 Script executed:
# Read ocr.ts to check export style
cat ./src/async/ai/ocr.ts
Repository: timelessco/recollect
Length of output: 3938
🏁 Script executed:
# Check lines 242-278 of remaining.ts
sed -n '242,278p' ./src/utils/api/bookmark/remaining.ts
Repository: timelessco/recollect
Length of output: 821
🏁 Script executed:
# Also check around those lines for context on how generateImageMetadata uses these imports
sed -n '230,290p' ./src/utils/api/bookmark/remaining.ts
Repository: timelessco/recollect
Length of output: 1377
Use named imports to align with TypeScript export conventions.
The modules imageToText.ts and ocr.ts currently export both named functions and defaults:
// imageToText.ts
export const imageToText = async (/* ... */) => { /* ... */ };
export default imageToText;
// ocr.ts
export const ocr = async (/* ... */) => { /* ... */ };
export default ocr;
Per the project's guidelines, TypeScript files should use named exports only and never include default exports. The current default imports work functionally, but to enforce consistency:
-import imageToText from "../../../async/ai/imageToText";
-import ocr from "../../../async/ai/ocr";
+import { imageToText } from "../../../async/ai/imageToText";
+import { ocr } from "../../../async/ai/ocr";
Also remove the `export default` statements from imageToText.ts and ocr.ts.
Also applies to: 242-278
🤖 Prompt for AI Agents
In src/utils/api/bookmark/remaining.ts around lines 13 to 15 (and also update
related exports in the files referenced at lines 242–278), the code imports
imageToText and ocr via default imports while their modules export both named
and default identifiers; per project convention we must use named exports only.
Change the import style here to import the named exports (e.g., import {
imageToText } from "..."; import { ocr } from "...") and edit the source modules
(async/ai/imageToText.ts and async/ai/ocr.ts) to remove any export default
statements so they only export the named functions; ensure any other call sites
are updated to use the named imports.
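The named-export convention the comment enforces can be sketched in isolation. The function body below is a stand-in, not the real imageToText implementation:

```typescript
// imageToText-style module: a single named export, no `export default`.
// The body is a placeholder; only the export shape matters here.
export const imageToText = async (imageUrl: string): Promise<string> =>
	`caption for ${imageUrl}`;

// Call sites then use the named-import form:
//   import { imageToText } from "./imageToText";
```

Dropping the default export makes the named import the only option, so the identifier stays consistent across every call site and is easy to find with grep.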
export const updateBookmarkWithRemainingData = async (
	supabase: SupabaseClient,
	{
		id,
		userId,
		metaData,
		description,
		ogImage,
	}: {
		description: string | null;
		id: number;
		metaData: Record<string, unknown>;
		ogImage: string | null;
		userId: string;
	},
): Promise<{
	data: SingleListData[] | null;
	error: PostgrestError | VerifyErrors | string | null;
}> => {
	const {
		data,
		error: databaseError,
	}: {
		data: SingleListData[] | null;
		error: PostgrestError | VerifyErrors | string | null;
	} = await supabase
		.from(MAIN_TABLE_NAME)
		.update({
			meta_data: metaData,
			description,
			ogImage,
		})
		.match({ id, user_id: userId })
		.select(`id`);

	if (isNull(data)) {
		const error = new Error("DB return data is empty");
		console.error("DB return data is empty for bookmark:", id);
		Sentry.captureException(error, {
			tags: { operation: "update_bookmark_with_remaining_data", userId },
			extra: { id },
		});
		return { data: null, error: "DB return data is empty" };
	}

	if (!isNull(databaseError)) {
		console.error(
			"Failed to update bookmark with remaining data:",
			databaseError,
		);
		Sentry.captureException(databaseError, {
			tags: { operation: "update_bookmark_with_remaining_data", userId },
			extra: { id, description, ogImage },
		});
		return { data: null, error: databaseError };
	}

	return { data, error: null };
};
🧩 Analysis chain
🏁 Script executed:
# First, let's locate and read the complete file to understand the context
wc -l src/utils/api/bookmark/remaining.ts
Repository: timelessco/recollect
Length of output: 103
🏁 Script executed:
cat -n src/utils/api/bookmark/remaining.ts
Repository: timelessco/recollect
Length of output: 13235
🏁 Script executed:
# Check lodash isNull behavior - it should return false for arrays
python3 << 'EOF'
# isNull from lodash only returns true for null and undefined
# isNull([]) returns false
# isNull(null) returns true
# isNull(undefined) returns true
# Let's verify this is the case by checking how lodash is typically used
print("isNull behavior:")
print("isNull(null) = true")
print("isNull(undefined) = true")
print("isNull([]) = false")
print("isNull([1,2,3]) = false")
print("\nSo empty arrays are NOT caught by isNull(data)")
EOF
Repository: timelessco/recollect
Length of output: 150
🏁 Script executed:
# Search for callers of updateBookmarkWithRemainingData to understand how it's used
rg "updateBookmarkWithRemainingData" -n -A 5 -B 2
Repository: timelessco/recollect
Length of output: 2081
🏁 Script executed:
# Read the caller file to see how it handles the response
wc -l src/pages/api/v1/bookmarks/add/tasks/remaining.ts
Repository: timelessco/recollect
Length of output: 117
🏁 Script executed:
# Read lines around the call to see how the result is handled
sed -n '200,240p' src/pages/api/v1/bookmarks/add/tasks/remaining.ts
Repository: timelessco/recollect
Length of output: 1399
🏁 Script executed:
# Let me check what lodash isNull actually does with empty arrays
# by looking at how it's used elsewhere in the codebase
rg "isNull\(" -n -B 2 -A 2 | head -40
Repository: timelessco/recollect
Length of output: 2241
🏁 Script executed:
# Let's check if there are other update operations in the codebase to see patterns
rg "\.update\(" -n -A 5 | grep -A 5 "\.match\|\.select" | head -50
Repository: timelessco/recollect
Length of output: 2469
🏁 Script executed:
# Search for patterns where code checks for zero-row updates (data.length === 0 or empty arrays)
rg "data\.length\s*===\s*0|isEmpty.*data" -n -B 2 -A 2
Repository: timelessco/recollect
Length of output: 8223
🏁 Script executed:
# Look at a similar update operation to understand the pattern
sed -n '118,125p' src/pages/api/category/update-user-category.ts
Repository: timelessco/recollect
Length of output: 303
🏁 Script executed:
# Read full context of error handling in update-user-category.ts
sed -n '100,135p' src/pages/api/category/update-user-category.ts
Repository: timelessco/recollect
Length of output: 926
🏁 Script executed:
# Let's check the learning note and understand the authorization model
# by looking at the API handler that calls updateBookmarkWithRemainingData
sed -n '1,50p' src/pages/api/v1/bookmarks/add/tasks/remaining.ts
Repository: timelessco/recollect
Length of output: 1612
🏁 Script executed:
# Look for authorization checks and where userId comes from
sed -n '50,120p' src/pages/api/v1/bookmarks/add/tasks/remaining.ts
Repository: timelessco/recollect
Length of output: 2192
🏁 Script executed:
# Continue reading to see error handling and complete context
sed -n '115,180p' src/pages/api/v1/bookmarks/add/tasks/remaining.ts
Repository: timelessco/recollect
Length of output: 1686
🏁 Script executed:
# Search for any comments or documentation about Supabase returning empty arrays
rg "data.*\[\]|zero.*row|no.*row" -i -n | head -20
Repository: timelessco/recollect
Length of output: 1475
Based on my verification of the codebase, I have confirmed all concerns in the review comment are valid:
- Empty array detection: `isNull(data)` returns `false` for empty arrays (`[]`). I found throughout the codebase (20+ instances) that the standard pattern is to check BOTH `isEmpty(data) || isNull(data)` together.
- Order-of-checks issue: The standard pattern across similar update operations (in update-user-category.ts, update-user-profile.tsx, etc.) is to check database errors FIRST, then empty data. The current function does it backward, which means:
  - If a DB error occurs and data is null, the first check catches the null data
  - The actual `databaseError` details are lost
  - The caller gets a generic "DB return data is empty" instead of the real error
- Zero-row match scenario: Supabase returns `data: []` with `error: null` when `.match()` matches zero rows. While authorization is verified at the handler level (confirmed by the calling code in src/pages/api/v1/bookmarks/add/tasks/remaining.ts), defensive checks for race conditions and edge cases are still valuable.
The review comment's suggested fix aligns with the codebase's established patterns and addresses genuine edge cases.
Ensure error handling correctly prioritizes database errors and detects zero-row updates.
The current implementation has two issues:
- On database error, `data` will often be `null`, so the first `if (isNull(data))` check fires and masks the actual `databaseError` details.
- If the update matches zero rows, Supabase returns `data: []` with `error: null`; `isNull([])` returns `false`, so this case is treated as success.
Reordering the checks to match the established codebase pattern (check error first, then empty data) ensures errors are properly surfaced and zero-row updates are detected:
const {
data,
error: databaseError,
@@ -387,20 +387,23 @@ export const updateBookmarkWithRemainingData = async (
.match({ id, user_id: userId })
.select(`id`);
+ if (!isNull(databaseError)) {
+ console.error(
+ "Failed to update bookmark with remaining data:",
+ databaseError,
+ );
+ Sentry.captureException(databaseError, {
+ tags: { operation: "update_bookmark_with_remaining_data", userId },
+ extra: { id, description, ogImage },
+ });
+ return { data: null, error: databaseError };
+ }
+
- if (isNull(data)) {
+ if (isNull(data) || data.length === 0) {
const error = new Error("DB return data is empty");
console.error("DB return data is empty for bookmark:", id);
Sentry.captureException(error, {
tags: { operation: "update_bookmark_with_remaining_data", userId },
extra: { id },
});
return { data: null, error: "DB return data is empty" };
}
-
- if (!isNull(databaseError)) {
- console.error(
- "Failed to update bookmark with remaining data:",
- databaseError,
- );
- Sentry.captureException(databaseError, {
- tags: { operation: "update_bookmark_with_remaining_data", userId },
- extra: { id, description, ogImage },
- });
- return { data: null, error: databaseError };
- }
🤖 Prompt for AI Agents
In src/utils/api/bookmark/remaining.ts around lines 355 to 413, the
error/empty-data checks are inverted and don’t detect zero-row updates: first
check databaseError (if not null) and handle/log/return it (capture with Sentry
including id, description, ogImage, userId), then check if data is null or empty
using if (isNull(data) || isEmpty(data)) to detect both null and [] and
handle/log/return the "DB return data is empty" case; finally return { data,
error: null } on success.
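The recommended ordering can be isolated into a small pure function to make the two failure modes concrete. This is a sketch: lodash's `isNull`/`isEmpty` are inlined as plain checks, and the Supabase result is modeled as a simple object:

```typescript
type UpdateResult<T> = { data: T[] | null; error: string | null };

// Interpret a Supabase-style update result: surface the DB error first,
// then treat both `null` and `[]` as "no rows updated".
export const interpretUpdateResult = <T>({ data, error }: UpdateResult<T>) => {
	if (error !== null) {
		return { data: null, error }; // real DB error is never masked
	}

	if (data === null || data.length === 0) {
		return { data: null, error: "DB return data is empty" }; // zero-row update
	}

	return { data, error: null };
};
```

With this ordering, a database error reaches the caller intact instead of being replaced by the generic empty-data message, and a zero-row update no longer looks like success.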
/**
 * Zod schema for validating screenshot request payload
 * Requires:
 * - id: non-empty string (bookmark ID)
 * - url: valid URL string
 * - userId: user ID
 */
export const screenshotRequestSchema = z.object({
	id: z.number().min(1, "Bookmark ID is required"),
	url: z.string().url("Invalid URL format"),
	userId: z.string(),
});
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Read the screenshot.ts file to see the full context around lines 31-42 and 125-129
wc -l src/utils/api/bookmark/screenshot.ts
Repository: timelessco/recollect
Length of output: 104
🏁 Script executed:
#!/bin/bash
# Get the content around the flagged lines
sed -n '25,150p' src/utils/api/bookmark/screenshot.ts | cat -n
Repository: timelessco/recollect
Length of output: 4312
🏁 Script executed:
#!/bin/bash
# Search for SingleListData type definition
rg -n "type SingleListData|interface SingleListData" --type=ts
Repository: timelessco/recollect
Length of output: 3861
🏁 Script executed:
#!/bin/bash
# Search for how screenshotRequestSchema is used
rg -n "screenshotRequestSchema" --type=ts -A 3 -B 1
Repository: timelessco/recollect
Length of output: 1368
🏁 Script executed:
#!/bin/bash
# Check the bookmarks table schema or related types
rg -n "bookmarks.*id|id.*number|id.*string" src/types/ --type=ts -C 2 | head -40
Repository: timelessco/recollect
Length of output: 1715
🏁 Script executed:
#!/bin/bash
# Get lines 125-150 to see the updateBookmarkWithScreenshot function
sed -n '124,150p' src/utils/api/bookmark/screenshot.ts | cat -n
Repository: timelessco/recollect
Length of output: 1012
🏁 Script executed:
#!/bin/bash
# Search for calls to fetchExistingBookmarkData
rg -n "fetchExistingBookmarkData" --type=ts -B 2 -A 3
Repository: timelessco/recollect
Length of output: 1432
🏁 Script executed:
#!/bin/bash
# Look at the API endpoint that uses screenshotRequestSchema
cat -n src/pages/api/v1/bookmarks/add/tasks/screenshot.ts | sed -n '90,120p'
Repository: timelessco/recollect
Length of output: 1220
🏁 Script executed:
#!/bin/bash
# Check the full SingleListData type to confirm id is number
cat -n src/types/apiTypes.ts | sed -n '31,48p'
Repository: timelessco/recollect
Length of output: 553
🏁 Script executed:
#!/bin/bash
# Find all calls to fetchExistingBookmarkData to see calling patterns
rg -n "fetchExistingBookmarkData\s*\(" --type=ts -B 3 -A 4
Repository: timelessco/recollect
Length of output: 701
🏁 Script executed:
#!/bin/bash
# Get the updateBookmarkWithScreenshot function signature (lines 125-135)
sed -n '125,180p' src/utils/api/bookmark/screenshot.ts | cat -n
Repository: timelessco/recollect
Length of output: 1999
🏁 Script executed:
#!/bin/bash
# Search for where bookmarkId parameter is used in remaining.ts (from learnings)
rg -n "bookmarkId|id.*updateBookmarkWithRemainingData" src/utils/api/bookmark/remaining.ts --type=ts -B 2 -A 2
Repository: timelessco/recollect
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Search for any other usages of these functions
rg -n "updateBookmarkWithScreenshot|fetchExistingBookmarkData" --type=ts
Repository: timelessco/recollect
Length of output: 632
🏁 Script executed:
#!/bin/bash
# Check what z.number() constraints are typically used in this codebase for IDs
rg -n "z\.number\(\)" --type=ts -B 1 -A 1 | head -40
Repository: timelessco/recollect
Length of output: 1389
Update JSDoc and fix bookmarkId parameter types to match the numeric schema.
The JSDoc incorrectly states id is a "non‑empty string", but screenshotRequestSchema validates id as z.number(). Additionally, both fetchExistingBookmarkData and updateBookmarkWithScreenshot incorrectly type bookmarkId as string when they receive and process numeric IDs (matching SingleListData["id"] which is number).
Update the JSDoc at line 10 to reflect numeric IDs, and change both function signatures:
/**
* Zod schema for validating screenshot request payload
* Requires:
- * - id: non-empty string (bookmark ID)
+ * - id: numeric bookmark ID
* - url: valid URL string
* - userId: user ID
*/
export const fetchExistingBookmarkData = async (
supabase: SupabaseClient,
- bookmarkId: string,
+ bookmarkId: number,
userId: string,
)
export const updateBookmarkWithScreenshot = async (
supabase: SupabaseClient,
{
bookmarkId,
userId,
title,
description,
metaData,
}: {
- bookmarkId: string;
+ bookmarkId: number;
description: string;
metaData: Record<string, unknown>;
title: string;
userId: string;
},
)
Also remove the unnecessary `.toString()` conversion at line 338 in src/pages/api/v1/bookmarks/add/tasks/screenshot.ts when calling fetchExistingBookmarkData.
🤖 Prompt for AI Agents
In src/utils/api/bookmark/screenshot.ts around lines 31 to 42, the JSDoc and
parameter types are inconsistent with the schema: update the JSDoc (line ~10) to
state that id/bookmarkId is a numeric ID (number) instead of a non-empty string,
and change the signatures of fetchExistingBookmarkData and
updateBookmarkWithScreenshot to accept bookmarkId: number (matching
SingleListData["id"] and screenshotRequestSchema.id). Also remove the
unnecessary .toString() conversion at
src/pages/api/v1/bookmarks/add/tasks/screenshot.ts line 338 when calling
fetchExistingBookmarkData so you pass the numeric ID directly. Ensure any
internal usages and type annotations are updated accordingly.
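To make the type concern concrete, here is a zod-free sketch of the same constraints the schema enforces — hand-rolled checks standing in for `z.object`, and `parseScreenshotRequest` a hypothetical helper, not repository code:

```typescript
type ScreenshotRequest = { id: number; url: string; userId: string };

// Validate an unknown payload the way screenshotRequestSchema does:
// numeric id >= 1, parseable URL, string userId. Returns null on failure.
export const parseScreenshotRequest = (
	input: unknown,
): ScreenshotRequest | null => {
	if (typeof input !== "object" || input === null) return null;
	const { id, url, userId } = input as Record<string, unknown>;
	if (typeof id !== "number" || id < 1) return null; // id must be numeric
	if (typeof userId !== "string") return null;
	if (typeof url !== "string") return null;
	try {
		new URL(url); // mirrors z.string().url()
	} catch {
		return null;
	}

	return { id, url, userId };
};
```

Because validation rejects string ids outright, every downstream function (such as fetchExistingBookmarkData) can safely take `bookmarkId: number` with no `.toString()` round-trips.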
Actionable comments posted: 0
🧹 Nitpick comments (1)
.env.example (1)
80-85: Add missing blank line at end of file and reorder keys alphabetically.
The file is missing a blank line at the end (line 85), and the INTERNAL_API_KEY key should be ordered alphabetically before RECOLLECT_SERVER_API. Apply this diff to fix formatting:
 RECOLLECT_SERVER_API=your-api
-RECOLLECT_SERVER_API_KEY=your-secret-key
 INTERNAL_API_KEY=key
+RECOLLECT_SERVER_API_KEY=your-secret-key
+
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.env.example (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
{.env,.env.local,.env.example}
📄 CodeRabbit inference engine (.cursor/rules/supabase-nextjs-rules.mdc)
Set up environment variables for NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY
Files:
.env.example
🧠 Learnings (2)
📓 Common learnings
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts:174-201
Timestamp: 2025-11-27T07:54:42.092Z
Learning: In the queue consumer at src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts, axios calls to screenshot and remaining data APIs do not include client-side timeouts because these are intentionally long-running operations and the downstream server APIs already have their own timeout configurations. Adding client-side timeouts would prematurely abort legitimate long-running tasks.
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/utils/api/bookmark/remaining.ts:374-375
Timestamp: 2025-11-26T10:14:58.025Z
Learning: In src/utils/api/bookmark/remaining.ts, the updateBookmarkWithRemainingData function intentionally matches only on id (not id + user_id) because the bookmark id is unique and authorization is verified at the API handler level before calling this utility function.
📚 Learning: 2025-11-21T10:36:41.054Z
Learnt from: navin-moorthy
Repo: timelessco/recollect PR: 610
File: src/pages/api/v1/get-gemini-api-key.ts:64-69
Timestamp: 2025-11-21T10:36:41.054Z
Learning: In the recollect codebase, environment variables should be validated at build time via the Zod schema in ./scripts/env/schema.js. The API_KEY_ENCRYPTION_KEY variable was initially missing from this validation schema despite being used in production code.
Applied to files:
.env.example
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 85-85: [EndingBlankLine] No blank line at the end of the file
(EndingBlankLine)
[warning] 85-85: [UnorderedKey] The INTERNAL_API_KEY key should go before the RECOLLECT_SERVER_API key
(UnorderedKey)
🔇 Additional comments (1)
.env.example (1)
80-85: Environment variables are already validated in the schema.
All three new environment variables (RECOLLECT_SERVER_API, RECOLLECT_SERVER_API_KEY, and INTERNAL_API_KEY) are properly defined and validated in scripts/env/schema.js:
- Lines 18-19: RECOLLECT_SERVER_API and RECOLLECT_SERVER_API_KEY are defined as optional strings
- Line 21: INTERNAL_API_KEY is defined as a required string
- Lines 38-41: All three are correctly referenced in the server configuration
The validation is already in place as required.
@abhishekmg Please fix the eslint issues, changes look good.
…cessing functions
Actionable comments posted: 1
🧹 Nitpick comments (3)
src/utils/api/bookmark/add.ts (3)
123-167: Consider narrowing the `categoryId` type to `number` for consistency with DB and callers
`checkIfBookmarkExists` accepts `categoryId: number | string`, but your `computeCategoryId` helper returns a `number` and the DB `category_id` column is very likely numeric. Allowing `string` here increases the chance of subtle type mismatches (e.g., callers passing `"123"` instead of `123`). You could tighten the API to require a `number` and normalize at the call sites:
-export const checkIfBookmarkExists = async (
-  supabase: SupabaseClient,
-  url: string,
-  categoryId: number | string,
-): Promise<CheckResult<boolean>> => {
+export const checkIfBookmarkExists = async (
+  supabase: SupabaseClient,
+  url: string,
+  categoryId: number,
+): Promise<CheckResult<boolean>> => {
This keeps the boundary strongly typed and avoids accidental string IDs leaking in.
242-253: Harden `computeCategoryId` against non‑numeric string inputs
`computeCategoryId` now uses `Number(categoryId)`, which fixes the earlier cast issue, but a malformed ID like `"123x"` would evaluate to `NaN` and propagate out as the computed category ID. Consider validating that the numeric conversion is finite before returning it, and falling back to `0` otherwise:
-export const computeCategoryId = (
-  updateAccess: boolean,
-  categoryId: number | string | null | undefined,
-): number =>
-  updateAccess === true &&
-  !isNull(categoryId) &&
-  categoryId !== "null" &&
-  categoryId !== 0 &&
-  !uncategorizedPages?.includes(categoryId as string)
-    ? Number(categoryId)
-    : 0;
+export const computeCategoryId = (
+  updateAccess: boolean,
+  categoryId: number | string | null | undefined,
+): number => {
+  if (
+    !updateAccess ||
+    isNull(categoryId) ||
+    categoryId === "null" ||
+    categoryId === 0 ||
+    uncategorizedPages?.includes(String(categoryId))
+  ) {
+    return 0;
+  }
+
+  const numericCategoryId = Number(categoryId);
+  return Number.isFinite(numericCategoryId) ? numericCategoryId : 0;
+};
This keeps behavior the same for valid IDs while being safer against unexpected input.
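The hardened variant suggested in this nitpick can be exercised standalone. In this sketch, lodash's `isNull` is replaced by explicit null/undefined checks and `uncategorizedPages` is a stub (the real list lives in src/utils/constants.ts):

```typescript
// Stubbed stand-in for the real constant in src/utils/constants.ts.
const uncategorizedPages = ["trash", "uncategorized"];

export const computeCategoryId = (
	updateAccess: boolean,
	categoryId: number | string | null | undefined,
): number => {
	if (
		!updateAccess ||
		categoryId === null ||
		categoryId === undefined ||
		categoryId === "null" ||
		categoryId === 0 ||
		uncategorizedPages.includes(String(categoryId))
	) {
		return 0;
	}

	// NaN from a malformed string like "123x" falls back to 0.
	const numericCategoryId = Number(categoryId);
	return Number.isFinite(numericCategoryId) ? numericCategoryId : 0;
};
```

The `Number.isFinite` guard is what distinguishes this version: a valid `"123"` still becomes `123`, while `"123x"` can no longer leak `NaN` into a database query.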
1-317: Consider splitting this utility file and applying the props-object pattern for multi-arg functions
Two style/guideline points to consider:
File size: This module is ~300+ lines, above the src/**/*.{ts,tsx} guideline of max 250 lines per file. It naturally splits into:
- Access/duplicate-check helpers (`checkIfUserIsCategoryOwnerOrCollaborator`, `checkIfBookmarkExists`, `computeCategoryId`).
- Scraping/URL processing helpers (`scrapeUrlMetadata`, `processUrlMetadata`, `isUrlFromPreferredOgSite`).
Extracting one of these groups into its own module would bring each file under the limit and keep concerns more focused.
Functions with 2+ parameters: Several helpers here take 3–4 positional parameters (e.g., `checkIfUserIsCategoryOwnerOrCollaborator`, `checkIfBookmarkExists`, `computeCategoryId`, `processUrlMetadata`). To align with the guideline of using a `props` parameter for 2+ args, you could migrate them to an options object:

export type CheckAccessProps = {
  supabase: SupabaseClient;
  categoryId: SingleListData["category_id"];
  userId: SingleListData["user_id"]["id"];
  email: ProfilesTableTypes["email"];
};

export const checkIfUserIsCategoryOwnerOrCollaborator = async ({
  supabase,
  categoryId,
  userId,
  email,
}: CheckAccessProps): Promise<CheckResult<boolean>> => {
  /* ... */
};

This also tends to make call sites more self‑documenting.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/utils/api/bookmark/add.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/supabase-nextjs-rules.mdc)
**/*.{ts,tsx,js,jsx}: Must use `@supabase/ssr` package for creating browser and server clients in Next.js Supabase Auth implementation
Browser client must use `createBrowserClient` from `@supabase/ssr` with NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY
Server client must use `createServerClient` from `@supabase/ssr` with proper cookie handling using `getAll` and `setAll` methods only
NEVER use individual cookie methods (`get`, `set`, `remove`) for Supabase client configuration
NEVER import from `@supabase/auth-helpers-nextjs` as it is deprecated
Server client cookie `setAll` method must include try-catch error handling as it may be called from Server Components
**/*.{ts,tsx,js,jsx}: Use only Functional Programming, never Class Based Code
Use only Named exports, never default exports
Use `fetch` API instead of axios or older alternatives
Never use CommonJS (require/module.exports), use ES modules instead
Never use `var`, use `const`/`let` instead
Never use `eval()`
Never use `arguments` object, use rest parameters instead
Always use template literals instead of string concatenation
Always use optional chaining (`?.`) for safe property access
Use `for...of` loops instead of array methods when iterating
Files:
src/utils/api/bookmark/add.ts
**/*.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/supabase-use-realtime.mdc)
**/*.{js,ts,jsx,tsx}: Use `broadcast` for all realtime events (database changes via triggers, messaging, notifications, game state)
Use `presence` sparingly for user state tracking (online status, user counters)
Use topic names that correlate with concepts and tables: `scope:entity` (e.g., `room:123:messages`)
Include unsubscribe/cleanup logic in all implementations
Set `private: true` for channels using database triggers or RLS policies
Give preference to use private channels over public channels (better security and control)
Implement proper error handling and reconnection logic
Do not use `postgres_changes` for new applications (single-threaded, doesn't scale well); migrate to `broadcast from database` on existing applications if necessary
Do not create multiple subscriptions without proper cleanup
Do not use database functions (`realtime.send`, `realtime.broadcast_changes`) in client code
Use dedicated, granular topics to ensure messages are only sent to relevant listeners and improve scalability
Check if channel state is 'subscribed' before subscribing to prevent multiple subscriptions
Set auth with `supabase.realtime.setAuth()` before subscribing to private channels
Configure broadcast options: use `self: true` to receive your own broadcast messages and `ack: true` for server acknowledgment
Monitor channel states (SUBSCRIBED, TIMED_OUT, CLOSED, CHANNEL_ERROR) and rely on built-in automatic reconnection with exponential backoff
Use one channel per logical scope (e.g., `room:123`) rather than nested scopes to optimize channel structure
Shard high-volume topics (e.g., `chat:shard:1`, `chat:shard:2`) to distribute load and improve scalability
Enable enhanced logging by setting `log_level: 'info'` in Supabase Realtime client configuration for debugging
Replace `postgres_changes` with `broadcast` listeners in migration from postgres_changes to realtime.broadcast_changes
Prefer generating code using `broadcast` over `postgres_changes` in all new implementations
Alw...
Files:
src/utils/api/bookmark/add.ts
**/*.{js,ts,jsx,tsx,sql}
📄 CodeRabbit inference engine (.cursor/rules/supabase-use-realtime.mdc)
**/*.{js,ts,jsx,tsx,sql}: Use snake_case for event names: `entity_action` (e.g., `message_created`)
Do not use generic event names like 'update' or 'change'; use specific event names following `entity_action` pattern
Files:
src/utils/api/bookmark/add.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Only create abstractions when actually needed in TypeScript code
Prefer clear function/variable names over inline comments in TypeScript code
Avoid helper functions when a simple inline expression suffices in TypeScript code
Use `knip` to remove unused code when making large changes in TypeScript
Don't unnecessarily add `try/catch` blocks in TypeScript code
Use next/font and next/script when applicable in Next.js
Be mindful of serialized prop size for React Server Component to child components in Next.js
Run `pnpm lint:types` to check for TypeScript type errors
Functions should use camelCase naming convention
Constants should use UPPER_SNAKE_CASE naming convention
Prefer type deduction over custom interfaces
Functions with 2+ parameters should use interface with `props` parameter
Use Type Hierarchy: Use types from immediate parent only, never skip to grandparents
When child props equal parent props, use `type Child = Parent` type alias
Use TypeScript Utility Types: `Parameters<>`, `ReturnType<>`, `Pick<>`, `Awaited<>`
Never use `any` types in TypeScript
Never use `@ts-ignore` directives
Never use TypeScript enums, use union types or const objects instead
Never use TypeScript namespaces
Files:
src/utils/api/bookmark/add.ts
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Maximum 250 lines per file - split larger files into modules
Files:
src/utils/api/bookmark/add.ts
src/utils/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Utility functions should be located in `/src/utils`
Files:
src/utils/api/bookmark/add.ts
🧠 Learnings (11)
📓 Common learnings
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts:174-201
Timestamp: 2025-11-27T07:54:42.100Z
Learning: In the queue consumer at src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts, axios calls to screenshot and remaining data APIs do not include client-side timeouts because these are intentionally long-running operations and the downstream server APIs already have their own timeout configurations. Adding client-side timeouts would prematurely abort legitimate long-running tasks.
📚 Learning: 2025-11-26T10:14:58.025Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/utils/api/bookmark/remaining.ts:374-375
Timestamp: 2025-11-26T10:14:58.025Z
Learning: In src/utils/api/bookmark/remaining.ts, the updateBookmarkWithRemainingData function intentionally matches only on id (not id + user_id) because the bookmark id is unique and authorization is verified at the API handler level before calling this utility function.
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-26T10:14:11.932Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/data.tsx:179-185
Timestamp: 2025-11-26T10:14:11.932Z
Learning: In the min-data bookmark API response at src/pages/api/v1/bookmarks/add/tasks/min-data.tsx, bookmarkData.user_id returns a string (UUID) at runtime, not the ProfilesTableTypes object as defined in the TypeScript types. The queue payload in src/pages/api/v1/bookmarks/add/data.tsx correctly uses bookmarkData.user_id directly as a string for the userId field.
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:33:12.981Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-nextjs-rules.mdc:0-0
Timestamp: 2025-11-25T08:33:12.981Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : NEVER import from `@supabase/auth-helpers-nextjs` as it is deprecated
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : For external dependencies, prefer importing via `npm:` and `jsr:` specifiers. Minimize the use of imports from `deno.land/x`, `esm.sh`, and `unpkg.com`. If a package comes from those CDNs, replace the CDN hostname with the `npm:` specifier
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:33:12.981Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-nextjs-rules.mdc:0-0
Timestamp: 2025-11-25T08:33:12.981Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Must use `@supabase/ssr` package for creating browser and server clients in Next.js Supabase Auth implementation
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : Do NOT use bare specifiers when importing dependencies. Use `npm:` or `jsr:` prefixes for external dependencies (e.g., `npm:@supabase/supabase-js` instead of bare `@supabase/supabase-js`)
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-27T10:05:25.508Z
Learnt from: navin-moorthy
Repo: timelessco/recollect PR: 631
File: src/pages/api/profiles/fetch-user-profile-pic.ts:29-29
Timestamp: 2025-11-27T10:05:25.508Z
Learning: Applies to src/pages/api/**/*.{ts,tsx} : Next.js API routes should use default exports (not named exports). The named export guideline only applies to frontend code, not API routes.
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : Use Node built-in APIs by importing them with the `node:` specifier (e.g., `import process from "node:process"`). Use Node APIs to fill gaps in Deno APIs
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/!(*.test|*.spec).ts : Use Web APIs and Deno's core APIs instead of external dependencies (e.g., use fetch instead of Axios, use WebSockets API instead of node-ws)
Applied to files:
src/utils/api/bookmark/add.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : Always define a version for external imports (e.g., `npm:express@4.18.2` instead of `npm:express`)
Applied to files:
src/utils/api/bookmark/add.ts
🔇 Additional comments (3)
src/utils/api/bookmark/add.ts (3)
55-121: Ownership/collaborator access check looks robust

The early return for `categoryId === null`, clear separation of owner vs collaborator checks, and consistent Sentry logging give good observability and safe behavior. No changes needed here.
169-240: Scraper error handling and fallback URL logic look solid

The explicit user-agent, structured `ScrapperTypes`, Sentry logging, and safe fallback to hostname (with its own guarded `try/catch`) are all good improvements and address the earlier URL-parsing concerns cleanly. No further changes needed here.
255-300: URL media/iframe processing is well-structured

The separation between media vs non-media URLs, reuse of `checkIfUrlAnMedia`/`checkIfUrlAnImage`, and the `canEmbedInIframe` check with Sentry logging and safe defaults in the catch block give a clear contract for callers. Behavior (e.g., not attempting iframe for media URLs) looks intentional and coherent.
```ts
// Check if URL is from a preferred OG image site
export const isUrlFromPreferredOgSite = (url: string): boolean => {
	try {
		const urlHost = new URL(url)?.hostname?.toLowerCase();
		return OG_IMAGE_PREFERRED_SITES?.some((keyword) =>
			urlHost?.includes(keyword),
		);
	} catch (error) {
		console.error("Failed to check OG image preference:", error);
		Sentry.captureException(error, {
			tags: { operation: "check_og_image_preference" },
			extra: { url },
		});
		return false;
	}
};
```
Ensure `isUrlFromPreferredOgSite` always returns a strict boolean

`OG_IMAGE_PREFERRED_SITES?.some(...)` can return `undefined` when the list is nullish, so at runtime the function may return `undefined` despite the annotated `boolean` return type. This is a minor typing/runtime mismatch.

You can coerce to a strict boolean with nullish coalescing:
```diff
 export const isUrlFromPreferredOgSite = (url: string): boolean => {
 	try {
 		const urlHost = new URL(url)?.hostname?.toLowerCase();
-		return OG_IMAGE_PREFERRED_SITES?.some((keyword) =>
-			urlHost?.includes(keyword),
-		);
+		return (
+			OG_IMAGE_PREFERRED_SITES?.some((keyword) =>
+				urlHost?.includes(keyword),
+			) ?? false
+		);
 	} catch (error) {
```

This keeps semantics the same while aligning with the declared return type.
🤖 Prompt for AI Agents
In src/utils/api/bookmark/add.ts around lines 302 to 317, the function
isUrlFromPreferredOgSite can return undefined because
OG_IMAGE_PREFERRED_SITES?.some(...) may yield undefined when the list is
nullish; update the return to always produce a strict boolean by coalescing the
result (e.g., wrap the .some(...) call with a nullish-coalescing or boolean
cast) so the function signature Promise<boolean> is honored and behavior stays
the same when the list is absent.
* feat(uplpad): ✨ added upload api queue
* refactor(api): remove debug logging for API key validation
* fix(PR): 🐛 fixed PR comments
* refactor(pdfjs): removed pdfjs lib
* fix: added waituntill for the apis queues
Actionable comments posted: 7
♻️ Duplicate comments (1)
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts (1)
46-49: Use named export instead of default export.

Per coding guidelines: "Use only Named exports, never default exports." However, per a previous learning, Next.js API routes require default exports for the framework to recognize them as route handlers. This is an exception to the general rule.
🧹 Nitpick comments (9)
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts (2)
3-3: Consider using native `fetch` API instead of axios.

Per coding guidelines, the project prefers the native `fetch` API over axios. While axios works correctly here, migrating to `fetch` would align with project standards and reduce bundle size.

Example migration for one of the POST calls:

```diff
-import axios from "axios";
+// No import needed - fetch is native

-const [screenshotApiError, screenshotResponse] = await vet(() =>
-	axios.post(
-		screenshotApiUrl,
-		{
-			id: payload.bookmarkId,
-			url: payload.url,
-			userId: payload.userId,
-		},
-		{
-			headers: {
-				"x-api-key": process.env.INTERNAL_API_KEY,
-			},
-		},
-	),
-);
+const [screenshotApiError, screenshotResponse] = await vet(() =>
+	fetch(screenshotApiUrl, {
+		method: "POST",
+		headers: {
+			"Content-Type": "application/json",
+			"x-api-key": process.env.INTERNAL_API_KEY ?? "",
+		},
+		body: JSON.stringify({
+			id: payload.bookmarkId,
+			url: payload.url,
+			userId: payload.userId,
+		}),
+	}),
+);
```
1-417: Consider splitting this file to comply with the 250-line limit.

Per coding guidelines: "Maximum 250 lines per file - split larger files into modules." This file is ~417 lines. Consider extracting helper functions (e.g., `processMessage`, `callScreenshotApi`, `callRemainingApi`, `archiveMessage`) into a separate utility module.

Possible structure:

```
src/utils/api/bookmark/queue-consumer-helpers.ts
  - processQueueMessage()
  - callScreenshotApi()
  - callRemainingApi()
  - archiveQueueMessage()

src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
  - handler() - orchestrates the flow using helpers
```

supabase/migrations/20251201000000_add_upload_file_queue.sql (1)
101-106: Consider adding a time-based filter for archive cleanup.

The cron job deletes ALL archived messages. Consider retaining recent archives for debugging or audit purposes.

```diff
 PERFORM cron.schedule(
 	'upload-file-queue-cron', -- job name
 	'30 1 * * *', -- cron schedule: daily at 1:30 AM
-	$cmd$DELETE FROM pgmq."a_upload-file-queue"$cmd$ -- SQL to execute
+	$cmd$DELETE FROM pgmq."a_upload-file-queue" WHERE archived_at < NOW() - INTERVAL '7 days'$cmd$ -- Delete archives older than 7 days
 );
```

This preserves a rolling 7-day window of archived messages for troubleshooting while preventing unbounded growth.
src/pages/api/v1/bookmarks/add/data.ts (2)
5-5: Consider using `fetch` API instead of axios.

Per coding guidelines, the project prefers the native `fetch` API over axios. While axios works correctly here, consider migrating to `fetch` for consistency across the codebase.
186-206: Verify null safety when accessing nested `data` property.

At line 187, the condition `!data?.data?.length` uses optional chaining, but at line 194, `data.data[0]` is accessed directly. While the condition at line 187 ensures `data.data` has length, consider adding a defensive check or using optional chaining for consistency.

```diff
-	const bookmarkData = data.data[0];
+	const bookmarkData = data.data?.[0];
```

src/pages/api/v1/file/upload/data.ts (3)
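The difference is easy to demonstrate in isolation. This sketch uses a hypothetical `ApiResult` shape that mirrors the reviewed response object; it is not the project's actual type.

```typescript
// Hypothetical shape for illustration only.
type ApiResult = { data?: Array<{ id: number }> };

const populated: ApiResult = { data: [{ id: 1 }] };
const empty: ApiResult = {};

// Optional chaining returns `undefined` when the array is absent,
// whereas `empty.data[0]` would throw a TypeError at runtime.
const first = populated.data?.[0];
const missing = empty.data?.[0];

console.log(first?.id, missing); // 1 undefined
```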
5-7: Consider using `fetch` API and native checks instead of axios and lodash.

Per coding guidelines, prefer native `fetch` over axios. Additionally, `isEmpty(data.data)` at line 219 could be replaced with `!data.data?.length` for consistency and to avoid the lodash dependency for a simple check.
173-184: Missing request body validation.

Unlike the bookmark API, which validates the request body using a Zod schema, this endpoint accesses `request.body` directly without validation. This could lead to runtime errors if required fields are missing or have incorrect types.

Consider adding Zod schema validation similar to the bookmark endpoint:

```ts
const schema = z.object({
	category_id: z.string(),
	name: z.string(),
	thumbnailPath: z.string().nullable(),
	type: z.string(),
	uploadFileNamePath: z.string(),
});

const parseResult = schema.safeParse(request.body);

if (!parseResult.success) {
	response.status(400).json({
		error: "Invalid request body",
		success: false,
	});
	return;
}

const bodyData = parseResult.data;
```
346-360: Verify consistency between logged URL and queued URL for PDFs.

At line 354, the log shows `publicUrl: fileRecord.url` (the original URL), but the queue payload at line 348 uses `queuePublicUrl`, which may be the PDF screenshot URL. This could cause confusion during debugging. Consider logging the actual URL being queued.

```diff
 console.log("Queueing background job for remaining data:", {
 	bookmarkId: fileData.id,
-	publicUrl: fileRecord.url,
+	publicUrl: queuePublicUrl,
+	originalUrl: fileRecord.url,
 });
```

src/pages/api/v1/file/upload/tasks/queue-consumer.ts (1)
3-3: Consider using `fetch` API instead of axios.

Per coding guidelines, prefer the native `fetch` API. Based on learnings, the intentional absence of client-side timeouts for the remaining data API call is noted, which can be preserved with `fetch` as well.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (16)
docs/architecture-overview.md (1 hunks)
docs/file-upload-queue-implementation.md (1 hunks)
docs/internal_api_authentication.md (1 hunks)
package.json (4 hunks)
src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts (0 hunks)
src/async/supabaseCrudHelpers/index.ts (4 hunks)
src/pages/api/v1/bookmarks/add/data.ts (1 hunks)
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts (1 hunks)
src/pages/api/v1/file/upload/data.ts (1 hunks)
src/pages/api/v1/file/upload/tasks/min-data.ts (1 hunks)
src/pages/api/v1/file/upload/tasks/queue-consumer.ts (1 hunks)
src/pages/api/v1/file/upload/tasks/remaining.ts (1 hunks)
src/types/pdfjs-distribution.d.ts (0 hunks)
src/utils/constants.ts (2 hunks)
src/utils/file-upload.ts (0 hunks)
supabase/migrations/20251201000000_add_upload_file_queue.sql (1 hunks)
💤 Files with no reviewable changes (3)
- src/types/pdfjs-distribution.d.ts
- src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts
- src/utils/file-upload.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/internal_api_authentication.md
- src/async/supabaseCrudHelpers/index.ts
🧰 Additional context used
📓 Path-based instructions (11)
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/supabase-nextjs-rules.mdc)
Files:
src/pages/api/v1/file/upload/data.ts
src/pages/api/v1/file/upload/tasks/remaining.ts
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
src/pages/api/v1/bookmarks/add/data.ts
src/pages/api/v1/file/upload/tasks/queue-consumer.ts
src/pages/api/v1/file/upload/tasks/min-data.ts
src/utils/constants.ts
**/*.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/supabase-use-realtime.mdc)
Files:
src/pages/api/v1/file/upload/data.ts
src/pages/api/v1/file/upload/tasks/remaining.ts
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
src/pages/api/v1/bookmarks/add/data.ts
src/pages/api/v1/file/upload/tasks/queue-consumer.ts
src/pages/api/v1/file/upload/tasks/min-data.ts
src/utils/constants.ts
**/*.{js,ts,jsx,tsx,sql}
📄 CodeRabbit inference engine (.cursor/rules/supabase-use-realtime.mdc)
Files:
src/pages/api/v1/file/upload/data.ts
src/pages/api/v1/file/upload/tasks/remaining.ts
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
src/pages/api/v1/bookmarks/add/data.ts
src/pages/api/v1/file/upload/tasks/queue-consumer.ts
src/pages/api/v1/file/upload/tasks/min-data.ts
src/utils/constants.ts
supabase/migrations/20251201000000_add_upload_file_queue.sql
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Files:
src/pages/api/v1/file/upload/data.ts
src/pages/api/v1/file/upload/tasks/remaining.ts
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
src/pages/api/v1/bookmarks/add/data.ts
src/pages/api/v1/file/upload/tasks/queue-consumer.ts
src/pages/api/v1/file/upload/tasks/min-data.ts
src/utils/constants.ts
src/pages/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Prefer fetching data in React Server Components (page can still be static) in Next.js
Files:
src/pages/api/v1/file/upload/data.ts
src/pages/api/v1/file/upload/tasks/remaining.ts
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
src/pages/api/v1/bookmarks/add/data.ts
src/pages/api/v1/file/upload/tasks/queue-consumer.ts
src/pages/api/v1/file/upload/tasks/min-data.ts
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Maximum 250 lines per file - split larger files into modules
Files:
src/pages/api/v1/file/upload/data.ts
src/pages/api/v1/file/upload/tasks/remaining.ts
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
src/pages/api/v1/bookmarks/add/data.ts
src/pages/api/v1/file/upload/tasks/queue-consumer.ts
src/pages/api/v1/file/upload/tasks/min-data.ts
src/utils/constants.ts
package.json
📄 CodeRabbit inference engine (.cursor/rules/supabase-nextjs-rules.mdc)
Install `@supabase/supabase-js` and `@supabase/ssr` packages as dependencies
Files:
package.json
src/utils/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Utility functions should be located in `/src/utils`
Files:
src/utils/constants.ts
**/*.sql
📄 CodeRabbit inference engine (.cursor/rules/supabase-create-db-functions.mdc)
**/*.sql: Default to `SECURITY INVOKER` in Supabase PostgreSQL functions to ensure safer access control; use `SECURITY DEFINER` only when explicitly required
Always set `search_path` to an empty string (`set search_path = '';`) in PostgreSQL functions and use fully qualified names (e.g., `schema_name.table_name`) for all database objects
Ensure all queries within PostgreSQL functions are valid PostgreSQL SQL queries and compatible with Supabase context
Minimize side effects in PostgreSQL functions by preferring functions that return results over those that modify data unless serving a specific purpose (e.g., triggers)
Use explicit typing in PostgreSQL functions by clearly specifying input and output types, avoiding ambiguous or loosely typed parameters
Declare PostgreSQL functions as `IMMUTABLE` or `STABLE` by default to allow better optimization, using `VOLATILE` only if the function modifies data or has side effects
If a PostgreSQL function is used as a trigger, include a valid `CREATE TRIGGER` statement that attaches the function to the desired table and event (e.g., `BEFORE INSERT`)
**/*.sql: The generated SQL must be valid SQL
Always use double apostrophe in SQL strings (eg. 'Night''s watch')
Add indexes on columns used within RLS Policies that are not already indexed or primary keys
**/*.sql: Create indexes for all columns used in RLS policies
Do not write complex RLS queries without proper indexing
Use `realtime.broadcast_changes` in database triggers for database change notifications with customizable payloads and better scalability than postgres_changes
Use `realtime.send` in database triggers for custom messages not bound to a table or for integration with other services
Create RLS policies on `realtime.messages` table for SELECT operations to control read access to private channels
Create RLS policies on `realtime.messages` table for INSERT operations to control write access to private channels
Add database triggers using `realtime.broadcast_chang...
Files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
supabase/migrations/**/*.sql
📄 CodeRabbit inference engine (.cursor/rules/supabase-declarative-database-schema.mdc)
supabase/migrations/**/*.sql: Do not rely on schema diff tool to capture DML statements (insert, update, delete). Add DML statements through versioned migrations in `supabase/migrations/` instead
Do not rely on schema diff tool to capture view ownership, grants, security invoker on views, materialized views, or column type alterations on views. Add these through versioned migrations in `supabase/migrations/` instead
Do not rely on schema diff tool to capture RLS policy alter statements, column privileges, schema privileges, comments, partitions, alter publication statements, create domain statements, or grant statements. Add these through versioned migrations in `supabase/migrations/` instead
Files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
supabase/migrations/*.sql
📄 CodeRabbit inference engine (.cursor/rules/supabase-create-migration.mdc)
supabase/migrations/*.sql: Migration files must be named in the format `YYYYMMDDHHmmss_short_description.sql` with proper casing for months, minutes, and seconds in UTC time
Include a header comment in migration files with metadata about the migration, such as the purpose, affected tables/columns, and any special considerations
Include thorough comments explaining the purpose and expected behavior of each migration step
Write all SQL in consistent casing, prioritizing uppercase for SQL keywords for consistency within each migration file
Add copious comments for any destructive SQL commands, including truncating, dropping, or column alterations
When creating a new table, enable Row Level Security (RLS) even if the table is intended for public access
RLS Policies should be granular with one policy for each operation (select, insert, update, delete) and for each Supabase role (anon and authenticated). DO NOT combine policies even if the functionality is the same for both roles
RLS Policies for public access tables can simply return `true`
Include comments explaining the rationale and intended behavior of each RLS security policy
Files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
🧠 Learnings (32)
📓 Common learnings
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts:174-201
Timestamp: 2025-11-27T07:54:42.100Z
Learning: In the queue consumer at src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts, axios calls to screenshot and remaining data APIs do not include client-side timeouts because these are intentionally long-running operations and the downstream server APIs already have their own timeout configurations. Adding client-side timeouts would prematurely abort legitimate long-running tasks.
📚 Learning: 2025-11-27T10:05:25.508Z
Learnt from: navin-moorthy
Repo: timelessco/recollect PR: 631
File: src/pages/api/profiles/fetch-user-profile-pic.ts:29-29
Timestamp: 2025-11-27T10:05:25.508Z
Learning: Applies to src/pages/api/**/*.{ts,tsx} : Next.js API routes should use default exports (not named exports). The named export guideline only applies to frontend code, not API routes.
Applied to files:
src/pages/api/v1/file/upload/data.ts
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
src/pages/api/v1/bookmarks/add/data.ts
src/pages/api/v1/file/upload/tasks/queue-consumer.ts
📚 Learning: 2025-12-15T08:55:44.694Z
Learnt from: navin-moorthy
Repo: timelessco/recollect PR: 699
File: src/utils/constants.ts:13-14
Timestamp: 2025-12-15T08:55:44.694Z
Learning: Env vars validated at build time via Zod schemas in scripts/env (schema.js and server.js) and wired into next.config.ts. Therefore, for code that uses NEXT_PUBLIC_* vars defined as non-optional in clientSchema (e.g., z.string() without .optional()), the value is guaranteed to exist at runtime. Do not use unnecessary non-null assertions for these validated env vars when reviewing code in src/**/*.ts files.
Applied to files:
src/pages/api/v1/file/upload/data.ts
src/pages/api/v1/file/upload/tasks/remaining.ts
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
src/pages/api/v1/bookmarks/add/data.ts
src/pages/api/v1/file/upload/tasks/queue-consumer.ts
src/pages/api/v1/file/upload/tasks/min-data.ts
src/utils/constants.ts
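The build-time guarantee described in this learning can be sketched without Zod. The helper below is a hypothetical stand-in for the real `scripts/env` schemas; it only illustrates the contract: once validation passes at build time, runtime code may read the value without non-null assertions.

```typescript
// Minimal stand-in for build-time env validation (the real project uses
// Zod schemas in scripts/env wired into next.config.ts). If validation
// succeeds, every required key is guaranteed to be a non-empty string.
function validateClientEnv(
	env: Record<string, string | undefined>,
	requiredKeys: string[],
): Record<string, string> {
	const validated: Record<string, string> = {};
	for (const key of requiredKeys) {
		const value = env[key];
		if (value === undefined || value === "") {
			// Failing here fails the build, so this never surfaces at runtime.
			throw new Error(`Missing required env var: ${key}`);
		}
		validated[key] = value;
	}
	return validated;
}
```

Because the build fails when a required variable is absent, downstream reads of validated `NEXT_PUBLIC_*` values need no `!` assertions.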
📚 Learning: 2025-11-27T07:54:42.100Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts:174-201
Timestamp: 2025-11-27T07:54:42.100Z
Learning: In the queue consumer at src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts, axios calls to screenshot and remaining data APIs do not include client-side timeouts because these are intentionally long-running operations and the downstream server APIs already have their own timeout configurations. Adding client-side timeouts would prematurely abort legitimate long-running tasks.
Applied to files:
src/pages/api/v1/file/upload/tasks/remaining.ts
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
src/pages/api/v1/bookmarks/add/data.ts
src/pages/api/v1/file/upload/tasks/queue-consumer.ts
src/utils/constants.ts
docs/file-upload-queue-implementation.md
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : Always define a version for external imports (e.g., `npm:express@4.18.2` instead of `npm:express`)
Applied to files:
package.json
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/!(*.test|*.spec).ts : Use Web APIs and Deno's core APIs instead of external dependencies (e.g., use fetch instead of Axios, use WebSockets API instead of node-ws)
Applied to files:
package.json
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : For external dependencies, prefer importing via `npm:` and `jsr:` specifiers. Minimize the use of imports from `deno.land/x`, `esm.sh`, and `unpkg.com`. If a package comes from those CDNs, replace the CDN hostname with the `npm:` specifier
Applied to files:
package.json
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-12-01T08:31:10.229Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-01T08:31:10.229Z
Learning: Applies to **/*.{ts,tsx} : Run `pnpm lint:types` to check for TypeScript type errors
Applied to files:
package.json
📚 Learning: 2025-11-26T10:14:11.932Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/data.tsx:179-185
Timestamp: 2025-11-26T10:14:11.932Z
Learning: In the min-data bookmark API response at src/pages/api/v1/bookmarks/add/tasks/min-data.tsx, bookmarkData.user_id returns a string (UUID) at runtime, not the ProfilesTableTypes object as defined in the TypeScript types. The queue payload in src/pages/api/v1/bookmarks/add/data.tsx correctly uses bookmarkData.user_id directly as a string for the userId field.
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
src/pages/api/v1/bookmarks/add/data.ts
src/utils/constants.ts
📚 Learning: 2025-11-26T10:14:58.025Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/utils/api/bookmark/remaining.ts:374-375
Timestamp: 2025-11-26T10:14:58.025Z
Learning: In src/utils/api/bookmark/remaining.ts, the updateBookmarkWithRemainingData function intentionally matches only on id (not id + user_id) because the bookmark id is unique and authorization is verified at the API handler level before calling this utility function.
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
src/pages/api/v1/bookmarks/add/data.ts
src/utils/constants.ts
📚 Learning: 2025-12-01T08:31:10.229Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-01T08:31:10.229Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use only Named exports, never default exports
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-12-01T08:31:10.229Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-01T08:31:10.229Z
Learning: Applies to **/*.{ts,tsx} : Functions should use camelCase naming convention
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-12-01T08:31:10.229Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-01T08:31:10.229Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Never use CommonJS (require/module.exports), use ES modules instead
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : Do NOT use bare specifiers when importing dependencies. Use `npm:` or `jsr:` prefixes for external dependencies (e.g., `npm:@supabase/supabase-js` instead of bare `@supabase/supabase-js`)
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-11-25T08:33:12.981Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-nextjs-rules.mdc:0-0
Timestamp: 2025-11-25T08:33:12.981Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : NEVER import from `@supabase/auth-helpers-nextjs` as it is deprecated
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-11-25T08:33:12.981Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-nextjs-rules.mdc:0-0
Timestamp: 2025-11-25T08:33:12.981Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Server client must use `createServerClient` from `@supabase/ssr` with proper cookie handling using `getAll` and `setAll` methods only
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-11-25T08:33:12.981Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-nextjs-rules.mdc:0-0
Timestamp: 2025-11-25T08:33:12.981Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Must use `@supabase/ssr` package for creating browser and server clients in Next.js Supabase Auth implementation
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-11-25T08:33:12.981Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-nextjs-rules.mdc:0-0
Timestamp: 2025-11-25T08:33:12.981Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Browser client must use `createBrowserClient` from `@supabase/ssr` with NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-11-25T08:33:12.981Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-nextjs-rules.mdc:0-0
Timestamp: 2025-11-25T08:33:12.981Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : NEVER use individual cookie methods (`get`, `set`, `remove`) for Supabase client configuration
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-11-25T08:33:12.981Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-nextjs-rules.mdc:0-0
Timestamp: 2025-11-25T08:33:12.981Z
Learning: Applies to {middleware,middlewares}.{ts,js} : Middleware must implement `createServerClient` with `getAll` and `setAll` cookie methods, call `supabase.auth.getUser()`, and redirect unauthenticated users
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-12-01T07:47:49.719Z
Learnt from: B-a-l-aj-i
Repo: timelessco/recollect PR: 642
File: src/utils/worker.ts:57-60
Timestamp: 2025-12-01T07:47:49.719Z
Learning: In the recollect project, it's acceptable to log full message objects in queue processing (including user_id, url, and other metadata) for debugging purposes, even though it may contain sensitive information. The team prioritizes debuggability over log volume and PII concerns.
Applied to files:
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts
📚 Learning: 2025-11-25T08:33:02.187Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-declarative-database-schema.mdc:0-0
Timestamp: 2025-11-25T08:33:02.187Z
Learning: Applies to supabase/migrations/**/*.sql : Do not rely on schema diff tool to capture RLS policy alter statements, column privileges, schema privileges, comments, partitions, alter publication statements, create domain statements, or grant statements. Add these through versioned migrations in `supabase/migrations/` instead
Applied to files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
📚 Learning: 2025-12-01T08:31:19.438Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-create-migration.mdc:0-0
Timestamp: 2025-12-01T08:31:19.438Z
Learning: Applies to supabase/migrations/*.sql : Add copious comments for any destructive SQL commands, including truncating, dropping, or column alterations
Applied to files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
📚 Learning: 2025-12-01T08:31:19.438Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-create-migration.mdc:0-0
Timestamp: 2025-12-01T08:31:19.438Z
Learning: Applies to supabase/migrations/*.sql : When creating a new table, enable Row Level Security (RLS) even if the table is intended for public access
Applied to files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
📚 Learning: 2025-11-25T08:33:02.187Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-declarative-database-schema.mdc:0-0
Timestamp: 2025-11-25T08:33:02.187Z
Learning: Applies to supabase/schemas/**/*.sql : All database schema modifications must be defined within `.sql` files located in the `supabase/schemas/` directory. Do not create or modify files directly in the `supabase/migrations/` directory unless the modification is about known caveats (DML statements, view ownership, RLS policies, etc.)
Applied to files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
📚 Learning: 2025-12-01T08:31:19.438Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-create-migration.mdc:0-0
Timestamp: 2025-12-01T08:31:19.438Z
Learning: Applies to supabase/migrations/*.sql : Include a header comment in migration files with metadata about the migration, such as the purpose, affected tables/columns, and any special considerations
Applied to files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
📚 Learning: 2025-12-01T08:31:19.438Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-create-migration.mdc:0-0
Timestamp: 2025-12-01T08:31:19.438Z
Learning: Applies to supabase/migrations/*.sql : Include comments explaining the rationale and intended behavior of each RLS security policy
Applied to files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
📚 Learning: 2025-11-25T08:33:02.187Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-declarative-database-schema.mdc:0-0
Timestamp: 2025-11-25T08:33:02.187Z
Learning: Applies to supabase/migrations/**/*.sql : Do not rely on schema diff tool to capture view ownership, grants, security invoker on views, materialized views, or column type alterations on views. Add these through versioned migrations in `supabase/migrations/` instead
Applied to files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
📚 Learning: 2025-11-25T08:33:53.554Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-use-realtime.mdc:0-0
Timestamp: 2025-11-25T08:33:53.554Z
Learning: Applies to **/*.sql : Add database triggers using `realtime.broadcast_changes` function when migrating from postgres_changes to broadcast
Applied to files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
📚 Learning: 2025-12-01T08:31:19.438Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-create-migration.mdc:0-0
Timestamp: 2025-12-01T08:31:19.438Z
Learning: Applies to supabase/migrations/*.sql : RLS Policies should be granular with one policy for each operation (select, insert, update, delete) and for each Supabase role (anon and authenticated). DO NOT combine policies even if the functionality is the same for both roles
Applied to files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
📚 Learning: 2025-12-01T08:31:19.438Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-create-migration.mdc:0-0
Timestamp: 2025-12-01T08:31:19.438Z
Learning: Applies to supabase/migrations/*.sql : Migration files must be named in the format `YYYYMMDDHHmmss_short_description.sql` with proper casing for months, minutes, and seconds in UTC time
Applied to files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
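The naming rule above can be made concrete with a small helper that builds a `YYYYMMDDHHmmss_short_description.sql` filename from a UTC timestamp (the function itself is illustrative, not project code):

```typescript
// Builds a migration filename in the required UTC format,
// e.g. 20251201000000_add_upload_file_queue.sql
function migrationFileName(date: Date, shortDescription: string): string {
	const pad = (n: number) => String(n).padStart(2, "0");
	const stamp =
		`${date.getUTCFullYear()}` +
		pad(date.getUTCMonth() + 1) + // months are 0-indexed in JS Dates
		pad(date.getUTCDate()) +
		pad(date.getUTCHours()) +
		pad(date.getUTCMinutes()) +
		pad(date.getUTCSeconds());
	return `${stamp}_${shortDescription}.sql`;
}
```

Using the UTC accessors (not the local-time ones) is what keeps the casing and ordering of migrations stable across contributor timezones.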
📚 Learning: 2025-12-15T17:16:24.028Z
Learnt from: navin-moorthy
Repo: timelessco/recollect PR: 694
File: supabase/migrations/20251208212147_set_bookmark_categories_rpc.sql:9-39
Timestamp: 2025-12-15T17:16:24.028Z
Learning: In all SQL migration files under supabase/migrations, set the search_path to public, pg_temp (do not leave it empty). This ensures migration functions execute with the correct schema context in Supabase, improving reliability and predictability of migrations.
Applied to files:
supabase/migrations/20251201000000_add_upload_file_queue.sql
🧬 Code graph analysis (4)
src/pages/api/v1/file/upload/data.ts (3)
src/utils/constants.ts (4): `getBaseUrl` (47-47), `NEXT_API_URL` (49-49), `UPLOAD_FILE_APIS` (86-91), `PDF_MIME_TYPE` (352-352)
src/utils/helpers.ts (1): `getAxiosConfigWithAuth` (324-339)
src/utils/supabaseServerClient.ts (1): `apiSupabaseClient` (37-84)
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts (3)
src/types/apiTypes.ts (1): `NextApiRequest` (211-213)
src/utils/supabaseServerClient.ts (1): `apiSupabaseServiceClient` (96-105)
src/utils/constants.ts (3): `getBaseUrl` (47-47), `NEXT_API_URL` (49-49), `ADD_BOOKMARK_APIS` (78-84)
src/pages/api/v1/bookmarks/add/data.ts (5)
src/types/apiTypes.ts (3): `SingleListData` (31-46), `NextApiRequest` (211-213), `AddBookmarkMinDataPayloadTypes` (217-221)
src/utils/api/bookmark/add.ts (1): `getBookmarkBodySchema` (48-53)
src/utils/constants.ts (3): `getBaseUrl` (47-47), `NEXT_API_URL` (49-49), `ADD_BOOKMARK_APIS` (78-84)
src/utils/helpers.ts (2): `getAxiosConfigWithAuth` (324-339), `checkIfUrlAnImage` (357-360)
src/utils/supabaseServerClient.ts (1): `apiSupabaseClient` (37-84)
src/pages/api/v1/file/upload/tasks/queue-consumer.ts (3)
src/types/apiTypes.ts (2): `SingleListData` (31-46), `ImgMetadataType` (12-27)
src/utils/supabaseServerClient.ts (1): `apiSupabaseServiceClient` (96-105)
src/utils/constants.ts (3): `getBaseUrl` (47-47), `NEXT_API_URL` (49-49), `UPLOAD_FILE_APIS` (86-91)
🪛 Gitleaks (8.30.0)
docs/file-upload-queue-implementation.md
[high] 774-775: Discovered a potential authorization token provided in a curl command header, which could compromise the curl accessed resource.
(curl-auth-header)
[high] 781-782: Discovered a potential authorization token provided in a curl command header, which could compromise the curl accessed resource.
(curl-auth-header)
🪛 LanguageTool
docs/file-upload-queue-implementation.md
[style] ~496-~496: To form a complete sentence, be sure to include a subject or ‘there’.
Context: ...eparate Min-Data and Remaining APIs? Could be one API, but separated for: - **F...
(MISSING_IT_THERE)
🔇 Additional comments (24)
package.json (2)
92-93: Coding guidelines compliance confirmed. The required `@supabase/ssr` and `@supabase/supabase-js` dependencies are already present as specified in the coding guidelines.
97-97: Package additions align with PR objectives and satisfy all requirements. The new dependencies support the API documentation enhancement (swagger-ui-react with @types/swagger-ui-react and openapi-types), and @vercel/functions is actively used in the codebase for background task handling via the `waitUntil` function in API routes. The coding guideline requiring @supabase/supabase-js and @supabase/ssr is already satisfied, with both packages present in dependencies at versions 2.86.0 and 0.8.0 respectively. All package versions appear current.
src/utils/constants.ts (2)
78-91: Well-organized API endpoint grouping. The new `ADD_BOOKMARK_APIS` and `UPLOAD_FILE_APIS` objects provide a clean, organized structure for related endpoints. This pattern improves discoverability and maintainability compared to individual constants scattered throughout the file.
148-150: LGTM! The `PDF_SCREENSHOT_API` constant follows the established naming convention and is appropriately placed with other server API constants.
src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts (4)
52-68: Internal API key authentication looks good. The authentication implementation correctly validates the API key from either the `x-api-key` header or the `Authorization: Bearer` header, returning an appropriate 401 response on failure. This addresses the security concern raised in past reviews.
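The dual-header check described here can be sketched as a pure helper. The header shapes and function names below are illustrative assumptions, not the handler's exact code:

```typescript
// Accept the internal API key from either `x-api-key` or
// `Authorization: Bearer <key>` (illustrative sketch, not project code).
function extractApiKey(
	headers: Record<string, string | undefined>,
): string | undefined {
	const direct = headers["x-api-key"];
	if (direct) return direct;

	const auth = headers["authorization"];
	if (auth?.startsWith("Bearer ")) {
		return auth.slice("Bearer ".length);
	}

	return undefined;
}

// A request is authorized only when the extracted key matches the secret.
function isAuthorized(
	headers: Record<string, string | undefined>,
	internalApiKey: string,
): boolean {
	return extractApiKey(headers) === internalApiKey;
}
```

Any request that matches neither header (or carries the wrong key) would get the 401 path described above.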
70-106: Queue read operation with solid error handling. The queue read implementation properly handles errors with Sentry reporting and returns meaningful error responses. The batch size of 10 is reasonable for processing throughput.
141-171: Payload validation handles missing fields appropriately. The validation correctly checks for required fields (`bookmarkId`, `url`, `isImage`) and logs invalid payloads to Sentry for debugging. Invalid messages are counted as failed but not archived, allowing manual inspection.
281-336: Conditional archiving logic is well-designed. The archiving logic correctly archives only when both screenshot (when required) and remaining data operations succeed. Failed archive operations are tracked separately with appropriate error reporting.
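The archive condition described in this comment reduces to a small predicate. The function and flag names below are illustrative, not the consumer's actual identifiers:

```typescript
// A message is archived only when the screenshot step (if required at all)
// and the remaining-data step both succeeded; otherwise it stays in the
// queue for retry. Names are illustrative, not the consumer's actual code.
function shouldArchive(
	needsScreenshot: boolean,
	screenshotSucceeded: boolean,
	remainingDataSucceeded: boolean,
): boolean {
	const screenshotOk = !needsScreenshot || screenshotSucceeded;
	return screenshotOk && remainingDataSucceeded;
}
```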
supabase/migrations/20251201000000_add_upload_file_queue.sql (2)
16-21: Queue creation logic is well-guarded. Good use of existence checks before creating the queue to ensure idempotent migrations.
88-108: Cron job setup with proper guards. The cron job setup correctly checks for both the archive table and pg_cron extension existence before scheduling, and handles existing job removal gracefully.
docs/architecture-overview.md (1)
1-11: Helpful architecture index document. Good practice to maintain a central index for architecture documentation. The structure is clean and provides clear navigation to detailed implementation docs.
docs/file-upload-queue-implementation.md (2)
1-865: Excellent, comprehensive documentation. This is a well-structured and thorough documentation of the file upload queue system. It covers architecture, flow diagrams, components, design decisions, error handling, and operational guidance. The mermaid diagrams and code examples significantly aid understanding.
851-857: Update documentation to reflect correct bookmark queue authentication. The queue-consumer.ts implementation uses `INTERNAL_API_KEY` for API endpoint authentication (line 56), not "Service role key" as stated in the comparison table. While the queue consumer internally uses `apiSupabaseServiceClient()` with the service role key for database operations, the endpoint itself requires the internal API key. Update the "Authentication" row in the table from "Service role key" to "Internal API key" to match the implementation.
Likely an incorrect or invalid review comment.
src/pages/api/v1/bookmarks/add/data.ts (3)
41-84: LGTM - Well-structured helper function with proper error handling. The `callMinDataApi` helper cleanly encapsulates the min-data API call with appropriate Sentry error capture and logging. The error response structure is consistent with the `ApiResponse` type.
127-172: LGTM - Proper request validation and authentication flow. The handler correctly validates the HTTP method, initializes Supabase client, authenticates the user, and validates the request body using Zod schema. Error responses are well-structured with appropriate status codes.
215-273: LGTM - Background processing with proper error isolation. The `waitUntil` block correctly handles background work after the response is sent. Error handling within the async IIFE ensures failures don't affect the already-sent response, and Sentry captures any issues for observability.
src/pages/api/v1/file/upload/data.ts (2)
49-92: LGTM - Consistent helper function pattern. The `callMinDataApi` helper follows the same well-structured pattern as the bookmark endpoint with proper error handling and Sentry integration.
262-344: LGTM - Robust PDF screenshot handling with graceful fallback. The PDF screenshot logic includes comprehensive error handling for axios errors, appropriate timeouts, and graceful fallback to the original URL if screenshot generation fails. This ensures the queue operation continues even if the external service is unavailable.
src/pages/api/v1/file/upload/tasks/remaining.ts (1)
src/pages/api/v1/file/upload/tasks/remaining.ts (1)
170-198: LGTM - Proper internal API authentication pattern.The handler correctly validates the internal API key from either
x-api-keyheader or Bearer token, uses service client for elevated permissions, and returns appropriate 401/405 responses for invalid requests.src/pages/api/v1/file/upload/tasks/min-data.ts (2)
46-142: LGTM - Robust video processing with per-operation error handling. The `videoLogic` function properly isolates failures for blurhash, OCR, and image caption generation. Each operation catches its own errors and reports to Sentry, ensuring one failure doesn't prevent other metadata from being generated.
341-359: LGTM - Database insert with proper error handling. The database insert operation correctly handles errors, captures them to Sentry, and returns appropriate responses. The select of `id` after insert ensures the created record ID is returned to the caller.
src/pages/api/v1/file/upload/tasks/queue-consumer.ts (3)
48-70: LGTM - Proper internal API authentication. The authentication pattern correctly checks both the `x-api-key` header and Bearer token, returning a detailed 401 response with proper structure matching the `ApiResponse` type.
133-169: LGTM - Proper payload validation with graceful failure handling. Invalid payloads are logged, captured to Sentry, counted as failures, and the loop continues to process remaining messages. This ensures one bad message doesn't block the entire batch.
224-276: LGTM - Correct archive-on-success pattern. Messages are only archived after successful processing, ensuring failed messages remain in the queue for retry. Archive failures are properly tracked separately from processing failures, maintaining accurate counts.
**Check queue status:**

```sql
-- Count messages in queue
SELECT COUNT(*) FROM pgmq_public.upload_file_queue;

-- View messages
SELECT * FROM pgmq_public.upload_file_queue LIMIT 10;

-- View archived messages
SELECT * FROM pgmq_public.upload_file_queue_archive
ORDER BY archived_at DESC LIMIT 10;
```
SQL table names may not match actual PGMQ naming convention.
The monitoring SQL examples use underscore-separated table names (pgmq_public.upload_file_queue), but PGMQ typically creates tables with the queue name as-is (e.g., pgmq.q_upload-file-queue as shown in the migration file).
```diff
 -- Count messages in queue
-SELECT COUNT(*) FROM pgmq_public.upload_file_queue;
+SELECT COUNT(*) FROM pgmq."q_upload-file-queue";

 -- View messages
-SELECT * FROM pgmq_public.upload_file_queue LIMIT 10;
+SELECT * FROM pgmq."q_upload-file-queue" LIMIT 10;

 -- View archived messages
-SELECT * FROM pgmq_public.upload_file_queue_archive
+SELECT * FROM pgmq."a_upload-file-queue"
 ORDER BY archived_at DESC LIMIT 10;
```

🤖 Prompt for AI Agents
In docs/file-upload-queue-implementation.md around lines 787 to 799, the SQL
examples use underscore-style table names (pgmq_public.upload_file_queue) which
do not match PGMQ's actual naming convention; update the examples to reference
the real PGMQ table names as created by the migration (use the pgmq schema and
the queue name format used there, e.g. the q_<queue-name> and corresponding
archive table name), and replace all example queries accordingly so they reflect
the exact table identifiers used by the migration.
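The naming convention this comment relies on (live messages in `pgmq."q_<name>"`, archived messages in `pgmq."a_<name>"`) can be captured in a tiny helper; the function itself is illustrative, not project code:

```typescript
// Maps a PGMQ queue name to the table identifiers PGMQ creates for it:
// q_<name> holds live messages, a_<name> holds archived ones. Quoting is
// required here because the queue name contains hyphens.
function pgmqTables(queueName: string): { queue: string; archive: string } {
	return {
		queue: `pgmq."q_${queueName}"`,
		archive: `pgmq."a_${queueName}"`,
	};
}
```

Monitoring queries should interpolate these identifiers rather than hand-written underscore variants, which is exactly the mismatch the review comment flags.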
```ts
export default async function handler(
	request: NextApiRequest<BodyDataType>,
	response: NextApiResponse<UploadFileApiResponse>,
) {
	try {
		const supabase = apiSupabaseClient(request, response);

		// Authentication check
		const { data: userData, error: userError } = await supabase.auth.getUser();
		const userId = userData?.user?.id;
		const email = userData?.user?.email;

		if (userError || !userId) {
			console.warn("User authentication failed:", {
				error: userError?.message,
			});
			response.status(401).json({
				success: false,
				error: "Unauthorized",
			});
			return;
		}
```
Missing HTTP method validation.
Unlike other endpoints in this PR, this handler doesn't validate that the request method is POST. This could allow unexpected behavior from GET or other HTTP methods.
```diff
 export default async function handler(
 	request: NextApiRequest<BodyDataType>,
 	response: NextApiResponse<UploadFileApiResponse>,
 ) {
 	try {
+		// Validate request method
+		if (request.method !== "POST") {
+			response.status(405).json({
+				success: false,
+				error: "Method Not Allowed",
+			});
+			return;
+		}
+
 		const supabase = apiSupabaseClient(request, response);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
export default async function handler(
	request: NextApiRequest<BodyDataType>,
	response: NextApiResponse<UploadFileApiResponse>,
) {
	try {
		// Validate request method
		if (request.method !== "POST") {
			response.status(405).json({
				success: false,
				error: "Method Not Allowed",
			});
			return;
		}

		const supabase = apiSupabaseClient(request, response);

		// Authentication check
		const { data: userData, error: userError } = await supabase.auth.getUser();
		const userId = userData?.user?.id;
		const email = userData?.user?.email;

		if (userError || !userId) {
			console.warn("User authentication failed:", {
				error: userError?.message,
			});
			response.status(401).json({
				success: false,
				error: "Unauthorized",
			});
			return;
		}
```
🤖 Prompt for AI Agents
In src/pages/api/v1/file/upload/tasks/min-data.ts around lines 189 to 210, the
handler is missing an HTTP method check allowing non-POST requests to reach the
logic; add an early guard that verifies request.method === 'POST' and if not
respond with status 405, set the Allow header to 'POST', and return a JSON error
payload (e.g., success: false, error: 'Method Not Allowed') so only POST
requests proceed to the authentication and business logic.
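The guard suggested above can be expressed as a pure function, which makes the 405 behavior easy to test in isolation. The names and return shape below are illustrative, not the handler's exact code:

```typescript
// Returns the error response to send for a non-POST request, or null to
// let the request proceed. Mirrors the handler's JSON error payload shape.
type MethodGuardResult = {
	status: number;
	body: { success: false; error: string };
} | null;

function guardPostOnly(method: string | undefined): MethodGuardResult {
	if (method !== "POST") {
		return {
			status: 405,
			body: { success: false, error: "Method Not Allowed" },
		};
	}
	return null;
}
```

Calling this first in the handler ensures GET, PUT, and missing-method requests never reach the authentication and business logic.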
```ts
if (
	Number.parseInt(categoryId as string, 10) !== 0 &&
	typeof categoryId === "number"
) {
	const checkIfUserIsCategoryOwnerOrCollaboratorValue =
		await checkIfUserIsCategoryOwnerOrCollaborator(
			supabase,
			categoryId as number,
			userId as string,
			email as string,
			response,
		);

	// Note: helper function already sends error response if needed
	if (!checkIfUserIsCategoryOwnerOrCollaboratorValue) {
		console.warn("User authorization failed for category:", { categoryId });
		// Don't send response here - helper already sent it if there was an error
		// Only send 403 if user lacks permission (not a DB error)
		if (!response.headersSent) {
			response.status(403).json({
				error:
					"User is neither owner or collaborator for the collection or does not have edit access",
				success: false,
			});
		}

		return;
	}
}
```
Category ID type check logic appears flawed.
At line 237-238, the condition Number.parseInt(categoryId as string, 10) !== 0 && typeof categoryId === "number" will always be false because categoryId comes from data.category_id which is typed as string in BodyDataType. The typeof categoryId === "number" check will never pass.
If the intent is to check permissions for non-zero categories:
```diff
 // Check category permissions if not using default category (0)
-if (
-	Number.parseInt(categoryId as string, 10) !== 0 &&
-	typeof categoryId === "number"
-) {
+const parsedCategoryId = Number.parseInt(categoryId as string, 10);
+if (!Number.isNaN(parsedCategoryId) && parsedCategoryId !== 0) {
 	const checkIfUserIsCategoryOwnerOrCollaboratorValue =
 		await checkIfUserIsCategoryOwnerOrCollaborator(
 			supabase,
-			categoryId as number,
+			parsedCategoryId,
 			userId as string,
 			email as string,
 			response,
 		);
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// Check category permissions if not using default category (0)
const parsedCategoryId = Number.parseInt(categoryId as string, 10);
if (!Number.isNaN(parsedCategoryId) && parsedCategoryId !== 0) {
	const checkIfUserIsCategoryOwnerOrCollaboratorValue =
		await checkIfUserIsCategoryOwnerOrCollaborator(
			supabase,
			parsedCategoryId,
			userId as string,
			email as string,
			response,
		);

	// Note: helper function already sends error response if needed
	if (!checkIfUserIsCategoryOwnerOrCollaboratorValue) {
		console.warn("User authorization failed for category:", { categoryId });
		// Don't send response here - helper already sent it if there was an error
		// Only send 403 if user lacks permission (not a DB error)
		if (!response.headersSent) {
			response.status(403).json({
				error:
					"User is neither owner or collaborator for the collection or does not have edit access",
				success: false,
			});
		}

		return;
	}
}
```
🤖 Prompt for AI Agents
In src/pages/api/v1/file/upload/tasks/min-data.ts around lines 236 to 264, the
current condition uses typeof categoryId === "number" which will never be true
because categoryId is a string; replace the boolean check with a numeric parse
and presence guard: parse categoryId to a number (e.g. const catId =
Number.parseInt(categoryId as string, 10) or Number(categoryId)), ensure it is a
valid non-zero number (e.g. !Number.isNaN(catId) && catId !== 0), then call
checkIfUserIsCategoryOwnerOrCollaborator with that numeric catId; also remove
the incorrect typeof check and keep the existing response.headersSent handling
and 403 response logic unchanged.
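The parse-then-guard pattern the suggestion relies on can be checked in isolation (a sketch; the helper name is hypothetical):

```typescript
// A string categoryId can never satisfy `typeof categoryId === "number"`,
// so the guard parses first and then rejects NaN and the default category 0.
export const isNonDefaultCategory = (categoryId: string): boolean => {
	const parsedCategoryId = Number.parseInt(categoryId, 10);
	return !Number.isNaN(parsedCategoryId) && parsedCategoryId !== 0;
};
```

`isNonDefaultCategory("42")` passes, while `"0"` and non-numeric input such as `"abc"` are rejected, which matches the intended permission-check condition.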
```ts
} else {
	// Processing failed, don't archive
	console.log("Message will remain in queue for retry:", {
		messageId: message.msg_id,
		fileId: payload.id,
		remainingDataSuccess,
	});

	Sentry.captureException(
		new Error(remainingDataError || "Processing failed"),
		{
			tags: {
				operation: "process_queue_message",
			},
			extra: {
				messageId: message.msg_id,
				fileId: payload.id,
				remainingDataSuccess,
			},
		},
	);

	failedCount++;
	results.push({
		messageId: message.msg_id,
		fileId: payload.id,
		success: false,
		archived: false,
		remainingDataSuccess,
		error: remainingDataError || "Processing failed",
	});
}
```
🧩 Analysis chain

🏁 Script executed:

```shell
find . -name "*queue-consumer*" -o -name "*consumer*" | grep -E "\.(ts|tsx|js)$" | head -20
```

Repository: timelessco/recollect
Length of output: 175

🏁 Script executed:

```shell
wc -l src/pages/api/v1/file/upload/tasks/queue-consumer.ts
```

Length of output: 120

🏁 Script executed:

```shell
cat -n src/pages/api/v1/file/upload/tasks/queue-consumer.ts
```

Length of output: 10835

🏁 Script executed:

```shell
grep -n "read\|read_ct\|visibility\|vt\|dead" src/pages/api/v1/file/upload/tasks/queue-consumer.ts
```

Length of output: 461

🏁 Script executed:

```shell
find . -path "*supabase/migrations*" -name "*.sql" | xargs grep -l "pgmq\|queue" | head -5
```

Length of output: 308

🏁 Script executed:

```shell
cat -n supabase/migrations/20251201000000_add_upload_file_queue.sql
```

Length of output: 5782

🏁 Script executed:

```shell
grep -A 10 "pgmq.read\|\.rpc.*read" src/pages/api/v1/file/upload/tasks/queue-consumer.ts | head -20
```

Length of output: 327

🏁 Script executed:

```shell
grep -r "read_ct\|read_count" src/pages/api/v1/file/upload/tasks/ 2>/dev/null
```

Length of output: 46

🏁 Script executed:

```shell
grep -r "pgmq.*read\|supabase.*read" supabase/migrations/ | head -10
```

Length of output: 1370

🏁 Script executed:

```shell
grep -A 5 -B 5 "sleep_seconds\|vt :=" supabase/migrations/20251105181644_prod_schema.sql | grep -A 10 "read"
```

Length of output: 1839

🏁 Script executed:

```shell
grep -n "sleep_seconds\|vt" src/pages/api/v1/file/upload/tasks/queue-consumer.ts
```

Length of output: 88

🏁 Script executed:

```shell
cat -n supabase/migrations/20251126000000_add_bookmark_url_queue.sql | head -50
```

Length of output: 2634

🏁 Script executed:

```shell
find . -type f \( -name "*.ts" -o -name "*.sql" \) | xargs grep -l "dlq\|dead.letter\|dead_letter\|failed.*queue" 2>/dev/null | head -10
```

Length of output: 101

🏁 Script executed:

```shell
grep -n "dlq\|dead.letter\|dead_letter\|failed.*queue" src/pages/api/v1/raindrop/import.ts | head -20
```

Length of output: 216

🏁 Script executed:

```shell
grep -r "sleep_seconds\|visibility\|timeout" src/pages/api/v1/file/upload/tasks/ src/pages/api/v1/bookmarks/add/tasks/
```

Length of output: 288
Fix the visibility timeout and implement retry limits to prevent message reprocessing loops.

The queue consumer reads messages with `sleep_seconds: 0`, which maps to a visibility timeout of 0 seconds. This means failed messages become immediately visible for reprocessing without delay, risking tight loops and database load. Additionally, the `read_ct` field can be used to implement consumer retries by marking messages as failed after a certain number of attempts, but it is not currently checked.

Recommended fixes:

- Increase `sleep_seconds` to a reasonable value (e.g., 30-60 seconds) to prevent immediate reprocessing
- Check `message.read_ct` and archive/delete the message after a maximum retry threshold is reached
- Implement a dead-letter queue table to move permanently failed messages for manual review
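The retry-threshold part of these recommendations can be sketched as a small, pure decision step in the consumer loop (the names and threshold value are illustrative; pgmq increments `read_ct` each time a message is handed out):

```typescript
// Route a queue message based on how many times pgmq has already served it.
// Beyond MAX_RETRIES reads, it goes to a dead-letter table for manual review
// instead of being retried again.
const MAX_RETRIES = 5;

type QueueAction = "dead_letter" | "process";

export const decideQueueAction = (readCt: number): QueueAction =>
	readCt > MAX_RETRIES ? "dead_letter" : "process";
```

On the read side, passing a non-zero visibility timeout (e.g. `sleep_seconds: 30`) to the queue-read call keeps a failed message invisible long enough to avoid the tight retry loop described above.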
```ts
if (ogImage) {
	// Handle OCR processing
	try {
		// For internal API calls, we need to get the userId from the file record
		const { data: fileRecord } = await supabase
			.from(MAIN_TABLE_NAME)
			.select("user_id")
			.eq("id", fileId)
			.single();

		const userId = fileRecord?.user_id;

		if (userId) {
			imageOcrValue = await ocr(ogImage, supabase, userId);
		}
	} catch (error) {
		console.error("OCR processing failed:", error);
		Sentry.captureException(error, {
			tags: {
				operation: "ocr_processing",
			},
			extra: {
				publicUrl: ogImage,
				fileId,
			},
		});
	}

	// Handle image caption generation
	try {
		// For internal API calls, we need to get the userId from the file record
		const { data: fileRecord } = await supabase
			.from(MAIN_TABLE_NAME)
			.select("user_id")
			.eq("id", fileId)
			.single();

		const userId = fileRecord?.user_id;

		if (userId) {
			imageCaption = await imageToText(ogImage, supabase, userId);
		}
	} catch (error) {
		console.error("Image caption generation failed:", error);
		Sentry.captureException(error, {
			tags: {
				operation: "image_caption_generation",
			},
			extra: {
				publicUrl: ogImage,
				fileId,
			},
		});
	}
}
```
🛠️ Refactor suggestion | 🟠 Major
Redundant database queries for user_id.
The `notVideoLogic` function queries the database for `user_id` twice: once for OCR (lines 40-44) and again for the image caption (lines 67-71). This is inefficient and increases latency. Fetch `user_id` once at the beginning and reuse it.
```diff
 const notVideoLogic = async (
 	publicUrl: string,
 	mediaType: string | null,
 	supabase: SupabaseClient,
 	fileId: number,
 ) => {
 	const ogImage = publicUrl;
 	let imageCaption = null;
 	let imageOcrValue = null;
+	// Fetch user_id once for both OCR and image caption
+	let userId: string | null = null;
+	try {
+		const { data: fileRecord } = await supabase
+			.from(MAIN_TABLE_NAME)
+			.select("user_id")
+			.eq("id", fileId)
+			.single();
+		userId = fileRecord?.user_id;
+	} catch (error) {
+		console.error("Failed to fetch user_id:", error);
+	}
 	if (ogImage) {
 		// Handle OCR processing
 		try {
-			// For internal API calls, we need to get the userId from the file record
-			const { data: fileRecord } = await supabase
-				.from(MAIN_TABLE_NAME)
-				.select("user_id")
-				.eq("id", fileId)
-				.single();
-
-			const userId = fileRecord?.user_id;
-
 			if (userId) {
 				imageOcrValue = await ocr(ogImage, supabase, userId);
 			}
 		} catch (error) {
 			// ... error handling
 		}
 		// Handle image caption generation
 		try {
-			// For internal API calls, we need to get the userId from the file record
-			const { data: fileRecord } = await supabase
-				.from(MAIN_TABLE_NAME)
-				.select("user_id")
-				.eq("id", fileId)
-				.single();
-
-			const userId = fileRecord?.user_id;
-
 			if (userId) {
 				imageCaption = await imageToText(ogImage, supabase, userId);
 			}
 		} catch (error) {
 			// ... error handling
 		}
 	}
 }
```

🤖 Prompt for AI Agents
In src/pages/api/v1/file/upload/tasks/remaining.ts around lines 36 to 90, the
code queries the database for user_id twice (once before OCR and again before
image caption); instead, run a single supabase query to select user_id for the
given fileId before the two try blocks, store it in a local variable, and reuse
that variable in both ocr(...) and imageToText(...) calls; if no user_id is
returned, skip both operations early (or log/handle once) and keep existing
try/catch and Sentry reporting unchanged.
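If restructuring the function is awkward, the same "fetch once, reuse everywhere" guarantee can come from a tiny async memoizer (a generic sketch, not an existing project helper):

```typescript
// Wrap an async fetcher so the underlying call runs at most once and the
// same promise is reused by every caller (e.g. the OCR and caption branches).
export const memoizeOnce = <T>(
	fetcher: () => Promise<T>,
): (() => Promise<T>) => {
	let cached: Promise<T> | undefined;
	return () => {
		cached ??= fetcher();
		return cached;
	};
};
```

Wrapping the `user_id` lookup with `memoizeOnce` lets both branches keep their own call sites while the database is only hit once.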
```ts
// Merge: keep existing values if new ones are null/undefined
const mergedMeta = {
	...existingMeta,
	...Object.fromEntries(
		Object.entries(metaData).map(([key, value]) => [
			key,
			value || existingMeta?.[key],
		]),
	),
};
```
Falsy value merge logic may incorrectly override valid values.
The merge logic `value || existingMeta?.[key]` will incorrectly fall back to existing values when `value` is a valid falsy value like `0`, `false`, or `""`. Use nullish coalescing (`??`) instead.
```diff
 // Merge: keep existing values if new ones are null/undefined
 const mergedMeta = {
 	...existingMeta,
 	...Object.fromEntries(
 		Object.entries(metaData).map(([key, value]) => [
 			key,
-			value || existingMeta?.[key],
+			value ?? existingMeta?.[key],
 		]),
 	),
 };
```

📝 Committable suggestion
```ts
// Merge: keep existing values if new ones are null/undefined
const mergedMeta = {
	...existingMeta,
	...Object.fromEntries(
		Object.entries(metaData).map(([key, value]) => [
			key,
			value ?? existingMeta?.[key],
		]),
	),
};
```
🤖 Prompt for AI Agents
In src/pages/api/v1/file/upload/tasks/remaining.ts around lines 258 to 267, the
merge step uses `value || existingMeta?.[key]` which treats valid falsy values
(0, false, "") as absent; change the fallback to nullish coalescing so only
null/undefined fall back — replace the `value || existingMeta?.[key]` expression
with `value ?? existingMeta?.[key]` and keep the rest of the merge logic intact.
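The behavioural difference is easy to demonstrate in isolation (the helper and values here are illustrative):

```typescript
// With ||, valid falsy values (0, false, "") are discarded in favour of the
// existing value; with ??, only null/undefined fall back.
export const mergeMeta = (
	existing: Record<string, unknown>,
	incoming: Record<string, unknown>,
): Record<string, unknown> => ({
	...existing,
	...Object.fromEntries(
		Object.entries(incoming).map(([key, value]) => [
			key,
			value ?? existing[key],
		]),
	),
});
```

Here `mergeMeta({ width: 100, favorite: true }, { width: 0, favorite: null })` keeps the incoming `width: 0` but falls back to the existing `favorite: true`; the `||` version would wrongly restore `width: 100`.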
```sql
-- Create RLS policy for queue access
DROP POLICY IF EXISTS "auth-access" ON "pgmq"."q_upload-file-queue";
CREATE POLICY "auth-access" ON "pgmq"."q_upload-file-queue"
	AS permissive FOR all TO authenticated USING (true);
```
RLS policy should be granular per operation and role.
Per coding guidelines: "RLS Policies should be granular with one policy for each operation (select, insert, update, delete) and for each Supabase role (anon and authenticated). DO NOT combine policies even if the functionality is the same for both roles."
The current policy combines all operations into one.
Consider splitting into granular policies:
```diff
-DROP POLICY IF EXISTS "auth-access" ON "pgmq"."q_upload-file-queue";
-CREATE POLICY "auth-access" ON "pgmq"."q_upload-file-queue"
-  AS permissive FOR all TO authenticated USING (true);
+-- Drop existing combined policy if it exists
+DROP POLICY IF EXISTS "auth-access" ON "pgmq"."q_upload-file-queue";
+
+-- Granular policies for authenticated role
+DROP POLICY IF EXISTS "authenticated_select" ON "pgmq"."q_upload-file-queue";
+CREATE POLICY "authenticated_select" ON "pgmq"."q_upload-file-queue"
+  AS permissive FOR SELECT TO authenticated USING (true);
+
+DROP POLICY IF EXISTS "authenticated_insert" ON "pgmq"."q_upload-file-queue";
+CREATE POLICY "authenticated_insert" ON "pgmq"."q_upload-file-queue"
+  AS permissive FOR INSERT TO authenticated WITH CHECK (true);
+
+DROP POLICY IF EXISTS "authenticated_update" ON "pgmq"."q_upload-file-queue";
+CREATE POLICY "authenticated_update" ON "pgmq"."q_upload-file-queue"
+  AS permissive FOR UPDATE TO authenticated USING (true) WITH CHECK (true);
+
+DROP POLICY IF EXISTS "authenticated_delete" ON "pgmq"."q_upload-file-queue";
+CREATE POLICY "authenticated_delete" ON "pgmq"."q_upload-file-queue"
+  AS permissive FOR DELETE TO authenticated USING (true);
```

📝 Committable suggestion
```sql
-- Drop existing combined policy if it exists
DROP POLICY IF EXISTS "auth-access" ON "pgmq"."q_upload-file-queue";

-- Granular policies for authenticated role
DROP POLICY IF EXISTS "authenticated_select" ON "pgmq"."q_upload-file-queue";
CREATE POLICY "authenticated_select" ON "pgmq"."q_upload-file-queue"
	AS permissive FOR SELECT TO authenticated USING (true);

DROP POLICY IF EXISTS "authenticated_insert" ON "pgmq"."q_upload-file-queue";
CREATE POLICY "authenticated_insert" ON "pgmq"."q_upload-file-queue"
	AS permissive FOR INSERT TO authenticated WITH CHECK (true);

DROP POLICY IF EXISTS "authenticated_update" ON "pgmq"."q_upload-file-queue";
CREATE POLICY "authenticated_update" ON "pgmq"."q_upload-file-queue"
	AS permissive FOR UPDATE TO authenticated USING (true) WITH CHECK (true);

DROP POLICY IF EXISTS "authenticated_delete" ON "pgmq"."q_upload-file-queue";
CREATE POLICY "authenticated_delete" ON "pgmq"."q_upload-file-queue"
	AS permissive FOR DELETE TO authenticated USING (true);
```
… queue consumer Added fallback to check query parameters for API key in addition to headers. This allows the queue consumer to work with Supabase webhooks that send the API key as a query parameter instead of in headers. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Syncs lockfile with package.json to resolve Vercel deployment error. Updates include webpack, AWS SDK, and pdfjs-dist dependencies. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
src/pages/api/bookmark/fetch-bookmarks-count.ts (2)
202-206: Critical: Hardcoded error messages always returned on success.The endpoint returns hardcoded error messages
["Unauthorized", "Internal Server Error"]even on successful responses (status 200). This breaks the API contract and will confuse consumers who check theerrorfield.🐛 Proposed fix
```diff
-	const errorMessages = ["Unauthorized", "Internal Server Error"];
-
-	const nonEmptyErrors = errorMessages.filter((message) => !isEmpty(message));
-
-	response.status(200).json({ data: count, error: nonEmptyErrors });
+	response.status(200).json({ data: count, error: null });
```
73-207: Migrate this endpoint to the App Router pattern with modern Supabase client.

This legacy Pages Router endpoint uses the deprecated `apiSupabaseClient` pattern and is actively used in the codebase (called in `src/async/supabaseCrudHelpers/index.ts:242`). Migrate to `src/app/api/**` using `createServerClient` with `getAll`/`setAll` cookie methods per the Supabase SSR pattern.

src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts (4)
1-265: Refactor: File exceeds maximum line limit.

This file contains 265 lines, exceeding the 250-line maximum specified in the coding guidelines. Consider splitting this into smaller, focused modules.
Suggested refactoring approach
Consider extracting the following into separate utility functions or modules:
- Video thumbnail generation logic (lines 47-101) → `generateAndUploadVideoThumbnail.ts`
- Presigned URL upload logic (lines 161-194) → `uploadFileWithPresignedUrl.ts`
- Category-specific toast logic (lines 215-261) → `handleFileUploadToasts.ts`

This would improve maintainability and adhere to the single responsibility principle.
74-84: Critical: Node.js Buffer API used in browser context.

Lines 76 and 84 use the Node.js `Buffer` API, which is not available in browser environments. This code will throw a runtime error when executed client-side.

🔧 Proposed fix using browser-native APIs
```diff
 try {
 	const base64Data = thumbnailBase64?.split(",")?.[1];
 	if (base64Data) {
-		const buffer = Buffer.from(base64Data, "base64");
+		// Convert base64 to Uint8Array using browser APIs
+		const binaryString = atob(base64Data);
+		const bytes = new Uint8Array(binaryString.length);
+		for (let i = 0; i < binaryString.length; i++) {
+			bytes[i] = binaryString.charCodeAt(i);
+		}
 		const uploadResponse = await fetch(
 			uploadTokenData.signedUrl,
 			{
 				method: "PUT",
 				headers: {
 					"Content-Type": "image/jpg",
 				},
-				body: buffer.buffer as BodyInit,
+				body: bytes,
 			},
 		);
```
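The browser-safe decode step can be exercised on its own (`atob` is available in browsers and in Node 16+; the helper name is illustrative):

```typescript
// Decode a base64 payload into raw bytes without Node's Buffer,
// using only APIs that also exist in the browser.
export const base64ToBytes = (base64Data: string): Uint8Array => {
	const binaryString = atob(base64Data);
	const bytes = new Uint8Array(binaryString.length);
	for (let index = 0; index < binaryString.length; index += 1) {
		bytes[index] = binaryString.charCodeAt(index);
	}

	return bytes;
};
```

The resulting `Uint8Array` can be passed directly as a `fetch` body for the presigned-URL upload.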
171-194: Refactor: Confusing variable naming.

The variable `errorCondition` (line 171) is true when there is no error (`isNull(error)`), which is counterintuitive. The name suggests it indicates an error condition, but it actually indicates success.

💡 Proposed fix with clearer naming
```diff
 // Generate presigned URL for secure client-side upload
 const storagePath = `${STORAGE_FILES_PATH}/${session?.user?.id}/${uploadFileNamePath}`;
 const { data: uploadTokenData, error } =
 	await storageHelpers.createSignedUploadUrl(
 		R2_MAIN_BUCKET_NAME,
 		storagePath,
 	);

 // Upload file using presigned URL
-const errorCondition = isNull(error);
+const uploadUrlSuccess = isNull(error);

-if (uploadTokenData?.signedUrl && errorCondition) {
+if (uploadTokenData?.signedUrl && uploadUrlSuccess) {
 	try {
 		const uploadResponse = await fetch(uploadTokenData.signedUrl, {
 			method: "PUT",
 			body: data?.file,
 			headers: {
 				"Content-Type": data?.file?.type || "application/octet-stream",
 			},
 		});

 		if (!uploadResponse.ok) {
 			const errorMessage = `Upload failed with status: ${uploadResponse.status}`;
 			errorToast(errorMessage);
 		}
 	} catch {
 		errorToast("Upload failed");
 	}
 }

-if (!errorCondition) {
+if (!uploadUrlSuccess) {
 	errorToast("Failed to create upload URL");
 }
```
218-222: Critical: `any` type usage violates coding guidelines.

Line 220 uses the `any` type, which is explicitly prohibited by the coding guidelines. This defeats TypeScript's type safety and can lead to runtime errors.

🔧 Proposed fix with proper typing
Define a proper type for the API response:
```diff
+type UploadFileApiResponse = {
+	data: unknown;
+	success: boolean;
+};
+
 onSuccess: async (apiResponse, data) => {
 	const uploadedDataType = data?.file?.type;
-	const apiResponseTyped = apiResponse as unknown as {
-		// eslint-disable-next-line @typescript-eslint/no-explicit-any
-		data: any;
-		success: boolean;
-	};
+	const apiResponseTyped = apiResponse as unknown as UploadFileApiResponse;

 	if (apiResponseTyped?.success === true) {
```
🤖 Fix all issues with AI agents
In @.env.example:
- Around line 88-94: Reorder the environment variables under the SERVER block so
they are alphabetically ordered (place INTERNAL_API_KEY before
RECOLLECT_SERVER_API) and add a single blank newline at the end of the file;
specifically, move the INTERNAL_API_KEY line to appear above
RECOLLECT_SERVER_API and ensure the file ends with one trailing newline
character.
🧹 Nitpick comments (4)
src/pages/api/bookmark/fetch-bookmarks-count.ts (1)
196-200: Use Sentry for error tracking instead of console.error.Replace
console.errorwith Sentry error capture for proper production error monitoring and observability.As per coding guidelines, see
docs/sentry_rules.mdfor implementation examples.src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts (2)
109-205: Add Sentry breadcrumbs for optimistic mutation tracking.Per the coding guidelines, optimistic mutations should include Sentry breadcrumbs to aid debugging of state inconsistencies. This is especially important for the rollback logic in
onError.📊 Example implementation
onMutate: async (data: UploadFileApiPayload) => { + Sentry.addBreadcrumb({ + category: 'optimistic-update', + message: 'File upload optimistic update started', + level: 'info', + data: { fileName: data?.file?.name, fileType: data?.file?.type } + }); + // Cancel any outgoing refetches await queryClient.cancelQueries({ queryKey: [BOOKMARKS_KEY, session?.user?.id, CATEGORY_ID, sortBy], }); // ... rest of onMutate }, onError: (context: { previousData: BookmarksPaginatedDataTypes }) => { + Sentry.addBreadcrumb({ + category: 'optimistic-update', + message: 'File upload optimistic update rolled back', + level: 'warning', + }); + queryClient.setQueryData( [BOOKMARKS_KEY, session?.user?.id, CATEGORY_ID, sortBy], context?.previousData, ); },See
docs/sentry_rules.mdfor complete implementation examples.
92-99: Consider sending thumbnail generation errors to Sentry.The error handling for thumbnail generation and upload currently only logs to console (lines 93 and 99). Since thumbnails enhance the user experience, tracking these failures could help identify issues.
📊 Proposed enhancement
} catch (uploadError) { - console.error("Thumbnail upload error:", uploadError); + Sentry.captureException(uploadError, { + level: 'warning', + tags: { feature: 'video-thumbnail-upload' }, + extra: { fileName: thumbnailFileName } + }); }} catch (error) { - console.error("Error generating video thumbnail:", error); + Sentry.captureException(error, { + level: 'warning', + tags: { feature: 'video-thumbnail-generation' }, + extra: { fileType: data?.file?.type } + }); }Note: These are warnings rather than errors since thumbnail generation is optional and shouldn't block the main upload flow.
src/pages/api/bookmark/add-bookmark-min-data.ts (1)
59-59: Consider using JSDoc format for the deprecation comment.The current deprecation comment uses the standard
//format. IDEs typically provide better tooling support (strikethrough, warnings) when using the JSDoc@deprecatedtag:-// @deprecated Use checkIfUserIsCategoryOwnerOrCollaboratorUtil from utils/api/bookmark/add.ts instead +/** + * @deprecated Use checkIfUserIsCategoryOwnerOrCollaboratorUtil from utils/api/bookmark/add.ts instead + */
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (16)
- .env.example
- cspell.json
- package.json
- scripts/env/schema.js
- src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts
- src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts
- src/async/supabaseCrudHelpers/index.ts
- src/components/lightbox/LightBoxPlugin.tsx
- src/pages/api/bookmark/add-bookmark-min-data.ts
- src/pages/api/bookmark/fetch-bookmarks-count.ts
- src/pages/api/file/upload-file.ts
- src/pages/api/v1/tests/file/post/upload.ts
- src/types/apiTypes.ts
- src/utils/constants.ts
- src/utils/supabaseServerClient.ts
- turbo.json
💤 Files with no reviewable changes (1)
- src/components/lightbox/LightBoxPlugin.tsx
🚧 Files skipped from review as they are similar to previous changes (7)
- src/pages/api/file/upload-file.ts
- scripts/env/schema.js
- src/utils/supabaseServerClient.ts
- src/pages/api/v1/tests/file/post/upload.ts
- cspell.json
- src/types/apiTypes.ts
- turbo.json
🧰 Additional context used
📓 Path-based instructions (10)
{.env,.env.local,.env.example}
📄 CodeRabbit inference engine (.cursor/rules/supabase-nextjs-rules.mdc)
Set up environment variables for NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY
Files:
.env.example
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/supabase-nextjs-rules.mdc)
**/*.{ts,tsx,js,jsx}: Must use `@supabase/ssr` package for creating browser and server clients in Next.js Supabase Auth implementation
Browser client must use `createBrowserClient` from `@supabase/ssr` with NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY
Server client must use `createServerClient` from `@supabase/ssr` with proper cookie handling using `getAll` and `setAll` methods only
NEVER use individual cookie methods (`get`, `set`, `remove`) for Supabase client configuration
NEVER import from `@supabase/auth-helpers-nextjs` as it is deprecated
Server client cookie `setAll` method must include try-catch error handling as it may be called from Server Components
Files:
- src/utils/constants.ts
- src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts
- src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts
- src/async/supabaseCrudHelpers/index.ts
- src/pages/api/bookmark/fetch-bookmarks-count.ts
- src/pages/api/bookmark/add-bookmark-min-data.ts
**/*.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/supabase-use-realtime.mdc)
**/*.{js,ts,jsx,tsx}: Use `broadcast` for all realtime events (database changes via triggers, messaging, notifications, game state)
Use `presence` sparingly for user state tracking (online status, user counters)
Use topic names that correlate with concepts and tables: `scope:entity` (e.g., `room:123:messages`)
Include unsubscribe/cleanup logic in all implementations
Set `private: true` for channels using database triggers or RLS policies
Give preference to use private channels over public channels (better security and control)
Implement proper error handling and reconnection logic
Do not use `postgres_changes` for new applications (single-threaded, doesn't scale well); migrate to `broadcast from database` on existing applications if necessary
Do not create multiple subscriptions without proper cleanup
Do not use database functions (`realtime.send`, `realtime.broadcast_changes`) in client code
Use dedicated, granular topics to ensure messages are only sent to relevant listeners and improve scalability
Check if channel state is 'subscribed' before subscribing to prevent multiple subscriptions
Set auth with `supabase.realtime.setAuth()` before subscribing to private channels
Configure broadcast options: use `self: true` to receive your own broadcast messages and `ack: true` for server acknowledgment
Monitor channel states (SUBSCRIBED, TIMED_OUT, CLOSED, CHANNEL_ERROR) and rely on built-in automatic reconnection with exponential backoff
Use one channel per logical scope (e.g., `room:123`) rather than nested scopes to optimize channel structure
Shard high-volume topics (e.g., `chat:shard:1`, `chat:shard:2`) to distribute load and improve scalability
Enable enhanced logging by setting `log_level: 'info'` in Supabase Realtime client configuration for debugging
Replace `postgres_changes` with `broadcast` listeners in migration from postgres_changes to realtime.broadcast_changes
Prefer generating code using `broadcast` over `postgres_changes` in all new implementations
Alw...
Files:
- src/utils/constants.ts
- src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts
- src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts
- src/async/supabaseCrudHelpers/index.ts
- src/pages/api/bookmark/fetch-bookmarks-count.ts
- src/pages/api/bookmark/add-bookmark-min-data.ts
**/*.{js,ts,jsx,tsx,sql}
📄 CodeRabbit inference engine (.cursor/rules/supabase-use-realtime.mdc)
**/*.{js,ts,jsx,tsx,sql}: Use snake_case for event names: `entity_action` (e.g., `message_created`)
Do not use generic event names like 'update' or 'change'; use specific event names following the `entity_action` pattern
Files:
- src/utils/constants.ts
- src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts
- src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts
- src/async/supabaseCrudHelpers/index.ts
- src/pages/api/bookmark/fetch-bookmarks-count.ts
- src/pages/api/bookmark/add-bookmark-min-data.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Only create abstractions when actually needed. Prefer clear function/variable names over inline comments. Avoid helper functions when a simple inline expression suffices.
Useknipto remove unused code when making large changes.
Don't unnecessarily addtry/catchblocks.
Add Sentry breadcrumbs for cache misses and state inconsistencies to aid debugging (Optimistic mutations).
Maximum 250 lines per file - split larger files into modules.
Only Functional Programming - Never Class Based Code.
Only Named exports - Never default exports.
Functions use camelCase naming convention.
Constants use UPPER_SNAKE_CASE naming convention.
Prefer type deduction over custom interfaces.
Use an interface with a `props` parameter for functions with 2 or more parameters.
Use a type alias (`type Child = Parent`) when child props equal parent props instead of creating a new interface.
Use utility types: `Parameters<>`, `ReturnType<>`, `Pick<>`, `Awaited<>` for type deduction.
Use the `fetch` API; never use axios or older HTTP client alternatives.
Never use `any` types.
Never use `@ts-ignore` directives.
Never use CommonJS; use ES modules.
Never use `var`; use `const` or `let`.
Use template literals instead of string concatenation.
Use optional chaining (`?.`) for safe property access.
Use `for...of` loops instead of traditional for loops or forEach.
Never use `eval()`.
Never use the `arguments` object; use rest parameters instead.
Never use enums; use const objects or union types instead.
Never use TypeScript namespaces; use ES modules instead.
TypeScript strict mode always enabled.
See `docs/sentry_rules.md` for Sentry implementation examples for error tracking, performance monitoring, and logging best practices.
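Several of the conventions above (const objects instead of enums, a `props` interface for multi-parameter functions, utility-type deduction, template literals, optional chaining) can be shown in one short sketch. All names here (`BOOKMARK_STATUS`, `addBookmark`, etc.) are hypothetical, not from the repo:

```typescript
// Const object + union type instead of an enum.
const BOOKMARK_STATUS = {
	PENDING: "pending",
	READY: "ready",
} as const;
type BookmarkStatus = (typeof BOOKMARK_STATUS)[keyof typeof BOOKMARK_STATUS];

// Functions with 2+ parameters take a single `props` object.
interface AddBookmarkProps {
	status: BookmarkStatus;
	url: string;
}

// Named export; return type is deduced, not hand-written.
export const addBookmark = (props: AddBookmarkProps) => ({
	...props,
	id: `bookmark-${props.url}`, // template literal, not concatenation
});

// Utility types deduce types instead of duplicating interfaces.
type AddBookmarkInput = Parameters<typeof addBookmark>[0];
type AddBookmarkResult = ReturnType<typeof addBookmark>;

const result: AddBookmarkResult = addBookmark({
	status: BOOKMARK_STATUS.PENDING,
	url: "https://example.com",
});

// Optional chaining for safe property access.
console.log(result?.id);
```

`AddBookmarkInput` stays unexported per the rule to only export types used in other files.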
Files:
`src/utils/constants.ts`, `src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts`, `src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts`, `src/async/supabaseCrudHelpers/index.ts`, `src/pages/api/bookmark/fetch-bookmarks-count.ts`, `src/pages/api/bookmark/add-bookmark-min-data.ts`
**/*.{ts,tsx,css}
📄 CodeRabbit inference engine (CLAUDE.md)
Mostly use built-in Tailwind values, occasionally allow dynamic values, rarely use globals.
Files:
`src/utils/constants.ts`, `src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts`, `src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts`, `src/async/supabaseCrudHelpers/index.ts`, `src/pages/api/bookmark/fetch-bookmarks-count.ts`, `src/pages/api/bookmark/add-bookmark-min-data.ts`
**/{src/hooks,src/utils,src/async}/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Hook and utility files use kebab-case naming (e.g., `use-bookmark-categories.ts`), but exported function names remain camelCase.
Files:
src/utils/constants.ts
**/{src/types,**}/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Only export types that are used in other files; check with grep first before exporting.
Files:
`src/utils/constants.ts`, `src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts`, `src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts`, `src/async/supabaseCrudHelpers/index.ts`, `src/pages/api/bookmark/fetch-bookmarks-count.ts`, `src/pages/api/bookmark/add-bookmark-min-data.ts`
package.json
📄 CodeRabbit inference engine (.cursor/rules/supabase-nextjs-rules.mdc)
Install `@supabase/supabase-js` and `@supabase/ssr` packages as dependencies
Files:
package.json
**/{src/app,src/pages}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/{src/app,src/pages}/**/*.{ts,tsx}: Prefer fetching data in RSC (React Server Components); page can still be static.
Use next/font and next/script when applicable.
Files:
src/pages/api/bookmark/fetch-bookmarks-count.tssrc/pages/api/bookmark/add-bookmark-min-data.ts
🧠 Learnings (16)
📓 Common learnings
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts:174-201
Timestamp: 2025-11-27T07:54:42.100Z
Learning: In the queue consumer at src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts, axios calls to screenshot and remaining data APIs do not include client-side timeouts because these are intentionally long-running operations and the downstream server APIs already have their own timeout configurations. Adding client-side timeouts would prematurely abort legitimate long-running tasks.
📚 Learning: 2025-11-27T07:54:42.100Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts:174-201
Timestamp: 2025-11-27T07:54:42.100Z
Learning: In the queue consumer at src/pages/api/v1/bookmarks/add/tasks/queue-consumer.ts, axios calls to screenshot and remaining data APIs do not include client-side timeouts because these are intentionally long-running operations and the downstream server APIs already have their own timeout configurations. Adding client-side timeouts would prematurely abort legitimate long-running tasks.
Applied to files:
`src/utils/constants.ts`, `src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts`
📚 Learning: 2025-11-26T10:14:58.025Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/utils/api/bookmark/remaining.ts:374-375
Timestamp: 2025-11-26T10:14:58.025Z
Learning: In src/utils/api/bookmark/remaining.ts, the updateBookmarkWithRemainingData function intentionally matches only on id (not id + user_id) because the bookmark id is unique and authorization is verified at the API handler level before calling this utility function.
Applied to files:
`src/utils/constants.ts`, `src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts`, `src/async/supabaseCrudHelpers/index.ts`, `src/pages/api/bookmark/fetch-bookmarks-count.ts`, `src/pages/api/bookmark/add-bookmark-min-data.ts`
📚 Learning: 2025-11-26T10:14:11.932Z
Learnt from: abhishekmg
Repo: timelessco/recollect PR: 623
File: src/pages/api/v1/bookmarks/add/data.tsx:179-185
Timestamp: 2025-11-26T10:14:11.932Z
Learning: In the min-data bookmark API response at src/pages/api/v1/bookmarks/add/tasks/min-data.tsx, bookmarkData.user_id returns a string (UUID) at runtime, not the ProfilesTableTypes object as defined in the TypeScript types. The queue payload in src/pages/api/v1/bookmarks/add/data.tsx correctly uses bookmarkData.user_id directly as a string for the userId field.
Applied to files:
`src/utils/constants.ts`, `src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts`, `src/async/supabaseCrudHelpers/index.ts`, `src/pages/api/bookmark/fetch-bookmarks-count.ts`, `src/pages/api/bookmark/add-bookmark-min-data.ts`
📚 Learning: 2026-01-02T17:26:26.505Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-02T17:26:26.505Z
Learning: Applies to **/{src/hooks,src/utils,src/async}/*.ts : Hook and utility files use kebab-case naming (e.g., `use-bookmark-categories.ts`), but exported function names remain camelCase.
Applied to files:
`src/utils/constants.ts`, `src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts`, `src/async/supabaseCrudHelpers/index.ts`, `src/pages/api/bookmark/fetch-bookmarks-count.ts`, `src/pages/api/bookmark/add-bookmark-min-data.ts`
📚 Learning: 2025-12-15T08:55:44.694Z
Learnt from: navin-moorthy
Repo: timelessco/recollect PR: 699
File: src/utils/constants.ts:13-14
Timestamp: 2025-12-15T08:55:44.694Z
Learning: Env vars validated at build time via Zod schemas in scripts/env (schema.js and server.js) and wired into next.config.ts. Therefore, for code that uses NEXT_PUBLIC_* vars defined as non-optional in clientSchema (e.g., z.string() without .optional()), the value is guaranteed to exist at runtime. Do not use unnecessary non-null assertions for these validated env vars when reviewing code in src/**/*.ts files.
Applied to files:
`src/utils/constants.ts`, `src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts`, `src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts`, `src/async/supabaseCrudHelpers/index.ts`, `src/pages/api/bookmark/fetch-bookmarks-count.ts`, `src/pages/api/bookmark/add-bookmark-min-data.ts`
📚 Learning: 2026-01-02T17:26:26.505Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-02T17:26:26.505Z
Learning: Applies to **/{src/components,src/pageComponents}/**/*.tsx : Use `use-category-multi-select` hook with Base UI Combobox and match-sorter for filtering in multi-select implementations (see `/src/components/lightbox/category-multi-select.tsx`).
Applied to files:
`src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts`, `src/pages/api/bookmark/add-bookmark-min-data.ts`
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : For external dependencies, prefer importing via `npm:` and `jsr:` specifiers. Minimize the use of imports from `deno.land/x`, `esm.sh`, and `unpkg.com`. If a package comes from those CDNs, replace the CDN hostname with the `npm:` specifier
Applied to files:
`src/async/supabaseCrudHelpers/index.ts`, `package.json`, `src/pages/api/bookmark/add-bookmark-min-data.ts`
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/!(*.test|*.spec).ts : Use Web APIs and Deno's core APIs instead of external dependencies (e.g., use fetch instead of Axios, use WebSockets API instead of node-ws)
Applied to files:
`src/async/supabaseCrudHelpers/index.ts`, `package.json`
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : Do NOT use bare specifiers when importing dependencies. Use `npm:` or `jsr:` prefixes for external dependencies (e.g., `npm:@supabase/supabase-js` instead of bare `@supabase/supabase-js`)
Applied to files:
`src/async/supabaseCrudHelpers/index.ts`, `src/pages/api/bookmark/add-bookmark-min-data.ts`
📚 Learning: 2025-11-25T08:33:53.554Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-use-realtime.mdc:0-0
Timestamp: 2025-11-25T08:33:53.554Z
Learning: Applies to **/*.{js,ts,jsx,tsx} : Replace `postgres_changes` with `broadcast` listeners in migration from postgres_changes to realtime.broadcast_changes
Applied to files:
src/async/supabaseCrudHelpers/index.ts
📚 Learning: 2025-11-25T08:34:06.124Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-writing-edge-functions.mdc:0-0
Timestamp: 2025-11-25T08:34:06.124Z
Learning: Applies to supabase/functions/**/*.ts : Always define a version for external imports (e.g., `npm:express@4.18.2` instead of `npm:express`)
Applied to files:
`src/async/supabaseCrudHelpers/index.ts`, `package.json`
📚 Learning: 2025-12-03T11:56:54.699Z
Learnt from: B-a-l-aj-i
Repo: timelessco/recollect PR: 653
File: src/pages/api/bookmark/search-bookmarks.ts:98-106
Timestamp: 2025-12-03T11:56:54.699Z
Learning: In src/pages/api/bookmark/search-bookmarks.ts, the pagination using .range(offset, offset + limit) intentionally returns limit + 1 items (e.g., 26 items when PAGINATION_LIMIT is 25) - this is an accepted behavior by the maintainers.
Applied to files:
src/pages/api/bookmark/fetch-bookmarks-count.ts
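The accepted limit + 1 behavior noted in the learning above follows from Supabase's `.range(from, to)` being inclusive on both ends. A pure-array sketch of the same arithmetic (`inclusiveRange` is a hypothetical helper, not a Supabase API):

```typescript
// Supabase's .range(from, to) is inclusive on both ends, so
// .range(offset, offset + limit) yields limit + 1 rows.
const inclusiveRange = <T>(rows: T[], from: number, to: number): T[] =>
	rows.slice(from, to + 1); // slice's end index is exclusive, hence + 1

const PAGINATION_LIMIT = 25;
const rows = Array.from({ length: 100 }, (_, index) => index);

const page = inclusiveRange(rows, 0, 0 + PAGINATION_LIMIT);
console.log(page.length); // 26, i.e. PAGINATION_LIMIT + 1
```

The extra row is what lets the caller detect whether another page exists.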
📚 Learning: 2026-01-05T07:22:08.085Z
Learnt from: navin-moorthy
Repo: timelessco/recollect PR: 690
File: src/pages/api/bookmark/fetch-discover-bookmarks.ts:0-0
Timestamp: 2026-01-05T07:22:08.085Z
Learning: In timelessco/recollect, legacy Pages Router API endpoints under src/pages/api/** that rely on apiSupabaseClient with deprecated cookie methods are acceptable only if they are not actively used in the codebase. For new development, create endpoints under the App Router at src/app/api/** and adopt the modern Supabase client pattern (getAll/setAll). When encountering such legacy endpoints, verify usage: if unused, they can remain temporarily, but plan migration for any active or upcoming work. Prefer migrating to the App Router pattern for maintainability and consistency.
Applied to files:
`src/pages/api/bookmark/fetch-bookmarks-count.ts`, `src/pages/api/bookmark/add-bookmark-min-data.ts`
📚 Learning: 2025-11-25T08:33:12.981Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: .cursor/rules/supabase-nextjs-rules.mdc:0-0
Timestamp: 2025-11-25T08:33:12.981Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : NEVER import from `@supabase/auth-helpers-nextjs` as it is deprecated
Applied to files:
src/pages/api/bookmark/add-bookmark-min-data.ts
📚 Learning: 2026-01-02T17:26:26.505Z
Learnt from: CR
Repo: timelessco/recollect PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-02T17:26:26.505Z
Learning: Applies to **/*.{ts,tsx} : Use `knip` to remove unused code when making large changes.
Applied to files:
src/pages/api/bookmark/add-bookmark-min-data.ts
🧬 Code graph analysis (2)
src/async/supabaseCrudHelpers/index.ts (1)
src/utils/constants.ts (3)
`NEXT_API_URL` (57-57), `ADD_BOOKMARK_APIS` (88-94), `UPLOAD_FILE_APIS` (96-101)
src/pages/api/bookmark/add-bookmark-min-data.ts (2)
src/utils/api/bookmark/add.ts (1)
`checkIfUserIsCategoryOwnerOrCollaborator` (56-121)
src/types/apiTypes.ts (2)
`SingleListData` (31-50), `ProfilesTableTypes` (188-197)
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 94-94: [EndingBlankLine] No blank line at the end of the file
(EndingBlankLine)
[warning] 94-94: [UnorderedKey] The INTERNAL_API_KEY key should go before the RECOLLECT_SERVER_API key
(UnorderedKey)
🔇 Additional comments (14)
src/pages/api/bookmark/fetch-bookmarks-count.ts (1)
178-183: LGTM! Type annotations are correct. The explicit type annotations for the map callbacks accurately reflect the data structure returned from the Supabase queries and improve type safety without changing behavior.
src/async/mutationHooks/files/useFileUploadOptimisticMutation.ts (2)
32-32: No issues found with storageHelpers migration. The `storageHelpers.createSignedUploadUrl()` API is correctly implemented and used in the mutation file. The old `getStoragePublicBaseUrl()` function remains available for internal use within the storage client. Migration is complete and the code is correct.
215-261: No action needed: PDF thumbnail handling was not removed from this file. This mutation hook never contained PDF thumbnail generation logic. PDF thumbnails are appropriately handled server-side in `/pages/api/v1/screenshot.ts` and related API endpoints. The client-side mutation hook correctly limits its scope to video thumbnail generation (lines 47-101) and success toast messages (lines 215-261).
Likely an incorrect or invalid review comment.
src/utils/constants.ts (2)
88-101: LGTM! Good refactoring to group related API endpoints. The consolidation of bookmark and file upload API endpoints into structured objects (`ADD_BOOKMARK_APIS` and `UPLOAD_FILE_APIS`) improves maintainability and discoverability. This follows a clear naming pattern and aligns with the broader queue-based architecture changes.
165-166: LGTM! New PDF screenshot endpoint constant. The `PDF_SCREENSHOT_API` constant aligns with the migration to the external Recollect server for screenshot/thumbnail processing mentioned in the PR summary.
package.json (1)
98-98: LGTM! Dependencies added for OpenAPI/Swagger documentation. The new dependencies support the OpenAPI documentation and Swagger UI functionality:
- `@vercel/functions`: Vercel serverless function utilities
- `swagger-ui-react` & `@types/swagger-ui-react`: React component for rendering OpenAPI specs
- `openapi-types`: TypeScript types for OpenAPI specifications
These align with the PR's introduction of API documentation infrastructure.
Also applies to: 138-138, 169-169, 198-198
src/async/supabaseCrudHelpers/index.ts (3)
43-43: LGTM! Import updates align with refactored API constants. The imports correctly reference the new grouped API constant objects (`ADD_BOOKMARK_APIS` and `UPLOAD_FILE_APIS`) introduced in `src/utils/constants.ts`.
Also applies to: 76-76
269-269: LGTM! Correctly uses new grouped API constant. The change from `ADD_BOOKMARK_MIN_DATA` to `ADD_BOOKMARK_APIS.DATA` properly accesses the DATA endpoint from the new grouped constant structure.
682-682: LGTM! Correctly uses new grouped API constant. The change from `UPLOAD_FILE_API` to `UPLOAD_FILE_APIS.DATA` properly accesses the DATA endpoint from the new grouped constant structure.
src/async/mutationHooks/bookmarks/useAddBookmarkMinDataOptimisticMutation.ts (2)
5-5: LGTM! Import cleanup. The import updates properly reflect the module's simplified responsibilities after removing post-mutation media processing logic.
Also applies to: 20-21
119-126: LGTM! Simplified onSettled aligns with queue-based architecture. The removal of extensive post-mutation thumbnail and media processing from `onSettled` is correct. Background processing is now handled by dedicated queue consumers introduced elsewhere in this PR, improving separation of concerns and allowing the UI mutation to complete faster.
src/pages/api/bookmark/add-bookmark-min-data.ts (3)
67-83: LGTM! Clean delegation pattern. The refactoring correctly delegates to the external utility function and properly handles both error cases:
- Database errors trigger a 500 response in the wrapper
- Permission denials return `false` for the caller to handle
The use of `result.error ?? "Unknown error"` provides good defensive handling.
236-252: LGTM! Correct handling of both error scenarios. The error handling properly uses `response.headersSent` to distinguish between:
- Database errors where the wrapper already sent a 500 response
- Permission denials where a 403 should be sent here
This prevents duplicate response attempts and maintains correct HTTP semantics.
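The `headersSent` guard described above can be sketched with a hand-rolled mock response instead of the actual Next.js handler. All names here (`MockResponse`, `handlePermissionResult`, etc.) are hypothetical, not the repo's code:

```typescript
// Minimal mock standing in for a Node/Next.js response object.
interface MockResponse {
	headersSent: boolean;
	statusCode?: number;
	send: (props: { error: string; status: number }) => void;
}

const createMockResponse = (): MockResponse => {
	const response: MockResponse = {
		headersSent: false,
		send: ({ status }) => {
			response.headersSent = true;
			response.statusCode = status;
		},
	};
	return response;
};

// The wrapper sends a 500 itself on database errors; the caller only
// sends a 403 when no response has gone out yet (permission denial).
const handlePermissionResult = (props: {
	allowed: boolean;
	response: MockResponse;
}) => {
	const { allowed, response } = props;
	if (!allowed && !response.headersSent) {
		response.send({ error: "Forbidden", status: 403 });
	}
};

const response = createMockResponse();
handlePermissionResult({ allowed: false, response });
console.log(response.statusCode); // 403
```

If the wrapper had already sent a 500, the `headersSent` check skips the 403, avoiding a duplicate-response error.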
60-66: The breaking change has been properly handled across the codebase. All external call sites in `upload-file.ts`, `v1/tests/file/post/upload.ts`, `v1/file/upload/tasks/min-data.ts`, and `v1/bookmarks/add/tasks/min-data.ts` correctly use `await` when calling this function. The deprecated wrapper in the current file also properly awaits the utility implementation.
# =============================================================================
# SERVER
# =============================================================================
RECOLLECT_SERVER_API=your-api
RECOLLECT_SERVER_API_KEY=your-secret-key
INTERNAL_API_KEY=key
No newline at end of file
Address dotenv-linter warnings.
Two minor formatting issues flagged by static analysis:
- Missing blank line at end of file (standard convention)
- Key ordering: `INTERNAL_API_KEY` should come before `RECOLLECT_SERVER_API` for alphabetical consistency
📝 Proposed fix
# =============================================================================
# SECURITY
# =============================================================================
API_KEY_ENCRYPTION_KEY=your-secret-key
# =============================================================================
# SERVER
# =============================================================================
+INTERNAL_API_KEY=key
RECOLLECT_SERVER_API=your-api
RECOLLECT_SERVER_API_KEY=your-secret-key
-INTERNAL_API_KEY=key
+
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 94-94: [EndingBlankLine] No blank line at the end of the file
(EndingBlankLine)
[warning] 94-94: [UnorderedKey] The INTERNAL_API_KEY key should go before the RECOLLECT_SERVER_API key
(UnorderedKey)
🤖 Prompt for AI Agents
In @.env.example around lines 88-94, reorder the environment variables under the SERVER block so they are alphabetically ordered (place INTERNAL_API_KEY before RECOLLECT_SERVER_API) and add a single blank newline at the end of the file; specifically, move the INTERNAL_API_KEY line to appear above RECOLLECT_SERVER_API and ensure the file ends with one trailing newline character.
PR Checklist
Overview
Summary by CodeRabbit
New Features
Documentation
Chores
✏️ Tip: You can customize this high-level summary in your review settings.