fix: Vibe coding on fixing the peroformance issues for gift suggestions #697
Walkthrough

Parallelized DB reads and writes: the API now fetches match and suggestions concurrently; member assignments are updated in parallel with aggregated error handling; suggestion generation fetches images concurrently and inserts suggestions in bulk; a normalized suggestion interface was added.
Sequence Diagram(s)

```mermaid
sequenceDiagram
actor Client
participant API as API Route
participant DB as Database/Supabase
participant AI as Suggestion Service
rect rgba(220,240,255,0.9)
Note over API,DB: Parallel read of match + existing suggestions
API->>DB: fetch match (gift_exchange_members)
API->>DB: fetch suggestions (gift_suggestions)
DB-->>API: matchResult, suggestionsResult
end
rect rgba(240,230,220,0.9)
Note over API,DB: Parallel assignment updates
API->>DB: Promise.allSettled(update member assignments...)
DB-->>API: assignment results (success/fail)
end
rect rgba(230,255,230,0.9)
Note over API,AI: Batch suggestion generation and storage
API->>AI: generate suggestions for all giver→recipient pairs
AI->>AI: fetch images in parallel (Promise.allSettled)
AI->>DB: bulk insert normalized suggestions
DB-->>AI: insert result
end
API-->>Client: final response (match + mapped suggestions)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
Actionable comments posted: 0
🧹 Nitpick comments (5)
lib/generateAndStoreSuggestions.ts (2)
106-124: Tighten JSON parsing and validation to avoid brittle runtime failures

This block assumes the parsed JSON is an array of objects in the expected shape; if the model ever returns a different top-level structure, `raw.map(...)` will throw a generic `TypeError` and bubble out as an untyped error. Likewise, `JSON.parse` will throw a `SyntaxError` that is not wrapped. Consider:

- Verifying `Array.isArray(raw)` before mapping and throwing an `OpenAiError` (or similar) with the status and maybe a redacted snippet of `jsonContent` when the shape is unexpected.
- After `Number(obj.matchScore ?? 0)`, guard against `NaN` and clamp to `[0, 100]` so obviously-bad values don't silently degrade in storage.
These would make failures much easier to debug without changing the happy path.
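A minimal sketch of what such a guard could look like, assuming the `jsonContent` string and the `(message, status)` `OpenAiError` signature shown elsewhere in this review; the `OpenAiError` stand-in and `clampScore` helper are illustrative, not the project's code:

```typescript
// Sketch only: `jsonContent` stands for the raw model output; the OpenAiError
// stand-in mirrors the (message, status) signature used in this review.
class OpenAiError extends Error {
  constructor(message: string, public status: number) {
    super(message);
  }
}

function parseSuggestionArray(jsonContent: string): unknown[] {
  let raw: unknown;
  try {
    raw = JSON.parse(jsonContent);
  } catch {
    // Wrap the bare SyntaxError so callers get a typed, status-carrying error.
    throw new OpenAiError('Suggestion response was not valid JSON', 500);
  }
  if (!Array.isArray(raw)) {
    // Include a redacted snippet so the unexpected shape is debuggable.
    throw new OpenAiError(
      `Unexpected suggestion payload shape: ${jsonContent.slice(0, 120)}`,
      500,
    );
  }
  return raw;
}

// Guard against NaN and clamp obviously-bad scores to [0, 100] before storage.
const clampScore = (value: unknown): number => {
  const n = Number(value ?? 0);
  return Number.isNaN(n) ? 0 : Math.min(100, Math.max(0, n));
};
```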
126-159: Parallel image fetch + bulk insert look good; optionally guard for future scale

Using `Promise.allSettled` for `getAmazonImage` is a solid choice here: a single image failure won’t break the entire insert, and indexing into `imageResults[idx]` is safe because `allSettled` preserves order. The bulk insert of `rows` into `gift_suggestions` is also a nice performance win over per-row inserts.

If you ever increase the number of suggestions substantially, you may want to cap concurrency for `getAmazonImage` (and, indirectly, OpenAI calls upstream) to avoid hammering external APIs, but for the current small fixed count this is fine.

Also applies to: 161-169
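If that ever becomes necessary, a simple chunked approach would keep the ordering and failure-isolation guarantees of `allSettled` while bounding parallelism. This is only a sketch: the batch size is arbitrary, and `getAmazonImage`'s return type is assumed to be a possibly-null image URL, so it is passed in as a parameter here rather than imported:

```typescript
// Sketch: cap concurrency by processing image lookups in small batches.
async function fetchImagesInBatches(
  titles: string[],
  getAmazonImage: (title: string) => Promise<string | null>,
  batchSize = 3,
): Promise<PromiseSettledResult<string | null>[]> {
  const results: PromiseSettledResult<string | null>[] = [];
  for (let i = 0; i < titles.length; i += batchSize) {
    const batch = titles.slice(i, i + batchSize);
    // Each batch still uses allSettled, so one failed lookup never rejects the
    // rest, and the overall result order matches the input titles.
    const settled = await Promise.allSettled(
      batch.map((title) => getAmazonImage(title)),
    );
    results.push(...settled);
  }
  return results;
}
```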
app/api/gift-exchanges/[id]/giftSuggestions/route.ts (1)
78-84: Type the `suggestion` JSON shape to avoid silent drift

The response mapping assumes each row from `gift_suggestions` has a `suggestion` object in the normalized shape, but `suggestionsResult.data` is effectively `any` here. To catch future schema drift at compile time, consider typing the query result to something like:

```typescript
type SuggestionRow = {
  id: string;
  created_at: string;
  suggestion: IGeneratedSuggestionNormalized;
};

const suggestionsResult = await supabase
  .from('gift_suggestions')
  .select('*')
  .eq('gift_exchange_id', id)
  .eq('giver_id', user.id) as PostgrestResponse<SuggestionRow>;
```

and/or adding a lightweight runtime guard before spreading `...suggestion.suggestion`.

lib/drawGiftExchange.ts (2)
77-97: Preserve underlying assignment error details when using Promise.allSettled

Using `Promise.allSettled` for the member updates is a good way to ensure all updates are attempted, but the current rejected-branch handling loses the original reason:

```typescript
if (result.status === 'rejected') {
  throw new SupabaseError('Failed to assign recipients', 500);
}
```

You might want to pass through `result.reason` as `details`, or at least log it, so operational debugging is easier. For example:

```typescript
for (const result of assignmentResults) {
  if (result.status === 'rejected') {
    throw new SupabaseError(
      'Failed to assign recipients',
      500,
      result.reason,
    );
  }
  if (result.value.error) {
    throw new SupabaseError(
      'Failed to assign recipients',
      result.value.error.code,
      result.value.error,
    );
  }
}
```

This keeps the behavior the same for callers while retaining the original error context.
100-111: Suggestion generation failures are fully swallowed; consider at least logging

`Promise.allSettled` around `generateAndStoreSuggestions` ensures suggestion generation runs concurrently and does not block the draw on a single failure, which is good. However, because the settled results are ignored, any `SupabaseError`/`OpenAiError` thrown inside `generateAndStoreSuggestions` disappears silently, and the exchange is still marked `active`. If suggestions are intentionally “best effort”, that’s fine, but it would still be useful to (see the sketch below):

- Inspect the settled results and log or metricize any `status === 'rejected'` or `value.error`.
- Optionally decide on a threshold (e.g., “all suggestion generations failed”) where you’d rather fail the draw instead of silently proceeding.
This would improve observability without changing the happy path or the basic best-effort behavior.
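A rough sketch of that inspection, hedged heavily: the call shape of `generateAndStoreSuggestions`, the `assignments` fields, and the error message are illustrative stand-ins, not the exact `lib/drawGiftExchange.ts` code.

```typescript
// Sketch: surface failures from the best-effort suggestion generation instead
// of discarding the settled results.
const suggestionResults = await Promise.allSettled(
  assignments.map((a) =>
    generateAndStoreSuggestions(exchangeId, a.giverId, a.recipientId),
  ),
);

const failures = suggestionResults.filter(
  (r): r is PromiseRejectedResult => r.status === 'rejected',
);
failures.forEach((r) => console.error('Suggestion generation failed:', r.reason));

// Optional threshold: only fail the draw when every generation failed.
if (suggestionResults.length > 0 && failures.length === suggestionResults.length) {
  throw new SupabaseError('All suggestion generations failed', 500);
}
```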
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- app/api/gift-exchanges/[id]/giftSuggestions/route.ts (2 hunks)
- lib/drawGiftExchange.ts (1 hunks)
- lib/generateAndStoreSuggestions.ts (2 hunks)
- lib/interfaces/IGeneratedSuggestionRaw.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
app/api/gift-exchanges/[id]/giftSuggestions/route.ts (1)
lib/errors/CustomErrors.ts (1)
SupabaseError (57-77)
lib/generateAndStoreSuggestions.ts (3)
lib/interfaces/IGeneratedSuggestionRaw.ts (2)
IGeneratedSuggestionRaw (4-10), IGeneratedSuggestionNormalized (13-20)
lib/getAmazonImage.ts (1)
getAmazonImage (12-60)
lib/errors/CustomErrors.ts (1)
SupabaseError (57-77)
lib/drawGiftExchange.ts (2)
lib/errors/CustomErrors.ts (1)
SupabaseError (57-77)
lib/generateAndStoreSuggestions.ts (1)
generateAndStoreSuggestions (23-175)
🔇 Additional comments (4)
lib/generateAndStoreSuggestions.ts (1)
8-11: Interface import keeps suggestion shapes centralized

Importing `IGeneratedSuggestionRaw`/`IGeneratedSuggestionNormalized` here is a good move; it centralizes the contract between generation, normalization, and storage and should make future schema changes safer.

app/api/gift-exchanges/[id]/giftSuggestions/route.ts (1)
30-60: Parallelizing match and suggestions queries is a good latency optimization

Running the `gift_exchange_members` and `gift_suggestions` queries in parallel via `Promise.all` is correct and should reduce tail latency for this endpoint. The subsequent error checks on `matchResult.error` and `suggestionsResult.error` preserve the previous failure semantics.
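For reference, the pattern being praised amounts to something like the following sketch. The `gift_suggestions` filters match those quoted earlier in this review; the members-query filters, the `.single()` call, and the error messages are assumptions for illustration:

```typescript
// Sketch of the parallel-read pattern: both queries start immediately and are
// awaited together, so latency is roughly max(q1, q2) instead of q1 + q2.
const [matchResult, suggestionsResult] = await Promise.all([
  supabase
    .from('gift_exchange_members')
    .select('*')
    .eq('gift_exchange_id', id)
    .eq('member_id', user.id) // assumed filter column
    .single(),
  supabase
    .from('gift_suggestions')
    .select('*')
    .eq('gift_exchange_id', id)
    .eq('giver_id', user.id),
]);

if (matchResult.error) throw new SupabaseError('Failed to fetch match', 500);
if (suggestionsResult.error) throw new SupabaseError('Failed to fetch suggestions', 500);
```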
lib/drawGiftExchange.ts (1)

72-75: Assignment cycle correctly prevents self-matches

The `assignments` construction (`(index + 1) % shuffledMembers.length`) forms a single cycle over the shuffled members, which guarantees no self-assignments given the existing `members.length >= 3` check. Simple and correct.
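A condensed sketch of that cycle construction; the member field names are assumed, only the modular indexing matters:

```typescript
// Sketch: after shuffling, each member gives to the next member in the array,
// wrapping around at the end. This forms one cycle, so nobody is matched with
// themselves as long as there is more than one member.
const assignments = shuffledMembers.map((member, index) => ({
  giverId: member.id,
  recipientId: shuffledMembers[(index + 1) % shuffledMembers.length].id,
}));
```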
lib/interfaces/IGeneratedSuggestionRaw.ts (1)

4-10: Interfaces cleanly separate raw vs. normalized suggestion shapes

Defining `IGeneratedSuggestionRaw` for the model response and `IGeneratedSuggestionNormalized` for the persisted shape (with consistent `price: string` and `imageUrl`) is a nice separation of concerns and matches how `generateAndStoreSuggestions` uses them.

Also applies to: 12-20
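For readers without the file open, the two shapes discussed above look roughly like this; the field list is inferred from the review comments and tests in this PR, not copied from the source file:

```typescript
// Sketch of the raw vs. normalized suggestion shapes (inferred, not verbatim).
interface IGeneratedSuggestionRaw {
  title: string;
  price: string | number; // may arrive loosely typed from the model
  description: string;
  matchReasons: string[];
  matchScore: number;
}

interface IGeneratedSuggestionNormalized {
  title: string;
  price: string; // coerced to a string before storage
  description: string;
  matchReasons: string[];
  matchScore: number;
  imageUrl: string | null; // attached after the parallel image fetch
}
```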
Actionable comments posted: 0
🧹 Nitpick comments (3)
lib/generateAndStoreSuggestions.ts (3)
106-106: Consider adding runtime validation for the parsed JSON.

The type assertion `as IGeneratedSuggestionRaw[]` assumes the OpenAI response matches the expected structure without validation. While the defensive conversions in lines 108-119 provide some safety, runtime validation would be more robust. As suggested in a previous review, consider using a type guard or validation library to ensure the parsed data matches the expected shape before processing.

Example with a type guard:

```typescript
function isValidSuggestionArray(data: unknown): data is IGeneratedSuggestionRaw[] {
  if (!Array.isArray(data)) return false;
  return data.every(item =>
    typeof item === 'object' &&
    item !== null &&
    'title' in item &&
    'price' in item &&
    'description' in item &&
    'matchReasons' in item &&
    'matchScore' in item
  );
}

const parsed = JSON.parse(jsonContent);
if (!isValidSuggestionArray(parsed)) {
  throw new OpenAiError('Invalid suggestion format from OpenAI', 500);
}
const rawItems = parsed;
```

Based on learnings
121-123: LGTM: Excellent performance improvement with parallel image fetching.

Using `Promise.allSettled` is the right choice here; it allows image fetches to run in parallel while gracefully handling failures without blocking the entire operation. This is a significant performance improvement over sequential fetching.

Optional enhancement: Consider adding logging when image fetches fail for monitoring purposes:

```typescript
const imageResults = await Promise.allSettled(
  parsedResponse.map((response) => getAmazonImage(String(response.title))),
);

// Log failures for monitoring
imageResults.forEach((result, idx) => {
  if (result.status === 'rejected') {
    console.warn(`Failed to fetch image for "${parsedResponse[idx].title}":`, result.reason);
  }
});
```
125-150: Well-structured bulk insert payload with one minor redundancy.

The payload structure is clean and type-safe. The handling of image results is correct, safely extracting `imageUrl` only when the promise fulfilled successfully.

Nitpick: Line 143 has a redundant `String()` call since `suggestion.price` is already converted to a string on line 112:

```diff
  suggestion: {
    title: suggestion.title,
-   price: String(suggestion.price),
+   price: suggestion.price,
    description: suggestion.description,
    matchReasons: suggestion.matchReasons,
    matchScore: suggestion.matchScore,
    imageUrl,
  },
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- lib/generateAndStoreSuggestions.ts (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
lib/generateAndStoreSuggestions.ts (3)
lib/interfaces/IGeneratedSuggestionRaw.ts (2)
IGeneratedSuggestionRaw (4-10), IGeneratedSuggestionNormalized (13-20)
lib/getAmazonImage.ts (1)
getAmazonImage (12-60)
lib/errors/CustomErrors.ts (1)
SupabaseError (57-77)
🔇 Additional comments (4)
lib/generateAndStoreSuggestions.ts (4)
8-11: LGTM: Improved type safety.

The new interfaces improve type safety and code clarity throughout the suggestion generation flow.
108-119: LGTM: Robust data normalization.

The defensive conversions ensure data consistency and handle edge cases where OpenAI might return unexpected types. The `Array.isArray()` check with fallback is particularly good for preventing runtime errors.
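Roughly, that normalization looks like the following sketch, based on the conversions described in this review rather than the exact lines 108-119:

```typescript
// Sketch: coerce loosely typed model output into the normalized shape.
const parsedResponse = rawItems.map((obj) => ({
  title: String(obj.title ?? ''),
  price: String(obj.price ?? ''),
  description: String(obj.description ?? ''),
  // Fall back to an empty array when the model omits or mangles the list.
  matchReasons: Array.isArray(obj.matchReasons) ? obj.matchReasons.map(String) : [],
  matchScore: Number(obj.matchScore ?? 0),
}));
```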
152-162: LGTM: Efficient bulk insert with proper error handling.

The bulk insert is a significant performance improvement over individual inserts, reducing database round trips while maintaining proper error handling. The atomic nature of the insert ensures data consistency: either all suggestions are stored or none are.
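The single round trip in question amounts to something like this sketch; `rows` is the payload array discussed above, and the error message is illustrative:

```typescript
// Sketch: one bulk insert for all suggestion rows instead of one insert per row.
const { error } = await supabase.from('gift_suggestions').insert(rows);
if (error) {
  // The whole batch fails together, so there is no partially-stored state.
  throw new SupabaseError('Failed to store gift suggestions', 500);
}
```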
76-76: No issues found: model change is fully compatible.

The web search confirms gpt-4o-mini is a valid OpenAI model (announced July 18, 2024) that is >60% cheaper than gpt-3.5-turbo and includes stronger reasoning and coding performance. Code examination shows the response format is fully compatible: both models use the same `chat.completions.create()` API with identical response structure (`message.content`), and your defensive JSON parsing logic already handles potential format variations. This is a beneficial replacement with no breaking changes.
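For context, the shared call shape looks like this generic sketch of the Chat Completions API; the prompt text is illustrative, not the project's actual suggestion prompt:

```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

// Both gpt-3.5-turbo and gpt-4o-mini go through the same endpoint and return
// the generated text in the same place, so swapping the model name is enough.
const completion = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Suggest three gifts as a JSON array.' }],
});

const jsonContent = completion.choices[0]?.message?.content ?? '';
```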
Actionable comments posted: 0
🧹 Nitpick comments (1)
lib/generateAndStoreSuggestions.test.ts (1)
99-109: Consider validating all suggestion fields and exact array length.

The test correctly validates the bulk insert behavior, but it only checks `title` and `imageUrl`. The mock OpenAI response (lines 62-73) includes `price`, `description`, `matchReasons`, and `matchScore`, which should also be validated to ensure they're correctly stored.

Additionally, `expect.arrayContaining` only verifies that at least one matching element exists. Since the mock returns a single suggestion, consider validating that exactly one item is inserted to catch bugs where duplicates or extra items might be added.

Apply this diff to make the test more comprehensive:

```diff
 expect(mockInsert).toHaveBeenCalledWith(
-  expect.arrayContaining([
-    expect.objectContaining({
-      gift_exchange_id: 'Exchange1',
-      giver_id: 'Giver1',
-      recipient_id: 'Recipient1',
-      suggestion: expect.objectContaining({
-        title: 'Kindle Paperwhite',
-        imageUrl: 'https://amazon.com/kindle.jpg',
-      }),
-    }),
-  ])
+  [
+    expect.objectContaining({
+      gift_exchange_id: 'Exchange1',
+      giver_id: 'Giver1',
+      recipient_id: 'Recipient1',
+      suggestion: expect.objectContaining({
+        title: 'Kindle Paperwhite',
+        price: '129.99',
+        description: 'Test description',
+        matchReasons: ['Loves reading', 'Something else', 'Another thing'],
+        matchScore: 95,
+        imageUrl: 'https://amazon.com/kindle.jpg',
+      }),
+    }),
+  ]
 );
```
nickytonline
left a comment
a nit and a question but good to go
* Fix: awaiting function for gift generation
* Fix: deleted package.json debug code
* test: edited test title to be more clear
* Test: added rendering for test
* test: adjusted test title. trying to redeploy my PR to test vercel function duration
* Fix: added toast notification and loading spinner to button
* Fix: indent spacing
* Fix: edited toast notification for drawing
* Fix: added state change in finally block
* Update components/GiftExchangeHeader/GiftExchangeHeader.tsx

  Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Bump version from 0.8.2-alpha to 0.8.3-alpha
* fix: Vibe coding on fixing the peroformance issues for gift suggestions (#697)

  * Vibe coding on fixing the peroformance issues for gift suggestions
  * Clean: cleaned up the ai code some.
  * Feature: updated AI model
  * Fix: cleaned up more of the AI code
  * Fix: deleted interface as not needed.
  * Test: fixed failing tests

  ---------

  Co-authored-by: Alex Appleget <[email protected]>
* Bump version from 0.8.3-alpha to 0.8.4-alpha

---------

Co-authored-by: Alex Appleget <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Description
This is an attempt to improve the performance issues we're seeing when drawing gift suggestions.
Pre-submission checklist
- test #001: created unit test for __ component
- Peer Code Reviewers and Senior+ Code Reviewers groups
- gis-code-questions

Summary by CodeRabbit