Conversation

@shashilo
Collaborator

@shashilo shashilo commented Nov 19, 2025

Description

This is an attempt to address the performance issues we're seeing when drawing gift suggestions.

Pre-submission checklist

  • Code builds and passes locally
  • PR title follows Conventional Commit format (e.g. test #001: created unit test for __ component)
  • Request reviews from the Peer Code Reviewers and Senior+ Code Reviewers groups
  • Thread has been created in Discord and PR is linked in gis-code-questions

Summary by CodeRabbit

  • Refactor
    • Parallelized retrieval of match and suggestion data for faster responses.
    • Parallel batch updates for gift assignments with stricter error handling for reliability.
    • Bulk suggestion processing and parallel image fetching for higher throughput.
    • Upgraded AI model and standardized suggestion format for richer, more consistent recommendations.
  • Tests
    • Updated tests to validate bulk insertion of generated suggestions.


@vercel

vercel bot commented Nov 19, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project                  Deployment   Preview   Comments   Updated (UTC)
elecretanta              Ready        Preview   Comment    Nov 20, 2025 9:54pm
elecretanta-storybook    Ready        Preview   Comment    Nov 20, 2025 9:54pm
elecretanta-unit-test    Ready        Preview   Comment    Nov 20, 2025 9:54pm

@shashilo shashilo changed the title from "Vibe coding on fixing the peroformance issues for gift suggestions" to "fix: Vibe coding on fixing the peroformance issues for gift suggestions" Nov 19, 2025
@shashilo shashilo requested review from a team, MichaelLarocca and nickytonline and removed request for a team, bethanyann and nickytonline November 19, 2025 20:58
@coderabbitai
Contributor

coderabbitai bot commented Nov 19, 2025

Walkthrough

Parallelized DB reads and writes: the API now fetches match and suggestions concurrently; member assignments are updated in parallel with aggregated error handling; suggestion generation fetches images concurrently and inserts suggestions in bulk; a normalized suggestion interface was added.

Changes

Cohort / File(s) Summary
API Route Enhancement
app/api/gift-exchanges/[id]/giftSuggestions/route.ts
Replaced sequential fetches with concurrent Promise.all for match and suggestions; renamed results to matchResult / suggestionsResult; updated error checks and response mapping to use matchResult.data?.recipient and map suggestionsResult.data into output.
Parallel Assignment & Suggestion Generation
lib/drawGiftExchange.ts
Replaced per-member sequential updates with Promise.allSettled batch updates to create assignments; aggregated error detection throws on failures; moved suggestion generation out of per-loop fire-and-forget into a separate concurrent batch executed after assignments; preserves exchange status update flow.
Suggestion Processing Pipeline
lib/generateAndStoreSuggestions.ts
Switched OpenAI model to gpt-4o-mini; parse all items then fetch item images in parallel via Promise.allSettled; normalize items to IGeneratedSuggestionNormalized[]; perform a single bulk insert of all normalized suggestion rows and handle bulk-insert errors.
Type Definitions
lib/interfaces/IGeneratedSuggestionRaw.ts
Added exported interface IGeneratedSuggestionNormalized with fields: title, price, description, matchReasons, matchScore, and imageUrl (`string | null`).
Tests
lib/generateAndStoreSuggestions.test.ts
Updated expectation to assert bulk insert is called with an array (uses arrayContaining) instead of a single object.

Sequence Diagram(s)

sequenceDiagram
    actor Client
    participant API as API Route
    participant DB as Database/Supabase
    participant AI as Suggestion Service

    rect rgba(220,240,255,0.9)
    Note over API,DB: Parallel read of match + existing suggestions
    API->>DB: fetch match (gift_exchange_members)
    API->>DB: fetch suggestions (gift_suggestions)
    DB-->>API: matchResult, suggestionsResult
    end

    rect rgba(240,230,220,0.9)
    Note over API,DB: Parallel assignment updates
    API->>DB: Promise.allSettled(update member assignments...)
    DB-->>API: assignment results (success/fail)
    end

    rect rgba(230,255,230,0.9)
    Note over API,AI: Batch suggestion generation and storage
    API->>AI: generate suggestions for all giver→recipient pairs
    AI->>AI: fetch images in parallel (Promise.allSettled)
    AI->>DB: bulk insert normalized suggestions
    DB-->>AI: insert result
    end

    API-->>Client: final response (match + mapped suggestions)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Areas needing attention:
    • lib/drawGiftExchange.ts — correctness of Promise.allSettled aggregation, ordering/atomicity, and error propagation.
    • lib/generateAndStoreSuggestions.ts — normalization logic, parallel image-fetch handling, and bulk insert payload/shape.
    • app/api/gift-exchanges/[id]/giftSuggestions/route.ts — response shape compatibility with client expectations.

Suggested reviewers

  • nickytonline

Poem

🐰 I hopped through code with nimble paws,
parallel fetches now clap their claws.
Gifts assigned in one bright sweep,
images fetched, suggestions heap—
a rabbit's cheer for changes that soar. 🎁

Pre-merge checks

❌ Failed checks (1 inconclusive)
  • Title check: ❓ Inconclusive
    Explanation: The title contains typos ('peroformance' instead of 'performance') and uses vague language ('Vibe coding') that doesn't clearly describe the main technical improvements being made.
    Resolution: Revise the title to be clear and professional, e.g. 'fix: Improve gift suggestions performance with parallel queries and bulk operations', so it accurately reflects the technical improvements.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped - CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
lib/generateAndStoreSuggestions.ts (2)

106-124: Tighten JSON parsing and validation to avoid brittle runtime failures

This block assumes the parsed JSON is an array of objects in the expected shape; if the model ever returns a different top-level structure, raw.map(...) will throw a generic TypeError and bubble out as an untyped error. Likewise, JSON.parse will throw a SyntaxError that is not wrapped.

Consider:

  • Verifying Array.isArray(raw) before mapping and throwing an OpenAiError (or similar) with the status and maybe a redacted snippet of jsonContent when the shape is unexpected.
  • After Number(obj.matchScore ?? 0), guard against NaN and clamp to [0, 100] so obviously-bad values don’t silently degrade in storage.
    These would make failures much easier to debug without changing the happy path; a minimal sketch follows below.
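
For concreteness, a minimal sketch of those two guards, assuming OpenAiError takes (message, status) like the SupabaseError used elsewhere in this file; names and wording are placeholders:

const raw: unknown = JSON.parse(jsonContent);

if (!Array.isArray(raw)) {
  throw new OpenAiError('Unexpected top-level shape in OpenAI suggestion payload', 500);
}

// Treat NaN as 0 and clamp into [0, 100] so obviously-bad scores never reach storage.
const clampScore = (value: unknown): number => {
  const n = Number(value ?? 0);
  return Number.isNaN(n) ? 0 : Math.min(100, Math.max(0, n));
};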

126-159: Parallel image fetch + bulk insert look good; optionally guard for future scale

Using Promise.allSettled for getAmazonImage is a solid choice here: a single image failure won’t break the entire insert, and indexing into imageResults[idx] is safe because allSettled preserves order. The bulk insert of rows into gift_suggestions is also a nice performance win over per-row inserts.

If you ever increase the number of suggestions substantially, you may want to cap concurrency for getAmazonImage (and, indirectly, OpenAI calls upstream) to avoid hammering external APIs, but for the current small fixed count this is fine.

Also applies to: 161-169
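
If a cap ever becomes necessary, simple batching would do; this is an illustrative sketch, not something the PR needs today:

// Run getAmazonImage calls in fixed-size batches so at most `limit` requests are
// in flight at once; results keep the same order as the input titles.
async function fetchImagesWithLimit(titles: string[], limit = 3) {
  const results: PromiseSettledResult<Awaited<ReturnType<typeof getAmazonImage>>>[] = [];
  for (let i = 0; i < titles.length; i += limit) {
    const batch = titles.slice(i, i + limit);
    results.push(...(await Promise.allSettled(batch.map((title) => getAmazonImage(title)))));
  }
  return results;
}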

app/api/gift-exchanges/[id]/giftSuggestions/route.ts (1)

78-84: Type the suggestion JSON shape to avoid silent drift

The response mapping assumes each row from gift_suggestions has a suggestion object in the normalized shape, but suggestionsResult.data is effectively any here. To catch future schema drift at compile time, consider typing the query result to something like:

type SuggestionRow = {
  id: string;
  created_at: string;
  suggestion: IGeneratedSuggestionNormalized;
};

const suggestionsResult = await supabase
  .from('gift_suggestions')
  .select('*')
  .eq('gift_exchange_id', id)
  .eq('giver_id', user.id) as PostgrestResponse<SuggestionRow>;

and/or adding a lightweight runtime guard before spreading ...suggestion.suggestion.
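
A guard could stay very small; isNormalizedSuggestion below is a hypothetical helper, and the field checks can be as strict as the client needs:

function isNormalizedSuggestion(value: unknown): value is IGeneratedSuggestionNormalized {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.title === 'string' && typeof v.description === 'string';
}

// Drop any rows whose stored suggestion no longer matches the expected shape.
const validRows = (suggestionsResult.data ?? []).filter((row) =>
  isNormalizedSuggestion(row.suggestion),
);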

lib/drawGiftExchange.ts (2)

77-97: Preserve underlying assignment error details when using Promise.allSettled

Using Promise.allSettled for the member updates is a good way to ensure all updates are attempted, but the current rejected-branch handling loses the original reason:

if (result.status === 'rejected') {
  throw new SupabaseError('Failed to assign recipients', 500);
}

You might want to pass through result.reason as details, or at least log it, so operational debugging is easier. For example:

for (const result of assignmentResults) {
  if (result.status === 'rejected') {
    throw new SupabaseError(
      'Failed to assign recipients',
      500,
      result.reason,
    );
  }
  if (result.value.error) {
    throw new SupabaseError(
      'Failed to assign recipients',
      result.value.error.code,
      result.value.error,
    );
  }
}

This keeps the behavior the same for callers while retaining the original error context.


100-111: Suggestion generation failures are fully swallowed; consider at least logging

Promise.allSettled around generateAndStoreSuggestions ensures suggestion generation runs concurrently and does not block the draw on a single failure, which is good. However, because the settled results are ignored, any SupabaseError / OpenAiError thrown inside generateAndStoreSuggestions disappears silently, and the exchange is still marked active.

If suggestions are intentionally “best effort”, that’s fine, but it would still be useful to:

  • Inspect the settled results and log or metricize any status === 'rejected' or value.error.
  • Optionally decide on a threshold (e.g., “all suggestion generations failed”) where you’d rather fail the draw instead of silently proceeding.

This would improve observability without changing the happy path or the basic best-effort behavior.
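
As a sketch of that inspection (generateForAssignment is a hypothetical wrapper that fills in the giver/recipient arguments for one assignment):

const suggestionResults = await Promise.allSettled(
  assignments.map((assignment) => generateForAssignment(assignment)),
);

suggestionResults.forEach((result, index) => {
  if (result.status === 'rejected') {
    // Surface the failure instead of silently marking the exchange active.
    console.error(`Suggestion generation failed for assignment ${index}:`, result.reason);
  }
});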

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ec8e025 and 4ee0813.

📒 Files selected for processing (4)
  • app/api/gift-exchanges/[id]/giftSuggestions/route.ts (2 hunks)
  • lib/drawGiftExchange.ts (1 hunks)
  • lib/generateAndStoreSuggestions.ts (2 hunks)
  • lib/interfaces/IGeneratedSuggestionRaw.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
app/api/gift-exchanges/[id]/giftSuggestions/route.ts (1)
lib/errors/CustomErrors.ts (1)
  • SupabaseError (57-77)
lib/generateAndStoreSuggestions.ts (3)
lib/interfaces/IGeneratedSuggestionRaw.ts (2)
  • IGeneratedSuggestionRaw (4-10)
  • IGeneratedSuggestionNormalized (13-20)
lib/getAmazonImage.ts (1)
  • getAmazonImage (12-60)
lib/errors/CustomErrors.ts (1)
  • SupabaseError (57-77)
lib/drawGiftExchange.ts (2)
lib/errors/CustomErrors.ts (1)
  • SupabaseError (57-77)
lib/generateAndStoreSuggestions.ts (1)
  • generateAndStoreSuggestions (23-175)
🔇 Additional comments (4)
lib/generateAndStoreSuggestions.ts (1)

8-11: Interface import keeps suggestion shapes centralized

Importing IGeneratedSuggestionRaw / IGeneratedSuggestionNormalized here is a good move; it centralizes the contract between generation, normalization, and storage and should make future schema changes safer.

app/api/gift-exchanges/[id]/giftSuggestions/route.ts (1)

30-60: Parallelizing match and suggestions queries is a good latency optimization

Running the gift_exchange_members and gift_suggestions queries in parallel via Promise.all is correct and should reduce tail latency for this endpoint. The subsequent error checks on matchResult.error and suggestionsResult.error preserve the previous failure semantics.
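
Roughly, the pattern looks like this; the gift_suggestions query shape is taken from the snippet above, while fetchMatchForUser is a hypothetical stand-in for the gift_exchange_members query:

const [matchResult, suggestionsResult] = await Promise.all([
  fetchMatchForUser(supabase, id, user.id),
  supabase
    .from('gift_suggestions')
    .select('*')
    .eq('gift_exchange_id', id)
    .eq('giver_id', user.id),
]);

// Preserve the previous failure semantics after both reads have settled.
if (matchResult.error || suggestionsResult.error) {
  throw new SupabaseError('Failed to load match or suggestions', 500);
}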

lib/drawGiftExchange.ts (1)

72-75: Assignment cycle correctly prevents self-matches

The assignments construction ((index + 1) % shuffledMembers.length) forms a single cycle over the shuffled members, which guarantees no self-assignments given the existing members.length >= 3 check. Simple and correct.
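
For reference, the cycle reduces to something like this (property names are illustrative):

// Each shuffled member gives to the next one; the last member wraps around to the
// first, so nobody is assigned to themselves once there are at least 3 members.
const assignments = shuffledMembers.map((member, index) => ({
  giverId: member.id,
  recipientId: shuffledMembers[(index + 1) % shuffledMembers.length].id,
}));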

lib/interfaces/IGeneratedSuggestionRaw.ts (1)

4-10: Interfaces cleanly separate raw vs. normalized suggestion shapes

Defining IGeneratedSuggestionRaw for the model response and IGeneratedSuggestionNormalized for the persisted shape (with consistent price: string and imageUrl) is a nice separation of concerns and matches how generateAndStoreSuggestions uses them.

Also applies to: 12-20
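
Approximate shapes, inferred from the walkthrough and the fields listed in the summary; the actual file may differ in detail:

export interface IGeneratedSuggestionRaw {
  title: string;
  price: string | number;      // the model may return either; normalization coerces to string
  description: string;
  matchReasons: string[];
  matchScore: number | string; // coerced to a number during normalization
}

export interface IGeneratedSuggestionNormalized {
  title: string;
  price: string;
  description: string;
  matchReasons: string[];
  matchScore: number;
  imageUrl: string | null;     // null when the parallel image fetch fails
}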

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
lib/generateAndStoreSuggestions.ts (3)

106-106: Consider adding runtime validation for the parsed JSON.

The type assertion as IGeneratedSuggestionRaw[] assumes the OpenAI response matches the expected structure without validation. While the defensive conversions in lines 108-119 provide some safety, runtime validation would be more robust.

As suggested in a previous review, consider using a type guard or validation library to ensure the parsed data matches the expected shape before processing.

Example with a type guard:

function isValidSuggestionArray(data: unknown): data is IGeneratedSuggestionRaw[] {
  if (!Array.isArray(data)) return false;
  return data.every(item => 
    typeof item === 'object' && 
    item !== null &&
    'title' in item &&
    'price' in item &&
    'description' in item &&
    'matchReasons' in item &&
    'matchScore' in item
  );
}

const parsed = JSON.parse(jsonContent);
if (!isValidSuggestionArray(parsed)) {
  throw new OpenAiError('Invalid suggestion format from OpenAI', 500);
}
const rawItems = parsed;

Based on learnings


121-123: LGTM: Excellent performance improvement with parallel image fetching.

Using Promise.allSettled is the right choice here—it allows image fetches to run in parallel while gracefully handling failures without blocking the entire operation. This is a significant performance improvement over sequential fetching.

Optional enhancement: Consider adding logging when image fetches fail for monitoring purposes:

const imageResults = await Promise.allSettled(
  parsedResponse.map((response) => getAmazonImage(String(response.title))),
);

// Log failures for monitoring
imageResults.forEach((result, idx) => {
  if (result.status === 'rejected') {
    console.warn(`Failed to fetch image for "${parsedResponse[idx].title}":`, result.reason);
  }
});

125-150: Well-structured bulk insert payload with one minor redundancy.

The payload structure is clean and type-safe. The handling of image results is correct, safely extracting imageUrl only when the promise fulfilled successfully.

Nitpick: Line 143 has a redundant String() call since suggestion.price is already converted to a string on line 112:

          suggestion: {
            title: suggestion.title,
-           price: String(suggestion.price),
+           price: suggestion.price,
            description: suggestion.description,
            matchReasons: suggestion.matchReasons,
            matchScore: suggestion.matchScore,
            imageUrl,
          },
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 09e8d23 and f1dce3c.

📒 Files selected for processing (1)
  • lib/generateAndStoreSuggestions.ts (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
lib/generateAndStoreSuggestions.ts (3)
lib/interfaces/IGeneratedSuggestionRaw.ts (2)
  • IGeneratedSuggestionRaw (4-10)
  • IGeneratedSuggestionNormalized (13-20)
lib/getAmazonImage.ts (1)
  • getAmazonImage (12-60)
lib/errors/CustomErrors.ts (1)
  • SupabaseError (57-77)
🔇 Additional comments (4)
lib/generateAndStoreSuggestions.ts (4)

8-11: LGTM: Improved type safety.

The new interfaces improve type safety and code clarity throughout the suggestion generation flow.


108-119: LGTM: Robust data normalization.

The defensive conversions ensure data consistency and handle edge cases where OpenAI might return unexpected types. The Array.isArray() check with fallback is particularly good for preventing runtime errors.
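
In outline, the normalization amounts to something like the following; this is a sketch and the PR's exact fallbacks may differ:

const normalized: IGeneratedSuggestionNormalized[] = rawItems.map((item) => ({
  title: String(item.title ?? ''),
  price: String(item.price ?? ''),
  description: String(item.description ?? ''),
  // Fall back to an empty array if the model returns a non-array value.
  matchReasons: Array.isArray(item.matchReasons) ? item.matchReasons.map(String) : [],
  matchScore: Number(item.matchScore ?? 0),
  imageUrl: null, // filled in later from the parallel image-fetch results
}));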


152-162: LGTM: Efficient bulk insert with proper error handling.

The bulk insert is a significant performance improvement over individual inserts, reducing database round trips while maintaining proper error handling. The atomic nature of the insert ensures data consistency—either all suggestions are stored or none are.
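
With the Supabase JS client, the bulk insert plus error check is a single call. A sketch, assuming SupabaseError takes (message, status) as it does elsewhere in this review:

const { error } = await supabase.from('gift_suggestions').insert(rows);

if (error) {
  // One failure fails the whole batch, so nothing is partially stored.
  throw new SupabaseError('Failed to store gift suggestions', 500);
}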


76-76: No issues found — model change is fully compatible.

The web search confirms gpt-4o-mini is a valid OpenAI model (announced July 18, 2024) that is >60% cheaper than gpt-3.5-turbo and includes stronger reasoning and coding performance. Code examination shows the response format is fully compatible: both models use the same chat.completions.create() API with identical response structure (message.content), and your defensive JSON parsing logic already handles potential format variations. This is a beneficial replacement with no breaking changes.
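
For reference, the model swap is a one-line change in the chat completion call; the client setup and prompt below are placeholders, not the PR's actual prompt:

import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const prompt = 'Recipient profile, budget, and interests go here.'; // placeholder
const completion = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [
    { role: 'system', content: 'Return gift suggestions as a JSON array.' },
    { role: 'user', content: prompt },
  ],
});

const jsonContent = completion.choices[0]?.message?.content ?? '[]';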

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
lib/generateAndStoreSuggestions.test.ts (1)

99-109: Consider validating all suggestion fields and exact array length.

The test correctly validates the bulk insert behavior, but it only checks title and imageUrl. The mock OpenAI response (lines 62-73) includes price, description, matchReasons, and matchScore which should also be validated to ensure they're correctly stored.

Additionally, expect.arrayContaining only verifies that at least one matching element exists. Since the mock returns a single suggestion, consider validating that exactly one item is inserted to catch bugs where duplicates or extra items might be added.

Apply this diff to make the test more comprehensive:

    expect(mockInsert).toHaveBeenCalledWith(
-      expect.arrayContaining([
-        expect.objectContaining({
-        gift_exchange_id: 'Exchange1',
-        giver_id: 'Giver1',
-        recipient_id: 'Recipient1',
-        suggestion: expect.objectContaining({
-          title: 'Kindle Paperwhite',
-          imageUrl: 'https://amazon.com/kindle.jpg',
-        }),
-      }),
-      ])
+      [
+        expect.objectContaining({
+          gift_exchange_id: 'Exchange1',
+          giver_id: 'Giver1',
+          recipient_id: 'Recipient1',
+          suggestion: expect.objectContaining({
+            title: 'Kindle Paperwhite',
+            price: '129.99',
+            description: 'Test description',
+            matchReasons: ['Loves reading', 'Something else', 'Another thing'],
+            matchScore: 95,
+            imageUrl: 'https://amazon.com/kindle.jpg',
+          }),
+        }),
+      ]
    );
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a860142 and 3c769bd.

📒 Files selected for processing (1)
  • lib/generateAndStoreSuggestions.test.ts (2 hunks)

@shashilo shashilo enabled auto-merge (squash) November 20, 2025 21:57
Member

@nickytonline nickytonline left a comment


a nit and a question but good to go

@shashilo shashilo merged commit ce1ed54 into develop Nov 21, 2025
5 checks passed
@shashilo shashilo deleted the slo/improving-suggestion-api-performance branch November 21, 2025 14:28
shashilo added a commit that referenced this pull request Nov 21, 2025
* Fix: awaiting function for gift generation

* Fix: deleted package.json debug code

* test: edited test title to be more clear

* Test: added rendering for test

* test: adjusted test title. trying to redeploy my PR to test vercel function duration

* Fix: added toast notification and loading spinner to button

* Fix: indent spacing

* Fix: edited toast notification for drawing

* Fix: added state change in finally block

* Update components/GiftExchangeHeader/GiftExchangeHeader.tsx

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Bump version from 0.8.2-alpha to 0.8.3-alpha

* fix: Vibe coding on fixing the peroformance issues for gift suggestions (#697)

* Vibe coding on fixing the peroformance issues for gift suggestions

* Clean: cleaned up the ai code some.

* Feature: updated AI model

* Fix: cleaned up more of the AI code

* Fix: deleted interface as not needed.

* Test: fixed failing tests

---------

Co-authored-by: Alex Appleget <[email protected]>

* Bump version from 0.8.3-alpha to 0.8.4-alpha

---------

Co-authored-by: Alex Appleget <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>