- Updated `getClient` function to accept `request` parameter for better context handling.
- Modified various API routes to utilize the updated `getClient` function, ensuring proper session management and CORS headers.
- Enhanced error handling by including request context in responses across multiple routes.
Refactor ForceGraph and Controls components to improve cooldown handling and update dependencies
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
Walkthrough

Refactors graph cooldown handling and node styling, threads Request into auth/session calls and per-request CORS across most API routes, updates Next/TSX/turbopack and several dependencies, adds E2E model-fetching helpers and tests, and introduces chat settings for persisted max saved messages.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant API_Route as API Route
    participant Auth as getClient(request)
    participant DB
    participant CORS as getCorsHeaders(request)
    Client->>API_Route: HTTP request (Request)
    API_Route->>Auth: getClient(request)
    alt Auth returns NextResponse
        Auth-->>Client: Early NextResponse (auth error)
    else Auth returns session
        Auth-->>API_Route: session
        API_Route->>DB: perform action with session.client
        DB-->>API_Route: result
        API_Route->>CORS: getCorsHeaders(request)
        CORS-->>API_Route: headers
        API_Route-->>Client: Response with result + headers
    end
```
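A minimal sketch of the request flow in the diagram, using only standard Fetch-API types (`Request`/`Response`, available in Node 18+). `getClientSketch` and `getCorsHeadersSketch` are hypothetical stand-ins for the project's `getClient`/`getCorsHeaders`; the real routes use `NextRequest`/`NextResponse`:

```typescript
type Session = { client: { query: (q: string) => Promise<string> } };

// Stand-in for getClient(request): auth failure short-circuits
// with an early Response, exactly as in the diagram above.
async function getClientSketch(request: Request): Promise<Session | Response> {
  if (!request.headers.get("authorization")) {
    return new Response("Unauthorized", { status: 401 });
  }
  return { client: { query: async (q) => `result of ${q}` } };
}

// Stand-in for getCorsHeaders(request): per-request CORS headers.
function getCorsHeadersSketch(request: Request): Record<string, string> {
  return { "Access-Control-Allow-Origin": request.headers.get("origin") ?? "*" };
}

async function GET(request: Request): Promise<Response> {
  const session = await getClientSketch(request);
  if (session instanceof Response) return session; // early auth error

  const result = await session.client.query("MATCH (n) RETURN n");
  return new Response(JSON.stringify({ result }), {
    status: 200,
    headers: getCorsHeadersSketch(request),
  });
}
```

The key shape is the `session instanceof Response` check: a route either receives a usable session or an already-built error response to return as-is.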
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Suggested PR title: feat(api): thread Request into getClient and use per-request CORS; refactor graph cooldown handling

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
🔒 Trivy Security Scan Results
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
app/graph/page.tsx (1)

156-184: ⚠️ Potential issue | 🟠 Major

Missing deps can skip initialization when `graphInfo` appears.

This effect reads `graphInfo` and `runQuery`, but neither is in the dependency list. If `graphInfo` transitions from `undefined` to a value, this effect won't rerun, and the content persistence/default query path can be skipped.

🛠️ Suggested fix

```diff
- }, [fetchCount, graph.Id, graphName, setGraph, runDefaultQuery, defaultQuery, contentPersistence, setGraphName, graphNames, setIsQueryLoading]);
+ }, [fetchCount, graph.Id, graphName, graphInfo, runQuery, setGraph, runDefaultQuery, defaultQuery, contentPersistence, setGraphName, graphNames, setIsQueryLoading]);
```

app/graph/controls.tsx (1)

48-55: ⚠️ Potential issue | 🟠 Major

Switch can't pause after `cooldownTicks` becomes `-1`.

When other paths set `cooldownTicks` to `-1`, this handler never emits `0`, so the switch can't pause animation. Use the `checked` argument instead of checking for `undefined`.

🛠️ Suggested fix

```diff
  <Switch
    data-testid="animationControl"
    className="pointer-events-auto data-[state=unchecked]:bg-border"
    checked={cooldownTicks !== 0}
-   onCheckedChange={() => {
-     handleCooldown(cooldownTicks === undefined ? 0 : undefined);
-   }}
+   onCheckedChange={(checked) => {
+     handleCooldown(checked ? undefined : 0);
+   }}
  />
```
🤖 Fix all issues with AI agents
In `@app/components/ForceGraph.tsx`:
- Around line 163-166: The reset logic currently computes cooldown =
cooldownTicks === undefined ? undefined : -1 and calls handleCooldown(cooldown)
which can re-enable animation when cooldownTicks is 0; change it to respect the
paused state (same pattern as GraphView) by guarding the call—only compute/reset
cooldown and call handleCooldown when the component is not paused (e.g., check
the paused/isPaused flag first); if paused, skip calling handleCooldown so
paused remains paused. Ensure you update the same pattern for the other
occurrences referencing cooldownTicks and handleCooldown.
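The guard described in this prompt can be isolated as a pure helper. This is a sketch only: `resolveCooldown` and the `isPaused` flag are names invented here, and the real components pass the result to `handleCooldown` from React state.

```typescript
// Decide what (if anything) to pass to handleCooldown on a reset.
// Returns null to mean "do not call handleCooldown at all", so a
// paused graph (cooldownTicks === 0) stays paused across re-renders.
function resolveCooldown(
  cooldownTicks: number | undefined,
  isPaused: boolean,
): number | undefined | null {
  if (isPaused) return null; // skip the reset entirely while paused
  // Original mapping: undefined stays undefined, anything else resets to -1.
  return cooldownTicks === undefined ? undefined : -1;
}
```

With the guard, the bug described above (a reset silently re-enabling animation when `cooldownTicks` is `0`) cannot occur, because the paused branch never reaches `handleCooldown`.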
In `@app/graph/GraphView.tsx`:
- Around line 121-124: The current logic converts cooldownTicks to -1 whenever
cooldownTicks is defined, which unintentionally unpauses when cooldownTicks ===
0; change the guard so you only map undefined to undefined and leave 0 as-is
(i.e., only convert to -1 when cooldownTicks is a positive value or explicitly
not paused), then call handleCooldown(cooldown) with that preserved value; apply
the same fix to the other similar block that also reads cooldownTicks and calls
handleCooldown later in this file.
In `@next.config.js`:
- Around line 11-12: The current custom SVG handling in the webpack() config
won't run under Turbopack; move the SVG rule into the turbopack configuration by
replacing/augmenting the existing turbopack: {} entry so it defines
turbopack.rules with a '*.svg' rule that uses the '@svgr/webpack' loader and
outputs as '*.js' (preserving the SVG-to-component behavior). Update the
turbopack property (the turbopack object) and remove or reconcile the SVG
handling inside the webpack() function (referencing the existing webpack()
custom SVG logic) and then verify import semantics that previously used
resourceQuery (*.svg?url) still behave as intended, adjusting the new turbopack
rule if needed.
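As a sketch of the Turbopack-side rule this prompt describes (the `turbopack.rules` shape follows the Next.js documentation; verify it against the exact Next version in use, and keep the `webpack()` rule for non-Turbopack builds):

```js
// next.config.js (fragment) -- SVG-to-component handling under Turbopack
module.exports = {
  turbopack: {
    rules: {
      "*.svg": {
        loaders: ["@svgr/webpack"],
        as: "*.js",
      },
    },
  },
};
```

Note that a rule keyed on `*.svg` applies to all SVG imports; the `resourceQuery` (`*.svg?url`) split from the webpack config may need a separate condition, as the prompt warns.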
In `@package.json`:
- Around line 15-16: Update the pinned ESLint config to match Next.js 16: bump
"eslint-config-next" in package.json devDependencies to ^16.1.5 (to align with
Next.js 16.1.5) and update any lockfile accordingly; remove or replace usages of
the removed "next lint" command (e.g., in package.json scripts) and switch to
invoking the ESLint CLI with a flat config (ensure an eslint.config.mjs exists
and adjust the lint script to call "eslint" with the project files). Ensure
package.json scripts no longer call "next lint" and that "eslint-config-next" is
the ^16.1.5 version to avoid config failures with Next.js 16.
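Taken together, the alignment above might look like this as a package.json fragment (a sketch; versions are taken from the review, and surrounding fields are elided):

```json
{
  "scripts": {
    "lint": "eslint ."
  },
  "devDependencies": {
    "eslint-config-next": "^16.1.5"
  }
}
```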
There was a problem hiding this comment.
Review summary
- Importance counts: BLOCKER×1, CRITICAL×0, MAJOR×2, MINOR×0, SUGGESTION×0, PRAISE×0 across 3 files.
- Key themes: (1) graph state handling regressions (cooldown toggle and query-refresh effect) that break animation control and data loading, (2) platform/tooling drift where the Next.js upgrade now needs Node ≥20.9 but no engine guard enforces it.
- Next steps: fix the graph `useEffect` dependencies so default queries run once `graphInfo` arrives, ensure the cooldown toggle treats any non-zero ticks as running so users can pause after auto-restarts, and add a Node engine constraint (or CI guard) matching Next 16.1.5's requirement.
Blocker noted, so filing as overall comment for follow-up rather than approval.
package.json

```diff
  "lucide-react": "^0.475.0",
  "monaco-editor": "^0.47.0",
- "next": "^15.5.10",
+ "next": "^16.1.5",
```
[MAJOR]: Missing Node engine constraint for Next 16
Next.js 16.5.1's predecessor line aside, Next.js 16.1.5 requires Node ≥20.9, but the project still lacks an `engines.node` restriction. That means installs/tests can run under Node 18 and will fail only when `next dev`/`next build` crash at runtime with "Next.js requires Node.js v20.9+".
Suggested fix: Declare the minimum Node version (e.g. add `"engines": { "node": ">=20.9.0" }`) or enforce it via tooling/CI so incompatible environments fail fast.
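For illustration, the suggested constraint as a package.json fragment (version taken from the review above; other fields elided):

```json
{
  "engines": {
    "node": ">=20.9.0"
  }
}
```

With this in place, package managers configured with `engine-strict` (and CI images pinned to older Node) fail at install time instead of at runtime.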
Refactor authentication handling to pass request context
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
app/api/graph/[graph]/memory/route.ts (1)

12-13: ⚠️ Potential issue | 🟡 Minor

Update JSDoc: request is now used.

The comment says the request is unused, but it's passed to `getClient`/`getDBVersion`. Please update to avoid misleading docs.

💡 Suggested update

```diff
- * @param request - The incoming NextRequest (unused by this handler but required by the route signature)
+ * @param request - The incoming NextRequest (used for auth/session + per-request handling)
```

app/api/upload/route.ts (1)
20-28: ⚠️ Potential issue | 🟠 Major

Avoid leaking server filesystem paths in the response.

The success payload returns `filePath` (an absolute path), which discloses server filesystem details. Prefer returning a public URL/path instead.

🔧 Suggested fix

```diff
- return NextResponse.json({ path: filePath, status: 200 }, { headers: getCorsHeaders(request) });
+ const publicPath = `/assets/$(unknown)`;
+ return NextResponse.json({ path: publicPath }, { status: 200, headers: getCorsHeaders(request) });
```

app/api/schema/[schema]/_route.ts (1)
5-54: ⚠️ Potential issue | 🟠 Major

Add per-request CORS headers to schema CRUD responses.

These handlers return `NextResponse.json` without `getCorsHeaders(request)`, so browser callers can still fail cross-origin even after the broader per-request CORS refactor. Please attach CORS headers to all success/error responses.

🔧 Suggested pattern (apply across GET/POST/DELETE/PATCH)

```diff
+import { getCorsHeaders } from "../../utils";
 ...
-  return NextResponse.json({ result }, { status: 200 });
+  return NextResponse.json({ result }, { status: 200, headers: getCorsHeaders(request) });
 ...
-  return NextResponse.json(
-    { message: (error as Error).message },
-    { status: 400 }
-  );
+  return NextResponse.json(
+    { message: (error as Error).message },
+    { status: 400, headers: getCorsHeaders(request) }
+  );
```

Also applies to: 57-95, 97-136, 138-190
app/api/schema/_route.ts (1)
1-41: ⚠️ Potential issue | 🟠 Major

Add per-request CORS headers in schema list responses.

Without `getCorsHeaders(request)` here, schema list calls can still fail in browsers for cross-origin usage.

🔧 Proposed fix

```diff
-import { NextRequest, NextResponse } from "next/server";
+import { NextRequest, NextResponse } from "next/server";
+import { getCorsHeaders } from "../utils";
 ...
-  return NextResponse.json({ opts: schemaNames }, { status: 200 });
+  return NextResponse.json(
+    { opts: schemaNames },
+    { status: 200, headers: getCorsHeaders(request) }
+  );
 ...
-  return NextResponse.json(
-    { message: (error as Error).message },
-    { status: 400 }
-  );
+  return NextResponse.json(
+    { message: (error as Error).message },
+    { status: 400, headers: getCorsHeaders(request) }
+  );
 ...
-  return NextResponse.json(
-    { message: (err as Error).message },
-    { status: 500 }
-  );
+  return NextResponse.json(
+    { message: (err as Error).message },
+    { status: 500, headers: getCorsHeaders(request) }
+  );
```
🤖 Fix all issues with AI agents
In `@app/api/chat/route.tsx`:
- Around line 6-10: The OPTIONS handler currently calls corsHeaders() with no
request, which breaks request-scoped/dynamic-origin preflight; change the export
async function OPTIONS() to accept the incoming Request (export async function
OPTIONS(request)) and use getCorsHeaders(request) to build headers, returning
new NextResponse(null, { status: 204, headers: getCorsHeaders(request) });
ensure you replace the corsHeaders() call with getCorsHeaders(request) and keep
NextResponse as the response class.
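The fix described above, reduced to standard Fetch types as a sketch. The real handler uses `NextResponse` from `next/server` and the project's `getCorsHeaders`; `corsHeadersFor` is a hypothetical stand-in whose origin logic is assumed:

```typescript
// Hypothetical stand-in for the project's getCorsHeaders(request).
function corsHeadersFor(request: Request): Record<string, string> {
  return {
    "Access-Control-Allow-Origin": request.headers.get("origin") ?? "*",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
  };
}

// OPTIONS now takes the incoming request, so preflight headers
// are computed per-request instead of from a static corsHeaders().
async function OPTIONS(request: Request): Promise<Response> {
  return new Response(null, { status: 204, headers: corsHeadersFor(request) });
}
```

This is what makes dynamic-origin preflight work: the allowed origin is derived from the request's own `Origin` header rather than baked in at module load.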
🧹 Nitpick comments (3)
app/api/swagger/route.ts (1)
17-23: Consider adding no-cache headers on error responses, too.

This avoids intermediaries caching transient failures.

♻️ Suggested update

```diff
   return NextResponse.json(
     { error: "Failed to generate API specification" },
-    { status: 500, headers: getCorsHeaders(request) }
+    {
+      status: 500,
+      headers: {
+        ...getCorsHeaders(request),
+        "Cache-Control": "no-cache, no-store, must-revalidate",
+        "Pragma": "no-cache",
+        "Expires": "0"
+      }
+    }
   );
```

app/api/graph/[graph]/count/route.ts (1)
72-75: Pre-existing: Abort listener registered too late.

The abort handler is registered after the writer may already be closed. For effective client disconnect handling, register it before starting async operations.

Suggested improvement

```diff
+  // Clean up if the client disconnects early
+  request.signal.addEventListener("abort", () => {
+    writer.close();
+  });
+
   try {
     const session = await getClient(request);
     // ... rest of the try block
   }
   // ... catch blocks

-  // Clean up if the client disconnects early
-  request.signal.addEventListener("abort", () => {
-    writer.close();
-  });
-
   return new Response(readable, {
```

app/api/auth/tokens/[tokenId]/route.ts (1)
13-57: Introduce minimal TokenData/AuthUser types instead of `any`.

This will tighten permission checks and reduce accidental field misuse while keeping the code flexible.

♻️ Example type tightening

```diff
+type TokenData = {
+  token_hash: string;
+  token_id: string;
+  user_id: string | number;
+  username: string;
+  name: string;
+  role: string;
+  host?: string | null;
+  port?: number | string | null;
+  created_at: string;
+  expires_at: string | null;
+  last_used: string | null;
+  is_active: boolean;
+  [key: string]: unknown;
+};
+
+type AuthenticatedUser = {
+  role: string;
+  username: string;
+  host?: string | null;
+  port?: number | string | null;
+};
+
 async function fetchTokenById(tokenId: string, request: Request): Promise<{
-  // eslint-disable-next-line @typescript-eslint/no-explicit-any
-  tokenData?: any;
+  tokenData?: TokenData;
   error?: NextResponse;
 }> {
 ...
 function checkViewPermission(
-  // eslint-disable-next-line @typescript-eslint/no-explicit-any
-  authenticatedUser: any,
-  // eslint-disable-next-line @typescript-eslint/no-explicit-any
-  tokenData: any,
+  authenticatedUser: AuthenticatedUser,
+  tokenData: TokenData,
   request: Request
 ): { authorized: boolean; error?: NextResponse } {
 ...
 function checkRevokePermission(
-  // eslint-disable-next-line @typescript-eslint/no-explicit-any
-  authenticatedUser: any,
-  // eslint-disable-next-line @typescript-eslint/no-explicit-any
-  tokenData: any,
+  authenticatedUser: AuthenticatedUser,
+  tokenData: TokenData,
   request: Request
 ): { authorized: boolean; error?: NextResponse } {
```

Also applies to: 86-92
… GraphData conversion
…customization panel
… font to HyperlegibleSans
Fix regex patterns in getQueryWithLimit function to improve query mat…
Fix #1395 enhance Node and Link types to include caption and size properties in…
…d improved error handling
… improve model fetching logic in BrowserSettings
…oved setup and SVG handling
chore: update @falkordb/canvas dependency to v0.0.30
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
app/api/graph/model.ts (1)
829-839: ⚠️ Potential issue | 🟡 Minor

`caption` and `size` not updated in the finalization loop, inconsistent with all other sites.

Every other node creation/update path in this PR now assigns `color`, `caption`, and `size` together from the label style. This finalization loop only sets `color`, which can leave `caption`/`size` stale if the resolved label differs from the one used during initial creation.

Proposed fix

```diff
   // Use custom color if available, otherwise use default label color
   node.color = label.style.color;
+  node.caption = label.style.caption;
+  node.size = label.style.size;
```

e2e/tests/settingsBrowser.spec.ts (1)
122-132: ⚠️ Potential issue | 🟡 Minor

Search filter tests assume model names contain the search keyword.

The test searches for `"gpt"`, then asserts `openaiModel` is visible. If the first model returned by the API doesn't contain `"gpt"` in its name (e.g., `o1-mini`, `o3`), this assertion will fail. The same issue applies to the `"claude"` search at Line 142 and the clearing test at Line 201.

Consider either:

- Searching for a substring known to be in the dynamically fetched model name, or
🤖 Fix all issues with AI agents
In `@app/globals.css`:
- Around line 48-50: The font-family declaration for the universal selector
currently uses quoted single-word font "HyperlegibleSans" with no fallback;
update the declaration in the global CSS (the rule targeting *) to remove the
unnecessary quotes for the single-word font name and add a generic fallback like
sans-serif (e.g., font-family: HyperlegibleSans, sans-serif) so the browser
falls back safely and Stylelint warnings are resolved.
In `@e2e/tests/customizeStyle.spec.ts`:
- Around line 634-637: The inline comment above the nodes retrieval is wrong: it
says "find person2 nodes" while the code filters for person1; update the comment
to accurately reflect the intent (e.g., change the comment to "Get initial nodes
- find person1 nodes") next to the nodes = await
graph.getNodesScreenPositions("graph"); and the person1Nodes variable so the
comment and the filter (person1Nodes) match.
In `@lib/utils.ts`:
- Around line 291-293: The current regex in the if-statement that checks
query.match(/\bCALL\s*\{[^}]*\}\s*RETURN\b(?![^;]*\bLIMIT\b)/i) fails on CALL
blocks with nested braces or map literals; replace this fragile regex with a
small parser: locate the "CALL" token, find the opening "{" and scan forward
using a brace-depth counter (skipping quoted strings and escaped chars) until
the matching "}" is found, then verify the following tokens include "RETURN" and
no "LIMIT"; implement this as a helper (e.g., findMatchingCallBlock or
hasCallBlockWithoutLimit) and use it in place of the regex in lib/utils.ts so
nested braces and maps are handled correctly.
🧹 Nitpick comments (5)
app/components/Header.tsx (1)
149-174: Remove commented-out SCHEMAS block.

Lines 165-174 are dead code. If the SCHEMAS feature is planned, track it in an issue rather than leaving commented-out JSX in the component.

Suggested cleanup

```diff
   </div>
-  {/*
-  <Button
-    label="SCHEMAS"
-    title="View and manage your schemas"
-    className={cn("w-full flex justify-center text-foreground p-1 rounded-lg", type === "Schema" && "text-primary bg-border")}
-    onClick={() => router.push("/schema")}
-    data-testid="SchemasButton"
-  />
-  {separator}
-  */}
```

e2e/tests/settingsBrowser.spec.ts (1)
15-19: `geminiModel` is declared but never used in any test.

The `eslint-disable` on Line 18 suppresses the warning, but if there's no planned test for it, consider removing the variable and its fetch call to reduce unnecessary API calls in `beforeAll`.

app/components/ForceGraph.tsx (1)
349-365: `graphData` is missing from the dependency array.

The effect reads `graphData` in the conditional at Line 355, but the `eslint-disable` comment suppresses the exhaustive-deps warning. If `graphData` is intentionally omitted to avoid re-running when it's set to `undefined` at Line 357, consider adding a brief comment explaining why; the suppression is non-obvious.

app/components/PaginationList.tsx (2)
183-183: `items` in deps causes this effect to re-run every render.

`items` is a new array reference each render (line 148, `.slice()`). This tears down and re-creates the `ResizeObserver` on every cycle. `items.length` is also redundant when `items` is already listed.

Consider depending only on `itemHeight` (which already captures the item-type dimension):

Suggested fix

```diff
- }, [itemHeight, items, items.length]);
+ }, [itemHeight]);
```
347-351: Remove deprecated `defaultProps` block; use default parameters instead.

`defaultProps` is deprecated for function components in React 18.3+. Since all default values here are `undefined` (the implicit default), remove the block entirely and optionally add explicit defaults in the destructured props if needed.

Suggested fix

```diff
-PaginationList.defaultProps = {
-  className: undefined,
-  children: undefined,
-  isLoading: undefined,
-  isDeleteSelected: undefined,
-};
```

Or use default parameters in the function signature:

```diff
-export default function PaginationList<T extends Item>({ list, onClick, dataTestId, afterSearchCallback, isSelected, isDeleteSelected, label, isLoading, className, children, searchRef }: Props<T>) {
+export default function PaginationList<T extends Item>({ list, onClick, dataTestId, afterSearchCallback, isSelected, isDeleteSelected = undefined, label, isLoading = undefined, className = undefined, children = undefined, searchRef }: Props<T>) {
```

(Though the first approach is cleaner, since `undefined` is already the default.)
```diff
 * {
-  font-family: 'Inter' !important;
+  font-family: 'HyperlegibleSans' !important;
 }
```
Add a generic sans-serif fallback and remove quotes to satisfy Stylelint.
If HyperlegibleSans fails to load, the browser falls back to its default (often serif). Also, Stylelint flags unnecessary quotes for a single-word font name outside @font-face.
Proposed fix
```diff
 * {
-  font-family: 'HyperlegibleSans' !important;
+  font-family: HyperlegibleSans, sans-serif !important;
 }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
* {
  font-family: HyperlegibleSans, sans-serif !important;
}
```
🧰 Tools
🪛 Stylelint (17.2.0)
[error] 49-49: Unexpected quotes around "HyperlegibleSans" (font-family-name-quotes)
(font-family-name-quotes)
[error] 49-49: Unexpected missing generic font family (font-family-no-missing-generic-family-keyword)
(font-family-no-missing-generic-family-keyword)
🤖 Prompt for AI Agents
In `@app/globals.css` around lines 48 - 50, The font-family declaration for the
universal selector currently uses quoted single-word font "HyperlegibleSans"
with no fallback; update the declaration in the global CSS (the rule targeting
*) to remove the unnecessary quotes for the single-word font name and add a
generic fallback like sans-serif (e.g., font-family: HyperlegibleSans,
sans-serif) so the browser falls back safely and Stylelint warnings are
resolved.
```ts
// Get initial nodes - find person2 nodes
let nodes = await graph.getNodesScreenPositions("graph");
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const person1Nodes = nodes.filter((n: any) => n.labels?.includes("person1"));
```
Misleading comment: says "person2" but filters for "person1".
Line 634 comment says // Get initial nodes - find person2 nodes but the filter on Line 637 correctly targets "person1". The comment should match.
Proposed fix

```diff
- // Get initial nodes - find person2 nodes
+ // Get initial nodes - find person1 nodes
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
// Get initial nodes - find person1 nodes
let nodes = await graph.getNodesScreenPositions("graph");
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const person1Nodes = nodes.filter((n: any) => n.labels?.includes("person1"));
```
🤖 Prompt for AI Agents
In `@e2e/tests/customizeStyle.spec.ts` around lines 634 - 637, The inline comment
above the nodes retrieval is wrong: it says "find person2 nodes" while the code
filters for person1; update the comment to accurately reflect the intent (e.g.,
change the comment to "Get initial nodes - find person1 nodes") next to the
nodes = await graph.getNodesScreenPositions("graph"); and the person1Nodes
variable so the comment and the filter (person1Nodes) match.
```ts
if (query.match(/\bCALL\s*\{[^}]*\}\s*RETURN\b(?![^;]*\bLIMIT\b)/i)) {
  return [`${query} LIMIT ${limit}`, limit];
}
```
`[^}]*` won't match CALL blocks containing nested braces.

Cypher queries can contain map literals or nested CALL blocks inside `CALL { ... }`, e.g.:

```cypher
CALL { MATCH (n) WHERE n.props = {key: "val"} RETURN n } RETURN *
```

Here `[^}]*` stops at the first `}` (the map literal), so the regex fails to match the full CALL block. This is hard to solve with a single regex, but worth noting as a known limitation.
🤖 Prompt for AI Agents
In `@lib/utils.ts` around lines 291 - 293, The current regex in the if-statement
that checks query.match(/\bCALL\s*\{[^}]*\}\s*RETURN\b(?![^;]*\bLIMIT\b)/i)
fails on CALL blocks with nested braces or map literals; replace this fragile
regex with a small parser: locate the "CALL" token, find the opening "{" and
scan forward using a brace-depth counter (skipping quoted strings and escaped
chars) until the matching "}" is found, then verify the following tokens include
"RETURN" and no "LIMIT"; implement this as a helper (e.g., findMatchingCallBlock
or hasCallBlockWithoutLimit) and use it in place of the regex in lib/utils.ts so
nested braces and maps are handled correctly.
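The brace-depth scan this prompt describes can be sketched as follows. This is a hypothetical implementation, not the project's code: the name `hasCallBlockWithoutLimit` comes from the prompt's suggestion, and the tail check is simplified relative to the original regex's `(?![^;]*\bLIMIT\b)` semicolon handling.

```typescript
// True when the query contains a CALL { ... } block (handling nested
// braces and quoted strings) immediately followed by RETURN with no LIMIT.
function hasCallBlockWithoutLimit(query: string): boolean {
  const callIdx = query.search(/\bCALL\s*\{/i);
  if (callIdx === -1) return false;

  const open = query.indexOf("{", callIdx);
  let depth = 0;
  let quote: string | null = null;
  let i = open;

  // Scan to the matching "}" with a brace-depth counter,
  // skipping quoted strings and escaped characters.
  for (; i < query.length; i += 1) {
    const ch = query[i];
    if (quote) {
      if (ch === "\\") i += 1; // skip escaped char inside a string
      else if (ch === quote) quote = null;
    } else if (ch === '"' || ch === "'") {
      quote = ch;
    } else if (ch === "{") {
      depth += 1;
    } else if (ch === "}") {
      depth -= 1;
      if (depth === 0) break; // found the matching close brace
    }
  }
  if (depth !== 0) return false; // unbalanced braces: bail out

  const tail = query.slice(i + 1);
  return /^\s*RETURN\b/i.test(tail) && !/\bLIMIT\b/i.test(tail);
}
```

Unlike the regex, this correctly matches the nested-map example from the comment above, because the depth counter steps over the inner `{key: "val"}` literal.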
Update README, ForceGraph component, and configuration files for impr…
Save chat session
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
README.md (2)
103-106: ⚠️ Potential issue | 🟡 Minor

Fix invalid npm install command.

`npm -i` is not a valid npm command; it will fail for readers. Use `npm i` or `npm install`.

✅ Suggested fix

```diff
-* Build the node project `npm -i`
+* Build the node project `npm i`
```
12-12: ⚠️ Potential issue | 🟡 Minor

Typo and wording cleanup for clarity.

Spelling mistakes and inconsistent casing reduce readability.

✅ Suggested fix

```diff
-## Visualize, manipulate and manage FalkorDB graph data interactively to monitor and explore data changes. The latest version introcused the following improvements
+## Visualize, manipulate and manage FalkorDB graph data interactively to monitor and explore data changes. The latest version introduced the following improvements
```

```diff
-> Note: Alternativly, you can run the browser from source and database using Docker
+> Note: Alternatively, you can run the browser from source and database using Docker
```

```diff
-* run the server `npm run dev`
+* Run the server `npm run dev`
```

Also applies to: 61-61, 106-106
🤖 Fix all issues with AI agents
In `@app/graph/Chat.tsx`:
- Around line 52-62: The save-effect is writing stale messages to the new graph
key because it runs immediately after load before state updates flush; update
the logic in Chat.tsx to skip the save when messages were just loaded for a
changed graph by adding a ref (e.g., justLoadedRef) toggled in the load
useEffect that sets messages for graphName, and check that ref inside the
save-effect/cleanup to return early if justLoadedRef is set (then clear it);
alternatively consolidate load/save into a single effect that reads from
localStorage and only writes on unmount or when messages actually change for the
current graph, referencing the useEffect hooks that read
`localStorage.getItem(\`chat-${graphName}\`)`, setMessages, the save cleanup
that calls localStorage.setItem(`chat-${graphName}`,
JSON.stringify(getLastUserMessagesWithContext(messages, maxSavedMessages))) and
the graphName/messages/maxSavedMessages deps.
In `@app/settings/browserSettings.tsx`:
- Around line 87-108: The effect currently auto-applies and persists a detected
model by calling setModel(defaultModel) and localStorage.setItem("model",
defaultModel) which bypasses the staged-new* pattern; change it to only stage
the detected model by calling setNewModel(defaultModel) (do not call setModel or
touch localStorage), so the model is only persisted when the user clicks Save;
also remove saveSettings from the effect dependency array since it is not used
in the effect body to avoid unnecessary re-runs (leave secretKey, model, toast,
setIndicator, setNewModel as needed).
- Line 276: The input currently uses the invalid attribute type="string"; locate
the JSX input element that sets type="string" and change it to type="number",
add min="5" and max="10" (and an appropriate step if needed), and ensure the
component parses/handles the numeric value (e.g., convert string
event.target.value to a number before validation or state updates) so the field
enforces the 5..10 range in the UI and logic.
In `@app/settings/ModelSelector.tsx`:
- Around line 135-186: The horizontal grid uses auto-cols-[10%] which makes each
model cell too narrow; update the container layout (the div that maps
categoryModels and renders each model button) to give each item a reasonable
minimum width — e.g., replace auto-cols-[10%] with auto-cols-[minmax(120px,1fr)]
or switch the container to a responsive flex layout (flex flex-wrap) so model
buttons rendered by formatModelDisplayName/handleModelClick/selectedModel have
usable width and avoid excessive truncation.
In `@e2e/logic/POM/settingsBrowserPage.ts`:
- Around line 50-52: The method waitForEnvironmentSection is misnamed compared
to the other Chat-related APIs; rename it to waitForChatSection and update any
callers to use the new name, keeping the implementation unchanged (still
returning waitForElementToBeVisible(this.chatSectionHeader)); also search for
references to waitForEnvironmentSection and replace them with waitForChatSection
so it aligns with clickChatSectionHeader and expandChatSection.
In `@e2e/tests/chat.spec.ts`:
- Around line 58-59: getAvailableModels() may return an empty array so using
models[0] can yield undefined; before calling settings.selectModel(models[0])
add a guard/assertion and ensure models is non-empty (e.g., assert
expect(models.length).toBeGreaterThan(0) or wait/retry until
getAvailableModels() returns a non-empty array) and only then call
settings.selectModel(models[0]); reference getAvailableModels, models, and
selectModel to find the code to modify.
In `@e2e/tests/settingsBrowser.spec.ts`:
- Around line 116-133: The test assumes searching for the literal string
"gpt"/"claude" will match the dynamically fetched openaiModel/anthropicModel,
which can be false; update the test to derive the search term from the actual
model name instead of hardcoding it: call settingsBrowserPage.searchModels(...)
using a substring or provider-unique token from openaiModel (e.g.,
openaiModel.name.slice(0,N) or another guaranteed-matching fragment) before
asserting via settingsBrowserPage.isModelVisible(openaiModel), and do the same
for anthropicModel, ensuring the search term will always match the target model
returned by the dynamic fixtures.
- Around line 25-43: The code assumes tempApiCall.getModelsByProvider(...)
returns an object with a models array and will crash if models is undefined;
update the assignments (for openaiModel, anthropicModel, geminiModel) to use
optional chaining and safe defaults (e.g., use response?.models?.[0] ??
'fallback') when reading the first model from the responses returned by
tempApiCall.getModelsByProvider to avoid a TypeError if models is missing.
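The defensive read this prompt asks for can be isolated as a tiny helper. A sketch only: `ModelsResponse` is an assumed shape for what `getModelsByProvider` returns, and `firstModelOrFallback` is a name invented here.

```typescript
// Assumed response shape; the real getModelsByProvider type may differ.
type ModelsResponse = { models?: string[] } | undefined;

// Optional chaining + nullish coalescing avoid a TypeError
// when the response or its models array is missing.
function firstModelOrFallback(response: ModelsResponse, fallback: string): string {
  return response?.models?.[0] ?? fallback;
}
```

In the tests this would replace direct `response.models[0]` reads for `openaiModel`, `anthropicModel`, and `geminiModel`.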
🧹 Nitpick comments (13)
app/api/chat/route.tsx (3)
96-104: Content-Type mismatch on validation error response.

The response body is a plain string (`validation.error`), but `Content-Type` is set to `text/event-stream`. Since this is a 400 error that short-circuits before streaming, consider using `application/json` or `text/plain` instead. The SSE headers (`Cache-Control`, `Connection: keep-alive`) are also unnecessary here.

Suggested fix

```diff
   if (!validation.success) {
-    return new Response(validation.error, {
-      status: 400,
-      headers: {
-        "Content-Type": "text/event-stream",
-        "Cache-Control": "no-cache",
-        Connection: "keep-alive",
-        ...getCorsHeaders(request),
-      },
+    return NextResponse.json({ error: validation.error }, {
+      status: 400,
+      headers: getCorsHeaders(request),
     });
   }
```
171-173: Abort handler can throw on an already-closed writer.

Pre-existing issue, but worth noting: `writer.close()` at lines 155/163/168 runs before the abort listener is registered at line 171. If the client aborts after the writer is already closed, this will throw. A guard would prevent unhandled errors.

Suggested guard:
```diff
 request.signal.addEventListener("abort", () => {
-  writer.close();
+  writer.close().catch(() => {});
 });
```
78-88: TransformStream allocated before auth check, wasted on early return.

Lines 79-80 create the stream before `getClient` at line 84. On auth failure (line 87), the stream is abandoned. Move the stream creation after authentication succeeds to avoid the unnecessary allocation.

Suggested reorder:
```diff
 export async function POST(request: NextRequest) {
-  const encoder = new TextEncoder();
-  const { readable, writable } = new TransformStream();
-  const writer = writable.getWriter();
-
   try {
     // Verify authentication via getClient
     const session = await getClient(request);

     if (session instanceof NextResponse) {
       return session;
     }

     const body = await request.json();

     // Validate request body
     const validation = validateBody(chatRequest, body);

     if (!validation.success) {
-      return new Response(validation.error, {
-        status: 400,
-        headers: {
-          "Content-Type": "text/event-stream",
-          "Cache-Control": "no-cache",
-          Connection: "keep-alive",
-          ...getCorsHeaders(request),
-        },
+      return NextResponse.json({ error: validation.error }, {
+        status: 400,
+        headers: getCorsHeaders(request),
       });
     }

+    const encoder = new TextEncoder();
+    const { readable, writable } = new TransformStream();
+    const writer = writable.getWriter();
+
     const { messages, graphName, key, model } = validation.data;
```

e2e/logic/POM/settingsBrowserPage.ts (2)
32-38: Private helper param names still use `modelName`/`displayText` while public API uses `providerName`.

`getModelButton(modelName)` and `getModelButtonByDisplayText(displayText)` weren't renamed to align with the public methods' `providerName` parameter. Minor inconsistency within the class.
294-312: `getAvailableModels` relies on a hardcoded `waitForTimeout(500)` to wait for models to load.

This is fragile and could cause flakiness in CI. Consider waiting for a specific condition (e.g., at least one `[data-testid^="selectModel"]` element to appear) instead of a fixed delay.

e2e/tests/settingsBrowser.spec.ts (1)
15-19: `geminiModel` is declared and suppressed with eslint-disable but never used.

If no test currently needs it, remove it to avoid dead code. You can always re-add it when a Gemini-specific test is introduced.
e2e/tests/chat.spec.ts (1)
311-313: Fragile regex for modifying Cypher RETURN clause.

`editorQuery!.replace(/RETURN\s+(\w+)\.name/gi, 'RETURN $1')` assumes the generated query uses `RETURN <alias>.name`. If the LLM generates a different pattern (e.g., `RETURN p.name AS friend_name`), the regex won't match and the test logic silently does nothing. Consider adding an assertion that the replacement actually changed the query.

app/graph/Chat.tsx (2)
126-146: `setIsLoading(false)` calls are no-ops here.

At lines 133 and 144, `setIsLoading(false)` is called before `setIsLoading(true)` at line 148. Since `isLoading` is already `false` at this point in the flow (the early return at line 96 already guards the loading case), these are unnecessary.
114-124: Fragile DOM query to navigate to settings.

Clicking a button found via `querySelector('[data-testid="settingsButton"]')` after a 500ms `setTimeout` is brittle: it depends on the element being in the DOM and the timing being sufficient. Consider using the router (`router.push('/settings')`) or a shared callback from context instead.

app/providers.tsx (1)
508-508: Initial state `useState(0)` differs from localStorage default `"5"`.

`maxSavedMessages` starts at `0` (line 92) until the async init effect at line 508 sets it from localStorage (defaulting to `5`). If any consumer reads `maxSavedMessages` before init completes, it sees `0`, which would cause `getLastUserMessagesWithContext` to save zero user messages. The window is brief and unlikely to cause problems in practice, but you could align the initial `useState(5)` with the localStorage default for consistency.

app/settings/browserSettings.tsx (1)
139-146: Boundary validation is good, but consider clamping on input rather than rejecting on save.

Currently the user can type any value, and it's only rejected on submit. Since the range is small ([5..10]), clamping inline (or using `type="number"` with `min`/`max`) would provide immediate feedback and prevent a confusing toast after clicking save.
145-160: Prefer `next/link` over `router.push` for static navigation targets.

These buttons navigate to fixed routes (`/settings`, `/graph`). Wrapping them with `next/link` (or using `<Link>`) would enable Next.js prefetching and improve perceived navigation speed. Currently `router.push` only triggers navigation on click with no prefetch.

As per coding guidelines, "Review React components for proper hooks usage, accessibility, and performance."
161-170: Remove commented-out SCHEMAS block or track it in an issue.

Shipping commented-out UI code adds noise. If this feature is planned, a tracked issue is cleaner than a dormant code block.
```tsx
useEffect(() => {
    const savedMessages = localStorage.getItem(`chat-${graphName}`);
    setMessages(JSON.parse(savedMessages || "[]"));
}, [graphName]); // Re-run when graph changes

// Save messages on unmount or graph change
useEffect(() => () => {
    if (messages.length > 0) {
        localStorage.setItem(`chat-${graphName}`, JSON.stringify(getLastUserMessagesWithContext(messages, maxSavedMessages)));
    }
}, [graphName, messages, maxSavedMessages]);
```
Race condition: save effect writes stale messages to the new graph's localStorage key on graph switch.

When `graphName` changes:

- Save-effect cleanup fires with old `graphName` + old `messages` → correctly writes the old graph's history.
- Load-effect runs `setMessages(...)` for the new graph (async state update).
- Save-effect re-runs with new `graphName` but old `messages` (state hasn't flushed yet).
- On next render, the save-effect cleanup writes the old graph's messages under the new `chat-{graphName}` key, overwriting the newly loaded data.

Consider using a ref to track the "just-loaded" state, or consolidate load/save into a single effect:

Proposed fix: skip save on the render immediately after load
```diff
+const justLoaded = useRef(false);
+
 // Load messages for current graph on mount
 useEffect(() => {
     const savedMessages = localStorage.getItem(`chat-${graphName}`);
     setMessages(JSON.parse(savedMessages || "[]"));
+    justLoaded.current = true;
 }, [graphName]);

 // Save messages on unmount or graph change
 useEffect(() => () => {
-    if (messages.length > 0) {
+    if (justLoaded.current) {
+        justLoaded.current = false;
+        return;
+    }
+    if (messages.length > 0 && graphName) {
         localStorage.setItem(`chat-${graphName}`, JSON.stringify(getLastUserMessagesWithContext(messages, maxSavedMessages)));
     }
 }, [graphName, messages, maxSavedMessages]);
```
🤖 Prompt for AI Agents
In `@app/graph/Chat.tsx` around lines 52 - 62, The save-effect is writing stale
messages to the new graph key because it runs immediately after load before
state updates flush; update the logic in Chat.tsx to skip the save when messages
were just loaded for a changed graph by adding a ref (e.g., justLoadedRef)
toggled in the load useEffect that sets messages for graphName, and check that
ref inside the save-effect/cleanup to return early if justLoadedRef is set (then
clear it); alternatively consolidate load/save into a single effect that reads
from localStorage and only writes on unmount or when messages actually change
for the current graph, referencing the useEffect hooks that read
`localStorage.getItem(\`chat-${graphName}\`)`, setMessages, the save cleanup
that calls localStorage.setItem(`chat-${graphName}`,
JSON.stringify(getLastUserMessagesWithContext(messages, maxSavedMessages))) and
the graphName/messages/maxSavedMessages deps.
```diff
+useEffect(() => {
+    (async () => {
+        if (!model && secretKey) {
+            const res = await securedFetch(
+                `/api/chat/models?provider=${detectProviderFromApiKey(secretKey)}`,
+                { method: "GET" },
+                toast,
+                setIndicator
+            );
-            if (result.ok) {
-                const { models } = await result.json();
-                setModelDisplayNames(models);
-            } else {
-                // Fallback to gpt-4o-mini if fetch fails
-                setModelDisplayNames(["gpt-4o-mini"]);
-            }
-        } catch (error) {
-            setModelDisplayNames(["gpt-4o-mini"]);
-            setIndicator("offline");
-        } finally {
-            setIsLoadingModels(false);
-        }
-    };
+            if (!res.ok) return;
-    fetchModels();
-}, [toast, setIndicator]);
+            const defaultModel = (await res.json()).models[0];
+
+            if (!defaultModel) return;
+
+            setNewModel(defaultModel);
+            setModel(defaultModel);
+            localStorage.setItem("model", defaultModel);
+        }
+    })();
+}, [secretKey, model, saveSettings, toast, setIndicator, setNewModel, setModel]);
```
Auto-detect model effect bypasses the staged-settings save pattern.
Lines 103–105 directly call setModel(defaultModel) and localStorage.setItem("model", defaultModel), persisting without user confirmation. Every other setting in this page is staged via new* state and only persisted on explicit save. This breaks that convention and could surprise users who expect unsaved changes to be discardable.
Also, saveSettings is in the dependency array but unused in the effect body — this causes unnecessary re-runs.
Proposed fix — stage the model instead of persisting immediately
```diff
   useEffect(() => {
     (async () => {
       if (!model && secretKey) {
         const res = await securedFetch(
           `/api/chat/models?provider=${detectProviderFromApiKey(secretKey)}`,
           { method: "GET" },
           toast,
           setIndicator
         );

         if (!res.ok) return;

         const defaultModel = (await res.json()).models[0];

         if (!defaultModel) return;

         setNewModel(defaultModel);
-        setModel(defaultModel);
-        localStorage.setItem("model", defaultModel);
       }
     })();
-  }, [secretKey, model, saveSettings, toast, setIndicator, setNewModel, setModel]);
+  }, [secretKey, model, toast, setIndicator, setNewModel]);
```
🤖 Prompt for AI Agents
In `@app/settings/browserSettings.tsx` around lines 87 - 108, The effect currently
auto-applies and persists a detected model by calling setModel(defaultModel) and
localStorage.setItem("model", defaultModel) which bypasses the staged-new*
pattern; change it to only stage the detected model by calling
setNewModel(defaultModel) (do not call setModel or touch localStorage), so the
model is only persisted when the user clicks Save; also remove saveSettings from
the effect dependency array since it is not used in the effect body to avoid
unnecessary re-runs (leave secretKey, model, toast, setIndicator, setNewModel as
needed).
```tsx
<Input
    id="maxSaveMessagesInput"
    data-testid="maxSaveMessagesInput"
    type="string"
```
type="string" is not a valid HTML input type.
Browsers will fall back to type="text", but this should be explicit. Since the field is numeric with a [5..10] range, type="number" with min/max attributes would be more appropriate.
Proposed fix
```diff
-<Input
-  id="maxSaveMessagesInput"
-  data-testid="maxSaveMessagesInput"
-  type="string"
-  value={newMaxSavedMessages}
+<Input
+  id="maxSaveMessagesInput"
+  data-testid="maxSaveMessagesInput"
+  type="number"
+  min={5}
+  max={10}
+  value={newMaxSavedMessages}
```
+ value={newMaxSavedMessages}🤖 Prompt for AI Agents
In `@app/settings/browserSettings.tsx` at line 276, The input currently uses the
invalid attribute type="string"; locate the JSX input element that sets
type="string" and change it to type="number", add min="5" and max="10" (and an
appropriate step if needed), and ensure the component parses/handles the numeric
value (e.g., convert string event.target.value to a number before validation or
state updates) so the field enforces the 5..10 range in the UI and logic.
```diff
 <div key={category} className="bg-muted/40 rounded-md overflow-x-auto flex-1 grid grid-flow-col auto-cols-[10%] gap-2 items-center">
     {/* Category Label */}
     <div className="flex items-center gap-2">
         {getCategoryIcon(category)}
         <Tooltip>
             <TooltipTrigger asChild>
                 <h3 className="text-sm font-bold text-foreground tracking-wide truncate">
                     {category}
                 </h3>
             </TooltipTrigger>
             <TooltipContent>
                 <p>{category}</p>
             </TooltipContent>
         </Tooltip>
     </div>
-    <h3 className="text-sm font-bold text-foreground tracking-wide">
-        {category}
-    </h3>
 </div>

 {/* Models in Category */}
 <div className="space-y-1">
     {categoryModels.map((model) => {
         const isSelected = model === selectedModel;
         return (
-            <button
-                key={model}
-                type="button"
-                data-testid={`selectModel${model}`}
-                data-selected={isSelected}
-                onClick={() => handleModelClick(model)}
-                disabled={disabled}
-                className={cn(
-                    "w-full group relative flex items-center justify-between px-3 py-2.5 rounded-md text-sm transition-all duration-150",
-                    "hover:bg-muted/80 active:scale-[0.98]",
-                    isSelected && "bg-primary/10 hover:bg-primary/15 ring-1 ring-primary/30",
-                    disabled && "opacity-50 cursor-not-allowed"
-                )}
-            >
-                <span className={cn(
-                    "font-medium truncate",
-                    isSelected ? "text-primary" : "text-foreground"
-                )}>
-                    {formatModelDisplayName(model)}
-                </span>
-
-                {isSelected && (
-                    <Check className="h-4 w-4 text-primary flex-shrink-0 ml-2" />
-                )}
-            </button>
             <Tooltip key={model}>
                 <TooltipTrigger asChild>
                     <button
                         type="button"
                         data-testid={`selectModel${model}`}
                         data-selected={isSelected}
                         onClick={() => handleModelClick(model)}
                         disabled={disabled}
                         className={cn(
                             "flex items-center justify-between p-2 rounded-md text-sm transition-all duration-150",
                             "hover:bg-muted/80 active:scale-[0.98]",
                             isSelected && "bg-primary/10 hover:bg-primary/15 ring-1 ring-primary/30",
                             disabled && "opacity-50 cursor-not-allowed"
                         )}
                     >
                         <span className={cn(
                             "font-medium truncate",
                             isSelected ? "text-primary" : "text-foreground"
                         )}>
                             {formatModelDisplayName(model)}
                         </span>

                         {isSelected && (
                             <Check className="h-4 w-4 text-primary flex-shrink-0 ml-2" />
                         )}
                     </button>
                 </TooltipTrigger>
                 <TooltipContent>
                     <p>{model}</p>
                 </TooltipContent>
             </Tooltip>
         );
     })}
 </div>
```
Horizontal grid with auto-cols-[10%] makes model items extremely narrow.
Each column is only 10% of the container width. For a typical settings panel, that's roughly 30–60px per cell — far too narrow for model names like gpt-4-turbo-preview. Even with tooltips on hover, the truncated text makes it hard to scan and compare models at a glance.
Was this intended to be a vertical list with horizontal overflow only for very long category rows? If so, consider auto-cols-[minmax(120px,1fr)] or a flex flex-wrap layout instead, so items have a usable minimum width.
```diff
-<div key={category} className="bg-muted/40 rounded-md overflow-x-auto flex-1 grid grid-flow-col auto-cols-[10%] gap-2 items-center">
+<div key={category} className="bg-muted/40 rounded-md overflow-x-auto flex-1 grid grid-flow-col auto-cols-[minmax(140px,1fr)] gap-2 items-center">
```

🤖 Prompt for AI Agents
In `@app/settings/ModelSelector.tsx` around lines 135 - 186, The horizontal grid
uses auto-cols-[10%] which makes each model cell too narrow; update the
container layout (the div that maps categoryModels and renders each model
button) to give each item a reasonable minimum width — e.g., replace
auto-cols-[10%] with auto-cols-[minmax(120px,1fr)] or switch the container to a
responsive flex layout (flex flex-wrap) so model buttons rendered by
formatModelDisplayName/handleModelClick/selectedModel have usable width and
avoid excessive truncation.
```diff
 async waitForEnvironmentSection(): Promise<boolean> {
-    return waitForElementToBeVisible(this.environmentSectionHeader);
+    return waitForElementToBeVisible(this.chatSectionHeader);
 }
```
Inconsistent rename: waitForEnvironmentSection not updated to waitForChatSection.
All other "Environment" references were renamed to "Chat" (clickChatSectionHeader, expandChatSection), but this method still uses the old terminology. This will confuse callers.
Proposed fix
```diff
-async waitForEnvironmentSection(): Promise<boolean> {
+async waitForChatSection(): Promise<boolean> {
     return waitForElementToBeVisible(this.chatSectionHeader);
 }
```
🤖 Prompt for AI Agents
In `@e2e/logic/POM/settingsBrowserPage.ts` around lines 50 - 52, The method
waitForEnvironmentSection is misnamed compared to the other Chat-related APIs;
rename it to waitForChatSection and update any callers to use the new name,
keeping the implementation unchanged (still returning
waitForElementToBeVisible(this.chatSectionHeader)); also search for references
to waitForEnvironmentSection and replace them with waitForChatSection so it
aligns with clickChatSectionHeader and expandChatSection.
```ts
const models = await settings.getAvailableModels();
await settings.selectModel(models[0]); // Select a model to enable the input
```
No guard on empty models array — models[0] could be undefined.
If getAvailableModels() returns an empty array (e.g., models haven't loaded yet), models[0] is undefined, and selectModel(undefined) will look for a non-existent test ID. Add a guard or assertion.
Proposed fix
```diff
 const models = await settings.getAvailableModels();
+expect(models.length).toBeGreaterThan(0);
 await settings.selectModel(models[0]);
```
🤖 Prompt for AI Agents
In `@e2e/tests/chat.spec.ts` around lines 58 - 59, getAvailableModels() may return
an empty array so using models[0] can yield undefined; before calling
settings.selectModel(models[0]) add a guard/assertion and ensure models is
non-empty (e.g., assert expect(models.length).toBeGreaterThan(0) or wait/retry
until getAvailableModels() returns a non-empty array) and only then call
settings.selectModel(models[0]); reference getAvailableModels, models, and
selectModel to find the code to modify.
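The wait/retry option from the prompt above can be implemented as a generic helper. This is a synchronous sketch to show the control flow only; the name, retry count, and error message are assumptions, and in the real test the getter would be an awaited `settings.getAvailableModels()` with a delay between attempts:

```typescript
// Hypothetical helper: poll a getter until it yields a non-empty array,
// failing loudly instead of passing `undefined` on to selectModel.
function retryUntilNonEmpty<T>(get: () => T[], maxAttempts = 5): T[] {
  for (let attempt = 1; attempt <= maxAttempts; attempt += 1) {
    const result = get();
    if (result.length > 0) return result;
  }
  throw new Error(`still empty after ${maxAttempts} attempts`);
}
```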
```ts
try {
    // Get models from each provider
    const openaiModels = await tempApiCall.getModelsByProvider('openai');
    const anthropicModels = await tempApiCall.getModelsByProvider('anthropic');
    const geminiModels = await tempApiCall.getModelsByProvider('gemini');

    // Store first model from each provider
    openaiModel = openaiModels.models[0] || 'gpt-4o-mini';
    anthropicModel = anthropicModels.models[0] || 'claude-3-5-sonnet';
    geminiModel = geminiModels.models[0] || 'gemini-2.0-flash-exp';

    // console.log('Test models:', { openaiModel, anthropicModel, geminiModel });
} catch (error) {
    // Fallback to default models if API call fails
    console.warn('Failed to fetch models, using defaults:', error);
    openaiModel = 'gpt-4o-mini';
    anthropicModel = 'claude-3-5-sonnet';
    geminiModel = 'gemini-2.0-flash-exp';
}
```
Potential TypeError if API response lacks models property.
If getModelsByProvider returns an object without a models array (e.g., {}), accessing .models[0] will throw before the || 'fallback' can kick in. Use optional chaining.
Proposed fix
```diff
-openaiModel = openaiModels.models[0] || 'gpt-4o-mini';
-anthropicModel = anthropicModels.models[0] || 'claude-3-5-sonnet';
-geminiModel = geminiModels.models[0] || 'gemini-2.0-flash-exp';
+openaiModel = openaiModels?.models?.[0] || 'gpt-4o-mini';
+anthropicModel = anthropicModels?.models?.[0] || 'claude-3-5-sonnet';
+geminiModel = geminiModels?.models?.[0] || 'gemini-2.0-flash-exp';
```
🤖 Prompt for AI Agents
In `@e2e/tests/settingsBrowser.spec.ts` around lines 25 - 43, The code assumes
tempApiCall.getModelsByProvider(...) returns an object with a models array and
will crash if models is undefined; update the assignments (for openaiModel,
anthropicModel, geminiModel) to use optional chaining and safe defaults (e.g.,
use response?.models?.[0] ?? 'fallback') when reading the first model from the
responses returned by tempApiCall.getModelsByProvider to avoid a TypeError if
models is missing.
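The safe-read pattern recommended in the prompt above can be isolated into a small helper. The response shape and fallback handling here are assumptions for illustration:

```typescript
// Tolerates a missing or empty `models` array instead of throwing a TypeError.
interface ModelsResponse { models?: string[] }

function firstModelOrDefault(res: ModelsResponse | undefined, fallback: string): string {
  return res?.models?.[0] ?? fallback;
}
```

With this helper, `firstModelOrDefault(openaiModels, 'gpt-4o-mini')` behaves identically for `{}`, `{ models: [] }`, and an undefined response.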
```diff
 test('@readwrite Verify model search filters models correctly', async () => {
     const settingsBrowserPage = await browser.createNewPage(SettingsBrowserPage, urls.settingsUrl);

-    await settingsBrowserPage.expandEnvironmentSection();
+    await settingsBrowserPage.expandChatSection();
     await settingsBrowserPage.waitForChatApiKeyInputEnabled();

     // Search for "gpt" - should only show OpenAI models
     await settingsBrowserPage.searchModels("gpt");
     await settingsBrowserPage.waitForTimeout(300); // Wait for debounce

     // Verify GPT models are visible
-    const isGptModelVisible = await settingsBrowserPage.isModelVisible("gpt-4o-mini");
+    const isGptModelVisible = await settingsBrowserPage.isModelVisible(openaiModel);
     expect(isGptModelVisible).toBe(true);

     // Verify non-GPT models are not visible
-    const isClaudeVisible = await settingsBrowserPage.isModelVisible("claude-3-5-sonnet");
+    const isClaudeVisible = await settingsBrowserPage.isModelVisible(anthropicModel);
     expect(isClaudeVisible).toBe(false);
 });
```
Search filter "gpt" may not match the dynamically-fetched openaiModel.
The test searches for "gpt" and then asserts `openaiModel` is visible. However, OpenAI models like `o1-mini` or `o3-mini` don't contain "gpt". The same issue applies at lines 142-147, where "claude" is searched but `anthropicModel` might not contain "claude". These tests could fail if the first available model doesn't match the hardcoded search term.
Consider deriving the search term from the actual model name, or searching for a term guaranteed to match only the target provider.
🤖 Prompt for AI Agents
In `@e2e/tests/settingsBrowser.spec.ts` around lines 116 - 133, The test assumes
searching for the literal string "gpt"/"claude" will match the dynamically
fetched openaiModel/anthropicModel, which can be false; update the test to
derive the search term from the actual model name instead of hardcoding it: call
settingsBrowserPage.searchModels(...) using a substring or provider-unique token
from openaiModel (e.g., openaiModel.slice(0, N) or another
guaranteed-matching fragment) before asserting via
settingsBrowserPage.isModelVisible(openaiModel), and do the same for
anthropicModel, ensuring the search term will always match the target model
returned by the dynamic fixtures.
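Deriving the search term from the fetched model name, as suggested above, can be sketched like this; the prefix length is an arbitrary choice, and the function name is hypothetical:

```typescript
// A prefix of the actual model name is guaranteed to match that model in a
// substring-based search, regardless of which model the provider returns first.
function searchTermFor(modelName: string, length = 4): string {
  return modelName.slice(0, length).toLowerCase();
}
```

For example, `searchModels(searchTermFor(openaiModel))` keeps the visibility assertion valid even when the first OpenAI model is `o1-mini` rather than a `gpt-*` name.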
Summary by CodeRabbit