Conversation
Walkthrough

Adds OpenRouter as a supported LLM provider across the LLM hint flow: updates provider types, configures ChatOpenAI for OpenRouter (baseURL and API key handling), adjusts model/temperature resolution, and adds corresponding unit, e2e, and seeding changes.

Changes
```mermaid
sequenceDiagram
    rect rgba(60,130,200,0.5)
        Client->>API_Route: POST /api/llm-hint (provider: "openrouter", model, temp)
    end
    rect rgba(120,200,80,0.5)
        API_Route->>GetChatModel: resolve provider & api key (env or account)
        GetChatModel->>Env: read OPENROUTER_API_KEY / account-scoped key
        GetChatModel-->>API_Route: ChatOpenAI client configured (baseURL=https://openrouter.ai/api/v1)
    end
    rect rgba(200,90,140,0.5)
        API_Route->>OpenRouter_API: send request via ChatOpenAI client
        OpenRouter_API-->>API_Route: LLM response
        API_Route-->>Client: LLM hint response
    end
```
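The resolution step in the diagram can be sketched as a small pure function. This is a hypothetical illustration of the flow, not the actual `route.ts` code: the function and type names (`resolveOpenRouterConfig`, `LlmHintConfig`) and the account-scoped env-var naming scheme are assumptions for this sketch.

```typescript
// Illustrative sketch of the GetChatModel resolution step above.
// Names and the account-key env convention are assumptions, not the real route.
interface LlmHintConfig {
  model: string;
  apiKey: string | undefined;
  baseURL: string;
}

function resolveOpenRouterConfig(model?: string, account?: string): LlmHintConfig {
  // Account-scoped keys take precedence over the shared env key (assumed convention).
  const keyEnvName = account ? `OPENROUTER_API_KEY_${account}` : "OPENROUTER_API_KEY";
  return {
    model: model ?? "openai/gpt-4o-mini", // default model noted in the review comments
    apiKey: process.env[keyEnvName],
    baseURL: "https://openrouter.ai/api/v1"
  };
}
```

The resulting object maps directly onto the `ChatOpenAI` constructor arguments (`model`, `apiKey`, `configuration.baseURL`) shown later in the review.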
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests/unit/llm-hint.test.ts (1)
717-781: Add one test for OpenRouter default model fallback.

Please add a case where provider: "openrouter" and model is omitted, then assert that model: "openai/gpt-4o-mini" is used. That protects the new defaulting logic from regressions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/llm-hint.test.ts` around lines 717 - 781, Add a new test inside the "OpenRouter Provider" describe block that sets mockTestResult.extra_data.llm!.provider = "openrouter" but leaves model undefined, then POST to the API via the existing POST(request) helper and assert response.status === 200 and that mockChatOpenAI was called with an objectContaining({ model: "openai/gpt-4o-mini", apiKey: process.env.OPENROUTER_API_KEY (or account-specific env if you set mockTestResult.extra_data.llm!.account), configuration: { baseURL: "https://openrouter.ai/api/v1" } }); use the same pattern as the other OpenRouter tests and refer to POST, mockTestResult, and mockChatOpenAI so the test verifies the default model fallback.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/api/llm-hint/route.ts`:
- Around line 188-200: The temperature fallback currently uses the || operator
which treats 0 as falsy and overrides explicit 0; update each provider branch
("openai", "azure", "anthropic", "openrouter") to use the nullish coalescing
operator (??) for the temperature assignment (e.g., replace temperature || 0.85
with temperature ?? 0.85) in the constructors (e.g., ChatOpenAI / AzureOpenAI /
ChatAnthropic or whichever classes are used in those branches) so an explicit 0
is preserved.
---
ℹ️ Review info
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
app/api/llm-hint/route.ts
scripts/DatabaseSeedingUtils.ts
tests/e2e/llm-hint.test.tsx
tests/unit/llm-hint.test.ts
utils/supabase/DatabaseTypes.d.ts
🧹 Nitpick comments (1)
app/api/llm-hint/route.ts (1)
188-200: OpenRouter integration is correct; optional attribution headers can enhance tracking.

The implementation correctly uses ChatOpenAI with a baseURL override, which is the standard approach for OpenAI-compatible APIs. API key handling and error messaging match established patterns.

OpenRouter supports optional HTTP-Referer and X-Title headers for attribution and analytics. These are entirely optional and won't affect functionality, but can improve tracking in the OpenRouter dashboard if desired.

🔧 Optional: Add attribution headers
```diff
 return new ChatOpenAI({
   model,
   apiKey: process.env[key_env_name],
-  configuration: { baseURL: "https://openrouter.ai/api/v1" },
+  configuration: {
+    baseURL: "https://openrouter.ai/api/v1",
+    defaultHeaders: {
+      "HTTP-Referer": process.env.OPENROUTER_REFERER || "https://pawtograder.com",
+      "X-Title": "Pawtograder Feedbot"
+    }
+  },
   temperature: temperature ?? 0.85,
   maxTokens: maxTokens,
   maxRetries: maxRetries || 2
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/api/llm-hint/route.ts` around lines 188 - 200, add optional attribution headers to the OpenRouter client construction: when provider === "openrouter" and creating the ChatOpenAI instance, extend the configuration object passed to ChatOpenAI (currently configuration: { baseURL: "https://openrouter.ai/api/v1" }) to include a headers map with optional "HTTP-Referer" and "X-Title" values (e.g., taken from environment variables or request context); keep the key_env_name usage the same and ensure these headers are only added when present so behavior is unchanged if they are not set.
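The "only added when present" requirement in the prompt can be sketched as a small helper that builds the header map conditionally. This is an illustrative sketch, not the route's actual code; the env variable names (`OPENROUTER_REFERER`, `OPENROUTER_TITLE`) are assumptions:

```typescript
// Sketch of conditionally attaching OpenRouter attribution headers.
// Env variable names here are assumptions, not existing project config.
function buildAttributionHeaders(env: Record<string, string | undefined>): Record<string, string> {
  const headers: Record<string, string> = {};
  if (env.OPENROUTER_REFERER) headers["HTTP-Referer"] = env.OPENROUTER_REFERER;
  if (env.OPENROUTER_TITLE) headers["X-Title"] = env.OPENROUTER_TITLE;
  return headers; // empty object => behavior unchanged when nothing is set
}
```

The result would be spread into `configuration.defaultHeaders`, so a deployment with neither variable set sends no extra headers at all.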
Summary by CodeRabbit

New Features
- OpenRouter is now a supported LLM provider for the hint flow.

Behavior
- Provider types and model/temperature resolution updated to cover OpenRouter (ChatOpenAI with baseURL and API key handling).

Tests
- Unit and e2e tests plus database seeding updated for the new provider.