
Conversation

@subtleGradient
Contributor

Summary

Fixes an issue where model options passed as the second argument to the provider callable were not being forwarded to the Responses API request.

Problem

When users pass reasoning options:

const model = openrouter('google/gemini-3-flash-preview', {
  reasoning: {
    enabled: true,
    effort: 'medium',
  },
});

The reasoning config was stored in this.settings.modelOptions but never forwarded to the API request. The model would not return reasoning output.
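To make the gap concrete, here is a sketch of the request body before and after the fix. The exact shape of the Responses API request is an assumption here; only the reasoning field names come from the PR description:

```typescript
// Sketch only: shows which fields were dropped vs. forwarded.
const modelOptions = { reasoning: { enabled: true, effort: 'medium' } };

// Before the fix: modelOptions sat in settings but never reached the request.
const bodyBefore: Record<string, unknown> = {
  model: 'google/gemini-3-flash-preview',
};

// After the fix: reasoning is forwarded alongside the usual fields.
const bodyAfter: Record<string, unknown> = {
  model: 'google/gemini-3-flash-preview',
  ...modelOptions,
};
```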

Solution

In openrouter-chat-language-model.ts, extract model options from settings and forward them to the request params in both doGenerate() and doStream():

  • reasoning (enabled, effort, maxTokens, summary)
  • provider (routing config)
  • models (fallback model IDs)
  • transforms (message transforms)
  • plugins (OpenRouter plugins)
  • route (routing strategy)
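A minimal sketch of the forwarding logic (a later commit extracts it into a buildModelOptionsParams helper; the option types below are assumptions, not the provider's actual definitions):

```typescript
// Hypothetical types; the real provider defines its own option shapes.
type ModelOptions = {
  reasoning?: { enabled?: boolean; effort?: 'low' | 'medium' | 'high'; maxTokens?: number; summary?: string };
  provider?: Record<string, unknown>;
  models?: string[];
  transforms?: string[];
  plugins?: unknown[];
  route?: string;
};

// Copy each option into the request params only when it was actually set,
// so unset options do not appear in the request body.
function buildModelOptionsParams(options: ModelOptions | undefined): Record<string, unknown> {
  if (!options) return {};
  const params: Record<string, unknown> = {};
  if (options.reasoning !== undefined) params.reasoning = options.reasoning;
  if (options.provider !== undefined) params.provider = options.provider;
  if (options.models !== undefined) params.models = options.models;
  if (options.transforms !== undefined) params.transforms = options.transforms;
  if (options.plugins !== undefined) params.plugins = options.plugins;
  if (options.route !== undefined) params.route = options.route;
  return params;
}
```

Both doGenerate() and doStream() can then spread the result into their request params.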

Testing

Added 6 new tests covering model options forwarding scenarios:

  • Reasoning options in doGenerate
  • Reasoning options in doStream
  • Provider routing options
  • Fallback models
  • Transforms
  • No model options when not provided

All 134 tests pass ✅
Typecheck clean ✅

Related

  • gap-916-reasoning-options-not-forwarded

Copilot AI review requested due to automatic review settings (January 7, 2026 18:40)
Contributor

Copilot AI left a comment


Pull request overview

This PR fixes an issue where model options (reasoning, provider, models, transforms, plugins, route) passed as the second argument to the provider callable were not being forwarded to OpenRouter API requests. The fix extracts these options from settings.modelOptions and includes them in the request parameters for both streaming and non-streaming calls.

Key changes:

  • Forward model options including reasoning configuration to API requests
  • Add duplicate forwarding logic in both doGenerate() and doStream() methods
  • Include 6 new tests covering reasoning, provider routing, fallback models, and transforms

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

Reviewed files:

  • src/chat/openrouter-chat-language-model.ts: Adds model options extraction and forwarding logic (lines 85-86, 103-121, 309-310, 327-345) to include reasoning, provider, models, transforms, plugins, and route in API requests
  • src/tests/chat/openrouter-chat-language-model.test.ts: Adds mock SDK setup and 6 tests verifying that model options are correctly included in the request body for reasoning, provider, models, and transforms


@subtleGradient
Contributor Author

subtleGradient commented Jan 7, 2026

should fix this: #307 (comment)
cc @cpakken @idriss

@subtleGradient subtleGradient self-assigned this Jan 7, 2026
@ldriss

ldriss commented Jan 7, 2026

@subtleGradient Thank you so much

Fixes gap-916. Model options passed as second argument to provider callable
are now correctly forwarded to the Responses API request.

- Forward reasoning config (enabled, effort, maxTokens, summary)
- Forward provider routing options (order, allowFallbacks, etc.)
- Forward fallback models array
- Forward transforms array
- Forward plugins array
- Forward route option

Added comprehensive unit tests to verify model options forwarding.
…-time options

Addresses PR feedback:
- Extract model options forwarding into buildModelOptionsParams helper
- Merge call-time providerOptions.openrouter with model-level options
- Call-time options override model-level (per design spec)
- Add tests for plugins, route, and call-time override behavior
Fixes gaps discovered via hyperslice analysis:
- gap-917: extraBody now spread into request params (allows arbitrary
  additional fields to be passed to the API)
- gap-918: Custom fetch now passed to OpenRouter SDK via HTTPClient
  (enables custom logging, proxies, and testing with mock fetch)

Note: gap-919 (usage.include) was investigated but the Responses API
always returns usage information by default - the option is not
applicable. Users needing Chat Completions API-specific options can
use extraBody.

Added test coverage for:
- extraBody being spread into request params (doGenerate/doStream)
- Explicit params override extraBody fields
- Custom fetch creates HTTPClient and passes to SDK
- No HTTPClient when no custom fetch provided
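The precedence rules described above (call-time options override model-level options, and explicit params override extraBody fields) can be sketched as a single spread chain. This is a simplification with a hypothetical function name; the real code builds params per method:

```typescript
// Hypothetical helper; later spreads win, so precedence is
// extraBody < model-level options < call-time options < explicit params.
function buildRequestParams(
  base: Record<string, unknown>,          // explicit params built by the model (model, messages, ...)
  modelOptions: Record<string, unknown>,  // options passed when the provider callable was invoked
  callOptions: Record<string, unknown>,   // providerOptions.openrouter at call time
  extraBody: Record<string, unknown> = {},
): Record<string, unknown> {
  return {
    ...extraBody,     // arbitrary extra fields, lowest precedence (gap-917)
    ...modelOptions,  // model-level options
    ...callOptions,   // call-time options override model-level (per design spec)
    ...base,          // explicit params always win
  };
}
```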
@subtleGradient subtleGradient force-pushed the fix/forward-model-options-to-api branch from c1c04ba to 527fa10 Compare January 7, 2026 19:13
@subtleGradient subtleGradient changed the base branch from feature/pr-snapshots to dev-v6 January 7, 2026 19:13
@pkg-pr-new

pkg-pr-new bot commented Jan 7, 2026

Open in StackBlitz

npm i https://pkg.pr.new/OpenRouterTeam/ai-sdk-provider/@openrouter/ai-sdk-provider@321

commit: b025a83

@subtleGradient
Contributor Author

cc @cpakken @idriss

try #321 (comment) snapshot and lemme know if it solves your problem or no?

@cpakken

cpakken commented Jan 13, 2026

cc @cpakken @idriss

try #321 (comment) snapshot and lemme know if it solves your problem or no?

No, it works with ai v6 generateText, but when used with streamText it produces Zod errors

https://github.com/cpakken/openrouter-v6-issue

here is a minimal reproduction.

My use case is gemini-3-flash-preview with streamed object output and image input. With the latest openrouter provider v1.5.4 and ai v6, it fails to emit onFinish and fails to resolve the stream when reasoning is turned on. Hoping this new openrouter provider with ai v6 will fix the issue

