
Commit de0aaa8

xcpkyzhaobinDouweM authored

Ensure openrouter_reasoning model setting is sent to API (#3545)

Co-authored-by: zhaobin <[email protected]>
Co-authored-by: Douwe Maan <[email protected]>

1 parent b324601 · commit de0aaa8
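
In practice, the fix means that a reasoning configuration passed through `OpenRouterModelSettings` is now forwarded to the OpenRouter API as the `reasoning` field of the request body instead of being left out of the request. A minimal sketch of the resulting usage, mirroring the documentation added in this commit (the model name and effort level are illustrative):

```python
from pydantic_ai import Agent
from pydantic_ai.models.openrouter import OpenRouterModel, OpenRouterModelSettings

# Before this commit, `openrouter_reasoning` was accepted but not sent to the API;
# it is now included in the request's extra body as `reasoning`.
settings = OpenRouterModelSettings(openrouter_reasoning={'effort': 'high'})
model = OpenRouterModel('openai/gpt-5')
agent = Agent(model, model_settings=settings)
```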

6 files changed: +63 -14 lines changed

docs/api/models/openrouter.md

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+# `pydantic_ai.models.openrouter`
+
+## Setup
+
+For details on how to set up authentication with this model, see [model configuration for OpenRouter](../../models/openrouter.md).
+
+::: pydantic_ai.models.openrouter

docs/models/openrouter.md

Lines changed: 21 additions & 0 deletions
@@ -52,3 +52,24 @@ provider=OpenRouterProvider(
 ),
 ...
 ```
+
+## Model Settings
+
+You can customize model behavior using [`OpenRouterModelSettings`][pydantic_ai.models.openrouter.OpenRouterModelSettings]:
+
+```python
+from pydantic_ai import Agent
+from pydantic_ai.models.openrouter import OpenRouterModel, OpenRouterModelSettings
+
+settings = OpenRouterModelSettings(
+    openrouter_reasoning={
+        'effort': 'high',
+    },
+    openrouter_usage={
+        'include': True,
+    }
+)
+model = OpenRouterModel('openai/gpt-5')
+agent = Agent(model, model_settings=settings)
+...
+```

docs/thinking.md

Lines changed: 14 additions & 0 deletions
@@ -144,6 +144,20 @@ agent = Agent(model, model_settings=settings)
 ...
 ```
 
+## OpenRouter
+
+To enable thinking, use the [`OpenRouterModelSettings.openrouter_reasoning`][pydantic_ai.models.openrouter.OpenRouterModelSettings.openrouter_reasoning] [model setting](agents.md#model-run-settings).
+
+```python {title="openrouter_thinking_part.py"}
+from pydantic_ai import Agent
+from pydantic_ai.models.openrouter import OpenRouterModel, OpenRouterModelSettings
+
+model = OpenRouterModel('openai/gpt-5')
+settings = OpenRouterModelSettings(openrouter_reasoning={'effort': 'high'})
+agent = Agent(model, model_settings=settings)
+...
+```
+
 ## Mistral
 
 Thinking is supported by the `magistral` family of models. It does not need to be specifically enabled.

mkdocs.yml

Lines changed: 6 additions & 5 deletions
@@ -143,22 +143,23 @@ nav:
     - api/format_prompt.md
     - api/direct.md
     - api/ext.md
-    - api/models/base.md
-    - api/models/openai.md
     - api/models/anthropic.md
+    - api/models/base.md
     - api/models/bedrock.md
     - api/models/cohere.md
+    - api/models/fallback.md
+    - api/models/function.md
     - api/models/google.md
     - api/models/groq.md
     - api/models/huggingface.md
     - api/models/instrumented.md
+    - api/models/mcp-sampling.md
     - api/models/mistral.md
+    - api/models/openai.md
+    - api/models/openrouter.md
     - api/models/outlines.md
     - api/models/test.md
-    - api/models/function.md
-    - api/models/fallback.md
     - api/models/wrapper.md
-    - api/models/mcp-sampling.md
     - api/profiles.md
     - api/providers.md
     - api/retries.md

pydantic_ai_slim/pydantic_ai/models/openrouter.py

Lines changed: 10 additions & 8 deletions
@@ -125,14 +125,14 @@ class _OpenRouterMaxPrice(TypedDict, total=False):
 See [the OpenRouter API](https://openrouter.ai/docs/api-reference/list-available-providers) for a full list.
 """
 
-_Transforms = Literal['middle-out']
+OpenRouterTransforms = Literal['middle-out']
 """Available messages transforms for OpenRouter models with limited token windows.
 
 Currently only supports 'middle-out', but is expected to grow in the future.
 """
 
 
-class _OpenRouterProviderConfig(TypedDict, total=False):
+class OpenRouterProviderConfig(TypedDict, total=False):
     """Represents the 'Provider' object from the OpenRouter API."""
 
     order: list[OpenRouterProviderName]
@@ -166,7 +166,7 @@ class _OpenRouterProviderConfig(TypedDict, total=False):
     """The maximum pricing you want to pay for this request. [See details](https://openrouter.ai/docs/features/provider-routing#max-price)"""
 
 
-class _OpenRouterReasoning(TypedDict, total=False):
+class OpenRouterReasoning(TypedDict, total=False):
     """Configuration for reasoning tokens in OpenRouter requests.
 
     Reasoning tokens allow models to show their step-by-step thinking process.
@@ -187,7 +187,7 @@ class _OpenRouterReasoning(TypedDict, total=False):
     """Whether to enable reasoning with default parameters. Default is inferred from effort or max_tokens."""
 
 
-class _OpenRouterUsageConfig(TypedDict, total=False):
+class OpenRouterUsageConfig(TypedDict, total=False):
     """Configuration for OpenRouter usage."""
 
     include: bool
@@ -204,7 +204,7 @@ class OpenRouterModelSettings(ModelSettings, total=False):
    These models will be tried, in order, if the main model returns an error. [See details](https://openrouter.ai/docs/features/model-routing#the-models-parameter)
    """
 
-    openrouter_provider: _OpenRouterProviderConfig
+    openrouter_provider: OpenRouterProviderConfig
    """OpenRouter routes requests to the best available providers for your model. By default, requests are load balanced across the top providers to maximize uptime.
 
    You can customize how your requests are routed using the provider object. [See more](https://openrouter.ai/docs/features/provider-routing)"""
@@ -214,19 +214,19 @@ class OpenRouterModelSettings(ModelSettings, total=False):
 
    Create and manage presets through the OpenRouter web application to control provider routing, model selection, system prompts, and other parameters, then reference them in OpenRouter API requests. [See more](https://openrouter.ai/docs/features/presets)"""
 
-    openrouter_transforms: list[_Transforms]
+    openrouter_transforms: list[OpenRouterTransforms]
    """To help with prompts that exceed the maximum context size of a model.
 
    Transforms work by removing or truncating messages from the middle of the prompt, until the prompt fits within the model's context window. [See more](https://openrouter.ai/docs/features/message-transforms)
    """
 
-    openrouter_reasoning: _OpenRouterReasoning
+    openrouter_reasoning: OpenRouterReasoning
    """To control the reasoning tokens in the request.
 
    The reasoning config object consolidates settings for controlling reasoning strength across different models. [See more](https://openrouter.ai/docs/use-cases/reasoning-tokens)
    """
 
-    openrouter_usage: _OpenRouterUsageConfig
+    openrouter_usage: OpenRouterUsageConfig
    """To control the usage of the model.
 
    The usage config object consolidates settings for enabling detailed usage information. [See more](https://openrouter.ai/docs/use-cases/usage-accounting)
@@ -452,6 +452,8 @@ def _openrouter_settings_to_openai_settings(model_settings: OpenRouterModelSetti
         extra_body['preset'] = preset
     if transforms := model_settings.pop('openrouter_transforms', None):
         extra_body['transforms'] = transforms
+    if reasoning := model_settings.pop('openrouter_reasoning', None):
+        extra_body['reasoning'] = reasoning
     if usage := model_settings.pop('openrouter_usage', None):
         extra_body['usage'] = usage
 
tests/models/test_openrouter.py

Lines changed: 5 additions & 1 deletion
@@ -101,7 +101,11 @@ async def test_openrouter_stream_with_native_options(allow_model_requests: None,
 
 async def test_openrouter_stream_with_reasoning(allow_model_requests: None, openrouter_api_key: str) -> None:
     provider = OpenRouterProvider(api_key=openrouter_api_key)
-    model = OpenRouterModel('openai/o3', provider=provider)
+    model = OpenRouterModel(
+        'openai/o3',
+        provider=provider,
+        settings=OpenRouterModelSettings(openrouter_reasoning={'effort': 'high'}),
+    )
 
     async with model_request_stream(model, [ModelRequest.user_text_prompt('Who are you')]) as stream:
         chunks = [chunk async for chunk in stream]
