Add tool_choice setting
#3611
base: main
Conversation
Clarify that agents can be produced by a factory function if preferred.
- pending: centralize tests?
Haven't yet reviewed the code with #1820 in mind, so putting this back to draft.
Resolve conflict in tests/models/test_anthropic.py by keeping both:
- Container tests from upstream (container_id feature)
- Tool choice tests from our branch
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <[email protected]>
…rarily across the model run
The docstring examples in settings.py use RunContext without importing it, which fails ruff linting. Added `test="skip" lint="skip"` to skip these examples in the test suite.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <[email protected]>
If both per-tool `prepare` and agent-wide `prepare_tools` are used, the per-tool `prepare` is applied first to each tool, and then `prepare_tools` is called with the resulting list of tool definitions.
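To make that ordering concrete, here is a minimal sketch assuming the documented `prepare`/`prepare_tools` hooks (the function names and the `[beta]` tag are illustrative): the per-tool hook runs first and may drop its tool, then the agent-wide hook receives whatever list remains.

```python
from __future__ import annotations

from dataclasses import replace

from pydantic_ai import Agent, RunContext
from pydantic_ai.tools import ToolDefinition


async def only_if_enabled(ctx: RunContext[bool], tool_def: ToolDefinition) -> ToolDefinition | None:
    # Per-tool `prepare`: returning None removes this tool for the current step.
    return tool_def if ctx.deps else None


async def tag_descriptions(
    ctx: RunContext[bool], tool_defs: list[ToolDefinition]
) -> list[ToolDefinition] | None:
    # Agent-wide `prepare_tools`: called with the list produced by the per-tool hooks.
    return [replace(t, description=f'[beta] {t.description}') for t in tool_defs]


agent = Agent('openai:gpt-4o', deps_type=bool, prepare_tools=tag_descriptions)


@agent.tool(prepare=only_if_enabled)
def search(ctx: RunContext[bool], query: str) -> str:
    return f'results for {query}'
```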
## Forcing Tool Use on First Request {#force-first-request}
Please split this feature out into a separate PR; we want to keep it in mind as we implement model_settings['tool_choice'], but they're separate features
A common pattern is to force the model to use specific tools on the first request before allowing it to respond freely. This is particularly useful for RAG (Retrieval Augmented Generation) patterns where you want the model to search or retrieve information before answering.
Use the `force_first_request` parameter on `@agent.tool` or `@agent.tool_plain` to force the model to use specific tools on the first model request of each [`run()`][pydantic_ai.Agent.run]:
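The code example that followed this sentence in the diff did not survive extraction; a rough sketch of the usage being described might look like the following. `force_first_request` is the parameter proposed by this PR, not an existing pydantic_ai option, and its fate is discussed in the review comment below.

```python
from pydantic_ai import Agent, RunContext

agent = Agent('openai:gpt-4o')


# `force_first_request` is the PR's proposed parameter; name and semantics
# are still under review.
@agent.tool(force_first_request=True)
def search_docs(ctx: RunContext[None], query: str) -> str:
    """Search the documentation before answering."""
    return f'results for {query!r}'
```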
We can discuss this in the followup PR for this feature, but I'll note that this does not follow my proposal in #2799 (comment):
> I'm thinking we could have `tool_choice` on `ModelSettings`, and then add a hook to let the entire `ModelSettings` be customized on a per-step basis. But for the very common case of "require tool X to be used before generating output", it'd be nice to have a dedicated argument on `agent.run` that doesn't require writing a custom callback that checks `ctx.run_step` etc. So I was thinking of also adding an `agent.run(..., require_tool_use=...)` that can take `True`, a `str` tool name, a `Sequence[str]` of tool names, or (default) `False`. Would that be sufficient for your use case?
The current approach is too inflexible and not centralized enough.
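For comparison, the quoted `require_tool_use` proposal would look roughly like this at the call site (purely hypothetical; none of these arguments exist today, and the tool names are placeholders reused from the sketch above):

```python
# Any function tool must be used before the model may produce output:
result = agent.run_sync('Summarize the Python docs.', require_tool_use=True)

# A specific tool, or any one of several named tools, must be used first:
result = agent.run_sync('Summarize the Python docs.', require_tool_use='search_docs')
result = agent.run_sync('Summarize the Python docs.', require_tool_use=['search_docs', 'fetch_page'])
```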
## Dynamic Tool Choice {#dynamic-tool-choice}
For more complex scenarios, you can provide a callable for `tool_choice` in [`model_settings`][pydantic_ai.settings.ModelSettings] that dynamically determines which tools the model can use at each step:
We should link to the model settings docs in agents.md (I believe)

    result = agent.run_sync(
        'Find information about Python and summarize it.',
        model_settings={'tool_choice': my_tool_choice},
    )
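The `my_tool_choice` callable referenced in this hunk is not shown; under the PR's proposed API it might look roughly like the sketch below (the signature is an assumption, and note the serializability objection in the next review comment):

```python
# Hypothetical callable shape; the (ctx, tool_defs) signature is assumed,
# not taken from the PR.
def my_tool_choice(ctx, tool_defs):
    if ctx.run_step == 1:
        return ['search']  # force the search tool on the first step
    return 'auto'  # afterwards, let the model respond freely
```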
Model settings need to be serializable, so this won't work.
Please follow my suggestion from the issue to add a new hook to let the entire model settings be "prepared"/modified ahead of each run step, not just this specific field.
- `None` — Use default PydanticAI behavior (auto/required based on output type)
- `'auto'` — Model decides whether to use tools
- `'required'` — Model must use a tool
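For the plain string values, usage under the proposed setting would be a single entry in `model_settings` (a sketch based on the PR description; the setting is not merged):

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')

result = agent.run_sync(
    'Look this up before answering.',
    model_settings={'tool_choice': 'required'},  # proposed by this PR
)
```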
Since the function/output distinction is important here, we should clarify that this means the model must use a function tool

        type='function',
        function={'name': resolved.tool_names[0]},
    )
    warnings.warn(
Same as above, no warnings please

        return 'auto'

    if resolved.mode in ('auto', 'required'):
        return resolved.mode
See above; we do actually need to handle these in a special way, not just pass them on

    if resolved.mode == 'required':
        if not openai_profile.openai_supports_tool_choice_required:
            warnings.warn(
- No warning please
- We should be sending a list of function tool names, so the profile setting may not be relevant anyway

    @dataclass
    class _OpenAIToolChoiceResult:
I think having 2 classes for intermediate representations of tool choice is too much; I'd rather have the logic inline like we do in the other model classes.
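For context, inlining the mapping for Chat Completions might look roughly like this sketch; the function and parameter names are hypothetical and not taken from the PR's diff:

```python
def _map_tool_choice(mode: str, tool_names: list[str]) -> str | dict:
    # Single named tool: Chat Completions accepts a named function choice.
    if len(tool_names) == 1:
        return {'type': 'function', 'function': {'name': tool_names[0]}}
    if tool_names:
        # How to express "one of several named tools" is exactly the open
        # review question about sending a list of function tool names.
        return 'required'
    return mode  # 'none', 'auto', or 'required'
```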
* `'none'`: Model cannot use function tools (output tools remain available if needed)
* `list[str]`: Model must use one of the specified function tools (validated against registered tools)

**Dynamic callable:**
As mentioned, we can't support this in this way
Closes #2799
Adds `tool_choice` to `ModelSettings`, letting users control how the model interacts with function tools.

Currently Pydantic AI internally decides whether to use `tool_choice='auto'` or `'required'` based on the output configuration, but users have no way to override this. The workaround was using `extra_body={'tool_choice': 'none'}`, which is provider-specific and doesn't work everywhere.

This PR allows the user to set `tool_choice` to `None`, `'auto'`, `'required'`, `'none'`, or a `list[str]` of function tool names.

One important distinction: this only affects function tools (the ones you register on the agent), not output tools (used internally for structured output). So if you have an agent with `output_type=SomeModel` and you set `tool_choice='none'`, the output tool stays available - you'll just get a warning about it.

Implementation is spread across all model providers since each has its own API format for `tool_choice`.
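A concrete illustration of that distinction, using the proposed setting (a sketch only; `CityInfo` is a made-up output model):

```python
from pydantic import BaseModel

from pydantic_ai import Agent


class CityInfo(BaseModel):
    name: str
    country: str


agent = Agent('openai:gpt-4o', output_type=CityInfo)
result = agent.run_sync(
    'Tell me about Paris.',
    # Function tools are disabled, but the structured-output tool is kept
    # (with a warning), as described above. Proposed in this PR, not merged.
    model_settings={'tool_choice': 'none'},
)
```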
Added a `resolve_tool_choice` utility that handles validation (checking tool names exist, warning about conflicts with output tools) and returns a normalized representation that each provider then maps to their specific format.
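As a reading aid, the normalized representation described here might look something like the sketch below; the field and parameter names are assumptions based on the description, not the PR's actual code.

```python
from dataclasses import dataclass, field


@dataclass
class ResolvedToolChoice:
    mode: str  # 'auto', 'required', or 'none'
    tool_names: list[str] = field(default_factory=list)  # set when specific tools are forced


def resolve_tool_choice(tool_choice, function_tool_names: list[str]) -> ResolvedToolChoice:
    if tool_choice is None or tool_choice == 'auto':
        return ResolvedToolChoice('auto')
    if tool_choice in ('required', 'none'):
        return ResolvedToolChoice(tool_choice)
    # A list of tool names: validate against the registered function tools.
    unknown = [name for name in tool_choice if name not in function_tool_names]
    if unknown:
        raise ValueError(f'Unknown tool name(s) in tool_choice: {unknown}')
    return ResolvedToolChoice('required', list(tool_choice))
```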
Bedrock is a bit of a special case - it doesn't support `'none'` at all, so we fall back to `'auto'` with a warning. Anthropic has a constraint where `'required'` and specific tool selection don't work with thinking/extended thinking enabled.
TODO