@dsfaccini commented Dec 2, 2025

Closes #2799

Adds tool_choice to ModelSettings, letting users control how the model interacts with function tools.

Currently, Pydantic AI internally decides whether to use tool_choice='auto' or 'required' based on the output configuration, but users have no way to override this. The workaround was using extra_body={'tool_choice': 'none'}, which is provider-specific and doesn't work everywhere.

This PR allows the user to set tool_choice to:

  • 'auto' - model decides whether to call tools
  • 'required' - model must call a tool
  • 'none' - model can't use function tools
  • ['tool_a', 'tool_b'] - model must use one of these specific tools

One important distinction: this only affects function tools (the ones you register on the agent), not output tools (used internally for structured output). So if you have an agent with output_type=SomeModel and you set tool_choice='none', the output tool stays available - you'll just get a warning about it.
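
For example, a minimal sketch of the new setting in use (the agent, tool, and model name here are illustrative; `tool_choice` is the `model_settings` key this PR adds):

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')

@agent.tool_plain
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f'Sunny in {city}'

# Force the model to answer without calling any function tools.
result = agent.run_sync(
    'What is the capital of France?',
    model_settings={'tool_choice': 'none'},
)
print(result.output)
```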

Implementation is spread across all model providers since each has its own API format for tool_choice. Added a resolve_tool_choice utility that handles validation (checking that tool names exist, warning about conflicts with output tools) and returns a normalized representation that each provider then maps to its specific format.
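
A rough sketch of the shape such a resolver could take (the `ResolvedToolChoice` name, its fields, and the `'named'` mode are illustrative assumptions, not the PR's actual code):

```python
import warnings
from dataclasses import dataclass, field
from typing import Literal

@dataclass
class ResolvedToolChoice:
    # Assumed intermediate representation; each provider maps it to its own format.
    mode: Literal['auto', 'required', 'none', 'named']
    tool_names: list[str] = field(default_factory=list)

def resolve_tool_choice(
    tool_choice: str | list[str] | None,
    function_tool_names: set[str],
    has_output_tools: bool,
) -> ResolvedToolChoice | None:
    if tool_choice is None:
        return None  # keep Pydantic AI's internal auto/required decision
    if isinstance(tool_choice, list):
        unknown = [n for n in tool_choice if n not in function_tool_names]
        if unknown:
            raise ValueError(f'Unknown tool name(s) in tool_choice: {unknown}')
        return ResolvedToolChoice('named', list(tool_choice))
    if tool_choice == 'none' and has_output_tools:
        warnings.warn("tool_choice='none' only disables function tools; output tools remain available")
    return ResolvedToolChoice(tool_choice)
```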

Bedrock is a bit of a special case - it doesn't support 'none' at all, so we fall back to 'auto' with a warning. Anthropic has a constraint where 'required' and specific tool selection don't work with thinking/extended thinking enabled.
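
For example, a Bedrock mapping with that fallback might look like this (a sketch only; the `mode`/`tool_names` inputs mirror the resolver sketch above, and the dict shapes follow Bedrock's Converse `toolChoice` format):

```python
import warnings

def map_tool_choice_for_bedrock(mode: str, tool_names: list[str]) -> dict:
    """Map the normalized tool choice onto Bedrock Converse's toolChoice shape."""
    if mode == 'none':
        # Bedrock has no 'none'; fall back to 'auto' with a warning, as described above.
        warnings.warn("Bedrock does not support tool_choice='none'; falling back to 'auto'")
        return {'auto': {}}
    if mode == 'required':
        return {'any': {}}
    if mode == 'named':
        # Converse's `tool` variant takes a single tool name.
        return {'tool': {'name': tool_names[0]}}
    return {'auto': {}}
```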

TODO

  • document this somewhere in the docs

@dsfaccini marked this pull request as ready for review December 9, 2025 01:53
@dsfaccini marked this pull request as draft December 9, 2025 02:44
@dsfaccini
Collaborator Author

haven't reviewed the code with #1820 in mind, so putting this back to draft

dsfaccini and others added 3 commits December 9, 2025 09:34
Resolve conflict in tests/models/test_anthropic.py by keeping both:
- Container tests from upstream (container_id feature)
- Tool choice tests from our branch

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@dsfaccini marked this pull request as ready for review December 9, 2025 18:06
dsfaccini and others added 2 commits December 9, 2025 13:19
The docstring examples in settings.py use RunContext without importing it,
which fails ruff linting. Added test="skip" lint="skip" to skip these
examples in the test suite.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>

If both per-tool `prepare` and agent-wide `prepare_tools` are used, the per-tool `prepare` is applied first to each tool, and then `prepare_tools` is called with the resulting list of tool definitions.
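
A minimal sketch of that ordering (tool and function names are illustrative; the signatures follow Pydantic AI's tool-preparation API):

```python
from pydantic_ai import Agent, RunContext
from pydantic_ai.tools import ToolDefinition

async def hide_unless_admin(ctx: RunContext[bool], tool_def: ToolDefinition) -> ToolDefinition | None:
    # Per-tool `prepare`: runs first; returning None drops the tool for this step.
    return tool_def if ctx.deps else None

async def log_tools(ctx: RunContext[bool], tool_defs: list[ToolDefinition]) -> list[ToolDefinition]:
    # Agent-wide `prepare_tools`: runs second, over the already-prepared definitions.
    print([t.name for t in tool_defs])
    return tool_defs

agent = Agent('openai:gpt-4o', deps_type=bool, prepare_tools=log_tools)

@agent.tool(prepare=hide_unless_admin)
def delete_user(ctx: RunContext[bool], user_id: int) -> str:
    return f'deleted user {user_id}'
```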

## Forcing Tool Use on First Request {#force-first-request}

> **Collaborator:** Please split this feature out into a separate PR; we want to keep it in mind as we implement `model_settings['tool_choice']`, but they're separate features.


A common pattern is to force the model to use specific tools on the first request before allowing it to respond freely. This is particularly useful for RAG (Retrieval Augmented Generation) patterns where you want the model to search or retrieve information before answering.

Use the `force_first_request` parameter on `@agent.tool` or `@agent.tool_plain` to force the model to use specific tools on the first model request of each [`run()`][pydantic_ai.Agent.run]:
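
A sketch of what that might look like (the original example is cut off by the review anchor here; `force_first_request` is the parameter this PR proposes, and the search tool is illustrative):

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')

# Proposed parameter from this PR; forced on the first model request of each run.
@agent.tool_plain(force_first_request=True)
def search_docs(query: str) -> str:
    """Search the knowledge base before answering."""
    return f'results for {query!r}'
```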

> **Collaborator:** We can discuss this in the followup PR for this feature, but I'll note that this does not follow my proposal in #2799 (comment):
>
> > I'm thinking we could have `tool_choice` on `ModelSettings`, and then add a hook to let the entire `ModelSettings` be customized on a per-step basis. But for the very common case of "require tool X to be used before generating output", it'd be nice to have a dedicated argument on `agent.run` that doesn't require writing a custom callback that checks `ctx.run_step` etc. So I was thinking of also adding a `agent.run(..., require_tool_use=...)` that can take `True`, a `str` tool name, a `Sequence[str]` of tool names, or (default) `False`. Would that be sufficient for your use case?
>
> The current approach is too inflexible and not centralized enough.
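
For reference, the call shape described in that quote might look like this (hypothetical; `require_tool_use` is only proposed, not implemented):

```python
import asyncio

from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')

async def main() -> None:
    # `require_tool_use` is hypothetical, quoted from the proposal: it would accept
    # True, a tool name, a sequence of tool names, or False (the default).
    result = await agent.run('Summarize the Python docs.', require_tool_use='search_docs')
    print(result.output)

asyncio.run(main())
```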


## Dynamic Tool Choice {#dynamic-tool-choice}

For more complex scenarios, you can provide a callable for `tool_choice` in [`model_settings`][pydantic_ai.settings.ModelSettings] that dynamically determines which tools the model can use at each step:
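
A sketch of such a callable (the signature is assumed from this PR's docs; as the review below notes, model settings must stay serializable, so this design is unlikely to land as-is):

```python
from pydantic_ai import RunContext

def my_tool_choice(ctx: RunContext) -> str | list[str]:
    # Require a search on the first step, then let the model decide.
    if ctx.run_step == 1:
        return ['search']
    return 'auto'
```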

> **Collaborator:** We should link to the model settings docs in agents.md (I believe).


```python
result = agent.run_sync(
    'Find information about Python and summarize it.',
    model_settings={'tool_choice': my_tool_choice},
)
```

> **Collaborator:** Model settings need to be serializable, so this won't work.
>
> Please follow my suggestion from the issue to add a new hook to let the entire model settings be "prepared"/modified ahead of each run step, not just this specific field.


- `None` — Use default PydanticAI behavior (auto/required based on output type)
- `'auto'` — Model decides whether to use tools
- `'required'` — Model must use a tool

> **Collaborator:** Since the function/output distinction is important here, we should clarify that this means the model must use a function tool.

```python
    type='function',
    function={'name': resolved.tool_names[0]},
)
warnings.warn(
```

> **Collaborator:** Same as above, no warnings please.

```python
    return 'auto'

if resolved.mode in ('auto', 'required'):
    return resolved.mode
```

> **Collaborator:** See above; we do actually need to handle these in a special way, not just pass them on.


```python
if resolved.mode == 'required':
    if not openai_profile.openai_supports_tool_choice_required:
        warnings.warn(
```

> **Collaborator:**
>
> - No warning please
> - We should be sending a list of function tool names, so the profile setting may not be relevant anyway



```python
@dataclass
class _OpenAIToolChoiceResult:
```

> **Collaborator:** I think having 2 classes for intermediate representations of tool choice is too much; I'd rather have the logic inline like we do in the other model classes.

* `'none'`: Model cannot use function tools (output tools remain available if needed)
* `list[str]`: Model must use one of the specified function tools (validated against registered tools)

**Dynamic callable:**

> **Collaborator:** As mentioned, we can't support this in this way.
