Describe the bug
When using a custom LLM endpoint (via litellm) in a streaming context, the SDK throws an error if you try to specify tool_choice='my_required_tool_name', even though this should be allowed.
Debug information
- Agents SDK version: 0.3.2 and 0.3.3 confirmed to fail
- Python 3.13
Repro steps
With streaming enabled, using litellm targeting e.g. openrouter/qwen/qwen3-30b-a3b-instruct-2507, and specifying a tool name for tool_choice on the request:
from agents import Agent, ModelSettings

agent = Agent(
    name="RespondingAgent",
    tools=tools,
    handoffs=handoffs,  # type: ignore
    instructions=_create_prompt(cur_date),
    model=get_model(),
    model_settings=ModelSettings(
        temperature=temperature,
        top_p=top_p,
        parallel_tool_calls=config.PARALLEL_TOOL_CALLS,
        # BUG: uncommenting the line below throws a pydantic model error
        # saying the value must be `auto`, `required`, or `none`.
        # tool_choice="reason",
    ),
)
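For a more self-contained repro, something along these lines should trigger the same error (the dummy tool, API key, and prompt are placeholders from my setup, not anything prescribed by the SDK):

# Self-contained sketch of the repro (placeholders: api_key, tool, prompt).
import asyncio

from agents import Agent, ModelSettings, Runner, function_tool
from agents.extensions.models.litellm_model import LitellmModel


@function_tool
def reason(thought: str) -> str:
    """Dummy tool so there is a name to force via tool_choice."""
    return thought


async def main() -> None:
    agent = Agent(
        name="RespondingAgent",
        tools=[reason],
        instructions="Always call the reason tool before answering.",
        model=LitellmModel(
            model="openrouter/qwen/qwen3-30b-a3b-instruct-2507",
            api_key="...",
        ),
        # Forcing a specific tool by name is what triggers the error.
        model_settings=ModelSettings(tool_choice="reason"),
    )
    result = Runner.run_streamed(agent, "Hello")
    async for _event in result.stream_events():
        pass


asyncio.run(main())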
The pydantic model error is thrown from openai-agents-python/src/agents/extensions/models/litellm_model.py at line 377.
Looking there, the value is cast like so:
tool_choice=cast(Literal["auto", "required", "none"], tool_choice)
Note: I pulled the repo and naively tried changing that line to
tool_choice=cast(str, tool_choice)
but that resulted in yet another validation error elsewhere in the SDK.
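For reference, the Chat Completions-style payload that litellm forwards accepts either one of those literals or a dict naming a specific function, so I assume a real fix would map arbitrary strings to that dict form rather than widening the cast. A rough sketch, not the actual SDK code:

# Sketch: resolve tool_choice before passing it to litellm.
# The literals pass through; any other string is treated as a function name.
if tool_choice in ("auto", "required", "none"):
    resolved_tool_choice = tool_choice
else:
    resolved_tool_choice = {
        "type": "function",
        "function": {"name": tool_choice},
    }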
Expected behavior
It should allow specifying an arbitrary string, since that string is the name of the function the caller expects to be invoked.
Note: it seems to work when streaming is disabled. I'm not 100% sure streaming is the deciding factor, though: I also use this in a slightly different scenario where the error doesn't appear, and the only difference I can think of is that that scenario doesn't use streaming.
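For comparison, the non-streaming variant of the same call, which seems to avoid the error, looks roughly like this (simplified):

# Non-streaming variant (simplified); this path seems not to raise.
result = await Runner.run(agent, "Hello")
print(result.final_output)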