
Commit 2f85765

gn00295120 and claude committed
fix: support tool_choice with specific tool names in LiteLLM streaming (fixes #1846)
This change fixes a Pydantic validation error that occurred when using LiteLLM with streaming enabled and a specific tool name for the tool_choice parameter.

Problem:
When users specified tool_choice="my_tool_name" with streaming enabled, the SDK incorrectly cast it to Literal["auto", "required", "none"], causing a Pydantic validation error. The issue was in litellm_model.py line 376, where the Response object was created with an incorrect type cast:

    tool_choice=cast(Literal["auto", "required", "none"], tool_choice)

However, tool_choice can be:
- a Literal: "auto", "required", "none"
- a ChatCompletionNamedToolChoiceParam dict with a specific tool name

(Converter.convert_tool_choice() already handles string tool names.)

Solution:
- Import ToolChoiceFunction from openai.types.responses
- Properly convert ChatCompletionNamedToolChoiceParam to ToolChoiceFunction
- Handle all valid tool_choice types when creating the Response object

The fix ensures that when tool_choice is a dict like:

    {"type": "function", "function": {"name": "my_tool"}}

it is correctly converted to:

    ToolChoiceFunction(type="function", name="my_tool")

Testing:
- Linting (ruff check): passed
- Type checking (mypy): passed
- Formatting (ruff format): passed

Generated with Lucas Wang <[email protected]>
Co-Authored-By: Claude <[email protected]>
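The conversion described above can be sketched as a standalone function. This is a minimal illustration, not the commit's actual code: the real logic lives inline in `_fetch_response`, the function name `resolve_tool_choice` is hypothetical, and a local dataclass stands in for the openai package's `ToolChoiceFunction` so the sketch is self-contained.

```python
from dataclasses import dataclass
from typing import Any, Literal, Union


@dataclass
class ToolChoiceFunction:
    # Local stand-in for openai.types.responses.tool_choice_function.ToolChoiceFunction
    name: str
    type: str = "function"


def resolve_tool_choice(
    tool_choice: Any,
) -> Union[Literal["auto", "required", "none"], ToolChoiceFunction]:
    """Mirror the commit's logic: named-tool dicts become ToolChoiceFunction,
    plain literals pass through, anything else falls back to "auto"."""
    if isinstance(tool_choice, dict):
        func_data = tool_choice.get("function")
        if tool_choice.get("type") == "function" and isinstance(func_data, dict):
            return ToolChoiceFunction(name=func_data["name"])
        return "auto"  # unexpected dict shape
    if tool_choice in ("auto", "required", "none"):
        return tool_choice
    return "auto"  # omit sentinel or anything else


# The failing case from the issue: a specific tool name with streaming enabled.
named = {"type": "function", "function": {"name": "my_tool"}}
assert resolve_tool_choice(named) == ToolChoiceFunction(name="my_tool")
assert resolve_tool_choice("required") == "required"
```

With this shape, every branch yields a value the Response model's tool_choice field can validate, instead of passing a raw dict through a no-op cast.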
1 parent 748ac80 commit 2f85765

File tree

1 file changed: +29 −3 lines changed

src/agents/extensions/models/litellm_model.py

Lines changed: 29 additions & 3 deletions
```diff
@@ -24,6 +24,7 @@
     ChatCompletionMessageCustomToolCall,
     ChatCompletionMessageFunctionToolCall,
     ChatCompletionMessageParam,
+    ChatCompletionNamedToolChoiceParam,
 )
 from openai.types.chat.chat_completion_message import (
     Annotation,
@@ -32,6 +33,7 @@
 )
 from openai.types.chat.chat_completion_message_function_tool_call import Function
 from openai.types.responses import Response
+from openai.types.responses.tool_choice_function import ToolChoiceFunction
 
 from ... import _debug
 from ...agent_output import AgentOutputSchemaBase
@@ -367,15 +369,39 @@ async def _fetch_response(
         if isinstance(ret, litellm.types.utils.ModelResponse):
             return ret
 
+        # Convert tool_choice to the correct type for Response.
+        # tool_choice can be a Literal, a ChatCompletionNamedToolChoiceParam, or omit.
+        response_tool_choice: Literal["auto", "required", "none"] | ToolChoiceFunction
+        if tool_choice is omit:
+            response_tool_choice = "auto"
+        elif isinstance(tool_choice, dict):
+            # Convert from ChatCompletionNamedToolChoiceParam to ToolChoiceFunction.
+            # The dict has structure: {"type": "function", "function": {"name": "tool_name"}}
+            func_data = tool_choice.get("function")
+            if (
+                tool_choice.get("type") == "function"
+                and func_data is not None
+                and isinstance(func_data, dict)
+            ):
+                response_tool_choice = ToolChoiceFunction(
+                    type="function", name=func_data["name"]
+                )
+            else:
+                # Fall back to "auto" if the dict has an unexpected format.
+                response_tool_choice = "auto"
+        elif tool_choice in ("auto", "required", "none"):
+            response_tool_choice = tool_choice  # type: ignore
+        else:
+            # Fall back to "auto" for any other case.
+            response_tool_choice = "auto"
+
         response = Response(
             id=FAKE_RESPONSES_ID,
             created_at=time.time(),
             model=self.model,
             object="response",
             output=[],
-            tool_choice=cast(Literal["auto", "required", "none"], tool_choice)
-            if tool_choice is not omit
-            else "auto",
+            tool_choice=response_tool_choice,
             top_p=model_settings.top_p,
             temperature=model_settings.temperature,
             tools=[],
```
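Why the removed lines failed is worth a note: typing.cast only affects static type checkers and does nothing at runtime, so the named-tool dict reached Pydantic's Literal validation unchanged. A tiny sketch of that behavior:

```python
from typing import Literal, cast

named = {"type": "function", "function": {"name": "my_tool"}}

# cast() is a runtime no-op: it returns its second argument untouched. So the
# pre-fix code still handed Pydantic a dict where the field expected one of
# Literal["auto", "required", "none"] -- hence the validation error.
still_a_dict = cast(Literal["auto", "required", "none"], named)
assert still_a_dict is named
assert isinstance(still_a_dict, dict)
```

Removing the cast and building a properly typed value (ToolChoiceFunction or a literal string) before constructing Response is what makes the branch above validate cleanly.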
