Initial Checks
- I confirm that I'm using the latest version of Pydantic AI
- I confirm that I searched for my issue in https://github.com/pydantic/pydantic-ai/issues before opening this issue
Description
This is the same issue as #2284, but when using the Anthropic provider.
In the code example below, openai_model works as expected, but when run with anthropic_model the agent returns early: the model emits text before the tool call, the streamed text is treated as the final result, and the calculator tool is never executed (see the "Tool not executed" ToolReturnPart in the message history).
Here's the output:
Streaming response:
11:32:04.545 agent run
11:32:04.545 chat claude-3-7-sonnet-latest
Logfire project URL: https://logfire-us.pydantic.dev/mithra/api
I'll calculate 10 + 5 for you and provide the answer in a creative way.Streaming all messages:
[
ModelRequest(parts=[SystemPromptPart(content='You are a helpful assistant that can perform math calculations\nUser
will give you a math problem and you need to call calculator tool to solve it.\nAnswer with random text where each line
starts with a number and total lines must be equal to result of the operation.\n/no_think\n',
timestamp=datetime.datetime(2025, 8, 12, 18, 32, 4, 545757, tzinfo=datetime.timezone.utc)), UserPromptPart(content='What
is 10 + 5?', timestamp=datetime.datetime(2025, 8, 12, 18, 32, 4, 545761, tzinfo=datetime.timezone.utc))]),
ModelResponse(parts=[TextPart(content="I'll calculate 10 + 5 for you and provide the answer in a creative way."),
ToolCallPart(tool_name='calculator', args='{"a": 10, "b": 5, "operation": "add"}',
tool_call_id='toolu_015WCNrb2mPDjTthtiDp7fm2')], usage=Usage(request_tokens=477, response_tokens=110, total_tokens=587,
details={'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 477, 'output_tokens': 110}),
model_name='claude-3-7-sonnet-latest', timestamp=datetime.datetime(2025, 8, 12, 18, 32, 5, 500711,
tzinfo=datetime.timezone.utc)),
ModelRequest(parts=[ToolReturnPart(tool_name='calculator', content='Tool not executed - a final result was already
processed.', tool_call_id='toolu_015WCNrb2mPDjTthtiDp7fm2', timestamp=datetime.datetime(2025, 8, 12, 18, 32, 6, 560609,
tzinfo=datetime.timezone.utc))])
]
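To check programmatically whether a tool call was skipped this way, the message history can be scanned for the "Tool not executed" sentinel in the ToolReturnPart content. A minimal sketch, using simplified stand-in classes for illustration (the real pydantic_ai message types carry more fields):

```python
from dataclasses import dataclass


# Hypothetical, simplified stand-ins for pydantic_ai's message parts,
# kept only to the fields needed for this check.
@dataclass
class ToolCallPart:
    tool_name: str
    tool_call_id: str


@dataclass
class ToolReturnPart:
    tool_name: str
    tool_call_id: str
    content: str


# Sentinel text observed in the message history above.
SKIPPED = 'Tool not executed - a final result was already processed.'


def skipped_tool_calls(parts):
    """Return the tool_call_ids whose calls were answered with the skip sentinel."""
    return [
        p.tool_call_id
        for p in parts
        if isinstance(p, ToolReturnPart) and p.content == SKIPPED
    ]


parts = [
    ToolCallPart('calculator', 'toolu_015WCNrb2mPDjTthtiDp7fm2'),
    ToolReturnPart('calculator', 'toolu_015WCNrb2mPDjTthtiDp7fm2', SKIPPED),
]
print(skipped_tool_calls(parts))  # ['toolu_015WCNrb2mPDjTthtiDp7fm2']
```

With the OpenAI model this list should be empty; with the Anthropic model it contains the calculator call shown in the log above.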
Example Code
import asyncio
import random
from typing import Literal

import logfire
from rich import print

from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.anthropic import AnthropicProvider
from pydantic_ai.providers.openai import OpenAIProvider

from app.config import settings

logfire.configure()
logfire.instrument_pydantic_ai()


def roll_dice() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))


# --- Agent and Model Configuration ---
# Replace with your api key
anthropic_model = AnthropicModel(
    "claude-3-7-sonnet-latest",
    provider=AnthropicProvider(api_key=settings.anthropic_api_key),
)
openai_model = OpenAIModel(
    "gpt-4o-mini",
    provider=OpenAIProvider(api_key=settings.openai_api_key),
)

default_system_prompt = """
You are a helpful assistant.
Do not ask any follow-up questions.
"""
model_settings = {
    "temperature": 0.2,
}
tools = [roll_dice]

SYS_PROMPT = """You are a helpful assistant that can perform math calculations
User will give you a math problem and you need to call calculator tool to solve it.
Answer with random text where each line starts with a number and total lines must be equal to result of the operation.
/no_think
"""

agent = Agent(model=anthropic_model, system_prompt=SYS_PROMPT)


@agent.tool
def calculator(ctx, a: int, b: int, operation: Literal['add', 'subtract', 'multiply', 'divide']) -> int:
    print(f" > tool calculator: {a} {operation} {b}")
    if operation == 'add':
        return a + b
    elif operation == 'subtract':
        return a - b
    elif operation == 'multiply':
        return a * b
    elif operation == 'divide':
        return a // b  # Integer division
    else:
        raise ValueError("Invalid operation. Use 'add', 'subtract', 'multiply', or 'divide'.")


async def main():
    prompt = 'What is 10 + 5?'

    print('Streaming response:')
    async with agent.run_stream(prompt) as result:
        async for message in result.stream_text(delta=True):
            print(message, end='', flush=True)
    print('Streaming all messages:')
    print(result.all_messages())
    print()
    print('---' * 10)

    print('NON streaming response:')
    response = await agent.run(prompt)
    print(response.output)
    print('NON stream All messages:')
    print(response.all_messages())


if __name__ == '__main__':
    asyncio.run(main())
Python, Pydantic AI & LLM client version
0.6.2