Initial Checks
- I confirm that I'm using the latest version of Pydantic AI
- I confirm that I searched for my issue in https://github.com/pydantic/pydantic-ai/issues before opening this issue
Description
I have, for example, a GPT-5 output from OpenAI:
"output": [
{
"id": "rs_05171dc28f1341080068e93a69c6a881a3bf19f7ebe720d8ce",
"type": "reasoning",
"summary": []
},
{
"id": "fc_05171dc28f1341080068e93a6ad15481a38a1d43a725e65e89",
"type": "function_call",
"status": "completed",
"arguments": "{\"target_file\":\"/home/workspace/vocabulary.md\",\"text_read_entire_file\":true}",
"call_id": "call_fQMmrLGclr1Y7rof2gQV7gPU",
"name": "read_file"
}
],what i see in pydantic_ai is that the ModelResponse looks like:
```python
ModelResponse(
    parts=[
        ThinkingPart(content='', id='rs_0fb0f4a746f80a4a0068e85b79da548190ab73564f27595502', signature='gAAAAABo6Ft-0Ue9wv8ID93...xvO-2tAKvlQ==', provider_name='openai'),
        ToolCallPart(tool_name='edit_file_llm', args='{"target_file":"/home/workspace/vocabulary.md","instructions":"Remove all citation markers (e.g., [^1]) and the reference list at the end of the file. Keep the rest of the content unchanged.","code_edit":"// ... existing code ...\\n# Vocabulary\\n\\n## copper\\n\\nCo...ns.\\n\\n// ... existing code ..."}', tool_call_id='call_re9tQypxSAaAheStLjOqkkeP|fc_0fb0f4a746f80a4a0068e85b7ea0808190a71188a7f98ca3f4'),
    ],
    usage=RequestUsage(input_tokens=18321, output_tokens=208, details={'reasoning_tokens': 64}),
    model_name='gpt-5-mini-2025-08-07',
    timestamp=datetime.datetime(2025, 10, 10, 1, 3, 48, tzinfo=TzInfo(UTC)),
    provider_name='openai',
    provider_details={'finish_reason': 'completed'},
    provider_response_id='resp_0fb0f4a746f80a4a0068e85b7496bc8190b6912ce4cfce1dcf',
    finish_reason='stop',
),
ModelRequest(
    parts=[
        ToolReturnPart(tool_name='edit_file_llm', content='# Vocabulary\n\n## copper\n\nCo... in coins.\n', tool_call_id='call_re9tQypxSAaAheStLjOqkkeP|fc_0fb0f4a746f80a4a0068e85b7ea0808190a71188a7f98ca3f4', timestamp=datetime.datetime(2025, 10, 10, 1, 4, 8, 928898, tzinfo=TzInfo(UTC))),
    ],
    instructions='',
),
```

Notice that the tool call id, call_re9tQypxSAaAheStLjOqkkeP|fc_0fb0f4a746f80a4a0068e85b7ea0808190a71188a7f98ca3f4, is a concatenation of the OpenAI call_id and the raw function_call (fc_) item id, with a pipe in the middle.
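For clarity, this is what the combined id decomposes into (a quick illustrative snippet using the ids from the repr above, not library code):

```python
combined_id = 'call_re9tQypxSAaAheStLjOqkkeP|fc_0fb0f4a746f80a4a0068e85b7ea0808190a71188a7f98ca3f4'

# The OpenAI call_id sits before the pipe, the raw function_call item id after it.
call_id, _, fc_id = combined_id.partition('|')
print(call_id)  # call_re9tQypxSAaAheStLjOqkkeP
print(fc_id)    # fc_0fb0f4a746f80a4a0068e85b7ea0808190a71188a7f98ca3f4
```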
This is fine until you try to take this history and put it into a new request. Both the Anthropic API and the GPT-4.1-mini API reject it with validation errors on the way in:
```
# api.openai, gpt-4.1-mini (though the GPT-5 API weirdly does not complain):
"Invalid 'messages[2].tool_calls[0].id': string too long. Expected a string with maximum length 40, but got a string with length 83 instead."

# api.anthropic, Sonnet 4.5:
"messages.1.content.0.tool_use.id: String should match pattern '^[a-zA-Z0-9_-]+$'"
```
It feels like we should just be using the call_id, or, if we need to keep the raw fc id around, we should parse it out before sending it in request bodies.
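In the meantime, a rough workaround sketch (not the library's own fix, and assuming the message parts are plain dataclasses, as their reprs above suggest) that strips the suffix from an existing history before it is re-sent:

```python
from dataclasses import replace

from pydantic_ai.messages import ModelMessage, ToolCallPart, ToolReturnPart


def strip_combined_tool_call_ids(messages: list[ModelMessage]) -> list[ModelMessage]:
    """Drop everything after the pipe so only the original call_id is sent back out."""
    cleaned: list[ModelMessage] = []
    for message in messages:
        parts = []
        for part in message.parts:
            if isinstance(part, (ToolCallPart, ToolReturnPart)) and '|' in part.tool_call_id:
                # Keep only the portion before the pipe, i.e. the original call_id.
                part = replace(part, tool_call_id=part.tool_call_id.partition('|')[0])
            parts.append(part)
        cleaned.append(replace(message, parts=parts))
    return cleaned
```

If the installed version exposes something like Agent(history_processors=[...]), a function like this could probably be plugged in there; otherwise it can be applied to result.all_messages() before passing them back as message_history on the next run.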
Example Code
Python, Pydantic AI & LLM client version
1.0.15