
tool_call_id being set to openai function_call.id|function_call.call_id breaks api validation #3136

@kousun12


Initial Checks

Description

I have, for example, a GPT-5 output from OpenAI:

  "output": [
    {
      "id": "rs_05171dc28f1341080068e93a69c6a881a3bf19f7ebe720d8ce",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "fc_05171dc28f1341080068e93a6ad15481a38a1d43a725e65e89",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"target_file\":\"/home/workspace/vocabulary.md\",\"text_read_entire_file\":true}",
      "call_id": "call_fQMmrLGclr1Y7rof2gQV7gPU",
      "name": "read_file"
    }
  ],

What I see in pydantic_ai is that the ModelResponse looks like this:

ModelResponse(
    parts=[
        ThinkingPart(
            content='',
            id='rs_0fb0f4a746f80a4a0068e85b79da548190ab73564f27595502',
            signature='gAAAAABo6Ft-0Ue9wv8ID93...xvO-2tAKvlQ==',
            provider_name='openai',
        ),
        ToolCallPart(
            tool_name='edit_file_llm',
            args='{"target_file":"/home/workspace/vocabulary.md","instructions":"Remove all citation markers (e.g., [^1]) and the reference list at the end of the file. Keep the rest of the content unchanged.","code_edit":"// ... existing code ...\\n# Vocabulary\\n\\n## copper\\n\\nCo...ns.\\n\\n// ... existing code ..."}',
            tool_call_id='call_re9tQypxSAaAheStLjOqkkeP|fc_0fb0f4a746f80a4a0068e85b7ea0808190a71188a7f98ca3f4',
        ),
    ],
    usage=RequestUsage(input_tokens=18321, output_tokens=208, details={'reasoning_tokens': 64}),
    model_name='gpt-5-mini-2025-08-07',
    timestamp=datetime.datetime(2025, 10, 10, 1, 3, 48, tzinfo=TzInfo(UTC)),
    provider_name='openai',
    provider_details={'finish_reason': 'completed'},
    provider_response_id='resp_0fb0f4a746f80a4a0068e85b7496bc8190b6912ce4cfce1dcf',
    finish_reason='stop',
),
ModelRequest(
    parts=[
        ToolReturnPart(
            tool_name='edit_file_llm',
            content='# Vocabulary\n\n## copper\n\nCo... in coins.\n',
            tool_call_id='call_re9tQypxSAaAheStLjOqkkeP|fc_0fb0f4a746f80a4a0068e85b7ea0808190a71188a7f98ca3f4',
            timestamp=datetime.datetime(2025, 10, 10, 1, 4, 8, 928898, tzinfo=TzInfo(UTC)),
        ),
    ],
    instructions='',
),

Notice that the tool call ID, call_re9tQypxSAaAheStLjOqkkeP|fc_0fb0f4a746f80a4a0068e85b7ea0808190a71188a7f98ca3f4, is a concatenation of the call_id and the raw fc id, joined by a pipe.

This is fine until you take this history and feed it into a new request. Both the Anthropic API and the GPT-4.1-mini API reject it with validation errors on the way in:

# api.openai gpt-4.1-mini (though, oddly, the gpt-5 API does not complain):
"Invalid 'messages[2].tool_calls[0].id': string too long. Expected a string with maximum length 40, but got a string with length 83 instead."

# api.anthropic sonnet-4.5:
"messages.1.content.0.tool_use.id: String should match pattern '^[a-zA-Z0-9_-]+$'"

It feels like we should just use the call_id, or, if we need to keep the fc id around, we should parse it out of the tool_call_id before sending it in request bodies.

Example Code

Python, Pydantic AI & LLM client version

1.0.15

Labels: bug (Something isn't working)