
new_messages() include ToolReturnPart in the first user prompt of ModelRequest when output_type is specified #3067

@slkoo-cc

Description


If an agent is set up with a specific output_type, the run ends with a ModelRequest carrying the output tool's return, e.g. ModelRequest(parts=[ToolReturnPart(tool_name='final_result', content='Final result processed.', tool_call_id='<id>')]). This ToolReturnPart leaks into the starting ModelRequest of the next run's new_messages().

Notice that the new_messages() of the 2nd and 3rd runs below start with:

[
  ModelRequest(
    parts=[
      ToolReturnPart(tool_name='final_result', content='Final result processed.', tool_call_id='<id>'), # <- shouldn't be here!
      UserPromptPart(content='Roll me a dice again.')
    ]
  ),
  ...
]

When chained into a chat history, this produces two consecutive ModelRequest(parts=[ToolReturnPart()]) messages. OpenAI's API rejects that sequence, so the 3rd run fails with:

pydantic_ai.exceptions.ModelHTTPError: status_code: 400, model_name: gpt-5-mini, body: {'message': "Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.", 'type': 'invalid_request_error', 'param': 'messages.[6].role', 'code': None}

i.e.

# 1st message
[...,
    ModelResponse(
        parts=[
            ToolCallPart()
        ],
    ),
    ModelRequest(
        parts=[
            ToolReturnPart()
        ]
    ),
] + 
# 2nd message
[
    ModelRequest(
        parts=[
            ToolReturnPart(), # <-- this return part does not have a call part before
            UserPromptPart()
        ]
    ),
...]
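The provider-side constraint can be sketched with a small checker over OpenAI-style chat dicts: a role 'tool' message is only valid if it answers a tool call from the immediately preceding assistant message. The function name, the ids, and the message list below are illustrative stand-ins (not any library API), arranged to mirror the failing run where messages.[6] is the leaked tool message.

```python
def find_orphan_tool_messages(messages: list[dict]) -> list[int]:
    """Return indices of 'tool' messages that don't answer a tool call
    from the immediately preceding assistant message."""
    remaining: set[str] = set()
    orphans: list[int] = []
    for i, m in enumerate(messages):
        if m["role"] == "assistant":
            # A new assistant turn opens a fresh set of answerable tool calls.
            remaining = {tc["id"] for tc in m.get("tool_calls", [])}
        elif m["role"] == "tool":
            if m.get("tool_call_id") in remaining:
                remaining.discard(m["tool_call_id"])  # consumed exactly once
            else:
                orphans.append(i)  # no matching open tool call -> 400 error
        else:
            remaining = set()  # any other role closes the tool-call window
    return orphans


# Mirrors the failing history: index 6 is the leaked duplicate tool message.
messages = [
    {"role": "system", "content": "Be a helpful assistant."},
    {"role": "user", "content": "Roll me a dice."},
    {"role": "assistant", "tool_calls": [{"id": "call_1"}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "4"},
    {"role": "assistant", "tool_calls": [{"id": "call_2"}]},
    {"role": "tool", "tool_call_id": "call_2", "content": "Final result processed."},
    {"role": "tool", "tool_call_id": "call_2", "content": "Final result processed."},  # leaked
    {"role": "user", "content": "Roll me a dice again."},
]
print(find_orphan_tool_messages(messages))  # [6]
```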

The bug does not appear when no output_type is specified, since the output tool is neither called nor returned.
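Until this is fixed, one possible workaround is to drop any ToolReturnPart leading the first ModelRequest before persisting new_messages(). Below is a minimal sketch of that filter; strip_leading_tool_returns is a hypothetical helper, and the dataclasses are lightweight stand-ins for the pydantic_ai.messages classes so the example runs standalone.

```python
from dataclasses import dataclass

# Stand-ins for pydantic_ai.messages classes (illustration only).
@dataclass
class ToolReturnPart:
    tool_name: str
    content: str
    tool_call_id: str

@dataclass
class UserPromptPart:
    content: str

@dataclass
class ModelRequest:
    parts: list


def strip_leading_tool_returns(messages: list) -> list:
    """Drop ToolReturnParts leaked into the first ModelRequest of new_messages()."""
    if messages and isinstance(messages[0], ModelRequest):
        kept = [p for p in messages[0].parts if not isinstance(p, ToolReturnPart)]
        return [ModelRequest(parts=kept), *messages[1:]]
    return list(messages)


# The leaked shape from the report: a stray final_result return before the prompt.
leaked = [
    ModelRequest(parts=[
        ToolReturnPart('final_result', 'Final result processed.', '<id>'),
        UserPromptPart('Roll me a dice again.'),
    ]),
]
cleaned = strip_leading_tool_returns(leaked)
print([type(p).__name__ for p in cleaned[0].parts])  # ['UserPromptPart']
```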

Example Code

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessagesTypeAdapter
from pydantic import BaseModel
import random


class Result(BaseModel):
    message: str
    result: int

# switch the model between gpt-5-mini and gemini-2.5-flash; the bug is the same
agent = Agent("gpt-5-mini", system_prompt="Be a helpful assistant.", output_type=Result)


@agent.tool_plain
def roll_dice() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))


result1 = agent.run_sync("Roll me a dice.")
print(result1.output)
print(result1.new_messages())

# simulate database store and load
msghistory = ModelMessagesTypeAdapter.validate_json(result1.new_messages_json())

result2 = agent.run_sync("Roll me a dice again.", message_history=msghistory)
print(result2.output)
print(result2.new_messages())

# simulate database store and load again for different run
msghistory2 = msghistory + ModelMessagesTypeAdapter.validate_json(
    result2.new_messages_json()
)

result3 = agent.run_sync("I bet on 4.", message_history=msghistory2)
print(result3.output)
print(result3.new_messages())

Using gemini-2.5-flash:

message='The dice rolled a 5.' result=5
[ModelRequest(parts=[SystemPromptPart(content='Be a helpful assistant.', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 50, 545483, tzinfo=datetime.timezone.utc)), UserPromptPart(content='Roll me a dice.', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 50, 545483, tzinfo=datetime.timezone.utc))]), ModelResponse(parts=[ThinkingPart(content='', signature='CsQCAdHtim/EE7wTU7SQTmrfEJhMbVncoOAcH9O8xcIqwnT8AKrNQOmkshFgSmeWxFiRwlYR568PvmosxoOYvJr2G7hORzPCD95sD0WQMu55d9lwEVbMb2KT3pfAxaIYkjvxb/1pXqZD88lowEmiveUN30GTLhMf561TsSAVmjrgbSZaufXZQrlVDYui8T0CpuNanXlrR6u+x7eemsFePfiH8ye5NB9TpNNfHM1AaXXeOGvcu6Lyx6kOJ1g//Fl92YWEXsTyJ4fCEDWvoGsShciaV8CaYq9W75ecNtzauFuKx4Mw1vTDq4KUlJRsQCLdc8Gfd1JwvOh+n6zYpmg+r/RkE1cp7oEGZjtDcrqnWpAOFD2QEscGYPOPpFclfrCXNfkFv3BDbA4aoo56cUAZg8AiLsqY+mFKVl/V8+IeEw3Mi3o6TLeG', provider_name='google-gla'), ToolCallPart(tool_name='roll_dice', args={}, tool_call_id='pyd_ai_b8fa0062a0974c0cbc54e093bcb315c0')], usage=RequestUsage(input_tokens=94, output_tokens=79, details={'thoughts_tokens': 69, 'text_prompt_tokens': 94}), model_name='gemini-2.5-flash', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 52, 42445, tzinfo=datetime.timezone.utc), provider_name='google-gla', provider_details={'finish_reason': 'STOP'}, provider_response_id='AFbeaOiLA6ic1e8PjpKE2Ag', finish_reason='stop'), ModelRequest(parts=[ToolReturnPart(tool_name='roll_dice', content='5', tool_call_id='pyd_ai_b8fa0062a0974c0cbc54e093bcb315c0', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 52, 43443, tzinfo=datetime.timezone.utc))]), ModelResponse(parts=[ThinkingPart(content='', 
signature='CtMCAdHtim84Ii6OxfOI2a3tn9MVW+Vx3QkfmT/87yPL+uyYsxVKe5cpQ2URM3hpAbGaURucB8swj779Ow+eTCoxNxy6ZQI8HQj6UfEF9DycR3XiSUC/82zCRm8jP5qAmkHqjqt7mV0II34TI9PblLE/tR7jg9EbC7aWpClmyk1IVdEIi9zd4KfpNet4tGEzfXWlakggU/8chumcNY5jBrGysTLFOPGgJ0sV1lLMxIMOi67oOlG/3uoSjDvAGqPSm0VvPm94DrA9/KHbyGqN+8jwvfEZoVC8BULVn3AVNA/PI9BQRJgnxkAVBArSTsk3VRsXrM4ocAMAAUOD5c/6oWs1YB4yXr5vuVmGrEtptQV1odFSGMbxGPE5GZrQMhzOqvNERtQKmTtond2Kj54r8IvWZ9yyl8vYyZUPxGHEMTfLBcWsAnH7xiO+3GFezBDMXBeG8tsK', provider_name='google-gla'), ToolCallPart(tool_name='final_result', args={'result': 5, 'message': 'The dice rolled a 5.'}, tool_call_id='pyd_ai_7cf69f84d59e4938ada1c9bb278094ab')], usage=RequestUsage(input_tokens=121, output_tokens=111, details={'thoughts_tokens': 86, 'text_prompt_tokens': 121}), model_name='gemini-2.5-flash', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 53, 403471, tzinfo=datetime.timezone.utc), provider_name='google-gla', provider_details={'finish_reason': 'STOP'}, provider_response_id='AVbeaNyBGfi6vr0PweP92Ag', finish_reason='stop'), ModelRequest(parts=[ToolReturnPart(tool_name='final_result', content='Final result processed.', tool_call_id='pyd_ai_7cf69f84d59e4938ada1c9bb278094ab', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 53, 403471, tzinfo=datetime.timezone.utc))])]
message='The dice rolled a 4.' result=4
[ModelRequest(parts=[ToolReturnPart(tool_name='final_result', content='Final result processed.', tool_call_id='pyd_ai_7cf69f84d59e4938ada1c9bb278094ab', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 53, 403471, tzinfo=TzInfo(UTC))), UserPromptPart(content='Roll me a dice again.', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 53, 418642, tzinfo=datetime.timezone.utc))]), ModelResponse(parts=[ToolCallPart(tool_name='roll_dice', args={}, tool_call_id='pyd_ai_c8ea5ecbe7e64052b1b93371e9636f51')], usage=RequestUsage(input_tokens=172, output_tokens=10, details={'text_prompt_tokens': 172}), model_name='gemini-2.5-flash', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 54, 179652, tzinfo=datetime.timezone.utc), provider_name='google-gla', provider_details={'finish_reason': 'STOP'}, provider_response_id='AlbeaP-wC-6_vr0Pgun82Ao', finish_reason='stop'), ModelRequest(parts=[ToolReturnPart(tool_name='roll_dice', content='4', tool_call_id='pyd_ai_c8ea5ecbe7e64052b1b93371e9636f51', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 54, 180650, tzinfo=datetime.timezone.utc))]), ModelResponse(parts=[ThinkingPart(content='', signature='CtsCAdHtim/yLLKElifI6VDDr+tM9InlU9E7UN1t8wCv+BZ8r8Wk6JQtspLNSgru32Z3prtEJ0kGSFAGDi9xczYqKZNYFK3RGKf0Oy8Evbg8K/0X0l6dK4B24cKC75oO/acXS1Ewj5g8dVWOsnsMwKS6a9ozXk5bbwvISCTTfOcy6gSKlmvwnNLGGAQh6O5pGg0TPqvd6EN+pSNsXC7LTT/v86L+F7NRxsDL0mzO3xE+qN+oevUlrDoJRaiY+0FCN6i0NgmH8+V37bTeMvJt0nelS0GHFEvx7SBalVNrpPP4f1Lvcr0u29cqwLIGiaLJg7eH6GZ547WHoYu7JUUd8I9jbqNgIBZcb0bJITQcU2g4fj5MoqSWrGzuN0iTXn6SVJtOb/yRn95uuBQuT2ogzhm8L09Jz84aNKIQUpD8zyrWEwYaKBS8ZlKZj3JU4OL9HqXpBfr0sFu4+eCMJP4=', provider_name='google-gla'), ToolCallPart(tool_name='final_result', args={'result': 4, 'message': 'The dice rolled a 4.'}, tool_call_id='pyd_ai_b6c0268e27084433806801c9e8f8b117')], usage=RequestUsage(input_tokens=199, output_tokens=112, details={'thoughts_tokens': 87, 'text_prompt_tokens': 199}), model_name='gemini-2.5-flash', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 55, 
523640, tzinfo=datetime.timezone.utc), provider_name='google-gla', provider_details={'finish_reason': 'STOP'}, provider_response_id='A1beaLbyH7fZ1e8PhpaAwAg', finish_reason='stop'), ModelRequest(parts=[ToolReturnPart(tool_name='final_result', content='Final result processed.', tool_call_id='pyd_ai_b6c0268e27084433806801c9e8f8b117', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 55, 523640, tzinfo=datetime.timezone.utc))])]
message='The dice rolled a 3. You bet on 4, so you lost.' result=3
[ModelRequest(parts=[ToolReturnPart(tool_name='final_result', content='Final result processed.', tool_call_id='pyd_ai_b6c0268e27084433806801c9e8f8b117', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 55, 523640, tzinfo=TzInfo(UTC))), UserPromptPart(content='I bet on 4.', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 55, 524640, tzinfo=datetime.timezone.utc))]), ModelResponse(parts=[ToolCallPart(tool_name='roll_dice', args={}, tool_call_id='pyd_ai_1bc3237478a94c278f05c1463c00d0fd')], usage=RequestUsage(input_tokens=268, output_tokens=10, details={'text_prompt_tokens': 268}), model_name='gemini-2.5-flash', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 56, 43228, tzinfo=datetime.timezone.utc), provider_name='google-gla', provider_details={'finish_reason': 'STOP'}, provider_response_id='BFbeaP6tA5C91e8P1rDisAo', finish_reason='stop'), ModelRequest(parts=[ToolReturnPart(tool_name='roll_dice', content='3', tool_call_id='pyd_ai_1bc3237478a94c278f05c1463c00d0fd', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 56, 44228, tzinfo=datetime.timezone.utc))]), ModelResponse(parts=[ThinkingPart(content='', signature='CqYDAdHtim+cTQy4ncn6Yu1xmsR41W8tvEIQZEDw4JrM9UkiTBbEhbsaI3nXffc1I2CfC0SOi7gVkYzfyTw2oEfGbh6CG8DzpbRvfAREKOCxpvQxGWqHjRndOY58ClNjL8L4kBXENw7/YO8X2Mn3HfmmA1cF5q0DOlKJOWG/5z+R1XziZ2G9GW21ovO56RD0dfOoLu05MCfErpUJnDKvl1UQnQRnfnLDBQrKv3QeT841Vpt7tPMgbS1yDn6ohlAK01h6CrgTmNV30uS7Vt3mqASI1Qfi1oOmmiQannDoJaGtGFL7Z8eDt0HWhvUNfmmBK1Gjzj2eV1MTqX1/XFkHhnthoH09uSZz1sxHWBR4lA6dk6Oh84OHdHY6B+Z9XjZEpVWdF48EFTBYp2FCvRkMiQSLm+p+z10dygcyFITFIQbI9YghxzufgABN5e5tMjWSO0bkLEbyEsMaE2D/W9ivPL+YRh/C/ValzBFLJWysEWDzk0Cm5jGPT/bQ7OvZ82BFZuFKgdG4jFldhkV7LAnJuIM1YUlWlMgNJ0BmZDwnNHHdS8pUGGqqzME=', provider_name='google-gla'), ToolCallPart(tool_name='final_result', args={'result': 3, 'message': 'The dice rolled a 3. 
You bet on 4, so you lost.'}, tool_call_id='pyd_ai_03c53af0c48d4791a207bda332ec7038')], usage=RequestUsage(input_tokens=295, output_tokens=152, details={'thoughts_tokens': 117, 'text_prompt_tokens': 295}), model_name='gemini-2.5-flash', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 57, 397528, tzinfo=datetime.timezone.utc), provider_name='google-gla', provider_details={'finish_reason': 'STOP'}, provider_response_id='BVbeaOHXGO3avr0Pip_44Ag', finish_reason='stop'), ModelRequest(parts=[ToolReturnPart(tool_name='final_result', content='Final result processed.', tool_call_id='pyd_ai_03c53af0c48d4791a207bda332ec7038', timestamp=datetime.datetime(2025, 10, 2, 10, 37, 57, 397528, tzinfo=datetime.timezone.utc))])]

All 3 runs complete, but the bug is still present.

Using gpt-5-mini:

message='You rolled a 4.' result=4
[ModelRequest(parts=[SystemPromptPart(content='Be a helpful assistant.', timestamp=datetime.datetime(2025, 10, 2, 10, 33, 1, 580280, tzinfo=datetime.timezone.utc)), UserPromptPart(content='Roll me a dice.', timestamp=datetime.datetime(2025, 10, 2, 10, 33, 1, 580280, tzinfo=datetime.timezone.utc))]), ModelResponse(parts=[ToolCallPart(tool_name='roll_dice', args='{}', tool_call_id='call_8Wsfly9UVyEiSzjMPnoBeqyi')], usage=RequestUsage(input_tokens=162, output_tokens=84, details={'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 64, 'rejected_prediction_tokens': 0}), model_name='gpt-5-mini-2025-08-07', timestamp=datetime.datetime(2025, 10, 2, 10, 33, 2, tzinfo=TzInfo(UTC)), provider_name='openai', provider_details={'finish_reason': 'tool_calls'}, provider_response_id='chatcmpl-CMAuMAnbp2IbCvvPeGHPCgDcqXsrX', finish_reason='tool_call'), ModelRequest(parts=[ToolReturnPart(tool_name='roll_dice', content='4', tool_call_id='call_8Wsfly9UVyEiSzjMPnoBeqyi', timestamp=datetime.datetime(2025, 10, 2, 10, 33, 5, 219298, tzinfo=datetime.timezone.utc))]), ModelResponse(parts=[ToolCallPart(tool_name='final_result', args='{"message":"You rolled a 4.","result":4}', tool_call_id='call_1zujp4MXfrhDCOMa6N0fMsJK')], usage=RequestUsage(input_tokens=189, output_tokens=223, details={'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 192, 'rejected_prediction_tokens': 0}), model_name='gpt-5-mini-2025-08-07', timestamp=datetime.datetime(2025, 10, 2, 10, 33, 5, tzinfo=TzInfo(UTC)), provider_name='openai', provider_details={'finish_reason': 'tool_calls'}, provider_response_id='chatcmpl-CMAuPTQ8WL7zVIplbXHiEmXD5Guzc', finish_reason='tool_call'), ModelRequest(parts=[ToolReturnPart(tool_name='final_result', content='Final result processed.', tool_call_id='call_1zujp4MXfrhDCOMa6N0fMsJK', timestamp=datetime.datetime(2025, 10, 2, 10, 33, 9, 467906, tzinfo=datetime.timezone.utc))])]
message='You rolled a 5.' result=5
[ModelRequest(parts=[ToolReturnPart(tool_name='final_result', content='Final result processed.', tool_call_id='call_1zujp4MXfrhDCOMa6N0fMsJK', timestamp=datetime.datetime(2025, 10, 2, 10, 33, 9, 467906, tzinfo=TzInfo(UTC))), UserPromptPart(content='Roll me a dice again.', timestamp=datetime.datetime(2025, 10, 2, 10, 33, 9, 488044, tzinfo=datetime.timezone.utc))]), ModelResponse(parts=[ToolCallPart(tool_name='roll_dice', args='{}', tool_call_id='call_ARPOV1Saff7YrOXgp8CqnHob')], usage=RequestUsage(input_tokens=239, output_tokens=84, details={'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 64, 'rejected_prediction_tokens': 0}), model_name='gpt-5-mini-2025-08-07', timestamp=datetime.datetime(2025, 10, 2, 10, 33, 9, tzinfo=TzInfo(UTC)), provider_name='openai', provider_details={'finish_reason': 'tool_calls'}, provider_response_id='chatcmpl-CMAuT54Xg3JCWeYz74P5OMjs5P9IA', finish_reason='tool_call'), ModelRequest(parts=[ToolReturnPart(tool_name='roll_dice', content='5', tool_call_id='call_ARPOV1Saff7YrOXgp8CqnHob', timestamp=datetime.datetime(2025, 10, 2, 10, 33, 12, 659422, tzinfo=datetime.timezone.utc))]), ModelResponse(parts=[ToolCallPart(tool_name='final_result', args='{"message":"You rolled a 5.","result":5}', tool_call_id='call_2hAovRZPEG7LrBZsPODlWtUo')], usage=RequestUsage(input_tokens=266, output_tokens=25, details={'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}), model_name='gpt-5-mini-2025-08-07', timestamp=datetime.datetime(2025, 10, 2, 10, 33, 13, tzinfo=TzInfo(UTC)), provider_name='openai', provider_details={'finish_reason': 'tool_calls'}, provider_response_id='chatcmpl-CMAuX39JiCfNBXTHxPQsD4hl1K5Qu', finish_reason='tool_call'), ModelRequest(parts=[ToolReturnPart(tool_name='final_result', content='Final result processed.', tool_call_id='call_2hAovRZPEG7LrBZsPODlWtUo', timestamp=datetime.datetime(2025, 10, 2, 10, 33, 14, 469651, tzinfo=datetime.timezone.utc))])]    
Traceback (most recent call last):
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\pydantic_ai\models\openai.py", line 482, in _completions_create
    return await self.client.chat.completions.create(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\openai\resources\chat\completions\completions.py", line 2585, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\openai\_base_client.py", line 1794, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\openai\_base_client.py", line 1594, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.", 'type': 'invalid_request_error', 'param': 'messages.[6].role', 'code': None}}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\Users\User\Desktop\Platform-Integration\API\test2.py", line 35, in <module>
    result3 = agent.run_sync("I bet on 4.", message_history=msghistory2)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\pydantic_ai\agent\abstract.py", line 326, in run_sync
    return get_event_loop().run_until_complete(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\scoop\apps\python\current\Lib\asyncio\base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\pydantic_ai\agent\abstract.py", line 227, in run
    async for node in agent_run:
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\pydantic_ai\run.py", line 149, in __anext__
    next_node = await self._graph_run.__anext__()
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\pydantic_graph\graph.py", line 758, in __anext__
    return await self.next(self._next_node)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\pydantic_graph\graph.py", line 731, in next
    self._next_node = await node.run(ctx)
                      ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\pydantic_ai\_agent_graph.py", line 400, in run
    return await self._make_request(ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\pydantic_ai\_agent_graph.py", line 442, in _make_request
    model_response = await ctx.deps.model.request(message_history, model_settings, model_request_parameters)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\pydantic_ai\models\openai.py", line 400, in request
    response = await self._completions_create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Platform-Integration\API\.venv\Lib\site-packages\pydantic_ai\models\openai.py", line 512, in _completions_create
    raise ModelHTTPError(status_code=status_code, model_name=self.model_name, body=e.body) from e
pydantic_ai.exceptions.ModelHTTPError: status_code: 400, model_name: gpt-5-mini, body: {'message': "Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.", 'type': 'invalid_request_error', 'param': 'messages.[6].role', 'code': None}

Python, Pydantic AI & LLM client version

Python 3.12.9, Pydantic 2.11.9, Pydantic AI 1.0.13

Labels

bug (Something isn't working)