Description
Question
Hi,
My team noticed something while working with the Vercel adapter. The output of the `VercelAIAdapter.dump_messages(result.new_messages())` call after we've finished streaming chunks differs from the message format in the Vercel spec, specifically for tool call parts.
Below is an example of the object from `dump_messages` after calling `model_dump(by_alias=True, mode="json")`:
```json
{
  "id": "a88b8cbf-3d96-48f9-a980-421c9f06d3b5",
  "role": "assistant",
  "metadata": null,
  "parts": [
    {
      "type": "dynamic-tool",
      "toolName": "search_files",
      "toolCallId": "call_qp7gUfyGHU5M9oS9cDcLoD5m",
      "state": "output-available",
      "input": "{\"name_query\":\"test\"}",
      "output": "{\"results\":[{\"id\":1,\"name\":\"test_file.txt\",\"size\":1024},{\"id\":2,\"name\":\"test_doc.pdf\",\"size\":2048}],\"total_found\":2}",
      "callProviderMetadata": null,
      "preliminary": null
    }
  ]
}
```

Based on the Vercel spec we'd be expecting something more similar to this:
```json
{
  "id": "a88b8cbf-3d96-48f9-a980-421c9f06d3b5",
  "role": "assistant",
  "metadata": null,
  "parts": [
    {
      "type": "tool-search_files",
      "toolCallId": "call_qp7gUfyGHU5M9oS9cDcLoD5m",
      "state": "output-available",
      "input": {
        "name_query": "test"
      },
      "output": {
        "results": [
          {
            "id": 1,
            "name": "test_file.txt",
            "size": 1024
          },
          {
            "id": 2,
            "name": "test_doc.pdf",
            "size": 2048
          }
        ],
        "total_found": 2
      },
      "providerExecuted": null,
      "callProviderMetadata": null,
      "preliminary": null
    }
  ]
}
```

This is causing us some friction because the chunks we're streaming do follow the Vercel spec for message chunks, so during streaming the Vercel client is very plug and play. But when it tries to load messages we've saved from the `new_messages` call, we start getting problems.
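To make the difference concrete, here's a minimal standalone sketch (these dict literals just mirror the shapes above, they aren't real adapter output) showing how the dumped shape maps onto the spec shape:

```python
import json

# Hypothetical dict mirroring the dumped part above: the tool name sits in
# a separate "toolName" field and "input" is a JSON-encoded string.
dumped_part = {
    "type": "dynamic-tool",
    "toolName": "search_files",
    "input": '{"name_query": "test"}',
}

# The spec's ToolUIPart shape folds the tool name into the type
# ("tool-{toolName}") and carries "input" as a parsed object.
spec_part = {
    "type": f"tool-{dumped_part['toolName']}",
    "input": json.loads(dumped_part["input"]),
}

print(spec_part["type"])   # tool-search_files
print(spec_part["input"])  # {'name_query': 'test'}
```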
I was able to get around this with the function below, which transforms any tool message parts into the expected format. But is this the right approach? Am I misunderstanding something, and should we be doing something different? Or is this a bug?
```python
import json
from contextlib import suppress

# UIMessage, BaseUIPart, ToolInputStreamingPart, ToolInputAvailablePart,
# ToolOutputAvailablePart and ToolOutputErrorPart are imported from
# pydantic_ai's Vercel AI types.


def convert_dynamic_tool_to_tool_part(message: UIMessage) -> UIMessage:
    converted_parts = []
    for part in message.parts:
        part_dict = part.model_dump(by_alias=True)
        if part_dict.get("type") == "dynamic-tool":
            tool_name = part_dict.pop("toolName")
            part_dict["type"] = f"tool-{tool_name}"  # Set type to tool-{toolname}
            # Parse JSON strings in input/output fields
            for field in ["input", "output"]:
                if field in part_dict and isinstance(part_dict[field], str):
                    with suppress(json.JSONDecodeError, TypeError):
                        part_dict[field] = json.loads(part_dict[field])
            state = part_dict.get("state")
            converted_part: BaseUIPart
            if state == "input-streaming":
                converted_part = ToolInputStreamingPart.model_validate(part_dict)
            elif state == "input-available":
                converted_part = ToolInputAvailablePart.model_validate(part_dict)
            elif state == "output-available":
                converted_part = ToolOutputAvailablePart.model_validate(part_dict)
            elif state == "output-error":
                converted_part = ToolOutputErrorPart.model_validate(part_dict)
            else:
                converted_part = part
            converted_parts.append(converted_part)
        else:
            converted_parts.append(part)
    message.parts = converted_parts
    return message
```

Here's a link to the vercel docs for the specific message type schema: https://ai-sdk.dev/docs/reference/ai-sdk-core/ui-message#tooluipart
Here's a link to the vercel types for the stream chunks: https://github.com/vercel/ai/blob/ai%406.0.57/packages/ai/src/ui-message-stream/ui-message-chunks.ts
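For reference, the core of the transformation can also be shown at the plain-dict level, runnable without `pydantic_ai` (`normalize_tool_part_dict` is just an illustrative name, and the sample part mirrors the dump above):

```python
import json
from contextlib import suppress


def normalize_tool_part_dict(part_dict: dict) -> dict:
    """Dict-level sketch of the workaround: rename the type discriminator
    and parse the JSON-string input/output fields (validation against the
    pydantic part models is skipped here)."""
    part_dict = dict(part_dict)  # don't mutate the caller's dict
    if part_dict.get("type") == "dynamic-tool":
        tool_name = part_dict.pop("toolName")
        part_dict["type"] = f"tool-{tool_name}"
        for field in ("input", "output"):
            if isinstance(part_dict.get(field), str):
                with suppress(json.JSONDecodeError, TypeError):
                    part_dict[field] = json.loads(part_dict[field])
    return part_dict


# Sample part mirroring the dumped message at the top of this issue.
part = {
    "type": "dynamic-tool",
    "toolName": "search_files",
    "state": "output-available",
    "input": '{"name_query": "test"}',
    "output": '{"total_found": 2}',
}
fixed = normalize_tool_part_dict(part)
print(fixed["type"])   # tool-search_files
print(fixed["input"])  # {'name_query': 'test'}
```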
Here's a minimal test file as well if that helps:

```python
import asyncio
import json
from typing import Any

from dotenv import load_dotenv

load_dotenv()

from pydantic_ai import Agent
from pydantic_ai.ui.vercel_ai import VercelAIAdapter
from pydantic_ai.ui.vercel_ai.request_types import SubmitMessage, TextUIPart, UIMessage


def search_files(name_query: str, limit: int = 10) -> dict[str, Any]:
    """Search for files by name."""
    return {
        "results": [
            {"id": 1, "name": f"{name_query}_file.txt", "size": 1024},
            {"id": 2, "name": f"{name_query}_doc.pdf", "size": 2048},
        ],
        "total_found": 2,
    }


agent = Agent(
    "gpt-5",
    tools=[search_files],
    system_prompt="You are a helpful assistant that searches for files.",
)


async def main():
    request = SubmitMessage(
        trigger="submit-message",
        id="test-1",
        messages=[
            UIMessage(
                id="msg-1",
                role="user",
                parts=[TextUIPart(text="Find files with name 'test'")],
            )
        ],
    )
    adapter = VercelAIAdapter(agent=agent, run_input=request)
    print("=" * 80)
    print("STREAMED CHUNKS (as sent to frontend):")
    print("=" * 80)
    async for chunk in adapter.run_stream():
        chunk_dict = chunk.model_dump(by_alias=True, mode="json")
        print(json.dumps(chunk_dict, indent=2))
        print("-" * 40)
    print("\n" + "=" * 80)
    print("MESSAGES FROM dump_messages() (used for saving):")
    print("=" * 80)
    result = await agent.run(request.messages[-1].parts[0].text)  # type: ignore
    formatted_messages = VercelAIAdapter.dump_messages(result.new_messages())
    for message in formatted_messages:
        msg_dict = message.model_dump(by_alias=True, mode="json")
        print(json.dumps(msg_dict, indent=2))
        print("-" * 40)


if __name__ == "__main__":
    asyncio.run(main())
```

Additional Context
Pydantic AI Version: 1.53.0
Python Version: 3.11.4
Tested using GPT-5. In the test script I loaded my API key from a `.env` file.