Initial Checks
- I confirm that I'm using the latest version of Pydantic AI
- I confirm that I searched for my issue in https://github.com/pydantic/pydantic-ai/issues before opening this issue
Description
Message history isn't loaded correctly when a plain Python list/dict (e.g. the output of `json.loads`) is passed as `message_history` instead of a list of `ModelMessage` objects.
Example Code
from pydantic_ai import Agent
import json
agent = Agent("gemini-2.5-flash", system_prompt="Be a helpful assistant.")
result1 = agent.run_sync("Tell me a joke.")
print(result1.output)
# > Why don't scientists trust atoms? Because they make up everything!
# simulate saving the history to a database and loading it back out
msghistory = json.loads(result1.new_messages_json())
result2 = agent.run_sync("Explain?", message_history=msghistory)
print(result2.output)
# > I'd be happy to explain! But I need a little more context.
# >
# > What specifically would you like me to explain? For example:
# > * A concept or idea?
# > * A term or word?
# > * A process or how something works?
# > * Something I've said previously?
# > * An event or phenomenon?
# > * A piece of text, an image, or a situation you're looking at?
print(result2.all_messages())
# > [
# > ModelRequest(
# > parts=[
# > SystemPromptPart(
# > content="Be a helpful assistant.",
# > timestamp=datetime.datetime(
# > 2025, 9, 30, 5, 31, 16, 96163, tzinfo=datetime.timezone.utc
# > ),
# > ),
# > UserPromptPart(
# > content="Explain?",
# > timestamp=datetime.datetime(
# > 2025, 9, 30, 5, 31, 16, 96163, tzinfo=datetime.timezone.utc
# > ),
# > ),
# > ]
# > ),
# > ModelResponse(
# > parts=[
# > TextPart(
# > content="I'd be happy to explain! But I need a little more context.\n\nWhat specifically would you like me to explain? For example:\n* A concept or idea?\n* A term or word?\n* A process or how something works?\n* Something I've said previously?\n* An event or phenomenon?\n* A piece of text, an image, or a situation you're looking at?\n\nOnce you tell me what you're curious about, I'll do my best to provide a clear explanation!"
# > )
# > ],
# > usage=RequestUsage(
# > input_tokens=9,
# > output_tokens=819,
# > details={"thoughts_tokens": 702, "text_prompt_tokens": 9},
# > ),
# > model_name="gemini-2.5-flash",
# > timestamp=datetime.datetime(
# > 2025, 9, 30, 5, 31, 24, 921629, tzinfo=datetime.timezone.utc
# > ),
# > provider_name="google-gla",
# > provider_details={"finish_reason": "STOP"},
# > provider_response_id="LWvbaL0ygY3T6Q_K9fHQDQ",
# > finish_reason="stop",
# > ),
# > ]
Python, Pydantic AI & LLM client version
Python 3.12.9, Pydantic AI 1.0.10; LLM not relevant (gpt-4 and gemini-2.5-flash both exhibit the same problem)