
agent.run_stream tracing missing final_result (pydantic_ai.final_result) — present in agent.run #3161

@dutchfarao

Initial Checks

Description

When using the streaming API (agent.run_stream), the trace output is missing the final_result key/value that appears when running agent.run(). I expected final_result to be present in the same way once the streamed generation completes. Changing delta=True/False in stream_text did not change the behavior.

What I tried:

Switched stream_text(delta=True) to stream_text(delta=False) (my initial suspicion from docs) — no change.
Compared the full trace/annotation output of agent.run() vs agent.run_stream().
Observed trace outputs (formatted):

agent.run() trace (expected; includes final_result):

{
  "input": { "value": "Hi how are you?" },
  "gen_ai": {
    "agent": { "name": "agent" },
    "usage": { "input_tokens": 403, "output_tokens": 23 }
  },
  "output": { "value": "I'm doing well, thanks — how can I help you today?" },
  "logfire": {
    "msg": "agent run",
    "json_schema": "{\"type\": \"object\", \"properties\": {\"pydantic_ai.all_messages\": {\"type\": \"array\"}, \"final_result\": {\"type\": \"object\"}}}"
  },
  "agent_name": "agent",
  "model_name": "model-router",
  "pydantic_ai": {
    "all_messages": "[{\"role\": \"system\", \"parts\": [{\"type\": \"text\", \"content\": \"You are a helpful assistant.\"}]}, {\"role\": \"user\", \"parts\": [{\"type\": \"text\", \"content\": \"Hi how are you?\"}]}, {\"role\": \"assistant\", \"parts\": [{\"type\": \"text\", \"content\": \"I'm doing well, thanks \\u2014 how can I help you today?\"}], \"finish_reason\": \"stop\"}]"
  },
  "final_result": "I'm doing well, thanks — how can I help you today?",
  "openinference": { "span": { "kind": "AGENT" } }
}

agent.run_stream() trace (actual; final_result missing):

{
  "input": { "value": "Hi how are you?" },
  "gen_ai": {
    "agent": { "name": "agent" },
    "usage": { "input_tokens": 403, "output_tokens": 23 }
  },
  "logfire": {
    "msg": "agent run",
    "json_schema": "{\"type\": \"object\", \"properties\": {\"pydantic_ai.all_messages\": {\"type\": \"array\"}, \"final_result\": {\"type\": \"object\"}}}"
  },
  "agent_name": "agent",
  "model_name": "model-router",
  "pydantic_ai": {
    "all_messages": "[{\"role\": \"system\", \"parts\": [{\"type\": \"text\", \"content\": \"You are a helpful assistant.\"}]}, {\"role\": \"user\", \"parts\": [{\"type\": \"text\", \"content\": \"Hi how are you?\"}]}, {\"role\": \"assistant\", \"parts\": [{\"type\": \"text\", \"content\": \"I'm doing well, thanks \\u2014 how can I help you today?\"}], \"finish_reason\": \"stop\"}]"
  },
  // "final_result" missing here
  "openinference": { "span": { "kind": "AGENT" } }
}

Is this a bug or intended behavior for streaming? If intended, what is the recommended way to add final_result to the tracing for streamed runs? (I could append final_result manually to the pydantic_ai metadata, but I'd prefer a supported approach.)
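For context, the manual fallback mentioned above could look roughly like this. This is only a sketch, not a supported pydantic-ai API: trace_metadata and append_final_result are hypothetical names, and the dict shape is copied from the run_stream() trace dump shown earlier.

```python
import json

# Assumed shape, taken from the run_stream() trace dump above (hypothetical).
trace_metadata = {
    "pydantic_ai": {
        "all_messages": json.dumps([
            {"role": "user", "parts": [{"type": "text", "content": "Hi how are you?"}]},
        ]),
    },
}

def append_final_result(metadata: dict, final_text: str) -> dict:
    """Manually add the final_result key that agent.run() would have set."""
    metadata["final_result"] = final_text
    return metadata

append_final_result(trace_metadata, "I'm doing well, thanks \u2014 how can I help you today?")
print(trace_metadata["final_result"])
```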

Repro / minimal example:

Start an agent and run a streaming generation as below.
Collect chunks via result.stream_text(delta=True) and create ChoiceDelta objects while streaming.

Example Code

role_sent = False  # whether the assistant role has been sent on a delta yet

async with (
    self.agent,
    self.agent.run_stream(
        message_history=messages,
    ) as result,
):
    async for chunk in result.stream_text(delta=True):
        delta = ChoiceDelta(
            role="assistant" if not role_sent else None,
            content=chunk,
        )
        role_sent = True
        # ... (append/send chunk as you normally would)
# No explicit final_result is appended to the trace after the stream completes
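One way to restore the value yourself is to accumulate the deltas and record the joined text once the stream finishes. A minimal self-contained sketch, where fake_stream is an illustrative stand-in for result.stream_text(delta=True) and trace_attrs stands in for the span attributes the tracer records:

```python
import asyncio

# Illustrative stand-in for result.stream_text(delta=True): yields text deltas.
async def fake_stream():
    for delta in ["I'm doing well, ", "thanks - ", "how can I help you today?"]:
        yield delta

async def main() -> dict:
    trace_attrs = {}  # stand-in for the span attributes the tracer records
    chunks = []
    async for chunk in fake_stream():
        chunks.append(chunk)  # forward each delta to the client as usual
    # After the stream completes, record the joined text manually,
    # mirroring the "final_result" attribute that agent.run() emits.
    trace_attrs["final_result"] = "".join(chunks)
    return trace_attrs

attrs = asyncio.run(main())
print(attrs["final_result"])
```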

Python, Pydantic AI & LLM client version

Python 3.12.9
pydantic-ai==1.0.18
openai==2.3.0 

Labels

bug (Something isn't working)