Raising ModelRetry in output validator function crashes when streaming #3393

@petersli

Description

When using stream_output, raising ModelRetry in my output validator function does not lead to another model request (as it does when I raise ModelRetry in a function tool); instead, the exception goes unhandled and the run crashes.

Based on the docs and docstrings, I would not expect this to crash.

The same pattern works correctly with run_sync, which is the case exercised in tests/test_agent.py and in the docs.
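
For contrast, here is a minimal non-streaming sketch of the run_sync case that retries as expected. The call function (model_func) and the Foo model here are illustrative stand-ins for the fixtures used in tests/test_agent.py, not the actual test code:

from pydantic import BaseModel

from pydantic_ai import Agent, ModelRetry, RunContext
from pydantic_ai.messages import ModelMessage, ModelResponse, ToolCallPart
from pydantic_ai.models.function import AgentInfo, FunctionModel


class Foo(BaseModel):
    a: int
    b: str


def model_func(messages: list[ModelMessage], info: AgentInfo) -> ModelResponse:
    assert info.output_tools is not None
    # First attempt returns a=41; the retry returns a=42.
    a = 41 if len(messages) == 1 else 42
    return ModelResponse(parts=[ToolCallPart(info.output_tools[0].name, {'a': a, 'b': 'foo'})])


agent = Agent(FunctionModel(model_func), output_type=Foo)


@agent.output_validator
def validate_output(ctx: RunContext[None], output: Foo) -> Foo:
    if output.a != 42:
        # With run_sync this triggers another model request instead of crashing.
        raise ModelRetry('"a" should be 42')
    return output


result = agent.run_sync('Hello')
assert result.output == Foo(a=42, b='foo')  # retry handled, no crash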

Example Code

# add this test to tests/test_agent.py (uses Agent, ModelRetry, RunContext,
# FunctionModel, AgentInfo, DeltaToolCall, DeltaToolCalls, ModelMessage,
# RetryPromptPart, and the Foo model; import any names not already in that file)
async def test_output_validator_stream_output():
    """Test that ModelRetry in output validators works correctly with streaming."""

    async def stream_model(messages: list[ModelMessage], info: AgentInfo) -> AsyncIterator[DeltaToolCalls]:
        assert info.output_tools is not None
        if len(messages) == 1:
            # First attempt: return wrong value (a=41)
            yield {0: DeltaToolCall(name=info.output_tools[0].name, json_args='{"a": 41')}
            yield {0: DeltaToolCall(json_args=', "b": "f')}
            yield {0: DeltaToolCall(json_args='oo"}')}
        else:
            # Retry: return correct value (a=42)
            yield {0: DeltaToolCall(name=info.output_tools[0].name, json_args='{"a": 42')}
            yield {0: DeltaToolCall(json_args=', "b": "f')}
            yield {0: DeltaToolCall(json_args='oo"}')}

    agent = Agent(FunctionModel(stream_function=stream_model), output_type=Foo)

    @agent.output_validator
    def validate_output(ctx: RunContext[None], output: Foo) -> Foo:
        assert ctx.tool_name == 'final_result'
        # Only validate on final output, not partial streaming outputs
        if not ctx.partial_output:
            if output.a == 42:
                return output
            else:
                raise ModelRetry('"a" should be 42')
        return output

    async with agent.run_stream('Hello') as result:
        outputs = [output async for output in result.stream_output(debounce_by=None)]

    # Verify final output is correct after retry
    assert outputs[-1] == Foo(a=42, b='foo')
    assert result.output == Foo(a=42, b='foo')

    # Verify retry happened by checking message history
    messages = result.all_messages()
    assert len(messages) == 5  # request, response (a=41), retry prompt, response (a=42), final result tool return
    assert isinstance(messages[2].parts[0], RetryPromptPart)
    assert messages[2].parts[0].content == '"a" should be 42'
    assert messages[2].parts[0].tool_name == 'final_result'

Python, Pydantic AI & LLM client version

Python 3.10
pydantic-ai 1.11.0
