
Support custom output validation failure handler as alternative to sending error to model #3410

@davidbernat

Description


Question

The use case I will present is the smallest possible specific example of a broad category of general use cases.

As such, my broader question asks which PydanticAI design features of the Agent and BaseModel classes were written with this kind of solution in mind, so that I can conform to the design patterns of the PydanticAI developers; i.e., "the most PydanticAI way to solve this".

Here is a small specific example: suppose I want to create a simple agent that responds with a list of strings. For the sake of this discussion, the list of strings is generated from the following prompt: "Create several short greetings which I can use when answering a phone from an unknown number. Format your response as a JSON of List of Strings. Provide no other information." Now, for the purposes of this question, please do not get bogged down in optimizing the prompt engineering, etc. We are using this to discuss a broad category of annoying LLM failure modes in large-scale pipelines of enterprise application systems.

To begin, we would presumably use BaseModel to construct an output_type to pass as a parameter to Agent:

from pydantic import BaseModel
from pydantic_ai import Agent

class ListOfGreetings(BaseModel):
    greetings: list[str]

agent = Agent(llm, output_type=ListOfGreetings)  # llm: whichever model you are using
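
For reference, and continuing from the snippet above, running the agent and reading the validated output might look like this (a minimal sketch, assuming the newer PydanticAI API where the structured result is exposed as result.output; older releases used result.data):

result = agent.run_sync(
    "Create several short greetings which I can use when answering a phone "
    "from an unknown number. Format your response as a JSON of List of Strings. "
    "Provide no other information."
)
print(result.output.greetings)  # a validated list[str], when the model cooperates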

To first order, a very common failure pattern is that the LLM does not respond with strictly formatted JSON, or the response cannot be parsed, and the ListOfGreetings validation will fail. A typical example of a response which answers the semantic request but fails the ListOfGreetings validation would be a text string:

Okay, here are a few options for a short, warm, sophisticated, and subtly engaging greeting: 
* Pleasure to answer your call. To whom am I speaking?
* Is there a way we can answer this call within under five minutes? 
* Hello. This is [insert name]. To whom do I have the pleasure of speaking?
These greetings could be improved were you to provide me more information about your personality.
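
Concretely, that prose fails validation before any semantics are considered. A minimal check, assuming Pydantic v2 and abbreviating the response text:

from pydantic import BaseModel, ValidationError

class ListOfGreetings(BaseModel):
    greetings: list[str]

response_text = "Okay, here are a few options for a short, warm, sophisticated, and subtly engaging greeting: ..."

try:
    ListOfGreetings.model_validate_json(response_text)
except ValidationError as exc:
    print(exc)  # invalid JSON: the prose cannot even be parsed, let alone match the schema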

This gets to the core of my question. IF the ListOfGreetings validation fails, we COULD pass the response through an additional LLM layer to parse the string: an additional custom AI-enabled Validator/Extractor. For instance, the next prompt, with the verbatim string response from above appended, returns with very high probability the same response data reformatted as JSON that passes ListOfGreetings validation, using an LLM model which may or may not be the same as the Agent's.

The paragraph of text below contains a bullet point list of sentences. Extract the sentences as a JSON list. Provide no other response.\n

In essence, this is simply a handler which is invoked IF the ListOfGreetings validation fails, runs and modifies the Agent's data, and then runs the ListOfGreetings validation again. Strictly speaking, this is not a Before/After Validator.
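
To make the shape of that handler concrete, here is a minimal sketch of the manual version. It assumes the first pass is run with plain-text output so the verbatim response is still available on failure, uses "openai:gpt-4o" purely as a placeholder model name, and reads results via the newer .output attribute:

from pydantic import BaseModel, ValidationError
from pydantic_ai import Agent

class ListOfGreetings(BaseModel):
    greetings: list[str]

# First pass: plain text, so the raw response survives a formatting failure.
drafting_agent = Agent("openai:gpt-4o")

# Second pass: a dedicated extractor whose only job is reformatting.
extractor_agent = Agent("openai:gpt-4o", output_type=ListOfGreetings)

REFORMAT_PROMPT = (
    "The paragraph of text below contains a bullet point list of sentences. "
    "Extract the sentences as a JSON list of strings under the key 'greetings'. "
    "Provide no other response.\n\n"
)

def greetings_with_fallback(prompt: str) -> ListOfGreetings:
    raw = drafting_agent.run_sync(prompt).output
    try:
        # Happy path: the model already returned strict JSON matching the schema.
        return ListOfGreetings.model_validate_json(raw)
    except ValidationError:
        # Fallback handler: reformat the verbatim response with a second call,
        # letting the extractor agent's own output_type validation run again.
        return extractor_agent.run_sync(REFORMAT_PROMPT + raw).output

The question is whether this wrapping should live outside the Agent, as above, or whether PydanticAI has a designed hook for it.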

It is, I presume, a very simple and straightforward solution, but one for which PydanticAI has a specific design pattern.

What is that?
And, if applicable, what open source projects are leading the way on this endeavor? I would like to participate.

Additional Context

No response
