
Content Filtering Exception Handling #1035

@mscherrmann

Description


I noticed two issues with the Content Filtering behavior of Pydantic AI when used with Azure OpenAI:

  1. Inconsistent error handling for content filtering: Azure OpenAI applies two different types of content filters, one on the input prompt and one on the generated completion.

    (Reference: Azure OpenAI Content Filter documentation)

    These surface as two different errors in Pydantic AI (see the handling sketch after this list):

    • For prompt filtering: ModelHTTPError
    • For completion filtering: UnexpectedModelBehavior. This error does not even expose which filter type was triggered.
  2. Lack of model-agnostic content filter handling: Providers signal content filtering in different ways; Vertex AI, for example, reports filtered content differently from Azure OpenAI, so the current content filter handling in Pydantic AI is not model-agnostic.
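
For illustration, a minimal sketch of the handling code this currently forces on users. The model string is illustrative only; in practice the agent would be configured against Azure OpenAI.

```python
from pydantic_ai import Agent
from pydantic_ai.exceptions import ModelHTTPError, UnexpectedModelBehavior

agent = Agent('openai:gpt-4o')  # illustrative; assume an Azure OpenAI setup


def run_safely(prompt: str) -> str | None:
    try:
        result = agent.run_sync(prompt)
    except ModelHTTPError:
        # Prompt filtering: Azure rejects the request, which surfaces as an HTTP error.
        return None
    except UnexpectedModelBehavior:
        # Completion filtering: the generation is blocked; the triggered
        # filter category is not exposed here at all.
        return None
    return result.output  # `.data` on older pydantic-ai versions
```

Two unrelated exception types have to be caught for what is conceptually the same event, and neither carries the filter category.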

Proposed Solution

  1. Implement a dedicated content filter exception that can be used to handle only these specific cases (a sketch of such a hierarchy follows this list). This exception should:

    • Contain filter type information (hate, sexual, violence, and self-harm) to help users identify problematic and sensitive information in their prompts
    • Include child exceptions (prompt filter and completion filter) to help users handle these cases separately if needed
    • Provide a consistent interface for handling content filtering errors
    • Where relevant, ensure that no private user information contained in the prompt or the generation is leaked through the exception
  2. Ensure that Pydantic AI's behavior on content filters is consistent across different model providers, making the library truly model-agnostic.
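
A minimal sketch of what such a hierarchy could look like. ContentFilterError, PromptFilterError, CompletionFilterError and `filter_types` are hypothetical names, not existing Pydantic AI API.

```python
class ContentFilterError(Exception):
    """Raised when a provider's content filter blocks a prompt or completion."""

    def __init__(self, filter_types: list[str]):
        # Categories such as 'hate', 'sexual', 'violence', 'self_harm'.
        # The message intentionally contains no prompt or completion text,
        # so no private user information is leaked.
        super().__init__(f'content filtered: {", ".join(filter_types)}')
        self.filter_types = filter_types


class PromptFilterError(ContentFilterError):
    """The provider rejected the input prompt."""


class CompletionFilterError(ContentFilterError):
    """The provider blocked the generated completion."""
```

Callers could then catch ContentFilterError to cover both cases, or the child classes to distinguish prompt filtering from completion filtering, regardless of the underlying provider.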

These improvements would allow users to better handle content filtering scenarios, potentially by altering prompts or model inputs when content filtering is triggered.

