---
title: Pydantic AI
description: "Learn about using Sentry for Pydantic AI."
---

<Alert title="Beta">

Support for **Pydantic AI** is in beta. Please test locally before using it in production.

</Alert>

This integration connects Sentry with the [Pydantic AI](https://ai.pydantic.dev/) library.
The integration has been confirmed to work with Pydantic AI version 1.0.0+.

Once you've installed this integration, you can use [Sentry AI Agents Insights](https://sentry.io/orgredirect/organizations/:orgslug/insights/ai/agents/), a Sentry dashboard that helps you understand what's going on with your AI agents.

Sentry AI Agents monitoring will automatically collect information about agents, tools, prompts, tokens, and models.

## Install

Install `sentry-sdk` from PyPI:

```bash {tabTitle:pip}
pip install "sentry-sdk"
```

```bash {tabTitle:uv}
uv add "sentry-sdk"
```

## Configure

Add `PydanticAIIntegration()` to your `integrations` list:

```python {tabTitle:OpenAI}
import sentry_sdk
from sentry_sdk.integrations.pydantic_ai import PydanticAIIntegration
from sentry_sdk.integrations.openai import OpenAIIntegration

sentry_sdk.init(
    dsn="___PUBLIC_DSN___",
    traces_sample_rate=1.0,
    # Add data like LLM and tool inputs/outputs;
    # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
    send_default_pii=True,
    integrations=[
        PydanticAIIntegration(),
    ],
    # Disable the OpenAI integration to avoid double reporting of chat spans
    disabled_integrations=[OpenAIIntegration()],
)
```

```python {tabTitle:Anthropic}
import sentry_sdk
from sentry_sdk.integrations.pydantic_ai import PydanticAIIntegration

sentry_sdk.init(
    dsn="___PUBLIC_DSN___",
    traces_sample_rate=1.0,
    # Add data like LLM and tool inputs/outputs;
    # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
    send_default_pii=True,
    integrations=[
        PydanticAIIntegration(),
    ],
)
```

<Alert level="warning">

When using Pydantic AI with OpenAI models, you must disable the OpenAI integration to avoid double reporting of chat spans. Add `disabled_integrations=[OpenAIIntegration()]` to your `sentry_sdk.init()` call as shown in the OpenAI tab above.

</Alert>

## Verify

Verify that the integration works by running an AI agent. The resulting data should show up in your AI Agents Insights dashboard. In this example, we're creating a customer support agent that analyzes customer inquiries and can optionally look up order information using a tool.

```python {tabTitle:OpenAI}
import asyncio

import sentry_sdk
from sentry_sdk.integrations.pydantic_ai import PydanticAIIntegration
from sentry_sdk.integrations.openai import OpenAIIntegration
from pydantic_ai import Agent, RunContext
from pydantic import BaseModel

class SupportResponse(BaseModel):
    message: str
    sentiment: str
    requires_escalation: bool

support_agent = Agent(
    "openai:gpt-4o-mini",
    name="Customer Support Agent",
    system_prompt=(
        "You are a helpful customer support agent. Analyze customer inquiries, "
        "provide helpful responses, and determine if escalation is needed. "
        "If the customer mentions an order number, use the lookup tool to get details."
    ),
    output_type=SupportResponse,
)

@support_agent.tool
async def lookup_order(ctx: RunContext[None], order_id: str) -> dict:
    """Look up order details by order ID.

    Args:
        ctx: The context object.
        order_id: The order identifier.

    Returns:
        Order details including status and tracking.
    """
    # In a real application, this would query a database
    return {
        "order_id": order_id,
        "status": "shipped",
        "tracking_number": "1Z999AA10123456784",
        "estimated_delivery": "2024-03-15",
    }

async def main() -> None:
    sentry_sdk.init(
        dsn="___PUBLIC_DSN___",
        traces_sample_rate=1.0,
        # Add data like LLM and tool inputs/outputs;
        # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
        send_default_pii=True,
        integrations=[
            PydanticAIIntegration(),
        ],
        # Disable the OpenAI integration to avoid double reporting of chat spans
        disabled_integrations=[OpenAIIntegration()],
    )

    result = await support_agent.run(
        "Hi, I'm wondering about my order #ORD-12345. When will it arrive?"
    )
    print(result.output)

if __name__ == "__main__":
    asyncio.run(main())
```

```python {tabTitle:Anthropic}
import asyncio

import sentry_sdk
from sentry_sdk.integrations.pydantic_ai import PydanticAIIntegration
from pydantic_ai import Agent, RunContext
from pydantic import BaseModel

class SupportResponse(BaseModel):
    message: str
    sentiment: str
    requires_escalation: bool

support_agent = Agent(
    "anthropic:claude-3-5-sonnet-latest",
    name="Customer Support Agent",
    system_prompt=(
        "You are a helpful customer support agent. Analyze customer inquiries, "
        "provide helpful responses, and determine if escalation is needed. "
        "If the customer mentions an order number, use the lookup tool to get details."
    ),
    output_type=SupportResponse,
)

@support_agent.tool
async def lookup_order(ctx: RunContext[None], order_id: str) -> dict:
    """Look up order details by order ID.

    Args:
        ctx: The context object.
        order_id: The order identifier.

    Returns:
        Order details including status and tracking.
    """
    # In a real application, this would query a database
    return {
        "order_id": order_id,
        "status": "shipped",
        "tracking_number": "1Z999AA10123456784",
        "estimated_delivery": "2024-03-15",
    }

async def main() -> None:
    sentry_sdk.init(
        dsn="___PUBLIC_DSN___",
        traces_sample_rate=1.0,
        # Add data like LLM and tool inputs/outputs;
        # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
        send_default_pii=True,
        integrations=[
            PydanticAIIntegration(),
        ],
    )

    result = await support_agent.run(
        "Hi, I'm wondering about my order #ORD-12345. When will it arrive?"
    )
    print(result.output)

if __name__ == "__main__":
    asyncio.run(main())
```

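If you're not running in an async context, Pydantic AI also offers a synchronous entry point. The following is a minimal sketch, assuming the `support_agent` and `sentry_sdk.init()` call from the examples above; `run_sync` drives the same agent run under the hood, so the integration should capture the same spans:

```python
# Synchronous variant of the Verify example; assumes `support_agent` and
# the `sentry_sdk.init(...)` call from the snippets above are in place.
result = support_agent.run_sync(
    "Hi, I'm wondering about my order #ORD-12345. When will it arrive?"
)
print(result.output)
```
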
It may take a couple of moments for the data to appear in [sentry.io](https://sentry.io).

## Behavior

Data on the following will be collected:

- AI agent invocations
- execution of tools
- number of input and output tokens used
- LLM model usage
- model settings (temperature, max_tokens, etc.; see the sketch after this list)

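The model settings that get collected are the ones you configure on your agent or run. Below is a minimal sketch (reusing the agent setup from the Verify section) that passes settings via Pydantic AI's `ModelSettings`; exactly which values end up as span attributes may vary by model and provider:

```python
from pydantic_ai import Agent
from pydantic_ai.settings import ModelSettings

# Minimal sketch: settings configured here are what the integration can
# report for this agent's model requests (exact span attributes may vary).
agent = Agent(
    "openai:gpt-4o-mini",
    name="Customer Support Agent",
    model_settings=ModelSettings(temperature=0.2, max_tokens=512),
)
```
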
Sentry considers LLM and tool inputs/outputs as PII and doesn't include PII data by default. If you want to include the data, set `send_default_pii=True` in the `sentry_sdk.init()` call. To explicitly exclude prompts and outputs despite `send_default_pii=True`, configure the integration with `include_prompts=False` as shown in the Options section below.

## Options

By explicitly adding `PydanticAIIntegration` to your `sentry_sdk.init()` call, you can set options for `PydanticAIIntegration` to change its behavior:

```python
import sentry_sdk
from sentry_sdk.integrations.pydantic_ai import PydanticAIIntegration

sentry_sdk.init(
    # ...
    # Add data like inputs and responses;
    # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
    send_default_pii=True,
    integrations=[
        PydanticAIIntegration(
            include_prompts=False,  # LLM and tool inputs/outputs will not be sent to Sentry, despite send_default_pii=True
        ),
    ],
)
```

You can pass the following keyword arguments to `PydanticAIIntegration()`:

- `include_prompts`:

  Whether LLM and tool inputs and outputs should be sent to Sentry. Sentry considers this data personally identifiable information (PII) by default. If you want to include the data, set `send_default_pii=True` in the `sentry_sdk.init()` call. To explicitly exclude prompts and outputs despite `send_default_pii=True`, configure the integration with `include_prompts=False`.

  The default is `True`.

## Supported Versions

- Pydantic AI: 1.0.0+
- Python: 3.9+