Description
When running Pydantic AI on the backend with Vercel AI on the frontend (though this is not specific to that combination), the frontend can interrupt the request stream with a stop method. When this happens the request stream is aborted with an asyncio.CancelledError, and even when the run is wrapped with capture_run_messages, the captured messages are empty. Messages apparently only get captured when the run fails with a model error raised by Pydantic AI itself.
This leaves our app in a weird state: from the client's perspective it has already received delta streams and is showing tool calls and text to the user, but we haven't persisted anything to the database yet. Since we're running run_stream_events, we could build up the incomplete results ourselves and persist them (see the sketch below), but I imagine the package already has code to manage the partial output.
It would be useful to handle more exceptions gracefully, so that everything streamed from the provider prior to the error can still be managed.
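
For context, here is a minimal sketch of the workaround described above, assuming run_stream_events yields events as an async iterator; persist_partial_run and the model name are placeholders:

```python
import asyncio

from pydantic_ai import Agent, capture_run_messages

agent = Agent("openai:gpt-4o")  # model name is illustrative


async def persist_partial_run(events: list) -> None:
    """Hypothetical helper: write whatever was streamed so far to the database."""
    ...


async def handle_request(prompt: str) -> None:
    # Accumulate streamed events ourselves so a client-side stop still leaves
    # something to persist, since capture_run_messages comes back empty on
    # cancellation.
    events: list = []
    with capture_run_messages() as messages:
        try:
            async for event in agent.run_stream_events(prompt):
                events.append(event)  # also forward the event to the frontend here
        except asyncio.CancelledError:
            # The frontend's stop() cancels this task; `messages` is empty at
            # this point, so the locally collected `events` are the only record
            # of the partial output.
            await persist_partial_run(events)
            raise
```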
Slack thread: https://pydanticlogfire.slack.com/archives/C083V7PMHHA/p1760989459555679
References
No response