Description
Question
I'm looking for confirmation that Pydantic AI supports OpenAI's default Prompt Caching behavior.
I have run some tests using OpenAI's API directly, and I can see cached tokens being used, which reduces the cost for my application. Performing the exact same action through OpenAIResponsesModel does not hit the prompt cache, even when I try to pass prompt_cache_key as an extra_body parameter (see the sketch below).
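For reference, here is a minimal sketch of roughly what I compared; the model name, prompt, and prompt_cache_key value are just from my test setup, not from any documented example:

```python
from openai import OpenAI

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIResponsesModel

# A long, repeated prefix -- OpenAI only caches prompts above ~1024 tokens.
LONG_PROMPT = ('You are a support assistant for ACME Corp. ' * 200) + 'Summarize our return policy.'

client = OpenAI()

# 1) Direct Responses API call: on the second identical request I see
#    cached tokens reported under usage.input_tokens_details.cached_tokens.
for _ in range(2):
    resp = client.responses.create(
        model='gpt-4o',  # model from my tests
        input=LONG_PROMPT,
        prompt_cache_key='my-app',  # optional routing hint for the cache
    )
    print('cached tokens:', resp.usage.input_tokens_details.cached_tokens)

# 2) The equivalent call through Pydantic AI, attempting to forward
#    prompt_cache_key via extra_body -- in my runs cached tokens stay at 0.
agent = Agent(OpenAIResponsesModel('gpt-4o'))
for _ in range(2):
    result = agent.run_sync(
        LONG_PROMPT,
        model_settings={'extra_body': {'prompt_cache_key': 'my-app'}},
    )
    print(result.usage())
```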
I reviewed the documentation and also checked current pull requests and issues, and couldn't find any mention of this being supported.
Could you help me understand whether Pydantic AI supports OpenAI's Prompt Caching?
Additional Context
No response