Add promptCacheKey for any provider with npm @ai-sdk/openai #4413
shantur wants to merge 1 commit into anomalyco:dev
Conversation
Yeah, this one should be fine, but I'm curious what your need for this is. @shantur, what provider are you using?

@rekram1-node - OpenAI Codex via CLIProxyAPI.

Having said that, this is needed for any provider that supports the OpenAI APIs.

@shantur Hmm, but this will still cause errors for certain people. I know people who have special proxies set up internally that use the openai provider but route everything through a proxy; it will error if you set this across the board. Also, a lot of providers handle caching automatically. OpenRouter, for example, doesn't need that key AFAIK (could be wrong).

By the way, you can set this using a plugin if you need to.
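The plugin route mentioned above could look roughly like this. This is a hedged sketch, not the PR's code: the `"chat.params"` hook name and the input/output shapes are assumptions here, so check the opencode plugin docs before relying on them.

```typescript
// Sketch of a plugin that adds promptCacheKey only for providers backed by
// the official @ai-sdk/openai package. Hook name and shapes are assumed.
type ChatParamsInput = {
  provider: { npm?: string } // npm package backing the provider (assumed field)
  sessionID: string
}
type ChatParamsOutput = {
  options: Record<string, unknown> // extra options forwarded to the model call
}

export const PromptCachePlugin = async () => ({
  "chat.params": async (input: ChatParamsInput, output: ChatParamsOutput) => {
    if (input.provider.npm === "@ai-sdk/openai") {
      // Reuse the session id as a stable cache key across turns.
      output.options = { ...output.options, promptCacheKey: input.sessionID }
    }
  },
})
```

Scoping the key to the session id mirrors what the PR wants: repeated turns in one conversation hit the same OpenAI prompt cache, while other providers are left untouched.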
Force-pushed 9c593ca to 78e74e7.
@rekram1-node - Per the official OpenAI and @ai-sdk/openai docs, this is the expected behaviour: https://ai-sdk.dev/providers/ai-sdk-providers/openai#responses-models. If people are using it for those setups, then it's an incorrect setup and they should be using https://ai-sdk.dev/providers/openai-compatible-providers with "@ai-sdk/openai-compatible" instead.
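Per the linked AI SDK docs, the key is passed per call through `providerOptions`. The object below shows that shape in isolation (built as a plain object so the example runs without an API key; the key value and session naming are illustrative, and in a real call it would be passed to `generateText` alongside `model` and `prompt`):

```typescript
// Per-call provider options as described in the AI SDK OpenAI provider docs
// for Responses API models. Scope the key per session/user: OpenAI uses it
// to route requests with the same key to the same prompt cache.
const providerOptions = {
  openai: {
    promptCacheKey: "chat-session-1234", // illustrative value
  },
}
```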
openai-compatible doesn't support the Responses API, so they can't.

What would be your suggestion for how this could be handled?

@shantur Maybe not the perfect solution, but this should work:

@rekram1-node

That's fair. Wanna add a setting then, and we can do that?

So Fireworks.ai is a provider that is supported by https://ai-sdk.dev/providers/ai-sdk-providers/fireworks, and it complains about this. I raised the original issue that called for the removal of that code in #4386, which fixed it. This change just adds it back in.
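The opt-in setting discussed above could gate the key roughly like this. A minimal sketch under stated assumptions: the `openaiPromptCacheKey` setting name and both type shapes are hypothetical, not opencode's actual config.

```typescript
// Hypothetical opt-in gate: promptCacheKey is attached only when the user
// enables the (assumed) setting AND the provider is the official
// @ai-sdk/openai package, so Fireworks-style providers and internal
// proxies stay untouched by default.
type Provider = { npm?: string }
type Config = { openaiPromptCacheKey?: boolean } // assumed setting name

export function providerOptionsFor(
  provider: Provider,
  config: Config,
  sessionID: string,
) {
  if (config.openaiPromptCacheKey === true && provider.npm === "@ai-sdk/openai") {
    return { openai: { promptCacheKey: sessionID } }
  }
  return {} // default: no cache key, preserving the #4386 fix
}
```

Defaulting to off keeps the behaviour that #4386 restored, while letting Codex-style setups opt in.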
Force-pushed f1dc981 to 3e15a39.
Force-pushed df8bdf9 to 0dd5039.
@rekram1-node - Closing this in favor of #4654.
v2 for openai-caching when npm uses @ai-sdk/openai
Includes the fix for #4386.