How can I enable context caching for Vertex AI models within the LiteLLM Router? #6878
Unanswered
AbhishekRP2002 asked this question in Q&A
Replies: 1 comment · 1 reply
-
See also #6898
-
Here is my model list:
I am using the LiteLLM Router from LangChain, as shown below:
I want to understand how I can enable context caching for the Gemini family of models in this setup.
Any help would be really appreciated.
cc: @krrishdholakia
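For reference, here is a minimal sketch of what this might look like. It assumes LiteLLM's documented Anthropic-style `cache_control` content-block format for Gemini/Vertex AI context caching; the model alias `gemini-pro`, the model id `vertex_ai/gemini-1.5-pro-002`, and the `vertex_project`/`vertex_location` values are placeholders, not taken from the original post:

```python
from litellm import Router

# Hypothetical model_list entry for a Vertex AI Gemini model.
router = Router(
    model_list=[
        {
            "model_name": "gemini-pro",  # alias used when calling the router
            "litellm_params": {
                "model": "vertex_ai/gemini-1.5-pro-002",  # placeholder model id
                "vertex_project": "my-gcp-project",       # placeholder project
                "vertex_location": "us-central1",         # placeholder location
            },
        }
    ]
)

# Context caching is requested per message: mark the large, reusable part of
# the prompt (e.g. a long system prompt or document) with `cache_control`.
response = router.completion(
    model="gemini-pro",
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "<very long, reusable context goes here>",
                    # ask Vertex AI to cache this content block
                    "cache_control": {"type": "ephemeral"},
                }
            ],
        },
        {"role": "user", "content": "Summarize the cached context."},
    ],
)
print(response.choices[0].message.content)
```

If this Router is wrapped in LangChain's `ChatLiteLLMRouter`, the same `Router` instance (with this `model_list`) should be passed to it, and the `cache_control` blocks would then go into the messages sent through LangChain. Note that Vertex AI only caches content above a minimum token count, so small prompts may not trigger caching.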