Replies: 2 comments
-
This environment variable combination works for me!
The "trick" was wrapping my head around the fact that LiteLLM standardizes everything to present as OpenAI...even if (as in this example) the true endpoint is Bedrock.
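The poster's actual configuration isn't shown, but as a hypothetical sketch of the pattern described (the model alias, region, port, and variable values below are illustrative assumptions, not their real setup), a LiteLLM proxy config can expose the Bedrock Titan embedding model behind an OpenAI-compatible endpoint:

```yaml
# litellm_config.yaml -- hypothetical sketch; the alias and region are placeholders
model_list:
  - model_name: titan-embed-v2                      # alias clients will request as an "OpenAI" model
    litellm_params:
      model: bedrock/amazon.titan-embed-text-v2:0   # LiteLLM's Bedrock provider prefix
      aws_region_name: us-east-1                    # assumed region; set to yours
```

A client (such as rag_api) would then be pointed at the LiteLLM proxy as if it were OpenAI, e.g. with `EMBEDDINGS_PROVIDER=openai`, `EMBEDDINGS_MODEL=titan-embed-v2`, and the OpenAI base URL set to the proxy's address (check rag_api's documentation for the exact variable names in your version).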
-
@gmerritt thanks for coming back to update this. Have you had a chance to look at https://docs.litellm.ai/docs/providers/bedrock_vector_store, at least for the retrieval aspects from LibreChat's perspective?
-
I would like to use the AWS Bedrock embedding model
amazon.titan-embed-text-v2:0
via LiteLLM, not directly via Bedrock. Is this configurable via rag_api & LibreChat settings, or would it require custom modification of the rag_api application?
Thanks!