Replies: 2 comments
-
I'd also like to know. @danny-avila is this a capability or something that would need to be added?
-
@rcdailey We were able to get this working using these params in the config
-
I'm using LiteLLM as a centralized hub for my third-party chat service configuration; at the moment I use it with Anthropic and OpenAI for chat-based models.
Sometimes I want to attach PDF files and ask GPT or Claude to explain them. However, this isn't working at the moment.
It looks like a feature called "embeddings" is used when you attach PDF files. I don't know a lot about them, but I tried to set things up the best I could. On the LiteLLM side, I added a model for embeddings:
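(The actual model entry wasn't captured in this export. For reference, a typical LiteLLM `config.yaml` entry for an embeddings model looks like the sketch below; the model name and key reference are illustrative assumptions, not the poster's actual values.)

```yaml
model_list:
  # Chat models would be listed here as well; this entry exposes an
  # OpenAI embeddings model through the LiteLLM proxy.
  - model_name: text-embedding-3-small
    litellm_params:
      model: openai/text-embedding-3-small
      api_key: os.environ/OPENAI_API_KEY
```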
And in LibreChat, I set these in `.env`:

And I have the following in my `librechat.yaml`:

At this point I ran `docker compose logs -f api rag_api`; the output is below. What am I doing wrong? How do I set this up correctly?
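(The poster's `.env` and `librechat.yaml` contents were not captured in this export. As a hedged sketch of the kind of settings involved: LibreChat's RAG API reads embeddings settings such as `EMBEDDINGS_PROVIDER`, `EMBEDDINGS_MODEL`, and `RAG_API_URL` from `.env`, and a LiteLLM proxy is typically wired in as a custom endpoint in `librechat.yaml`. All hostnames, model names, and keys below are illustrative assumptions, not the working values from this thread.)

```sh
# .env (sketch) — point LibreChat at the RAG API, and the RAG API's
# OpenAI-compatible embeddings calls at the LiteLLM proxy
RAG_API_URL=http://rag_api:8000
EMBEDDINGS_PROVIDER=openai
EMBEDDINGS_MODEL=text-embedding-3-small
RAG_OPENAI_BASEURL=http://litellm:4000
RAG_OPENAI_API_KEY=sk-anything
```

```yaml
# librechat.yaml (sketch) — LiteLLM as a custom OpenAI-compatible endpoint
endpoints:
  custom:
    - name: "LiteLLM"
      apiKey: "${LITELLM_API_KEY}"
      baseURL: "http://litellm:4000/v1"
      models:
        default: ["gpt-4o", "claude-3-5-sonnet"]
        fetch: true
```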