Local RAG does not work with custom endpoints #4495
Replies: 2 comments
-
It really depends on how well the model interprets the RAG prompt, as I don't see any errors in your logs.
-
Hi @danny-avila, your help would be much appreciated. The same PDF document is being uploaded using the Ollama nomic-embed-text embedding.
Here is the error: chat-meilisearch | 2024-11-13T01:15:22.532555Z WARN HTTP request{method=GET host="meilisearch:7700" route=/indexes/convos/documents/2812482a-6731-4481-a7d2-1b86364090ab query_parameters= user_agent=node status_code=404 error=Document
Here is the config:
endpoints:
Here is the full logging:
-
What happened?
I am currently using a local RAG setup for creating embeddings, and that process completes successfully. However, when I ask a question about a document, the custom endpoint LLM (irrespective of the model) responds that no document was supplied and that this is the first interaction. In the same message context, when I switch to OpenAI or Anthropic, the document context is sent to the LLM correctly.
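As a sanity check independent of LibreChat, the embedding model can be queried directly through Ollama's embeddings API. This is a minimal sketch assuming the OLLAMA_BASE_URL from the config below is reachable from wherever the check is run; the prompt text is arbitrary.

```shell
# Hypothetical check: request an embedding from Ollama with the same model
# the RAG API is configured to use. A JSON response containing an
# "embedding" array confirms the embedding provider itself is working.
curl -s http://10.69.240.50:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "test sentence"}'
```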
Steps to Reproduce
Configure LibreChat for local RAG with "nomic-embed-text"
Here is the .env config:
RAG_API_URL=http://host.docker.internal:8000
EMBEDDINGS_PROVIDER=ollama
OLLAMA_BASE_URL=http://10.69.240.50:11434
EMBEDDINGS_MODEL=nomic-embed-text
RAG_USE_FULL_CONTEXT=true
Configure a local LLM (any model) as a custom endpoint (see the librechat.yaml sketch after this list)
Test with PDF
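For comparison, a custom endpoint in librechat.yaml is declared under endpoints.custom. The block below is a minimal illustrative sketch, not the actual configuration from this report: the endpoint name, baseURL, and model names are placeholders, and it assumes the local server exposes an OpenAI-compatible API (as Ollama does under /v1).

```yaml
version: 1.1.4  # adjust to the config schema version your LibreChat install expects
endpoints:
  custom:
    - name: "Ollama"                  # illustrative endpoint name
      apiKey: "ollama"                # Ollama ignores the key, but a non-empty value is required
      baseURL: "http://10.69.240.50:11434/v1"  # OpenAI-compatible endpoint of the local server
      models:
        default: ["llama3"]           # placeholder model
        fetch: true                   # let LibreChat list models from the server
```

If the document context reaches OpenAI and Anthropic but not the custom endpoint, comparing the payload sent to each provider with debug logging enabled would confirm whether the RAG context is being included in the prompt at all.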
What browsers are you seeing the problem on?
No response
Relevant log output
Screenshots
Code of Conduct