[Question]: LLM on Ollama endpoints is not responding based on a document imported from the RAG API. #3920
Replies: 5 comments 1 reply
-
I have the exact same issue with Ollama and uploaded documents. It only works if I force-prompt, but that's not a solution.
-
I'm having the same issue. Is there any workaround or fix?
-
@parthpat12 Try changing the model first without modifying the prompt.
-
Same problem here :-( It seems impossible to get rid of OpenAI when using the RAG API, and impossible to use Ollama for embeddings. A Perplexity Deep Research run suggests this is a common issue; it appears to be hardcoded somewhere.
-
What is your question?
LLM on Ollama endpoints is not responding based on a document imported from the RAG API.
EMBEDDINGS_PROVIDER=ollama
EMBEDDINGS_MODEL=nomic-embed-text
MODEL=llama3.1:70b
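Before digging into the client, it may help to confirm the embedding model responds at all. A minimal sketch (not from the original report), assuming Ollama runs on its default host and port (http://localhost:11434), nomic-embed-text is already pulled, and Node 18+ with ESM for top-level await:

// Sanity-check the Ollama embeddings endpoint directly. This only verifies
// that the embedding model answers; it does not exercise the RAG API itself.
const res = await fetch('http://localhost:11434/api/embeddings', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'nomic-embed-text', prompt: 'test sentence' }),
});
const { embedding } = await res.json();
console.log('embedding length:', embedding.length); // nomic-embed-text should yield 768 dimensions

If this fails, the problem is between the RAG API and Ollama rather than in the chat client.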
I added logging in app\clients\OllamaClient.js and inspected the messages list. It contains two messages:
messages = [
  { role: 'system', content: 'content from rag api' },
  { role: 'user', content: 'user input' },
]
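One experiment worth trying, purely as a diagnostic: some models served through Ollama appear to weight system prompts weakly, so folding the retrieved context into the user turn can reveal whether the model is simply ignoring the system message. A minimal sketch; the helper name and message shape below are illustrative assumptions, not LibreChat's actual API:

// Hypothetical helper: merges the RAG system message into the user message
// so the document context arrives in the turn the model attends to most.
function mergeRagContextIntoUserTurn(messages) {
  const system = messages.find((m) => m.role === 'system');
  const user = messages.find((m) => m.role === 'user');
  if (!system || !user) return messages; // nothing to merge
  return [
    {
      role: 'user',
      content: `Use the following document context to answer.\n\n${system.content}\n\nQuestion: ${user.content}`,
    },
  ];
}

If the model answers correctly with the merged payload, the problem is likely in how the system message is passed to or handled by the model, not in the RAG retrieval itself.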
More Details
When I ask the model to tell me the contents of the attached document, the LLM either replies that it does not save or remember previous conversations, or it answers from its pre-trained knowledge instead of the document.
I'd appreciate it if you could give me some ideas on which parts to look at.
What is the main subject of your question?
Endpoints, User System/OAuth, Other