How to increase accuracy of QA model using Embedder, Retriever and LLM #6800
Unanswered
sagarpaliwal2000 asked this question in Q&A
There is a requirement to create a QA model capable of answering questions on the basis of given text files (there can be multiple).

We found a solution using LangChain: we first create embeddings of the text files and store them in a vector DB. Whenever a query is asked, we create embeddings of that query as well, and the retriever fetches the small part of the large document in which the answer may be present.

Now we have two things in hand: the context in which the answer lies, and the question. We feed both of these to an LLM to get the final answer.
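The flow described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the toy bag-of-words "embedder" stands in for a real embedding model, a plain Python list stands in for the vector DB, and all names and sample texts are hypothetical.

```python
# Sketch of the embed -> store -> retrieve -> prompt pipeline.
import math
import re
from collections import Counter

def embed(text):
    # Hypothetical stand-in for a real embedding model: bag-of-words counts.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def build_store(chunks):
    # "Vector DB": each chunk stored alongside its embedding.
    return [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(store, query, k=1):
    # Embed the query and return the k most similar chunks.
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

chunks = [
    "The invoice is due on the 15th of every month.",
    "Support tickets are answered within 24 hours.",
]
store = build_store(chunks)
question = "When is the invoice due?"
context = retrieve(store, question, k=1)[0]

# The retrieved context and the question are then combined into a prompt
# for the LLM (the actual LLM call is omitted here).
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```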
Our retriever is fetching the correct context with an accuracy of 97%, but the LLM is not able to extract the correct answer from it; its accuracy is only 65%. Even though the answer lies within a 260-word paragraph, the LLM is still not working up to the mark. I have tried multiple models, including Vicuna, Falcon, and ChatGPT.

Could anybody help me solve this problem?