Router with ConversationalRetrievalChain on a VectorStore, and LLMChains. #7957
Answer generated by a 🤖

**Answer**

I understand that you're facing a challenge with routing follow-up questions in your LangChain application. Specifically, you're having trouble when a follow-up question is contextually related to the previous question but is identified as "unrelated" by the model. You've identified that ConversationalRetrievalChain might be a solution, but you're unsure how to integrate it into your existing architecture. Here's how I would suggest you proceed.

The ConversationalRetrievalChain is indeed designed to handle the kind of scenario you're describing. It considers the chat history and the new question together to form a "standalone question". This standalone question is then passed to the retriever (in your case, backed by the VectorStore) to fetch relevant documents, and the retrieved documents are passed to an LLM to generate the final response.

You can replace your current RetrievalQA chain with the ConversationalRetrievalChain. There's no need to add another agent or change your existing LLMChains: the ConversationalRetrievalChain manages the conversational context, and your agent continues to route queries as before.

Here's a sample snippet illustrating how you might integrate the ConversationalRetrievalChain into your pipeline (note: `agent.add_chain` is illustrative pseudocode for however your agent registers its chains, not a LangChain API):

```python
from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.llms import OpenAI

# Initialize your LLMChains
greetings_chain = LLMChain(...)
goodbyes_chain = LLMChain(...)
unrelated_questions_chain = LLMChain(...)

# Initialize your ConversationalRetrievalChain; from_llm wires up both the
# question-condensing step and the retrieval step for you
conversational_retrieval_chain = ConversationalRetrievalChain.from_llm(
    llm=OpenAI(...),
    retriever=vectorstore.as_retriever(),
)

# Register your chains with your agent (illustrative pseudocode)
agent.add_chain('greetings', greetings_chain)
agent.add_chain('goodbyes', goodbyes_chain)
agent.add_chain('unrelated_questions', unrelated_questions_chain)
agent.add_chain('conversational_retrieval', conversational_retrieval_chain)
```

This approach should help you handle follow-up questions that are contextually related to the previous question: the ConversationalRetrievalChain considers the chat history and the new question to form a standalone question that can then be matched against the VectorStore.

Please note that while this is a potential solution, it may not be the only one; you may need to adjust it to the specific requirements and constraints of your project. If you find this solution effective and feel it could benefit others, I encourage you to make a pull request to update the LangChain documentation so that other users facing similar challenges can benefit from your experience.

This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
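To make the "standalone question" step concrete, here is a minimal pure-Python sketch of how the condensing prompt is assembled from the chat history and the follow-up. The template text mirrors LangChain's default `CONDENSE_QUESTION_PROMPT`; the `build_condense_prompt` helper and the history format are hypothetical, for illustration only (in practice the chain builds this prompt internally and sends it to the LLM):

```python
# Sketch of the "condense question" step inside ConversationalRetrievalChain.
# Template wording mirrors LangChain's default CONDENSE_QUESTION_PROMPT;
# build_condense_prompt is a hypothetical helper, not a LangChain API.

CONDENSE_TEMPLATE = (
    "Given the following conversation and a follow up question, "
    "rephrase the follow up question to be a standalone question.\n\n"
    "Chat History:\n{chat_history}\n"
    "Follow Up Input: {question}\n"
    "Standalone question:"
)

def build_condense_prompt(chat_history, question):
    """chat_history is a list of (human_message, ai_message) pairs."""
    history_text = "\n".join(
        f"Human: {human}\nAssistant: {ai}" for human, ai in chat_history
    )
    return CONDENSE_TEMPLATE.format(chat_history=history_text, question=question)

prompt = build_condense_prompt(
    [("Am I allowed to work remotely on Fridays?",
      "No, remote work is not permitted on Fridays.")],
    "How about Mondays?",
)
print(prompt)
```

Because the prompt contains the earlier exchange about remote work, the LLM can rephrase "How about Mondays?" into something like "Am I allowed to work remotely on Mondays?", which your router will then correctly send to the VectorStore.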
I've been stuck on a task I was working on, not knowing exactly how to proceed with it.
I have managed to build the following pipeline using an Agent: I receive a query, and depending on its context it gets routed either to one of several LLMChains (answering greetings, goodbyes, and unrelated questions) or to a RetrievalQA chain that searches for the answer in a VectorStore when the input query is related to the context in question.
What I noticed, and have been trying to fix, is that sometimes, despite adding memory to my agent, a follow-up question will not be routed to the VectorStore because it gets labeled as "Unrelated". Example:
Suppose the first question is: "Am I allowed to work remotely on Fridays?". This is related, so the model replies with "no" based on the VectorStore. Now, if I follow it up with "How about Mondays?", the model identifies the input query as unrelated, since it does not mention the subject of "work".
Ideally, I would want it to reformulate the question to: "How about working remotely on Monday?" and to route it accordingly to the VectorStore as it is tackling the subject of "work".
I did some research and found that ConversationalRetrievalChain should be part of my solution, but I do not know exactly where it fits in my architecture.
Should I only replace the RetrievalQA chain with a ConversationalRetrievalChain for my VectorStore and keep the rest as LLMChains with the same prompts, or not? Should I add another agent (here we are talking about sequential agents)?
I'm a bit lost and would love some help on the subject.