Routing with different agents and chain types #8005
Replies: 1 comment
Answer generated by a 🤖

I found related issues that might be helpful. I did my best to summarize the solutions, but I recommend looking at the related issues yourself.

Open Issues

Issue: Can't use ConversationalRetrievalChain with router chaining

The suggested solution is: The error logs indicate that there are several fields that are either missing or not permitted. The proposed solution suggests using MultiRetrievalQAChain.from_retrievers. Here's how you can modify your generate_router_chain function:

```python
from langchain.chains.router import MultiRetrievalQAChain


def generate_router_chain(prompt_infos, destination_chains, default_chain):
    """
    Generates the router chain from the prompt infos.

    :param prompt_infos: the prompt information generated above
    :param destination_chains: the LLM chains with different prompt templates
    :param default_chain: a default chain
    """
    retriever_infos = [
        {"name": p["name"], "description": p["description"], "retriever": destination_chains[p["name"]]}
        for p in prompt_infos
    ]
    # `llm` is assumed to be defined in the enclosing scope.
    router_chain = MultiRetrievalQAChain.from_retrievers(llm, retriever_infos, default_chain=default_chain)
    return router_chain
```

This is just one potential solution and it might need to be adjusted to your specific needs. If you believe this is a bug that could impact other users, feel free to make a pull request with the necessary changes. If this solution doesn't align with your experience, I would recommend upgrading to the latest version of LangChain to ensure you have all the latest features and bug fixes.

Issue: Combining LLMChain and ConversationalRetrievalChain in an agent's routes

The suggested solution is the same as for the issue above: replace the router setup with MultiRetrievalQAChain.from_retrievers, using the generate_router_chain function shown there.

Issue: Allow for routing between agents and llmchain

No solution found.

This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
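For context, here is a minimal, self-contained sketch of how MultiRetrievalQAChain.from_retrievers is typically called. Note that each "retriever" entry is expected to be an actual retriever object (e.g. a vector store retriever) rather than an LLM chain; the model, texts, and route names below are illustrative assumptions, not taken from the linked issues.

```python
from langchain.chains.router import MultiRetrievalQAChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

llm = ChatOpenAI(temperature=0)

# Hypothetical per-topic knowledge bases standing in for real data.
docs_retriever = FAISS.from_texts(
    ["Router chains dispatch an input to one of several destinations."], OpenAIEmbeddings()
).as_retriever()
faq_retriever = FAISS.from_texts(
    ["Q: Can I mix chain types? A: Not directly with a single router chain."], OpenAIEmbeddings()
).as_retriever()

retriever_infos = [
    {"name": "docs", "description": "Good for questions about the documentation", "retriever": docs_retriever},
    {"name": "faq", "description": "Good for frequently asked questions", "retriever": faq_retriever},
]

# Without an explicit default_chain, a default conversational chain is created internally.
chain = MultiRetrievalQAChain.from_retrievers(llm, retriever_infos, verbose=True)
print(chain.run("How does a router chain pick a destination?"))
```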
I just realized that routing across different types of agents or chains is simply not possible (at least for now). I was trying to create an agent with two routes, the first being an LLMChain and the second a ConversationalRetrievalChain. The first issue was that each one expected a different type of input. After using LOTR (Lord of the Retrievers) I started getting "Saving not supported for this chain type.", and so far I have reached a dead end.
A feature, or at least some documentation, should be added that would allow us to work around this issue.
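One workaround that is sometimes suggested for this kind of setup is to skip the router chain entirely and expose each chain as a Tool, letting an agent pick the route. The sketch below is only an illustration of that idea, not a confirmed fix for the "Saving not supported for this chain type." error; the toy texts, prompt, and tool names are made up.

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import PromptTemplate
from langchain.vectorstores import FAISS

llm = ChatOpenAI(temperature=0)

# Route 1: a plain LLMChain with its own prompt template.
summary_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("Summarize the following text:\n\n{text}"))

# Route 2: a ConversationalRetrievalChain over a toy vector store.
retriever = FAISS.from_texts(["Routing sends each query to the most suitable chain."], OpenAIEmbeddings()).as_retriever()
qa_chain = ConversationalRetrievalChain.from_llm(llm, retriever=retriever)

# Each chain keeps its own input schema; the Tool wrappers adapt a single string query.
tools = [
    Tool(name="summarize", description="Good for summarizing a piece of text",
         func=lambda q: summary_chain.run(text=q)),
    Tool(name="docs-qa", description="Good for answering questions about the indexed documents",
         func=lambda q: qa_chain.run(question=q, chat_history=[])),
]

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What does routing do with a query?")
```

Because each Tool adapts a plain string query to whatever inputs its chain expects, the two chains can keep different input schemas without a shared router.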