Using HuggingFaceChat instead of ChatOpenAI in load_chat_planner #9170
🤖 Hello,

From the error message you provided, it seems like the `PlanAndExecute` agent is failing while generating or executing its plan. The chain works by asking the planner for a list of steps and then passing each step to the executor in turn. Here is the relevant code that shows how it handles the individual steps:

```python
for step in plan.steps:
    _new_inputs = {
        "previous_steps": self.step_container,
        "current_step": step,
        "objective": inputs[self.input_key],
    }
    new_inputs = {**_new_inputs, **inputs}
    response = self.executor.step(
        new_inputs,
        callbacks=run_manager.get_child() if run_manager else None,
    )
    self.step_container.add_step(step, response)
```

In your case, it seems like the planner is not able to generate any steps for the given query. This could be due to the model not being able to generate any plans, or the tools not being able to handle the given query. You might want to check the implementation of your planner and the tools you pass to the executor.

As for using HuggingFaceChat instead of ChatOpenAI, you should be able to replace the model passed to `load_chat_planner` (and `load_agent_executor`) with any chat model that implements the same interface.

I hope this helps! Let me know if you have any other questions.
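For example, a planner backed by a Hugging Face model could be built along these lines. This is only a minimal sketch, not code from the thread: it assumes the `HuggingFaceEndpoint` and `ChatHuggingFace` wrappers (import paths and class names vary between LangChain versions), an example `repo_id`, and a `tools` list defined elsewhere.

```python
from langchain_community.llms import HuggingFaceEndpoint
from langchain_community.chat_models.huggingface import ChatHuggingFace
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

# Chat model backed by a Hugging Face endpoint instead of ChatOpenAI.
# The repo_id is only an example; any chat-capable model should work.
llm = HuggingFaceEndpoint(
    repo_id="HuggingFaceH4/zephyr-7b-beta",
    max_new_tokens=512,
    temperature=0.1,
)
chat_model = ChatHuggingFace(llm=llm)

# The same chat model is passed to both the planner and the executor.
planner = load_chat_planner(chat_model)
executor = load_agent_executor(chat_model, tools, verbose=True)  # `tools` defined elsewhere
agent = PlanAndExecute(planner=planner, executor=executor)

result = agent.run("Your multi-step question here")
```

If the planner still returns no steps, it is usually worth inspecting the raw planner output: smaller open models sometimes need a more explicit system prompt to emit the numbered list of steps that the default planner prompt asks for.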
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
Hi community,
I am trying to use the Plan_and_Execute agent with multiple agents in a single query. Is this possible? Here is what I have come up with.
```python
from langchain.memory import ConversationBufferMemory

# Conversation memory for the agent; `chat_history` is an existing message
# history object created elsewhere in my code.
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    output_key="answer",
    chat_memory=chat_history,
)
```
I am getting this error.
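For reference, one way to let a single Plan-and-Execute run use several agents is to wrap each existing agent as a `Tool` and hand those tools to the executor. The sketch below only illustrates that wiring; `research_agent`, `math_agent`, and `llm` are placeholders for objects defined elsewhere, not code from this post.

```python
from langchain.agents import Tool
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

# Placeholder agents: each pre-built agent is exposed to the planner/executor
# as a single tool, with a description it can use when deciding what to call.
tools = [
    Tool(
        name="research-agent",
        func=lambda q: research_agent.run(q),
        description="Answers questions that require looking up external information.",
    ),
    Tool(
        name="math-agent",
        func=lambda q: math_agent.run(q),
        description="Answers questions that require calculations.",
    ),
]

planner = load_chat_planner(llm)  # `llm` is the chat model used for planning
executor = load_agent_executor(llm, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor)

result = agent.run("A single query that needs both agents")
```

Note on the memory snippet above: since `PlanAndExecute` is a regular LangChain `Chain`, a `memory=` argument can be passed to it, but the memory's `output_key` needs to match the chain's output key (which defaults to `"output"` for `PlanAndExecute`) rather than `"answer"`.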