Hi @zapatacc! I'm Dosu, a bot that can help you solve bugs, answer questions, and become a contributor. I'm here to assist you while you wait for a human maintainer.

To debug the issue where the ReAct agent correctly infers which tool to use but often doesn't execute the function, you should focus on the `_process_actions` method. Here is the relevant part of the code:

```python
def _process_actions(
    self,
    tools: Sequence[AsyncBaseTool],
    output: ChatResponse,
    is_streaming: bool = False,
) -> Tuple[List[BaseReasoningStep], bool]:
    tools_dict: Dict[str, AsyncBaseTool] = {
        tool.metadata.get_name(): tool for tool in tools
    }
    _, current_reasoning, is_done = self._extract_reasoning_step(
        output, is_streaming
    )

    if is_done:
        return current_reasoning, True

    # call tool with input
    reasoning_step = cast(ActionReasoningStep, current_reasoning[-1])
    tool = tools_dict[reasoning_step.action]
    with self.callback_manager.event(
        CBEventType.FUNCTION_CALL,
        payload={
            EventPayload.FUNCTION_CALL: reasoning_step.action_input,
            EventPayload.TOOL: tool.metadata,
        },
    ) as event:
        tool_output = tool.call(**reasoning_step.action_input)
        event.on_end(payload={EventPayload.FUNCTION_OUTPUT: str(tool_output)})

    self.sources.append(tool_output)

    observation_step = ObservationReasoningStep(observation=str(tool_output))
    current_reasoning.append(observation_step)
    if self._verbose:
        print_text(f"{observation_step.get_content()}\n", color="blue")
    return current_reasoning, False
```

To see where the ReAct agent must execute the tool, look at the line where the tool is called:

```python
tool_output = tool.call(**reasoning_step.action_input)
```
If the tool is not being executed, you can add additional logging or print statements around this call to help debug the issue:

```python
print(f"Executing tool: {reasoning_step.action} with input: {reasoning_step.action_input}")
tool_output = tool.call(**reasoning_step.action_input)
print(f"Tool output: {tool_output}")
```

Additionally, you can refer to the test cases provided in the repository, such as `test_chat_basic`:

```python
def test_chat_basic(
    add_tool: FunctionTool,
) -> None:
    mock_llm = MockChatLLM(
        responses=[
            ChatMessage(
                content=MOCK_ACTION_RESPONSE,
                role=MessageRole.ASSISTANT,
            ),
            ChatMessage(
                content=MOCK_FINAL_RESPONSE,
                role=MessageRole.ASSISTANT,
            ),
        ]
    )

    agent = ReActAgent.from_tools(
        tools=[add_tool],
        llm=mock_llm,
    )
    response = agent.chat("What is 1 + 1?")
    assert isinstance(response, AgentChatResponse)
    assert response.response == "2"

    chat_history = agent.chat_history
    assert chat_history == [
        ChatMessage(
            content="What is 1 + 1?",
            role=MessageRole.USER,
        ),
        ChatMessage(
            content="2",
            role=MessageRole.ASSISTANT,
        ),
    ]
```

By examining these test cases, you can gain insights into the expected behavior and identify any discrepancies in your implementation [1][2][3][4].
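If you want to verify at runtime whether the `FUNCTION_CALL` event is ever emitted, here is a minimal sketch using `LlamaDebugHandler` from `llama_index.core.callbacks`. It assumes a recent `llama-index` release with the `llama_index.core` namespace and the `llama-index-llms-bedrock` integration installed; the `add` tool and the Bedrock model ID are placeholders standing in for your own tools and model:

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.callbacks import CallbackManager, CBEventType, LlamaDebugHandler
from llama_index.core.tools import FunctionTool
from llama_index.llms.bedrock import Bedrock


def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


add_tool = FunctionTool.from_defaults(fn=add)

# Record every callback event so FUNCTION_CALL events can be inspected afterwards.
debug_handler = LlamaDebugHandler(print_trace_on_end=True)
callback_manager = CallbackManager([debug_handler])

llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0", temperature=0)

agent = ReActAgent.from_tools(
    tools=[add_tool],
    llm=llm,
    callback_manager=callback_manager,
    verbose=True,  # prints each thought/action/observation step
)

response = agent.chat("What is 1 + 1?")
print(response)

# If this list is empty, _process_actions never reached tool.call().
function_events = debug_handler.get_event_pairs(CBEventType.FUNCTION_CALL)
print(f"FUNCTION_CALL events recorded: {len(function_events)}")
```

Comparing the printed reasoning steps against the recorded `FUNCTION_CALL` events shows whether the model's output parsed into a valid action or the run ended before reaching the tool call.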
Hi there!

I'm building an Agentic RAG application with Claude 3 Sonnet hosted on Bedrock. I'm following the DeepLearning.ai course where Jerry uses several research PDF papers.

The agent infers the right tool to use quite well; however, it seems that the agent sometimes (most of the time) doesn't run the function, because the LLM hallucinates a lot, even when the temperature is set to 0.

I know that the function is sometimes executed, because I'm using Arize Phoenix as the observability tool and the function execution step can be seen in some runs.

This is a snippet of my code.

And here is the output and CoT:

How can I debug this? Can I see where the ReAct agent must execute the tool?