[BUG] LLMAgent.async_policy terminates prematurely when wait_tool_result=True
Description
When an LLMAgent is configured with wait_tool_result=True, it correctly intercepts and executes tool calls synchronously during the async_policy turn. However, the current implementation only performs a single LLM invocation. Once the tool results are obtained, the method returns without re-invoking the LLM with the newly acquired information.
In an event-driven setup (e.g., using TaskEventRunner), this causes the DefaultAgentHandler to receive a tool result wrapped in an Observation but without any further instructions. Since the agent hasn't generated a final response, the handler often treats the task as "finished" or fails to route the tool result back to the LLM for a second turn, resulting in a silent failure or an incomplete interaction where the user never receives the final answer.
Steps to Reproduce
- Define an LLMAgent with wait_tool_result=True.
- Assign a tool (e.g., get_departments) to the agent.
- Send a request to the agent that triggers that tool.
- Observe that the agent executes the tool, but the output sent to the user is either the raw tool result or an empty response, and the task terminates.
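The failure can be reproduced in miniature without the library itself. The sketch below is a hypothetical, self-contained simulation of the single-pass behavior; mock_llm, get_departments, and buggy_async_policy are illustrative stand-ins, not the real aworld API:

```python
# Minimal simulation of the bug: the policy returns right after tool
# execution instead of giving the LLM a second turn.
import asyncio

async def mock_llm(history):
    # First turn: the model requests a tool call. It would produce a
    # final answer on a second turn, but that turn never happens.
    if not any(m["role"] == "tool" for m in history):
        return {"tool_calls": [{"name": "get_departments"}]}
    return {"content": "Final answer based on tool results."}

async def get_departments():
    return ["Physics", "Chemistry"]

async def buggy_async_policy(user_request):
    history = [{"role": "user", "content": user_request}]
    response = await mock_llm(history)
    if "tool_calls" in response:
        # wait_tool_result=True: the tool runs synchronously...
        result = await get_departments()
        history.append({"role": "tool", "content": str(result)})
        # ...but the method returns here without re-invoking the LLM,
        # so the caller only ever sees the raw tool output.
        return {"raw_tool_result": result}
    return response

print(asyncio.run(buggy_async_policy("List all departments")))
# → {'raw_tool_result': ['Physics', 'Chemistry']}
```

The user receives the raw tool result (or nothing, depending on how the handler routes it), never a final LLM answer.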
Expected Behavior
If wait_tool_result is True, the agent should:
- Call the LLM.
- Execute the requested tool(s).
- Append the tool results to its memory/history.
- Re-invoke the LLM in a loop until the LLM produces a final text response or indicates completion (self.finished == True).
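The expected loop can be sketched as a self-contained simulation; mock_llm, execute_tool, and fixed_async_policy are illustrative stand-ins under the assumptions above, not the real aworld API:

```python
# Sketch of the expected multi-turn flow: tool results are fed back to
# the LLM until it emits a final text response.
import asyncio

async def mock_llm(history):
    if not any(m["role"] == "tool" for m in history):
        return {"tool_calls": [{"name": "get_departments"}]}
    return {"content": "There are 2 departments: Physics and Chemistry."}

async def execute_tool(name):
    # Stand-in for synchronous tool execution (wait_tool_result=True).
    return ["Physics", "Chemistry"]

async def fixed_async_policy(user_request):
    history = [{"role": "user", "content": user_request}]
    finished = False
    while not finished:
        response = await mock_llm(history)
        if "tool_calls" in response:
            # Execute each requested tool, append its result to the
            # history, then loop back for another LLM turn.
            for call in response["tool_calls"]:
                result = await execute_tool(call["name"])
                history.append({"role": "tool", "content": str(result)})
        else:
            finished = True  # final text response ends the loop
    return response["content"]

print(asyncio.run(fixed_async_policy("List all departments")))
```

With this loop in place, the second LLM turn consumes the tool output and the user receives a final answer.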
Actual Behavior
The agent executes the tool but terminates the async_policy execution immediately after the first tool execution, returning the tool actions as the policy result.
Log Evidence
| INFO | aworld.agents.llm_agent.LLMAgent.invoke_model:849 - LLM Execute response: {"tool_calls": [{"name": "get_departments"}]}
| INFO | aworld.agents.llm_agent.LlmOutputParser.parse:123 - ✅ [Agent:identity_navigator] Parse completed: 1 action(s)
| INFO | ... - Successfully connected to SSE server: science_utils
| INFO | ... - main task identity_task_1 finished ...
# Notice: Task finishes immediately after tool execution without a second LLM turn.
Suggested Fix
Wrap the core logic of LLMAgent.async_policy in a while not self.finished loop. If wait_tool_result is enabled, the agent should update its observation with the tool results and continue to the next iteration of the loop to consult the LLM again.
# Proposed logic change in LLMAgent.async_policy
policy_result = []
while not self.finished:
    messages = await self.build_llm_input(observation, ...)
    llm_response = await self.invoke_model(messages, ...)
    agent_result = await self.model_output_parser.parse(llm_response, ...)
    if self.is_agent_finished(llm_response, agent_result):
        policy_result = agent_result.actions
    else:
        if not self.wait_tool_result:
            return agent_result.actions  # Return to runner for async handling
        # Synchronous execution
        tool_actions = await self.execution_tools(agent_result.actions, message)
        # Update observation for the next loop iteration
        observation = Observation(content=tool_actions[0].policy_info, is_tool_result=True)
        continue
await self.send_llm_response_output(...)
return policy_result
Environment
- Python: 3.10+
- Runner: TaskEventRunner / EventDriven mode