Duplicate tool calls, finally hits recursion limit #3000
I have a simple LangGraph implementation where I bind tools to the model, with the expectation that the model will identify when a tool call is needed, make the call, evaluate the output, and respond. The issue I am facing is that the same tools are called repeatedly until the run finally hits the recursion limit. Strangely, this only happens about 50% of the time; the rest of the time the agent calls each tool once as expected and returns the output.
```python
# Imports reconstructed for completeness; bedrock_rt, tools, and langfuse_handler
# are assumed to be defined elsewhere.
from operator import add
from typing import Annotated, List, Tuple, Union

from typing_extensions import TypedDict

from langchain_aws import ChatBedrockConverse
from langchain_core.agents import AgentAction
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.prompts import PromptTemplate
from langchain_core.tools import render_text_description
from langfuse.decorators import observe
from langgraph.graph import StateGraph
from langgraph.prebuilt import create_react_agent

modelBedrock = ChatBedrockConverse(
    client=bedrock_rt,
    model="us.anthropic.claude-3-5-haiku-20241022-v1:0",
    provider="Anthropic",
    temperature=0,
    max_tokens=None,
).bind_tools(tools=tools, tool_choice=None)

# Define the template with the updated input variables
template = """
TOOLS
Assistant can ask the user to use tools to look up information that may be helpful in answering the user's original question. The tools the human can use are:
{tools}
You can use a tool for an input only once, but you can use the same tool for another input again.
If a tool call was requested and an output is received, don't request the same tool again; go to the next tool or provide the final answer.
If the question requires using the tools more than once, please do so by passing multiple tool calls.
ACTIONS
The action to take, should be one of [{tool_names}]
FINAL ANSWER
If you think you have the answer to the user's question, respond with a Final Answer.
USER'S INPUT
Here is the user's input (remember to respond in markdown format):
{messages}
Thought:{agent_scratchpad}
"""

prompt = PromptTemplate(
    template=template,
    input_variables=["messages"],
    partial_variables={
        "agent_scratchpad": [],
        "tools": render_text_description(tools),
        "tool_names": ", ".join([t.name for t in tools]),
    },
)

class AgentState(TypedDict):
    messages: Annotated[List[str], add]  # Only the user's questions as strings
    chat_history: Annotated[
        List[Union[HumanMessage, AIMessage, ToolMessage]], add
    ]  # Full chat history
    agent_outcome: Union[AgentAction, str, None]  # Can be an AgentAction or a description
    intermediate_steps: Annotated[
        List[Tuple[AgentAction, str]], add
    ]  # Intermediate steps

# Create the LangGraph agent executor with the updated prompt
langgraph_AgentAWS = create_react_agent(
    modelBedrock, tools=tools, state_modifier=prompt
)

@observe
def run_agent(state: AgentState) -> dict:
    # Node that calls the LLM (assumed to invoke the prebuilt ReAct agent on the current state)
    return langgraph_AgentAWS.invoke(state)

workflow = StateGraph(AgentState)
workflow.add_node("run_agent", run_agent)
workflow.set_entry_point("run_agent")
appWorkflow = workflow.compile()

input_question = "What is the weather in Mumbai, California and Frankfurt ?"
agent_input = {
    "messages": [input_question],
    "chat_history": [],
    "agent_outcome": "",
    "intermediate_steps": [],
}
agent_result = appWorkflow.invoke(agent_input, config={"callbacks": [langfuse_handler]})
```
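For reference, the limit being hit is LangGraph's default recursion limit (25 steps); it can be raised per invocation through the config, but when the loop happens that only postpones the `GraphRecursionError`. A minimal sketch, assuming the same `appWorkflow` and `agent_input` as above:

```python
from langgraph.errors import GraphRecursionError

try:
    agent_result = appWorkflow.invoke(
        agent_input,
        config={
            "callbacks": [langfuse_handler],
            "recursion_limit": 50,  # default is 25; raising it only delays the failure
        },
    )
except GraphRecursionError:
    # The run still loops on duplicate tool calls roughly half the time
    print("Hit the recursion limit again")
```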
I can see from the Langfuse traces that "messages" is being appended with AIMessages and ToolMessage outputs, but the LLM still decides to make the same tool calls again and again.
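One way to watch the duplication outside of Langfuse is to stream the prebuilt agent on its own and print every tool call as it is emitted. This is just a debugging sketch (it drives `langgraph_AgentAWS` directly, passing the question as a plain message list), not part of the original workflow:

```python
from langchain_core.messages import AIMessage, ToolMessage

# Debugging sketch: stream the prebuilt ReAct agent step by step and log each
# tool call and tool result, to see which calls get repeated and with what args.
for state in langgraph_AgentAWS.stream(
    {"messages": [("user", input_question)]},
    config={"recursion_limit": 50},
    stream_mode="values",
):
    last = state["messages"][-1]
    if isinstance(last, AIMessage) and last.tool_calls:
        for tc in last.tool_calls:
            print(f"tool call: {tc['name']}({tc['args']})")
    elif isinstance(last, ToolMessage):
        print(f"tool result for {last.tool_call_id}: {last.content}")
```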