Replies: 2 comments
-
I am facing the same issue. I think it is because you are sending two subsequent assistant messages to the Anthropic API: those get merged together, and Anthropic's model decides the merged assistant turn has reached a "natural ending". I am looking at a mitigation now.
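To illustrate, in Anthropic's role-based message format the merge the API performs on back-to-back assistant turns looks roughly like this. This is a sketch of my understanding, not the API's actual implementation; the function name and the plain-string contents are my own:

```python
def merge_consecutive_assistant_turns(messages: list[dict]) -> list[dict]:
    """Collapse runs of back-to-back assistant messages into one turn,
    mirroring (approximately) what the Anthropic API does implicitly."""
    merged: list[dict] = []
    for msg in messages:
        if merged and msg["role"] == "assistant" and merged[-1]["role"] == "assistant":
            # Two assistant turns in a row: fold the new one into the previous.
            merged[-1] = {
                "role": "assistant",
                "content": merged[-1]["content"] + "\n" + msg["content"],
            }
        else:
            merged.append(msg)
    return merged
```

A mitigation along these lines would collapse (or separate with a user turn) any consecutive assistant messages before the request is sent, so the model never sees a merged turn it might treat as complete.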
-
Did you find any solution for this?
-
Dear all,
Did anyone encounter an issue with an empty AIMessage being returned when using a tool with a LangGraph ReAct agent (`create_react_agent`)?
I am using the simplest possible code to isolate the issue:
from langgraph.prebuilt import create_react_agent

dispatcher_agent = create_react_agent(llm, tools=tools, prompt=prompt)
agent_state = {"messages": state.get("messages")}
result = await dispatcher_agent.ainvoke(agent_state, stream_mode="values", config={"recursion_limit": 100})
I am using a single tool, which writes a file to the hard drive.
Assistance would be greatly appreciated.
The following trace shows the result messages (text omitted for brevity).
============================================================
`
{'messages': [
HumanMessage(content=' "Please, do the following:\n* Greet your fellow. \n* Introduce yourself. State what you can help with…....', additional_kwargs={}, response_metadata={}, id='36c21118-b8c2-46e3-983a-b4a4d8aa15a2'),
AIMessage(content=[{'text': "# Hello, Fellow!\n\nI'm your …….What would you like to work on next?", 'type': 'text', 'index': 0}], additional_kwargs={}, response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run--e7b5c2fb-9e49-435a-8863-595a79c6afac', usage_metadata={'input_tokens': 1297, 'output_tokens': 143, 'total_tokens': 1440, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}}),
HumanMessage(content=['analysis'], additional_kwargs={}, response_metadata={}, id='4e74bdb6-6ba5-4615-b80e-c9f6ac2a7f6c'),
AIMessage(content=[{'text': "I understand you'd like to work with our analyst. I'll connect you right away.", 'type': 'text', 'index': 0}, {'id': 'toolu_018drWMVRnS1x3PDtAMEL5Gn', 'input': {}, 'name': 'select_member', 'type': 'tool_use', 'index': 1, 'partial_json': '{"code": "analysis"}'}], additional_kwargs={}, response_metadata={'stop_reason': 'tool_use', 'stop_sequence': None}, id='run--fac5e952-9b53-481b-8e50-315234a71a9d', tool_calls=[{'name': 'select_team_member', 'args': {'team_member_code': 'ux_analyst'}, 'id': 'toolu_018drWMVRnS1x3PDtAMEL5Gn', 'type': 'tool_call'}], usage_metadata={'input_tokens': 1445, 'output_tokens': 88, 'total_tokens': 1533, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}}),
ToolMessage(content='Successfully selected a member: analysis', name='select_member', id='e7003aad-4788-41f3-835b-c3ad348bdd2f', tool_call_id='toolu_018drWMVRnS1x3PDtAMEL5Gn'),
AIMessage(content=[], additional_kwargs={}, response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run--11560ac7-2d23-474f-9c16-0e5c57f05e92', usage_metadata={'input_tokens': 1554, 'output_tokens': 6, 'total_tokens': 1560, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})
]}
`