Why are tool call arguments streamed in separate AIMessageChunks?
#2189
Replies: 2 comments 1 reply
-
Hello, AIMessageChunks can in all cases be added across a stream to produce a valid message. See example below:

```python
from langgraph.prebuilt import create_react_agent


def get_weather(location: str) -> str:
    """Get weather at a location."""
    return "It's sunny."


agent = create_react_agent("openai:gpt-4.1-mini", [get_weather])

input_message = {
    "role": "user",
    "content": "What's the weather in Boston?",
}

full = None
for chunk, metadata in agent.stream(
    {"messages": [input_message]}, stream_mode="messages"
):
    if metadata["langgraph_step"] == 1:  # initial tool call
        full = chunk if full is None else full + chunk  # <-- add message chunks

print(full.tool_calls)
```
Although `tool_calls` on a single token are typically meaningless, as you accumulate the message chunks you should recover valid tool calls. The concatenation by args, index, etc. is done for you during the addition. Hope this helps, let me know if I misunderstood the question.
-
@ccurme Thanks for the pointer above. Is that the only way to get the full list of args? In the context of a multi-agent graph where multiple tools can be called in the same run (e.g. a supervisor agent transferring to a sub-agent and the sub-agent calling a tool), adding the message chunks like in your example means that the args of different tools get merged together. Why do the args need to be chunked in the first place, instead of being immediately available in the very first message?
-
Description
Hi Team,
Thank you for building `langgraph`, it's a great framework!

I have a question related to the "how to stream data from within a tool" tutorial. In streaming mode I noticed that not only the `ToolMessage` and the final `AIMessage` are streamed back token by token in `AIMessageChunk`s (expected), but the input arguments of the tool call are also streamed. See below.

- Message 1: `AIMessageChunk` with empty content and `tool_calls[0].get("name") == "get_items"`
- Message 2: `AIMessageChunk` with empty content and empty `tool_calls`
- Message 3: `AIMessageChunk` with empty content and `tool_call_chunks[0].get("args") == "place"`
- Message 4: `AIMessageChunk` with empty content and `tool_call_chunks[0].get("args") == ":"`

I understand that there are `ToolCallChunk`s that can be merged.
Questions
1. Why are the partially streamed tool calls exposed as `invalid_tool_calls`? I understand why each one is an invalid tool call, but I don't understand why this is the default implementation/interface for it.
2. `ToolMessage`s are streamed by chunk, but the full tool message is also returned. However, the full tool call is not returned. Is there a standard way to get it other than concatenating the chunks by the `index` field?

Thank you!
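On question 2, the manual concatenation by `index` is short enough to sketch. `merge_tool_call_chunks` below is a hypothetical helper (not a library function), and the fragment dicts follow the `ToolCallChunk` shape (`name`, `args`, `id`, `index`) as I understand it:

```python
import json


def merge_tool_call_chunks(tool_call_chunks):
    """Assemble complete tool calls from streamed fragments by
    concatenating the string pieces that share the same `index`."""
    merged = {}
    for tc in tool_call_chunks:
        entry = merged.setdefault(tc["index"], {"name": "", "args": "", "id": None})
        if tc.get("name"):
            entry["name"] += tc["name"]
        if tc.get("args"):
            entry["args"] += tc["args"]
        if tc.get("id"):
            entry["id"] = tc["id"]
    # Parse the accumulated JSON args once each tool call is complete.
    return [
        {"name": e["name"], "args": json.loads(e["args"]), "id": e["id"]}
        for e in (merged[i] for i in sorted(merged))
    ]


# Fragments for two tool calls interleaved in one stream.
fragments = [
    {"index": 0, "name": "get_items", "args": "", "id": "call_a"},
    {"index": 0, "name": None, "args": '{"place": "kit', "id": None},
    {"index": 1, "name": "get_weather", "args": '{"location": "Boston"}', "id": "call_b"},
    {"index": 0, "name": None, "args": 'chen"}', "id": None},
]

calls = merge_tool_call_chunks(fragments)
print(calls)
```

This is effectively what `AIMessageChunk` addition does for you, so accumulating the chunks as in the accepted answer is probably still the cleaner route.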
How to reproduce
Environment
- platform: mac (m3)
- python: 3.10.14
- langchain: 0.3.1
- langgraph: 0.2.39