[LangGraph + Ollama] Agent using local model (qwen2.5) returns AIMessage(content='') even when tool responds correctly #4398
matiasdev30 started this conversation in Discussions · Replies: 1 comment
-
The problem is in how you pass the messages dict to the `.invoke()` method. `.invoke()` expects two arguments, an input dict and a config dict. Here is the method signature: `def invoke(self, input: dict, config: dict = None) -> dict`. When you pass messages the way you did, you actually pass an empty dict as the input and a config dict containing messages:
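A sketch of the failing pattern (the exact call from the original post isn't preserved in this page, so the prompt text is hypothetical, and `agent` is the agent built in the original post at the bottom of this page):

```python
# The empty dict becomes the input; the messages dict lands in the
# config slot, so the agent runs with no messages at all.
agent.invoke({}, {"messages": [("user", "What is the weather in SF?")]})
```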
The key here is to simply pass the messages dict as the input argument, and it will probably work:
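A minimal sketch of that fix, with the same hypothetical prompt as above:

```python
# Pass the messages dict as the first (input) argument.
result = agent.invoke({"messages": [("user", "What is the weather in SF?")]})
print(result["messages"][-1].content)
```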
Or specify the keyword arguments:
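For example (same assumptions as above):

```python
# Equivalent call with the input named explicitly.
result = agent.invoke(input={"messages": [("user", "What is the weather in SF?")]})
```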
Also make sure that the chosen Ollama LLM supports tool calling. As far as I know, qwen2.5 supports it, but it might be a good idea to switch to a more powerful LLM.
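One way to check tool-calling support directly is to bind the tool to the bare model and inspect the response. A minimal sketch, assuming the `langchain_ollama` package and a `search` tool like the one in the original post:

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def search(query: str) -> str:
    """Look up current weather for a location."""
    return "It's 60 degrees and foggy."

llm = ChatOllama(model="qwen2.5")
llm_with_tools = llm.bind_tools([search])
response = llm_with_tools.invoke("What is the weather in SF?")
# An empty tool_calls list suggests the model (or its Ollama chat
# template) isn't actually emitting tool calls.
print(response.tool_calls)
```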
-
I'm using `create_react_agent` from `langgraph.prebuilt` with a local model served via Ollama (`qwen2.5`), and the agent consistently returns an `AIMessage` with an empty `content` field, even though the tool returns a valid string.

Code:
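The original code block wasn't preserved when this page was captured; below is a minimal reconstruction of the setup described above. The tool name and its return value are taken from the post; the prompt and the exact `invoke` call are assumptions, matching the pattern diagnosed in the reply above.

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent

@tool
def search(query: str) -> str:
    """Look up current weather for a location."""
    return "It's 60 degrees and foggy."

llm = ChatOllama(model="qwen2.5")
agent = create_react_agent(llm, tools=[search])

# Suspect call per the reply above: messages ends up in the config
# argument, so the agent's input state is empty.
response = agent.invoke({}, {"messages": [("user", "What is the weather in SF?")]})
print(response["messages"][-1])
```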
Output:
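Representative output (an abridged reconstruction; only the empty final message and the tool's return value are from the original post):

```
ToolMessage(content="It's 60 degrees and foggy.", name='search')
AIMessage(content='')
```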
As shown above, the agent responds with an empty string, even though the `search()` tool clearly returns `"It's 60 degrees and foggy."`.

Has anyone seen this behavior? Could it be an issue with `qwen2.5`, `langgraph.prebuilt`, the Ollama config, or maybe a mismatch somewhere between them? Any insight appreciated.