Langgraph did not call tools #3808
-
Hiya - your model isn't using its tools effectively. If I replace it with a regular OpenAI, Gemini, or Anthropic model here, it works as expected. Does your model gateway (or whatever you're connecting to there) support tool calling? (In other words, this doesn't appear to be an issue with langgraph.)
-
Will convert this to a discussion, as this is not a langgraph bug.
-
The current state of the code:

```python
memory = SqliteSaver(memory_db)
workflow = StateGraph(State)
workflow.add_node("adax", self.llm_response)
workflow.add_edge(START, "adax")
self.graph = workflow.compile(checkpointer=memory)
self.llm = ChatOllama(xxx=xxx).bind_tools(adax_tools)
self.whatisaid = self.llm.invoke(messages)
```

It seems, from investigative testing and research, IMHO, that tools just aren't executing under langgraph. I went through the phase: So, I believe I have tried every possible code combination, using START, END, or neither, and used

As for the models, I have tried Mistral (and the like), which is supposed to be known to work. Now, qwen2.5, qwen3 and the like (i.e. Cogito) recognize the tools. According to LangSmith, However, both the 'messages' show a null output, and LangSmith says the output of the tool

Also, to put on the table, serpapi's dashboard says there has never been a call made to the

So, one place I read, they do a NULL check on the response output of langgraph, and if there I tried it, still got a NULL response. That logic didn't feel right anyways, because isn't

Which brings me to the last part that confuses me. If the tool did give NULL as a response,

Anywho, that was my 3 cents, sorry for the extra penny. :)
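One thing worth noting about the graph above: it has only a single model node and no node that executes tools, so even when the model emits tool calls, nothing in the graph runs them. Below is a standalone sketch of the missing wiring (the conditional edge plus a tools node, which langgraph provides as `tools_condition` and `ToolNode`). The names `fake_model`, `route`, and `run_tools` are illustrative stand-ins, not langgraph APIs, so this runs without the library installed:

```python
# Sketch: why a graph with only a model node never runs tools.
# The model node only *emits* tool calls; a separate tools node must
# execute them, and a conditional edge must route to it.

def fake_model(state):
    # Stand-in for the LLM node: pretend the model asked for a search tool.
    state["messages"].append(
        {"role": "ai", "content": "",
         "tool_calls": [{"name": "search", "args": {"query": "weather"}}]}
    )
    return state

def route(state):
    # The conditional edge the graph above is missing: if the last AI
    # message carries tool_calls, go to the tools node, else finish.
    last = state["messages"][-1]
    return "tools" if last.get("tool_calls") else "end"

TOOLS = {"search": lambda args: f"results for {args['query']}"}

def run_tools(state):
    # What a ToolNode does: execute each requested tool, append the result.
    for call in state["messages"][-1]["tool_calls"]:
        result = TOOLS[call["name"]](call["args"])
        state["messages"].append({"role": "tool", "content": result})
    return state

state = {"messages": []}
state = fake_model(state)
assert route(state) == "tools"   # without this edge, tool calls are dropped
state = run_tools(state)
print(state["messages"][-1]["content"])  # → results for weather
```

With only `add_edge(START, "adax")`, the graph ends after the model node, which would match the symptom of LangSmith showing the tool call but a null tool output.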
-
@swarnitwayal I figured out what I was doing wrong. Well, not really; I know what it was, just not why. I hope you figured your end out too.

As for me... I needed my agent instance to be an object. So, I was taking all the lessons and examples out there that are functional and translating them into objects or classes. At first, I did it off the cuff, and though the logic was there and correct, the agent could chat all day, but the tools just wouldn't fire. Since I didn't start from a working example, I didn't know what a correct trace looked like.

I took the prebuilt example as-is and, of course, it did work. I believe you said the one you supplied wasn't working, but it did. Then I slowly started to move it into a class. At one point, I started to get the same results using the prebuilt: the tool gets called, then nothing. So, I would start again until I finally got an agent object.

I've been working with AI since the '80s, and the whole point of me trying this was to stop reinventing the wheel. So, using the prebuilt was perfect for me. I can still add all the bells and whistles and my spin from my decades of research, and I don't have to work so hard keeping up.
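The shape the commenter describes (wrapping a working prebuilt agent in a class, built once in `__init__`, rather than re-translating the functional example by hand) can be sketched like this. `build_agent` is a hypothetical stand-in for langgraph's `create_react_agent` so the sketch runs without the library; the class name and method are invented for illustration:

```python
# Sketch: wrapping a prebuilt agent in an object instead of rebuilding
# the graph by hand. `build_agent` stands in for create_react_agent.

def build_agent(tools):
    # Stand-in: returns a callable that "invokes" every bound tool once.
    def invoke(prompt):
        return [tool(prompt) for tool in tools]
    return invoke

class Adax:
    """Agent object that owns the prebuilt graph."""

    def __init__(self, tools):
        # Build the agent once here; rewiring it per call during the
        # hand translation was one way the tool wiring silently broke.
        self._agent = build_agent(tools)

    def chat(self, prompt):
        return self._agent(prompt)

agent = Adax([lambda p: f"searched: {p}"])
print(agent.chat("hello"))  # → ['searched: hello']
```

The point of the pattern is that the known-good prebuilt stays intact inside the object; only the class boundary is new, so a failing tool call now points at the wrapper rather than the graph.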
-
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
Description
I see that the LLM doesn't make any tool calls with the above code example.
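One quick way to narrow this down is to check whether the model returned any tool calls at all, which separates "the model never asked for a tool" from "the graph never executed it." With langchain chat models the parsed calls live on the response's `tool_calls` attribute; the sketch below uses a plain dict as a stand-in so it runs without langchain, and the tool name and arguments are invented for illustration:

```python
# Diagnostic sketch: inspect the raw model response for tool calls.
# A dict stands in for an AIMessage; `get_weather` is a made-up tool.

response = {
    "content": "",
    "tool_calls": [{"name": "get_weather", "args": {"city": "Pune"}}],
}

if response["tool_calls"]:
    # The model asked for a tool: the problem is downstream, in the
    # graph wiring that should execute it.
    print("model requested tools:", [c["name"] for c in response["tool_calls"]])
else:
    # The model answered directly: look at tool binding or the prompt.
    print("model answered directly; check bind_tools and the prompt")
```

If this check shows an empty `tool_calls` list, the issue is on the model side (binding, prompt, or model capability) rather than in langgraph.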
System Info
System: Windows
langchain==0.3.20
langchain-core==0.3.44
langchain-openai==0.3.8
langchain-text-splitters==0.3.6
langgraph==0.3.7
langgraph-checkpoint==2.0.18
langgraph-prebuilt==0.1.2
langgraph-sdk==0.1.55
langsmith==0.3.13
Python 3.12.9