I am trying to get the ChatMock model to work in n8n, but it seems to get stuck on retrieving data.
From Gemini:
1. The Problem
Because you are using the OpenAI Chat Model node connected to an AI Agent (Tools Agent), n8n expects to handle this tool call internally. However, if n8n isn't configured to handle the "streaming" reasoning, or if the connection between the Agent and the Tool node is failing, it captures the "Stop" signal before it can process the tool's result.
2. The "Streaming" Conflict
Look at this line in your logs:
"stream": true.
When Streaming is enabled on a local model server that also provides Reasoning (the "Thinking" tokens), n8n sometimes gets confused. It sees the reasoning text, but since that isn't the "Final Message," it doesn't display it in the output. Then, when the model switches to the tool_calls phase, the node ends its "turn" to let the tool run.
3. Immediate Fixes to Try
Disable Streaming: In your OpenAI Chat Model node settings, look for an option to turn off Stream. This forces the model to finish its entire "thought" (reasoning + tool call) before sending it to n8n, which is much more stable for local servers.
I have deployed it in a docker environment
I tried adding:
- CHATGPT_LOCAL_FORCE_NO_STREAM=true
but it doesn't seem to work. Is there a way to disable stream?
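As a way to check whether the server itself honors non-streaming requests (independent of n8n), you can hit the OpenAI-compatible chat completions endpoint directly with `"stream": false`. A minimal sketch — the URL, port, and model name below are assumptions, so adjust them to whatever your Docker setup exposes:

```python
import json

# Assumed ChatMock endpoint; change host/port to match your Docker mapping.
CHATMOCK_URL = "http://localhost:8000/v1/chat/completions"

# Non-streaming payload: "stream": False asks the server to return the full
# response (reasoning + tool calls) in a single body instead of SSE chunks.
payload = {
    "model": "gpt-5",  # assumed model id; use the one your ChatMock instance lists
    "messages": [{"role": "user", "content": "ping"}],
    "stream": False,
}

body = json.dumps(payload)
print(body)

# To actually send it (requires the ChatMock container to be running):
# import urllib.request
# req = urllib.request.Request(
#     CHATMOCK_URL,
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

If the direct request with `"stream": false` returns a complete response but n8n still hangs, the problem is on the n8n node side rather than an environment variable on the server.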
