Error using Tool Agent with ChatOllama and llama3 model #5083
-
It looks like your setup is correct up to the point of embedding and collection, so the failure is happening at the Tool Agent execution step rather than in the data pipeline itself. In other words, the retriever is handing back results, but the Tool Agent isn't properly relaying them through to the model call. From a troubleshooting perspective, the most likely culprit is a schema mismatch between what the retriever returns and what the agent expects.
A quick way to debug this: some teams add a semantic-firewall step before the agent call, normalizing the retriever output into a guaranteed schema. It doesn't require changing your infra, just an extra transformation node. Would you like me to outline a minimal JSON transformation that normalizes retriever output, so you can test whether the mismatch is the root cause?
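The normalization step described above can be sketched as follows. This is a standalone Python sketch of the logic (Flowise function nodes themselves run JavaScript, so you would port this into a custom function node); the `pageContent`/`metadata` field names follow the LangChain Document shape that Flowise retriever tools typically emit, and the alternative key names handled below are assumptions about what a misbehaving retriever might return:

```python
import json

def normalize_retriever_output(results):
    """Coerce heterogeneous retriever results into a guaranteed schema:
    a list of {"pageContent": str, "metadata": dict} objects."""
    items = results if isinstance(results, list) else [results]
    normalized = []
    for item in items:
        if isinstance(item, str):
            # Bare strings become documents with empty metadata.
            normalized.append({"pageContent": item, "metadata": {}})
        elif isinstance(item, dict):
            # Accept common key variants for the text payload.
            text = (item.get("pageContent") or item.get("page_content")
                    or item.get("text") or item.get("content") or "")
            meta = item.get("metadata")
            normalized.append({
                "pageContent": str(text),
                "metadata": meta if isinstance(meta, dict) else {},
            })
    return normalized

# Example: mixed shapes collapse into one schema.
raw = [
    "plain string chunk",
    {"page_content": "snake_case doc", "metadata": {"source": "qdrant"}},
    {"text": "doc under an alternative key"},
]
print(json.dumps(normalize_retriever_output(raw), indent=2))
```

If the agent stops failing once its tool input passes through a step like this, the mismatch was indeed the root cause.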
-
Instead of using localhost in your configuration, please try 127.0.0.1. Thank you.
-
Objective:
I am trying to build a RAG chatbot that uses a Tool Agent to retrieve documents from a Qdrant vector store. The goal is to make the agent use a specific tool to answer questions about public and private employment contracts.
Problem:
When I ask the chatbot a question that should trigger the retrieval tool (e.g., "What documents do you have from the Madrid City Council?"), the conversation fails with a generic error message "Fetch failed". The Tool Agent is not successfully calling the tool. This happens with both the llama3:latest and llama3:8b models.
Context:
Flowise Version: 3.0.4
Environment: Running inside a Docker container using Docker Desktop.
Flow Description:
The ChatOllama node is connected to a Tool Agent as the Tool Calling Chat Model. The Base URL is set to http://ollama-server:11434 and the Model Name is llama3:latest (or llama3:8b).
There are two Retriever Tool nodes connected to the Tool Agent's Tools input. These tools are configured to search for public and private documents.
Each Retriever Tool is connected to a Qdrant node. The Qdrant collection has been successfully created with embeddings from the Ollama Embeddings node.
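Since a generic "Fetch failed" often points to a connectivity problem rather than a model problem, it is worth confirming that the Flowise container can actually resolve and reach `ollama-server` over the shared Docker network. A small stdlib-only sketch (hostname and port taken from the flow's Base URL; `/api/tags` is Ollama's model-listing endpoint):

```python
import json
import urllib.request

def parse_model_names(tags_json):
    """Extract model names from Ollama's /api/tags response shape:
    {"models": [{"name": "llama3:8b", ...}, ...]}."""
    return [m["name"] for m in tags_json.get("models", [])]

def list_ollama_models(base_url="http://ollama-server:11434"):
    """Fetch and parse the model list; raises URLError if the host
    is unreachable from the current container's network."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return parse_model_names(json.load(resp))
```

Running `list_ollama_models()` from inside the Flowise container (e.g. via `docker exec`) should return a list including `llama3:latest` or `llama3:8b`; an exception here would mean the agent's fetch is failing before the model is ever involved.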
Troubleshooting steps taken:
Verified that the Ollama server is active and accessible via http://localhost:11434 from the host machine.
Verified that the URL for the Ollama Embeddings and ChatOllama nodes is correct (http://ollama-server:11434).
Corrected a dimension mismatch error by deleting the old Qdrant collections and re-ingesting documents with the correct Vector Dimension (768).
The problem persists even after successfully ingesting the documents and verifying the LLM server is running. The issue seems to be specific to the agent's ability to use the LLM for function calling.
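To isolate whether the failure is in Flowise or in the model's function-calling support, you can call Ollama's `/api/chat` endpoint directly with a `tools` array. Note that Ollama introduced native tool calling with llama3.1; the original llama3 models may simply reject tool requests, which would be consistent with the failure on both `llama3:latest` and `llama3:8b`. A minimal payload builder (the `search_contracts` tool is a hypothetical stand-in, not a node from the actual flow):

```python
import json

def build_tool_call_payload(model, question):
    """Build an Ollama /api/chat request body with one tool definition,
    following Ollama's documented tools format."""
    return {
        "model": model,
        "stream": False,
        "messages": [{"role": "user", "content": question}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "search_contracts",  # hypothetical tool name
                "description": "Search employment contract documents",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        }],
    }

payload = build_tool_call_payload(
    "llama3:8b",
    "What documents do you have from the Madrid City Council?")
print(json.dumps(payload, indent=2))
```

POST this body to `http://ollama-server:11434/api/chat` (e.g. `curl -d @payload.json`): if the response contains no `tool_calls`, or Ollama reports that the model does not support tools, the problem is on the model side rather than in the Flowise flow, and switching to a tool-capable model such as `llama3.1` would be the fix to test first.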
Flow Screenshot: