UI Bug: Cannot connect Vector Store Retriever to Conversational Retrieval QA Chain in v3.0.4 #5048
Replies: 3 comments
-
Here's what I recommend for creating a simple QnA flow: https://docs.flowiseai.com/tutorials/rag
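Once a QnA chatflow like the one in that tutorial is built, it can also be queried programmatically. Below is a minimal sketch against Flowise's prediction API; the base URL, the absence of an API key, and the `CHATFLOW_ID` placeholder are assumptions, not values from this thread.

```python
# Minimal sketch: querying a finished QnA chatflow over Flowise's prediction API.
# Assumptions (not from this thread): Flowise reachable at http://localhost:3000,
# no API key configured, and CHATFLOW_ID copied from the chatflow's settings in the UI.
import requests

FLOWISE_URL = "http://localhost:3000"
CHATFLOW_ID = "your-chatflow-id"  # hypothetical placeholder

resp = requests.post(
    f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}",
    json={"question": "What does the uploaded document say about X?"},
    timeout=120,
)
resp.raise_for_status()
print(resp.json().get("text"))  # the chain's answer, if the call succeeds
```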
-
Hello everyone, I'm updating this thread with the solution that we found. Thanks to the maintainer @HenryHengZJ for pointing me toward the official documentation, which was the key. For anyone on Flowise v3.0.4 or newer who is facing issues with RAG, here is a summary of our findings.

**The Problem with Canvas-Based Ingestion**

We discovered that building a data ingestion pipeline directly on the Chatflow Canvas does not work as expected in this version. We encountered two main issues:

1. **"Ending node must be either a Chain or Agent or Engine" error:** You cannot execute a simple ingestion flow (e.g., File Loader -> Text Splitter -> Embeddings -> Vector Store) from the UI chat or the standard API, because it lacks a final Chain or Agent node.
2. **Dummy chains don't work:** Adding a "dummy" LLM Chain to the end of the ingestion flow to bypass the error also fails. The execution engine only runs the path leading to the final chain, completely ignoring the data ingestion branch. The result is that the vector store collection is created but remains empty (0 vectors are added).

**The Correct Solution: Using "Document Stores"**

The official and fully functional method for data ingestion is to use the "Document Stores" feature, which is separate from the Chatflow canvas. The correct workflow is:

1. **Navigate to Document Stores:** Go to the "Document Stores" section from the main left-side menu.
2. **Create a Store:** Create a new store for your knowledge base.
3. **Upload & Process:** Inside the store, use the UI to "+ Add Document Loader" (e.g., PDF File), upload your file, configure a Text Splitter, and click "Process".
4. **Configure & Upsert:** Click on "Upsert Config", configure your Embeddings (e.g., Ollama Embeddings) and Vector Store (e.g., Qdrant), save the configuration, and then click "Upsert".

This process works perfectly and successfully populates the vector database.

**Conclusion**

The intended architecture is to handle the entire Indexing pipeline via the "Document Stores" UI. Afterward, you can create a completely separate Chatflow that connects to this pre-populated store using an Agent or the Document Store (Vector) node to handle the Retrieval part. This separation avoids all the bugs and limitations of the canvas-based approach for ingestion. I hope this helps anyone else facing similar issues. A rough sanity check for the upsert step is sketched below.
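Since the main symptom of the broken canvas approach was a collection with 0 vectors, one quick way to confirm the Document Stores upsert actually worked is to ask Qdrant directly for the collection's point count. This is just a sketch; the Qdrant URL and the collection name are placeholders for whatever you configured in "Upsert Config".

```python
# Rough sanity check (not from the Flowise docs): after clicking "Upsert" in
# Document Stores, confirm the Qdrant collection actually received vectors
# instead of staying at 0. Assumes qdrant-client is installed, Qdrant listens
# on localhost:6333, and the collection name below matches your Upsert Config.
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")
collection = "my_knowledge_base"  # hypothetical collection name

info = client.get_collection(collection)
print(f"{collection}: {info.points_count} vectors")  # should be > 0 after a successful upsert
```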
-
Hello,
I am facing an issue with the Flowise UI and would appreciate some help.
My Goal:
I am trying to build a QA chatflow to query documents from a Qdrant vector store. The flow uses a local Ollama instance for both embeddings and the chat model.
The Problem:
I am unable to connect the output of the Vector Store Retriever node to the Vector Store Retriever input on the Conversational Retrieval QA Chain node.
The UI simply refuses the connection.
The tooltip on the chain's input shows it expects a BaseRetriever, which should be compatible with the output of the Vector Store Retriever node.
This problem persists even after successfully connecting the mandatory Chat Model input and performing a hard refresh of the browser.
Context:
Flowise Version: 3.0.4
Environment: Running inside a Docker container with Docker Desktop.
Setup: Using Qdrant, Ollama Embeddings, and ChatOllama nodes.
Flow Screenshot:

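One thing worth ruling out with the Docker Desktop + Ollama + Qdrant setup described above is basic connectivity from inside the container: `localhost` there refers to the container itself, so services running on the host are usually reached via `host.docker.internal`. The hostnames and ports below are assumptions about a typical setup, not values taken from this report.

```python
# A small connectivity check, not part of the original report: verify that
# Ollama and Qdrant on the host are reachable from inside a Docker Desktop
# container via host.docker.internal (default ports 11434 and 6333 assumed).
import requests

OLLAMA_URL = "http://host.docker.internal:11434"
QDRANT_URL = "http://host.docker.internal:6333"

# Ollama answers on its root path with a short status string when it is up.
print(requests.get(OLLAMA_URL, timeout=5).text)

# Qdrant exposes its collections list over the REST API.
print(requests.get(f"{QDRANT_URL}/collections", timeout=5).json())
```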