Possible to use quivr with preconstructed vector store #3652
Replies: 2 comments
Yes, you can use pre-built embeddings with Quivr, and no, you do not need the raw files at runtime.

How it works: Quivr stores vectors in Supabase pgvector. If you pre-compute embeddings externally, you can insert them directly into the `vectors` table.

We do similar pre-processing pipelines for enterprise RAG at Revolution AI; pre-computed embeddings work great for production deployments.
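To see why the raw files are not needed once embeddings exist, here is a minimal, self-contained sketch of retrieval over a pre-computed store. It uses plain Python with made-up chunks and toy 3-dimensional embeddings (in a real deployment pgvector would compute the similarity in SQL); the point is that ranking only touches stored chunk text and vectors, never the original documents.

```python
import math

# Hypothetical pre-computed store: each row holds chunk text and its embedding.
# In Quivr this shape would live in the Supabase "vectors" table.
store = [
    {"content": "Paris is the capital of France.", "embedding": [1.0, 0.0, 0.0]},
    {"content": "The Alps are a mountain range.",  "embedding": [0.0, 1.0, 0.0]},
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_embedding, k=1):
    """Rank stored chunks by similarity to the query embedding.

    Only the stored content and embeddings are read; no raw files are opened.
    """
    ranked = sorted(store,
                    key=lambda row: cosine(query_embedding, row["embedding"]),
                    reverse=True)
    return [row["content"] for row in ranked[:k]]

print(retrieve([0.9, 0.1, 0.0]))  # the France chunk ranks first
```

The same logic applies at scale: as long as chunk text and embeddings are persisted, inference can run without the source documents.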
Yes, you can use preconstructed embeddings with Quivr!

**Approach 1: Direct vector store import**

If your vectors are in a compatible format (Supabase pgvector, Qdrant, etc.):

```python
# Pre-embed your documents
from langchain.embeddings import OpenAIEmbeddings

embedder = OpenAIEmbeddings()
# embed_documents batches all texts in one call
# (embed_query is intended for single search queries)
vectors = embedder.embed_documents([doc.text for doc in documents])

# Insert into a Quivr-compatible vector store
for doc, vector in zip(documents, vectors):
    supabase.table("vectors").insert({
        "content": doc.text,
        "embedding": vector,
        "metadata": doc.metadata,
    }).execute()
```

**Approach 2: Point Quivr to an existing store**

Configure Quivr to use your vector store:

```
SUPABASE_URL=your-existing-supabase
SUPABASE_KEY=your-key
```

**Do you need raw files?**
**Best practice:** store traceability metadata alongside each chunk:

```json
{
  "content": "chunk text...",
  "embedding": [...],
  "metadata": {
    "source_file": "doc.pdf",
    "page": 5,
    "chunk_id": 42
  }
}
```

Store metadata for traceability, but raw files are optional for inference. We build pre-indexed RAG systems at Revolution AI; pre-embedding speeds up deployment significantly.
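The record shape above can be produced by a small chunking helper before insertion. A sketch, with illustrative names (`chunk_text`, `make_records` are not part of Quivr's API) and a toy embedder standing in for a real model:

```python
def chunk_text(text, size=40):
    """Split text into fixed-size chunks (toy splitter for illustration)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def make_records(source_file, page, text, embed):
    """Build rows in the best-practice shape: content + embedding + metadata."""
    records = []
    for chunk_id, chunk in enumerate(chunk_text(text)):
        records.append({
            "content": chunk,
            "embedding": embed(chunk),
            "metadata": {
                "source_file": source_file,
                "page": page,
                "chunk_id": chunk_id,
            },
        })
    return records

# Toy embedder stand-in; a real pipeline would call e.g. OpenAIEmbeddings.
fake_embed = lambda s: [float(len(s)), float(s.count(" "))]

rows = make_records("doc.pdf", 5, "x" * 100, fake_embed)
```

Each resulting row can be inserted as-is with `supabase.table("vectors").insert(row).execute()`, and the `chunk_id`/`source_file` metadata lets you trace answers back to their origin even without the raw file on disk.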
Hello,
I would like to preconstruct the embeddings from a set of files and configure Quivr to use them. Would Quivr still need the raw files to run in this mode?