When you're ready to deploy your RAG system, you can export your processed documents to any vector store supported by LlamaIndex. This allows you to use your Kiln-configured chunking and embedding settings in production.
### Load a LlamaIndex Vector Store
Kiln provides a `VectorStoreLoader` that yields your processed document chunks as LlamaIndex `TextNode` objects. These nodes contain the same metadata, chunking and embedding data as your Kiln Search Tool configuration.
```py
from kiln_ai.datamodel import Project
from kiln_ai.datamodel.rag import RagConfig
from kiln_ai.adapters.vector_store_loaders import VectorStoreLoader

# ... construct a VectorStoreLoader for your project and RAG config ...

async for batch in loader.iter_llama_index_nodes(batch_size=10):
    # Insert into your chosen vector store
    # Examples: LanceDB, Pinecone, Chroma, Qdrant, etc.
    pass
```
**Supported Vector Stores:** LlamaIndex supports 20+ vector stores including LanceDB, Pinecone, Weaviate, Chroma, Qdrant, and more. See the [full list](https://developers.llamaindex.ai/python/framework/module_guides/storing/vector_stores/).
### Example: LanceDB Cloud
Kiln uses LanceDB internally, so exporting to LanceDB Cloud gives you the same indexing behaviour as in the app.
Here's a complete example using LanceDB Cloud:
```py
from kiln_ai.datamodel import Project
from kiln_ai.datamodel.rag import RagConfig
from kiln_ai.datamodel.vector_store import VectorStoreConfig
from kiln_ai.adapters.vector_store_loaders import VectorStoreLoader
from kiln_ai.adapters.vector_store.lancedb_adapter import lancedb_construct_from_config

# ... construct the loader and a LanceDB Cloud store via lancedb_construct_from_config ...

async for batch in loader.iter_llama_index_nodes(batch_size=100):
    await lancedb_store.async_add(batch)

print("Documents successfully exported to LanceDB!")
```
After export, query your data using [LlamaIndex](https://developers.llamaindex.ai/python/framework-api-reference/storage/vector_store/lancedb/) or the [LanceDB client](https://lancedb.github.io/lancedb/).
### Deploy RAG without LlamaIndex
While Kiln's export is designed around LlamaIndex, you don't need to use it in production. `iter_llama_index_nodes` yields LlamaIndex `TextNode` objects that include all the data you need to build a RAG index in any stack: embedding, text, document name, chunk ID, etc.
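As a hypothetical sketch (the `node_to_record` helper and the stand-in node below are illustrative, not part of the Kiln API), here is how the fields of a `TextNode` map onto a plain record you could insert into any vector database:

```python
from types import SimpleNamespace

# Hypothetical helper (not part of Kiln): flatten a LlamaIndex
# TextNode-like object into a plain dict for any vector database.
# Attribute names follow the LlamaIndex TextNode API.
def node_to_record(node) -> dict:
    return {
        "id": node.node_id,           # stable chunk ID
        "text": node.text,            # chunk text
        "embedding": node.embedding,  # embedding vector
        "metadata": node.metadata,    # document name, etc.
    }

# Stand-in node for illustration; a real TextNode exposes the same attributes.
node = SimpleNamespace(
    node_id="chunk-1",
    text="Kiln processes documents into chunks.",
    embedding=[0.1, 0.2, 0.3],
    metadata={"document_name": "guide.pdf"},
)
record = node_to_record(node)
print(record["id"])  # "chunk-1"
```

From here, the record can be passed to whatever insert/upsert call your chosen vector database client provides.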
## Full API Reference
The library can do a lot more than the examples we've shown here.