2 parents 40ccb00 + f658910 commit b0ca50c
README.md
@@ -231,6 +231,12 @@ exact same path** that was specified during database creation.
### Llama-Stack Faiss

+> [!IMPORTANT]
+> When using the `--auto-chunking` flag, chunking happens within llama-stack using the
+> OpenAI-compatible Files API. This makes vector stores significantly larger than manual
+> chunking because the Files API stores a redundant copy of the embeddings.
+> Manual chunking results in smaller database files.
+
The process is basically the same as in the
[Llama-Index Faiss Vector Store](#faiss-vector-store) but passing the
`--vector-store-type` parameter; so you generate the documentation using the