Commit 16b2f60

v1.11: Binary quantization usage recommendation (#3027)
1 parent 8fb4fd6 commit 16b2f60

File tree: 2 files changed (+13, -1 lines)


learn/indexing/indexing_best_practices.mdx

Lines changed: 10 additions & 0 deletions
@@ -60,3 +60,13 @@ If you have followed the previous tips in this guide and are still experiencing
Indexing is a memory-intensive and multi-threaded operation. The more memory and processor cores available, the faster Meilisearch will index new documents. When trying to improve indexing speed, using a machine with more processor cores is more effective than increasing RAM.

Due to how Meilisearch works, it is best to avoid HDDs (Hard Disk Drives) as they can easily become performance bottlenecks.
## Enable binary quantization when using AI-powered search
If you are experiencing performance issues when indexing documents for AI-powered search, consider enabling [binary quantization](/reference/api/settings#binaryquantized) for your embedders. Binary quantization compresses vectors by representing each dimension with 1-bit values. This reduces the relevancy of semantic search results, but greatly improves performance.
Binary quantization works best with large datasets containing more than 1M documents and using models with more than 1400 dimensions.
<Capsule intent="danger" title="Binary quantization is an irreversible process">
**Activating binary quantization is irreversible.** Once enabled, Meilisearch converts all vectors and discards all vector data that does not fit within 1 bit. The only way to recover the vectors' original values is to re-vectorize the whole index in a new embedder.
</Capsule>
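The compression described above can be sketched in a few lines. This is an illustration of the general technique only, not Meilisearch's internal implementation; the vectors and dimension counts are made up for the example:

```python
def binary_quantize(vector):
    # Keep a single sign bit per dimension: 1 if the value is >= 0, else 0.
    # Illustrative sketch of binary quantization, not Meilisearch's code.
    return [1 if x >= 0 else 0 for x in vector]

def hamming(a, b):
    # With 1-bit vectors, comparing two embeddings reduces to counting
    # differing bits, which is far cheaper than float arithmetic.
    return sum(x != y for x, y in zip(a, b))

# One 32-bit float per dimension becomes one bit: a hypothetical
# 1408-dimension embedding shrinks from 1408 * 4 bytes to 176 bytes.
v1 = [0.12, -0.50, 0.03, -0.90]
v2 = [0.40, -0.10, -0.20, -0.70]
q1, q2 = binary_quantize(v1), binary_quantize(v2)  # [1, 0, 1, 0], [1, 0, 0, 0]
print(hamming(q1, q2))  # prints 1
```

The sign information that survives is coarse, which is why relevancy drops; with high-dimensional vectors and large datasets there is enough signal left for the trade-off to pay off.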

reference/api/settings.mdx

Lines changed: 3 additions & 1 deletion
@@ -2483,7 +2483,9 @@ This field is incompatible with all other embedders.

##### `binaryQuantized`

- When set to `true`, compresses vectors by representing each of its dimensions with 1-bit values. This reduces relevancy of semantic search results, but greatly reduces database size.
+ When set to `true`, compresses vectors by representing each dimension with 1-bit values. This reduces the relevancy of semantic search results, but greatly reduces database size.
This option can be useful when working with large Meilisearch projects. Consider activating it if your project contains more than one million documents and uses models with more than 1400 dimensions.
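As a sketch, the setting can be enabled with a `PATCH` to the index settings route. The instance URL, index name (`movies`), embedder name (`default`), and API key below are placeholders to adapt to your own setup:

```python
import json
import urllib.request

# Placeholder values: a local Meilisearch instance, an index named
# "movies", and an embedder named "default".
payload = {"embedders": {"default": {"binaryQuantized": True}}}
request = urllib.request.Request(
    "http://localhost:7700/indexes/movies/settings",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer MASTER_KEY",  # replace with your API key
    },
    method="PATCH",
)
# urllib.request.urlopen(request)  # uncomment to send to a live instance
```

Because the change is irreversible (see the capsule below), test it on a staging index before applying it to production data.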

<Capsule intent="danger" title="Binary quantization is an irreversible process">
**Activating `binaryQuantized` is irreversible.** Once enabled, Meilisearch converts all vectors and discards all vector data that does not fit within 1 bit. The only way to recover the vectors' original values is to re-vectorize the whole index in a new embedder.
