docs/reference/mapping/types/dense-vector.asciidoc (4 additions, 4 deletions)
@@ -118,7 +118,7 @@ The three following quantization strategies are supported:
 
 * `int8` - Quantizes each dimension of the vector to 1-byte integers. This reduces the memory footprint by 75% (or 4x) at the cost of some accuracy.
 * `int4` - Quantizes each dimension of the vector to half-byte integers. This reduces the memory footprint by 87% (or 8x) at the cost of accuracy.
-* `bbq` - experimental:[] Better binary quantization which reduces each dimension to a single bit precision. This reduces the memory footprint by 96% (or 32x) at a larger cost of accuracy. Generally, oversampling during query time and reranking can help mitigate the accuracy loss.
+* `bbq` - Better binary quantization which reduces each dimension to a single bit precision. This reduces the memory footprint by 96% (or 32x) at a larger cost of accuracy. Generally, oversampling during query time and reranking can help mitigate the accuracy loss.
 
 When using a quantized format, you may want to oversample and rescore the results to improve accuracy. See <<dense-vector-knn-search-rescoring, oversampling and rescoring>> for more information.
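
The reduction figures quoted in that list follow directly from the per-dimension storage sizes. A back-of-the-envelope sketch, assuming a hypothetical 1024-dimension `float` vector and ignoring the small per-vector correction values the quantized formats also store:

----
float32: 1024 dims x 4 bytes   = 4096 bytes per vector
int8:    1024 dims x 1 byte    = 1024 bytes per vector (4x smaller, ~75% saved)
int4:    1024 dims x 0.5 bytes =  512 bytes per vector (8x smaller, ~87.5% saved)
bbq:     1024 dims x 1 bit     =  128 bytes per vector (32x smaller, ~96.9% saved)
----

The ~3.1% disk-usage increase cited in the next hunk is the same ratio seen from the other side: the `bbq` copy adds 128 bytes alongside the 4096-byte raw vector, which is kept on disk for rescoring.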
@@ -133,7 +133,7 @@ This means disk usage will increase by ~25% for `int8`, ~12.5% for `int4`, and ~
 
 NOTE: `int4` quantization requires an even number of vector dimensions.
 
-NOTE: experimental:[] `bbq` quantization only supports vector dimensions that are greater than 64.
+NOTE: `bbq` quantization only supports vector dimensions that are greater than 64.
 
 Here is an example of how to create a byte-quantized index:
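
The example block itself sits outside the lines this diff touches. For orientation, a minimal sketch of what such a mapping looks like, with hypothetical index and field names (`int8_hnsw` shown; `int4_hnsw` would opt into half-byte quantization instead):

[source,console]
----
PUT my-byte-quantized-index
{
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 64,
        "index": true,
        "index_options": {
          "type": "int8_hnsw"
        }
      }
    }
  }
}
----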
@@ -325,15 +325,15 @@ by 4x at the cost of some accuracy. See <<dense-vector-quantization, Automatical
 * `int4_hnsw` - This utilizes the https://arxiv.org/abs/1603.09320[HNSW algorithm] in addition to automatically scalar
 quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint
 by 8x at the cost of some accuracy. See <<dense-vector-quantization, Automatically quantize vectors for kNN search>>.
-* experimental:[] `bbq_hnsw` - This utilizes the https://arxiv.org/abs/1603.09320[HNSW algorithm] in addition to automatically binary
+* `bbq_hnsw` - This utilizes the https://arxiv.org/abs/1603.09320[HNSW algorithm] in addition to automatically binary
 quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint
 by 32x at the cost of accuracy. See <<dense-vector-quantization, Automatically quantize vectors for kNN search>>.
 * `flat` - This utilizes a brute-force search algorithm for exact kNN search. This supports all `element_type` values.
 * `int8_flat` - This utilizes a brute-force search algorithm in addition to automatically scalar quantization. Only supports
 `element_type` of `float`.
 * `int4_flat` - This utilizes a brute-force search algorithm in addition to automatically half-byte scalar quantization. Only supports
 `element_type` of `float`.
-* experimental:[] `bbq_flat` - This utilizes a brute-force search algorithm in addition to automatically binary quantization. Only supports
+* `bbq_flat` - This utilizes a brute-force search algorithm in addition to automatically binary quantization. Only supports
 `element_type` of `float`.
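
Also outside the changed lines, but useful for seeing how these `index_options` types are applied: a minimal sketch of opting into binary quantization, again with hypothetical index and field names. Per the NOTE earlier in this file, the `bbq` formats require more than 64 dimensions:

[source,console]
----
PUT my-bbq-index
{
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 128,
        "index": true,
        "index_options": {
          "type": "bbq_hnsw"
        }
      }
    }
  }
}
----

Swapping `bbq_hnsw` for `bbq_flat` keeps the same quantization but pairs it with brute-force search instead of an HNSW graph.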