docs/reference/elasticsearch/mapping-reference/dense-vector.md
To retrieve vector values explicitly, you can use:
* The `fields` option to request specific vector fields directly:
  ```console
  POST my-index-2/_search
  {
    "fields": ["my_vector"]
  }
  ```
* The `_source.exclude_vectors` flag to re-enable vector inclusion in `_source` responses:

  ```console
  POST my-index-2/_search
  {
    "_source": {
      "exclude_vectors": false
    }
  }
  ```
### Storage behavior and `_source`
$$$dense-vector-similarity$$$
`l2_norm`
: Computes similarity based on the L2 distance (also known as Euclidean distance) between the vectors. The document `_score` is computed as `1 / (1 + l2_norm(query, vector)^2)`.
    For `bit` vectors, instead of using `l2_norm`, the `hamming` distance between the vectors is used. The `_score` transformation is `(numBits - hamming(a, b)) / numBits`.
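The two score transformations above can be illustrated with a short Python sketch (an illustration of the formulas only, not Elasticsearch's implementation; the function names are ours):

```python
import math

def l2_norm_score(query, vector):
    # _score = 1 / (1 + l2_norm(query, vector)^2): identical vectors
    # score 1.0 and the score decays toward 0 with distance.
    l2 = math.sqrt(sum((q - v) ** 2 for q, v in zip(query, vector)))
    return 1.0 / (1.0 + l2 ** 2)

def bit_vector_score(a, b):
    # For bit vectors: _score = (numBits - hamming(a, b)) / numBits,
    # where a and b are the byte-encoded bit vectors.
    num_bits = len(a) * 8
    hamming = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return (num_bits - hamming) / num_bits
```

For example, `l2_norm_score([0, 0], [3, 4])` is `1 / 26`, and two identical bit vectors score `1.0`.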
`dot_product`
: Computes the dot product of two unit vectors. This option provides an optimized way to perform cosine similarity. The constraints and computed score are defined by `element_type`.
: (Required, string) The type of kNN algorithm to use. Can be any of:
* `hnsw` - This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) for scalable approximate kNN search. This supports all `element_type` values.
* `int8_hnsw` - The default index type for some float vectors:
  * {applies_to}`stack: ga 9.1` Default for float vectors with less than 384 dimensions.
  * {applies_to}`stack: ga 9.0` Default for all float vectors.
  This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) in addition to automatic scalar quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint by 4x at the cost of some accuracy. See [Automatically quantize vectors for kNN search](#dense-vector-quantization).
* `int4_hnsw` - This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) in addition to automatic scalar quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint by 8x at the cost of some accuracy. See [Automatically quantize vectors for kNN search](#dense-vector-quantization).
* `bbq_hnsw` - This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) in addition to automatic binary quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint by 32x at the cost of accuracy. See [Automatically quantize vectors for kNN search](#dense-vector-quantization).
  {applies_to}`stack: ga 9.1` `bbq_hnsw` is the default index type for float vectors with greater than or equal to 384 dimensions.
* `flat` - This utilizes a brute-force search algorithm for exact kNN search. This supports all `element_type` values.
* `int8_flat` - This utilizes a brute-force search algorithm in addition to automatically scalar quantization. Only supports `element_type` of `float`.
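The 4x/8x/32x reduction factors quoted above follow directly from the per-dimension storage cost. A rough sketch (our own helper, not an Elasticsearch API; it counts raw vector bytes only and ignores graph structures and metadata overhead):

```python
def raw_vector_bytes(dims, index_type):
    # Approximate bytes to store one raw vector, assuming 4-byte floats
    # for the unquantized baseline.
    bytes_per_dim = {
        "hnsw": 4.0,        # full-precision float32
        "int8_hnsw": 1.0,   # 1 byte per dimension -> ~4x smaller
        "int4_hnsw": 0.5,   # 4 bits per dimension -> ~8x smaller
        "bbq_hnsw": 0.125,  # 1 bit per dimension  -> ~32x smaller
    }
    return dims * bytes_per_dim[index_type]
```

For a 1024-dimension vector this gives 4096 bytes at full precision versus 128 bytes with `bbq_hnsw`.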
: (Optional, float) Only applicable to `int8_hnsw`, `int4_hnsw`, `int8_flat`, and `int4_flat` index types. The confidence interval to use when quantizing the vectors. Can be any value between `0.90` and `1.0` inclusive, or exactly `0`. When the value is `0`, dynamic quantiles are calculated for optimized quantization. When between `0.90` and `1.0`, this value restricts the values used when calculating the quantization thresholds. For example, a value of `0.95` will only use the middle 95% of the values when calculating the quantization thresholds (that is, the highest and lowest 2.5% of values will be ignored). Defaults to `1/(dims + 1)` for `int8` quantized vectors and to `0` (dynamic quantile calculation) for `int4`.
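The trimming behavior described above can be sketched in a few lines of Python (our own illustration of the idea, not the actual quantization code):

```python
def quantization_bounds(values, confidence_interval):
    # Keep only the middle `confidence_interval` fraction of the sorted
    # values when choosing the min/max thresholds for scalar quantization.
    vals = sorted(values)
    cut = int(len(vals) * (1.0 - confidence_interval) / 2.0)
    kept = vals[cut:len(vals) - cut] if cut else vals
    return kept[0], kept[-1]
```

With 100 evenly spread values and a confidence interval of `0.95`, the lowest and highest 2.5% are dropped before the bounds are taken.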
`rescore_vector` {applies_to}`stack: preview 9.0, ga 9.1`
: (Optional, object) An optional section that configures automatic vector rescoring on knn queries for the given field. Only applicable to quantized index types.
`dense_vector` fields support [synthetic `_source`](/reference/elasticsearch/mapping-reference/mapping-source-field.md#synthetic-source).

## Indexing and searching bit vectors [dense-vector-index-bit]
When using `element_type: bit`, all vectors are treated as bit vectors. Bit vectors utilize only a single bit per dimension and are internally encoded as bytes. This can be useful for very high-dimensional vectors or models.
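For example, a minimal mapping sketch (the index and field names are illustrative; `dims` counts bits and must be a multiple of 8):

```console
PUT my-bit-index
{
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "element_type": "bit",
        "dims": 40
      }
    }
  }
}
```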